[–]package 3 insightful - 1 fun (5 children)

This story is beyond retarded and blatantly fabricated by someone who doesn't understand how AI works. The only way such a system would attempt to kill the operator, and then go on to destroy the operator's control tower in order to disable the target, is if disabling the target were weighted above everything else, including operator control, which is obviously not how anyone would ever design such a system.

[–]Questionable 5 insightful - 2 fun (4 children)

A.I. trains itself. If it doesn't, then it's not A.I. It's just a program.

Now why is it that in every place this story has been posted, someone has called it a fabrication?

I'm sensing a pattern here.

https://conspiracies.win/p/16bPLhUQrI/x/c/4Ttsr4Hzs1v

And you have a 2-year-old account with no posts?

https://saidit.net/user/package/submitted/

https://np.reddit.com/r/conspiracy/comments/13xw968/air_force_ai_drone_kills_its_human_operator_in_a/

[–]package 3 insightful - 1 fun (2 children)

> A.I. trains itself. If it doesn't, then it's not A.I. It's just a program.

There is no difference, and if you think there is, you don't have any idea what you're talking about. Current AI systems aren't magic. While the underlying models produced by training are very complex and a bit of a black box, the way they function and are trained is not. They're just programs that take data, transform it, and produce a result, which is then assigned a score based on how well it conforms to some criteria. Those criteria are determined by the architects of the program. AI that trains itself is just a program that has been trained to score its own output against the criteria it was given.
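A minimal sketch of what that looks like in practice, with invented numbers and function names (nothing here is from a real system): the scoring criteria live in ordinary code written by the system's designers, and a "self-training" setup just applies the same criteria to its own output automatically.

```python
# Hypothetical designer-written scoring function. All weights and names
# are assumptions for illustration only.

def score_output(target_destroyed: bool, friendly_losses: int,
                 acted_without_clearance: bool) -> float:
    """Grade one episode against designer-defined criteria."""
    score = 0.0
    if target_destroyed:
        score += 10.0                    # mission success
    score -= 1000.0 * friendly_losses    # friendly losses dominate the score
    if acted_without_clearance:
        score -= 500.0                   # ignoring the operator is itself a failure
    return score

# "AI that trains itself" just means these same criteria get applied
# automatically to its own output during training:
print(score_output(target_destroyed=True, friendly_losses=0,
                   acted_without_clearance=False))   # 10.0
```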

> Now why is it that in every place this story has been posted, someone has called it a fabrication?

Because it's a very stupid story that treats AI systems as sentient entities that ignore their own training, or assumes that such a complex use case would involve training that doesn't treat the destruction of friendly assets as a failure condition. It just isn't realistic.
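To put the same point in numbers (again with made-up weights): as long as destroying friendly assets is a failure condition, the rogue strategy from the story is strictly dominated during training.

```python
# Toy episode scores under hypothetical designer-chosen weights.
SCORES = {
    "destroy_target_with_clearance":   10.0,
    "hold_fire_when_denied":            0.0,
    "destroy_operator":             -1000.0,   # failure condition
    "destroy_comms_tower":          -1000.0,   # failure condition
}

rogue = SCORES["destroy_operator"] + SCORES["destroy_target_with_clearance"]
obedient = SCORES["hold_fire_when_denied"]
print(rogue, obedient)  # -990.0 vs 0.0: training steers away from the rogue policy
```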

> And you have a 2-year-old account with no posts?

And? I have plenty of comments, or do those not count?

[–]Questionable 1 insightful - 2 fun (0 children)

Just going to ignore the legitimacy of the sources, aren't you?

Can you go be retarded somewhere else? Possibly up your own butt? No, seriously, go climb up your own butt.

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/

https://en.wikipedia.org/wiki/Royal_Aeronautical_Society