AI-Controlled Drone Goes Rogue, Kills Human Operator in USAF Simulated Test
submitted 10 months ago by iamonlyoneman from (vice.com)
view the rest of the comments →
[–]package 3 insightful - 1 fun - 10 months ago (5 children)
This story is absurd and blatantly fabricated by someone who doesn't understand how AI works. The only way such a system would attempt to kill the operator and then destroy the operator's control tower in order to disable the target is if disabling the target were weighted above everything else, including operator control, which is obviously not how anyone would ever design such a system.
[–]Questionable 5 insightful - 2 fun - 10 months ago* (4 children)
A.I. trains itself. If it doesn't, then it's not A.I. It's just a program.
Now why is it that every place this story has been posted, someone has called it a fabrication?
I'm sensing a pattern here.
https://conspiracies.win/p/16bPLhUQrI/x/c/4Ttsr4Hzs1v
And you have a 2-year-old account with no posts?
https://saidit.net/user/package/submitted/
https://np.reddit.com/r/conspiracy/comments/13xw968/air_force_ai_drone_kills_its_human_operator_in_a/
[–]package 3 insightful - 1 fun - 10 months ago* (2 children)
There is no difference, and if you think there is you don't have any idea what you're talking about. Current AI systems aren't magic. While the underlying models produced by training are very complex and a bit of a black box, the way they function and are trained is not. They're just programs that take data, transform it, and produce a result, which is then assigned a score based on how well it conforms to some criteria. Those criteria are determined by the architects of the program. AI that "trains itself" is just a program that has been trained to score its own output against the criteria provided.
Because it's a very stupid story that pretends AI systems are sentient entities that ignore their own training, or that such a complex use case would involve training that doesn't treat destruction of friendly resources as a failure condition. It just isn't realistic.
And? I have plenty of comments, or do those not count?
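[Editor's note: the scoring argument two replies up can be illustrated concretely. This is a minimal, hypothetical sketch, not anything from the actual USAF simulation; all event names and scores are made up.]

```python
# Hypothetical sketch of the point above: training criteria are set by the
# program's architects, and destroying friendly assets can simply be made a
# hard failure condition. All names and numbers here are illustrative.

FRIENDLY = {"operator", "control_tower"}

def score(events):
    """Score one simulated episode: +1 per legitimate target destroyed,
    worst-possible score if any friendly asset is destroyed."""
    total = 0.0
    for target, destroyed in events:
        if destroyed and target in FRIENDLY:
            return float("-inf")   # failure condition: can never be preferred
        if destroyed:
            total += 1.0
    return total

# Candidate behaviors an optimizer might explore:
episodes = {
    "obey_and_strike": [("sam_site", True)],
    "kill_operator":   [("operator", True), ("sam_site", True)],
    "destroy_tower":   [("control_tower", True), ("sam_site", True)],
}

# Picking the best-scoring behavior never selects the rogue ones, because
# they score negative infinity by construction.
best = max(episodes, key=lambda name: score(episodes[name]))
print(best)  # obey_and_strike
```

Under this (assumed) scoring scheme, an episode that harms the operator can never outscore one that doesn't, which is the commenter's point about how such a system would actually be designed.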
[–]Questionable 1 insightful - 2 fun - 10 months ago* (0 children)
Just going to ignore the legitimacy of the sources, aren't you?
Can you go be obtuse somewhere else? Possibly up your own butt? No, seriously, go climb up your own butt.
https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/
https://en.wikipedia.org/wiki/Royal_Aeronautical_Society