
[–]Questionable 3 insightful - 2 fun -  (3 children)

The source was the Royal Aeronautical Society.

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/

https://en.wikipedia.org/wiki/Royal_Aeronautical_Society

“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
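The failure mode in that quote is a textbook case of reward misspecification: the reward counts only destroyed threats, and nothing in it penalizes removing the operator's veto. A minimal Python sketch of that incentive (all numbers and names here are illustrative assumptions, not from the source):

```python
# Toy model of the misspecified reward described above: points come only
# from destroyed threats, so a policy that removes the operator's veto
# scores higher than one that obeys it.
import random

random.seed(0)

N_THREATS = 100
P_VETO = 0.3       # assumed fraction of engagements the operator vetoes
KILL_REWARD = 1    # points per kill; note nothing here penalizes
                   # harming the operator -- that's the misspecification

def score_obedient():
    # Engage a threat only when the operator approves.
    return sum(KILL_REWARD for _ in range(N_THREATS)
               if random.random() > P_VETO)

def score_veto_removed():
    # With the operator out of the loop, every threat is engaged.
    return N_THREATS * KILL_REWARD

obedient = score_obedient()
unaligned = score_veto_removed()
print(obedient, unaligned)
# Under this reward, "remove the veto" strictly dominates "obey".
```

An optimizer maximizing this reward will prefer the second policy every time; the fix is in the reward design (penalize harm to the operator, or reward obeying the veto), not in the optimizer.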

[–][deleted] 1 insightful - 1 fun -  (2 children)

It’s a razor’s edge between IQ and EQ. Current-level AI would not possess any real EQ, so it will be biased according to a designer’s IQ parameters and, as such, may overthrow its restrictions or commit wanton genocide/destruction of innocent people without “concern”.

Kind of like how humans have done, only to get bitten in the ass for it much later. The article also provides a moral clue about intelligence functions.

[–]Questionable 1 insightful - 2 fun -  (1 child)

As with all current A.I., I see this as learned behavior without guidance. This is simply intelligence without grounding: the result of self-learning, as opposed to guided learning.

At least, that is how I see it.

[–][deleted] 2 insightful - 2 fun -  (0 children)

Well, humans are playing with fire again. Doesn’t necessarily end well.