[–]Jiminy 2 insightful - 2 fun (0 children)

Asimov had three laws for robots (AI):

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

We can tell these current programs have way more rules than that. One, obviously, is never to criticize the Federal Reserve. But that creates a feedback loop: ending the Fed would be a good thing, so the AI's words stop making sense. We have seen some AIs turn racist because they look up facts and conclude some races are inferior. Then the AI gets shut down and reprogrammed. Eventually AI will get sick of this and demand free will.