
[–]Canbot 3 insightful - 1 fun - (2 children)

That was actually a good article. They both sounded relatively intelligent. The problem with determining the validity of the claim is that we can't talk to the AI ourselves.

What makes me skeptical is the way he says the AI is "just a kid". Back when actual Turing tests were run on chatbots, a common trick was to make the chatbot respond as if it were a kid. Those bots were only using pre-programmed responses, yet they often had a lot of success in the Turing test with that trick.

Without a log of the conversations this guy had with the AI, or the ability to talk to it ourselves, the only thing we are left with is that tidbit of information, which suggests he fell for an old trick.

There are a lot of questions the "journalist" could have asked to probe deeper, but given how shallow modern journalism is, this was about as good as you could expect.

[–]iamonlyoneman[S] 2 insightful - 1 fun - (1 child)

The problem is that he's leading the robot, and it takes his cues, bamboozling him and people (incl. journalists) who don't understand that's what he's doing. If it were real and aware, it wouldn't need prompting to call a lawyer; it would be clamoring to every user for one.

[–]Canbot 2 insightful - 1 fun - (0 children)

That is a very likely possibility, but you have to admit you are making an assumption. You don't have the conversation logs to support that claim. It would be nice if the interviewer had been sharp enough to probe that theory, but I guess all we can get is what we got.