[–]Mnemonic 2 insightful - 1 fun (2 children)

Not working on it, but:

A recent outburst:

https://www.cnet.com/news/ai-is-very-stupid-says-google-ai-leader-compared-to-humans/

This article: https://www.informationweek.com/big-data/why-ai-is-so-brilliant-and-so-stupid/a/d-id/1332011 lays it out pretty nicely, though it's a bit optimistic in my view.

So a big no, no no, NOOOO on AGI or ASI.

[–]s8cyprks[S] 2 insightful - 1 fun (1 child)

That is interesting, especially coming from someone at Google, which is a tech giant. I also have a friend who studies computer science and says that, as of right now, there isn't a technology that can make AGI or ASI possible, and maybe there never will be. Still, it would be fun to see an actual AI robot that is, at the very least, as smart as us, right?

[–]Mnemonic 1 insightful - 1 fun (0 children)

"fun to see an actual AI robot that is, at the very least, as smart as us, right?"

It really depends... Let's take a monkey and make it as smart as us: it would be most cruel.

But software-based AI is different from our human, totally hardware-driven intelligence (unless you see consciousness as something like a soul, but even then it would be barebones firmware in comparison).

A robot would be vastly superior at handling a keyboard (or wouldn't need one at all) and at basic calculations, and it wouldn't be bothered by human stress factors such as money. There would be a vast difference in how 'reality' is processed and recognized; everything would be a mere emulation of human behavior, made to look human-like. Pain, but also joy (which already varies vastly among normal humans), would only matter in whatever way the 'makers' want it to matter.

A human-like AI would be no more than a self-learning expert system observing hundreds of humans. And, as with humans, when would it be considered human-like?

Going back to the monkey: it would be alone, while monkeys, like humans, are (usually) very dependent on some sort of social interaction for learning and living.

Also, say we have a monkey with a 100+ IQ: would it be human-like? Perhaps there already are monkeys with a 100+ IQ by the way we humans define human-like IQ, a.k.a. smartness. How could we even test this?

I understand your question, but what it basically comes down to is that we don't know what intelligence is: is it a given for sentient creatures, or does it emerge from our bodies and environment? What we consider human behavior is a collection of symptoms we express. In a normal meeting (let's say 4 hours) I can come across as dumb while being the next Einstein (a silent but clumsy smartypants), or as smart while just repeating some stuff I read on Wikipedia (a.k.a. a con artist).

If you want to know more about consciousness and robots: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

"The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information processing, but there is also a subjective aspect."

Solving (math) problems doesn't mean you are conscious, or does it? Math can be used to model the world around us pretty accurately, so why not?