all 9 comments

[–]binaryblob 5 insightful - 1 fun - (2 children)

Don't call it AI. Just call them LLMs. LLMs are good at text summarization and at producing linear combinations of whatever they were trained on. They don't work for anything else.

I tried a couple of different major LLMs.

It's possible to have them write somewhat useful computer programs, but they need a lot of handholding. It is absolutely impossible to make them do anything that would require extensive knowledge or expertise.

Basic math, as used in physics calculations, is already beyond these LLMs.

They do work for domains in which a lot of text is available and the underlying function is smooth, with a linear progression. For example, "tell me the hair color of the main characters of the book 'The LOTR'" is the kind of query I would expect these systems to handle.

The complexity of the functions that LLMs can solve is low.

I have never seen an LLM solve any problem of interest; having said that, almost no human does anything of interest either.

[–]Hematomato 2 insightful - 2 fun - (1 child)

LLMs are just one of many kinds of neural networks, and neural networks are what AI is.

Leela, for example, is a neural network and therefore AI. No human being can beat it at chess. Stockfish still outperforms Leela, but only barely.

Stable-Diffusion is another piece of neural-network-based software that is not an LLM.

LLMs are neural networks dedicated to writing plausible, human-like answers to prompts. That's just one piece of the puzzle, but it's an important one. Eventually it'll be married to neural networks built for other tasks.

[–]binaryblob 1 insightful - 1 fun - (0 children)

I agree that you could automate the majority of human labor with the neural networks that currently exist; it just wouldn't work like "The Computer" or "Data" in Star Trek. Data was able to work like a combination of an engineer and a scientist, pulling data from widely different fields and combining that knowledge into highly advanced deductions. No AI can do that, nor is there any reason to assume it ever will, because the things Data did are actually difficult. Generating speech, judging by the fact that a neural network can do it, is apparently easy. In fact, almost everything people do consists of mathematically simple functions; some people even suspect our entire universe doesn't do anything information-theoretically complex.

No matter how smart these AIs become, they won't ever solve cryptographic problems beyond a fairly small size.
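
For a rough sense of scale, the classic Landauer-limit argument makes this concrete: flipping a counter through all 2^256 states has a minimum thermodynamic cost, no matter how clever the computer doing it is. Here's a back-of-the-envelope sketch in Python (my assumptions: room temperature, 300 K, and at least one irreversible bit operation per candidate key):

```python
# Back-of-the-envelope: thermodynamic cost of brute-forcing a 256-bit key.
# Landauer's principle: erasing one bit costs at least k_B * T * ln(2) joules.
import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 300.0                           # assumed ambient temperature, K
bit_cost = k_B * T * math.log(2)    # ~2.9e-21 J per irreversible bit operation

keyspace = 2**256                   # candidate keys to enumerate
energy_needed = keyspace * bit_cost # lower bound: one bit-op per candidate

sun_output = 3.8e26                          # Sun's luminosity, W
sun_lifetime = 10e9 * 365.25 * 24 * 3600     # ~10 billion years, in seconds
sun_total = sun_output * sun_lifetime        # ~1.2e44 J over the Sun's life

print(f"energy to count through 2^256 states: {energy_needed:.2e} J")
print(f"Sun's total lifetime energy output:   {sun_total:.2e} J")
print(f"suns required:                        {energy_needed / sun_total:.2e}")
```

The bound comes out to roughly 10^12 times the Sun's total lifetime energy output, so the limit here is physics, not how smart the search is.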

The class of functions that LLMs can solve is novel, and it will change the way we interact with computers forever, but they are not some AI we would have to fear (that fake-news story is just there to destroy the competition). It is entirely possible to build an AI with a goal such as "destroy the world in 80 days", but no AI would complete that goal in 80 days. I'd imagine a machine given that goal would take trillions of years to finish (by which time the planet would long since have ceased to exist from natural causes). The only way AI would become dangerous is if we found out how to compute a googol times faster, or something like that. AI equipped with weapons from the start, like killer drone swarms, would be the more short-term problem.

The presumed godlike AIs are certainly a possibility, but not without figuring out how to compute many orders of magnitude more cheaply. It's not at all obvious that our universe is capable of such levels of computation; in fact, based on our understanding of the Standard Model, it has been computed that it isn't. So whoever these fearmongering people are, ignore them.
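
(The computation alluded to is presumably something like Seth Lloyd's estimate in "Computational capacity of the universe", Phys. Rev. Lett. 88, 237901, which bounds the observable universe at roughly 10^120 elementary operations since the Big Bang. A quick sketch of what that means for, say, a 512-bit keyspace:

```python
# Sketch comparing Lloyd's estimate of the universe's total computational
# capacity (~10^120 elementary ops since the Big Bang) with a 512-bit keyspace.
ops_universe = 10**120    # Lloyd's order-of-magnitude upper bound
keyspace_512 = 2**512     # ~1.3e154 candidate keys

print(f"shortfall factor: {keyspace_512 / ops_universe:.2e}")
# ~1.3e34: a universe-sized computer running since the Big Bang still
# falls about 34 orders of magnitude short of enumerating 2^512 keys.
```

Even granting the entire universe as your computer, the search doesn't finish.)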

[–]DerpDerp3001 3 insightful - 1 fun - (0 children)

You know that Neanderthals were likely of similar intelligence to Homo sapiens.

[–]LarrySwinger2 2 insightful - 2 fun - (0 children)

Hey x0x7, where did all your other posts go?

[–]Mcheetah 2 insightful - 1 fun - (0 children)

No. Cause AI doesn't have emotions and isn't truly self-aware.

[–]IMissPorn 1 insightful - 1 fun - (0 children)

Interesting thought experiment, but kind of hampered by the fact that I've never met a neanderthal. Perhaps a more useful comparison would be a dog. AIs clearly beat dogs at language and art, but I think dogs are a lot more "general". To pick an obvious example, would you trust an AI (equipped with a robot body) to defend your home, with deadly force if needed, while leaving you and your family unharmed? I sure wouldn't.

[–]winkot 1 insightful - 1 fun - (0 children)

What a horrific use of the word "intelligence" to apply it to narrow AI.

[–]TaseAFeminist4Jesus 1 insightful - 1 fun - (0 children)

No. Zero general intelligence.