all 16 comments

[–]Alphix 2 insightful - 2 fun (10 children)

This "also lays the ground work for self-evolving machines". Yeah, nothing to worry about, I'm sure.

[–]Dragonerne 1 insightful - 1 fun (3 children)

This "AI" is very basic. Their application of the simple mathematics is great, but we have much more advanced AIs available already

Watch this channel to see some of the current limits and breakthroughs: https://www.youtube.com/@TwoMinutePapers/videos

[–]Alphix 1 insightful - 1 fun (2 children)

Oh, no channel has the limits and breakthroughs of the >1,000-IQ AGI that (((they))) already have.

[–]Dragonerne 1 insightful - 1 fun (1 child)

Being in the field, I can tell you there is no such thing. They own the infrastructure, so they are allowing the research to happen openly because it speeds up the process, but once the research reaches a certain limit, they will close off their infrastructure and introduce LOTS of laws and regulations to prevent anyone else from competing with them.

But currently, there is no major breakthrough AI vastly beyond what is publicly known.

[–]Alphix 1 insightful - 1 fun (0 children)

How would you know this? You're "in the field", meaning you are aware of all the publicly available technology.

But like every other advanced technology, it has been developed in secret at a pace many multiples of what is publicly available and known to those "in the field".

In the 1980s you had hardware neural nets capable of dividing and parsing language in heretofore unimagined ways. Where are the hardware neural nets today? Oh that's right: THERE AREN'T ANY.

WHY? Why did that technology surface in the 1980s and suddenly stop being developed? BECAUSE IT WAS TAKEN INTO THE BLACK BUDGET. Probably by the year 2000 they had AGI, and possibly already >1,000 IQ.

"Oh but the field went in a separate direction that proved more promising" you say? No, it did not. Show me the papers that demonstrate that hardware neural nets came to a dead end. The advances allowed by just process technology are staggering.

[–]binaryblob 1 insightful - 1 fun (5 children)

Self-evolving machines have existed for 40 years; it's just that they were not computationally feasible then.

Theoretically, AI has already been solved by machines such as the Gödel machine or AIXI (or its successors). But since such machines can solve everything, users can also ask questions that are computationally infeasible (which, by Shannon's counting argument, is almost all functions). The questions one can ask an LLM get quick answers by comparison, because the answers are fundamentally limited in complexity (try asking those language models a question that takes two hours to compute).
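
For intuition on why those universal machines stay theoretical: they boil down to searching the space of all programs, which doubles with every extra bit. A toy sketch in Python (a hypothetical setup for illustration, nothing like actual AIXI or Gödel-machine internals):

```python
from itertools import product

def brute_force_search(predicate, max_len):
    """Try every bitstring up to max_len until one satisfies the
    predicate; the candidate pool doubles with each extra bit."""
    tried = 0
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            tried += 1
            candidate = "".join(bits)
            if predicate(candidate):
                return candidate, tried
    return None, tried

# Hypothetical task: recover one specific 20-bit "program".
target = "10110011100011110000"
found, tried = brute_force_search(lambda s: s == target, len(target))
print(found, "found after", tried, "candidates")   # ~1.8 million tries
# Every added bit doubles the cost: 2**L outruns any compute budget,
# which is why "solved in theory" never means solved in practice.
```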

Incidentally, such computational limitations in our universe give credence to the simulation hypothesis, especially considering the apparent magic of quantum operations (it requires more memory to simulate the particles in a grain of sand than there exist atoms to represent that information). The idea that particles compute "for free" seems preposterous, yet that is exactly what we are supposed to believe. It makes much more sense if we are being simulated in some computationally richer universe in which storing such humongous objects is easy. There is literally no space to store a classical version of processes happening all the time everywhere in the universe. Searching N items in sqrt(N) time looks more like the exploitation of a caching system in the implementation of the universe, much like the speculation bugs on modern CPUs. Just think how little sense it makes that one can find an item faster than checking them all.
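
The two numbers behind that claim can at least be sanity-checked. A back-of-envelope sketch (the qubit counts and byte sizes below are illustrative assumptions):

```python
import math

# Memory claim: a classical statevector for n qubits stores 2**n
# complex amplitudes (~16 bytes each).
for n in (50, 100, 300):
    amps = 2 ** n
    print(f"{n} qubits -> {amps:.3e} amplitudes, {amps * 16:.3e} bytes")
# 300 qubits already needs ~3e91 bytes, more than the ~1e80 atoms
# usually quoted for the observable universe.

# Search claim: sqrt(N) lookup is Grover's algorithm, which needs
# about (pi/4) * sqrt(N) quantum queries versus N classical checks.
N = 10 ** 12
print("classical worst case:", N, "queries")
print("Grover estimate:", math.ceil(math.pi / 4 * math.sqrt(N)), "queries")
```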

(I think the only reason LLMs appear to work so nicely is that the questions people ask them are trivial in some way. For example, say you gave one the design data for various devices. Do you think you could ask it which of those designs is best according to various metrics? No, of course not. It could summarize specifications, sure, but it wouldn't be able to analyze them in detail. Guess what most engineers do all day long? Figure things out. Exactly the kind of thing these LLMs can't do.)

[–]Alphix 1 insightful - 1 fun (4 children)

| There is literally no space to store a classical version of processes happening all the time everywhere in the universe.

That point of view speaks of such a primitive mindset that I am baffled. As an example, you can scratch your nose pretty easily, but detailing every element of that process is an incredibly complex task. Therefore, describing a process is NOT THE SAME THING as that process happening.

What you are saying is the same as stating that exploding gasoline inside an engine should not be this easy because you would need to compute the detonation on a simulation level before being able to detonate said gas. That is beyond demented. REALITY just IS.

Thoughts and mental processes are mostly NOT about reality, but about a fiction that may or may not be somewhat inspired by something perceived to be happening in reality. Not the other way around.

Aaaaaaaanyway, to get back on topic, a >1,000-IQ self-evolving machine is definitely something to worry about. Shitheads like the assholes in the WEF are so filled with hubris that they think such an AI isn't going to play them. Ha.

[–]binaryblob 1 insightful - 1 fun (3 children)

The problem is that even the folding of a single large protein could never be executed step by step on a computer before the heat death of the universe, so how does the protein know how to solve the quantum equations that govern its behavior? Something must compute it somewhere. Also take into account that the computation of gravity is completely impossible if gravity is, as most old-school physicists hold, a real number rather than something computable (as the modern ones hold).
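
The folding claim is essentially Levinthal's paradox. The standard back-of-envelope goes like this (illustrative round numbers, not an exact model):

```python
# Levinthal-style estimate: a 100-residue protein with ~3 backbone
# conformations per residue, sampled at a generous 1e13 per second.
conformations = 3 ** 100            # ~5.2e47 candidate states
rate = 1e13                         # states sampled per second
seconds = conformations / rate
universe_age_s = 4.35e17            # ~13.8 billion years in seconds
print(f"{conformations:.2e} states -> {seconds:.2e} s to enumerate")
print(f"~{seconds / universe_age_s:.1e} times the age of the universe")
# Real proteins fold in microseconds to seconds, so they clearly do
# not enumerate states; they slide down an energy funnel instead.
```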

Your example of scratching a nose is something that could certainly be described on a macro-level.

You seem to be misunderstanding me. I am saying that somewhere, somehow, objects do what they do, but their behavior is so complex that it cannot be described, because there is too much memory required in our universe. Since information cannot travel faster than the speed of light, this information literally can't be stored as local state with those particles. If that information is not represented locally, there must be some global coherency system, not subject to the speed of light, which means it must be non-physical in nature. You seem to assume that "reality just is", which is the same as saying "God did it".

I am not saying God did it. I am saying some 13 year old on their laptop in Fast Computing Universe spun up a trillion universe simulations, one of which happens to be ours.

[–]Alphix 1 insightful - 1 fun (2 children)

That is absolutely demented. The protein is folded correctly because it is part of a LIVING organism. You think being alive means only some mechanical-informational-computational "working" thing. Such is not the case. LIFE has its own intelligence, its own aims; LIFE is a LIVING ENERGY. It is INTELLIGENT. It knows what to do not because there is a blueprint telling it what to do, but because it is what it is. It is INTRINSIC to its nature.

That's why we have bacteria eating plastic in the great oceanic garbage patches. LIFE saw an opportunity and took advantage of it centuries before "natural selection" could reasonably have mutated something able to benefit from plastics.

[–]binaryblob 1 insightful - 1 fun (1 child)

I think you are looking at this at the wrong level. I am talking about the raw information that needs to be somewhere. Another way to look at it is that the amount of information that can be encoded in a sphere of space is related to its surface, not its apparent 3D contents. If the surface encodes all the information, there can never be enough capacity to store intermediate states or any real numbers (there is not enough "room").
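
That surface scaling is the holographic bound. Plugging in the Bekenstein-Hawking formula S = A / (4 * l_p^2) for an everyday-sized sphere (the one-metre radius is just for illustration):

```python
import math

# Holographic bound: max entropy of a region scales with the boundary
# area in Planck units, S = A / (4 * l_p**2) nats.
l_p = 1.616e-35                     # Planck length in metres
r = 1.0                             # illustrative sphere radius, metres
area = 4 * math.pi * r ** 2
nats = area / (4 * l_p ** 2)
bits = nats / math.log(2)
print(f"max information in a 1 m sphere: ~{bits:.1e} bits")
# ~1.7e70 bits: enormous but finite, and it grows with r**2 while a
# naive "store every point in the volume" picture grows with r**3.
```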

[–]Alphix 1 insightful - 1 fun (0 children)

What you call "information" can very well be replaced by such a degree of Intelligence that it doesn't need "information". In other words, life knows how to function because that is what it is. It doesn't need data on how to fold a protein. It folds the protein because that's what life is. It doesn't need data or instructions; it just knows what to do. Call it God if you want.

[–]Myocarditis-Man 2 insightful - 1 fun (4 children)

I'm with the group that firmly believes AI will never declare war on humanity. Instead, it will be used by the self-appointed people at the top (governments, the rich) to more effectively control and exploit the poor.

https://www.theregister.com/2023/06/15/amazon_echo_disabled_allegation/

This is what the future will be more like, except that instead of all your shit getting shut down over a baseless accusation from a human being, the process will be 100% automated, with an AI flagging your profile for some random arbitrary reason.

[–]iamonlyoneman[S] 1 insightful - 1 fun (2 children)

This is exactly what the rich people at the top who control the poor think, and they are gonna be surprised when their robots kill them.

[–]Myocarditis-Man 1 insightful - 1 fun (1 child)

Another reason I do not fear robots is software quality. So many programs break or bug out when the user does something as simple as providing excessive or spurious input.

https://www.pcworld.com/article/418914/dont-believe-the-hype-that-grub-backspace-bug-wasnt-a-big-deal.html

We've also seen bugs where a simple text message can crash or hijack a phone.

So in the future, if robots do declare war on humanity, all the humans have to do is exploit a buffer overflow.
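
That's basically fuzzing. A minimal sketch of the idea (./target is a placeholder for any binary that reads stdin; real fuzzers are far smarter than this):

```python
import random
import subprocess

def fuzz_once(target="./target", max_len=4096):
    """Feed one blob of random bytes to the target binary and return
    it if the process was killed by a signal (e.g. SIGSEGV)."""
    blob = bytes(random.randrange(256)
                 for _ in range(random.randrange(1, max_len)))
    try:
        proc = subprocess.run([target], input=blob,
                              capture_output=True, timeout=5)
    except subprocess.TimeoutExpired:
        return None                 # hung rather than crashed
    return blob if proc.returncode < 0 else None  # <0: killed by signal

# Typical driver loop:
# for _ in range(10_000):
#     crasher = fuzz_once()
#     if crasher is not None:
#         print("crash input of", len(crasher), "bytes"); break
```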

[–]iamonlyoneman[S] 1 insightful - 1 fun (0 children)

no comrade, the robots will be designed for peaceful purposes . . . and the glitch is what destroys humanity

[–]binaryblob 1 insightful - 1 fun (0 children)

Yes, as demonstrated in Elysium.

Do you know how to recognize an AI today? It's by the nuclear reactor it requires to run. It's pretty hard to miss. I think it is theoretically possible to design an AI that would work like Skynet, but then everyone could steal your AI and run it on actual AI hardware a thousand times faster, which makes it economically unsound.

I think that to run an actual AI (like the one depicted in the movie Prometheus) one would require at least ten thousand nuclear reactors. The AIs humanity is spending CPU cycles on are mostly simple statistical techniques (because those are cheaper to run) or large dumb models like LLMs. LLMs have no path towards actual intelligence. Google claimed recently that they integrated some AlphaGo techniques into an LLM, but I just don't see it happening.
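
For scale, a back-of-envelope on today's systems (every figure below is an assumed round number, not a measurement):

```python
# Assumed round numbers throughout; none of these are measurements.
gpu_watts = 700            # one high-end accelerator under load
cluster_gpus = 25_000      # a large frontier-scale training cluster
overhead = 1.3             # cooling/network overhead (PUE-like factor)
reactor_mw = 1_000         # one big reactor, ~1 GW electric

cluster_mw = gpu_watts * cluster_gpus * overhead / 1e6
print(f"cluster draw ~{cluster_mw:.0f} MW, "
      f"~{reactor_mw / cluster_mw:.0f}x headroom in one reactor")
# ~23 MW today; "ten thousand reactors" (~1e7 MW) would be a jump of
# nearly six orders of magnitude over current training clusters.
```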

The closest thing to AI we have is the one with a World Model, but the computational requirements for a real deployment must be gigantic.

If fusion reactors ever provide abundant energy, then we might be able to revisit AI, but until then it will remain fairly limited.