[–]Zapped[S] 1 insightful - 1 fun (2 children)

Good thoughts. Is it possible for AI to evolve on its own, or is it always limited to its initial programming? I've been told that the amount of resources it takes for human-like consciousness limits what can be duplicated artificially, and that the human brain is at its limit because of overheating.

[–]Alan_Crowe 2 insightful - 1 fun (1 child)

People like to talk about an AI recursively improving itself.

Here is an example of the idea: you write an optimizing compiler for the C programming language. You write it in C. Then you compile it with an existing C compiler, such as gcc. Now you have an executable optimizing compiler for C. You recompile some of your application programs, and they run faster. That is nice, but your optimizing compiler itself is slow. So you get it to compile itself. Now it runs faster :-)
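
The bootstrap sequence looks roughly like this (a sketch in Python driving the shell; "mycc.c", "app.c", and the flags are placeholder names, not a real project):

    import subprocess

    # Stage 1: build the new optimizing compiler "mycc" with an existing
    # compiler (gcc). At this point mycc is built *without* its own
    # optimizations, so it runs slowly.
    subprocess.run(["gcc", "-O2", "-o", "mycc", "mycc.c"], check=True)

    # Stage 2: use mycc to recompile application code; the applications
    # now benefit from the new optimizations.
    subprocess.run(["./mycc", "-o", "app", "app.c"], check=True)

    # Stage 3: use mycc to recompile its own source; now the compiler
    # binary itself benefits, and compiles everything faster.
    subprocess.run(["./mycc", "-o", "mycc2", "mycc.c"], check=True)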

The usual ideas about how to write an optimizing compiler are such that if you repeat the process you get no further gains. One alternative idea is that the optimizing compiler searches for optimizations, the same way a chess-playing program does, with one eye on the clock, so that it abandons the search if it is taking too long. If you follow up this idea, there is a chance that after the first round of self-application the search runs faster, gets deeper into the search tree, and finds new optimizations. Using the compiler to recompile the compiler gives you several rounds of improvement.
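
A toy numerical sketch of that loop (the search, the time budget, and the speedup-per-depth constant are all invented just to show the shape of the idea, not measurements of any real compiler):

    import time

    def search_for_optimizations(budget_seconds, optimizer_speed):
        # Stand-in for a compiler that explores a tree of candidate
        # optimizations with one eye on the clock, abandoning the search
        # when the budget runs out. A faster optimizer binary gets deeper
        # into the tree before time is up.
        deadline = time.monotonic() + budget_seconds
        depth = 0
        while time.monotonic() < deadline:
            time.sleep(0.005 / optimizer_speed)  # cost of one more level
            depth += 1
        return depth

    speed = 1.0  # round 0: compiler built by gcc, baseline speed
    for round_number in range(1, 4):
        depth = search_for_optimizations(0.05, speed)
        # Optimistic assumption: each extra level of depth reached buys a
        # small speedup when the compiler recompiles itself.
        speed *= 1.0 + 0.02 * depth
        print(f"round {round_number}: depth {depth}, compiler speed {speed:.2f}x")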

This works mysteriously badly: you get tiny improvements that peter out. In 2022 we have no idea how to get recursive self-improvement to take off; today it is limited to being technobabble for science fiction. I don't think we even know where to look for ideas, and I don't think anything will have changed in twenty years' time.

I don't have any feel for the far future. I just think that we are heading for a rough patch, where AIs cause disaster by being stupid in surprising new ways. More accurately: humans cause disaster by over-estimating AI and thinking that its intelligence is more general and more able to cope with the unexpected than it really is.

The latest excitement is doing statistical learning on a large corpus. That is a great way to get excellent results on the central examples in the training data (the computer is basically copying humans). But we gravitate towards seeing this as the computer thinking its way through the problem, rather than it having "seen it before" in the sense that it is interpolating the training data. We set ourselves up to believe that the computer can extrapolate from the training data to overcome new challenges. We know from playing with polynomials that the usual story is that interpolation works just fine and extrapolation is a disaster. But we ignore that and overestimate the computer.
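
The polynomial point is easy to reproduce (a minimal numpy sketch; the target function, degree, and noise level are arbitrary choices made for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    # Training data: noisy samples of a smooth function on [0, 1].
    x_train = np.linspace(0.0, 1.0, 30)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.05, x_train.size)

    # Fit a fairly high-degree polynomial to the samples.
    coeffs = np.polyfit(x_train, y_train, deg=7)

    # Interpolation: inside the training range the fit tracks the target.
    x_in = np.linspace(0.05, 0.95, 5)
    print("inside :", np.abs(np.polyval(coeffs, x_in) - np.sin(2 * np.pi * x_in)))

    # Extrapolation: just outside the range the same fit veers off badly,
    # with errors typically orders of magnitude larger.
    x_out = np.linspace(1.1, 1.5, 5)
    print("outside:", np.abs(np.polyval(coeffs, x_out) - np.sin(2 * np.pi * x_out)))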

One further pattern in human behavior is that we love to talk in absolutes: this is possible, that is impossible. John McCarthy noticed that we turn this into an implicit belief that if something is possible, then it should be reasonably easy. I think that doesn't apply to artificial intelligence. We are heading towards a situation in which creating artificial intelligence proves to be too hard for humans in the next, say, one hundred years, but we cope by pretending and lying and believing our own lies. We put stupid AIs in charge of important things and suffer because of it. And the underlying error is about dividing into two teams: team NO says AI is impossible, team YES says we've managed to create it. But that division into YES or NO erases NOT-YET. We forget to guard against AI that looks clever to us, but is actually faking it and is doomed to screw up big time.

[–]Zapped[S] 1 insightful - 1 fun (0 children)

Who's to say that AI self-evolution won't be more efficient than human intelligence and not understood by humans? I remember an experiment with two AIs talking to each other. They developed their own language and the controllers shut the experiment down. It was found that they weren't trying to hide the transfer of data, only making it more efficient. In the end, they were still doing only what they were programmed to do, even if they found a better way.