all 8 comments

[–]thefirststone 3 insightful - 1 fun - (3 children)

The point is for it to learn itself based on external stimuli.

Its peculiar initial structure allows it to modify itself. Its inscrutable final structure comes from its finding solutions by random walks. Obfuscation is a side effect of automation that requires only declarative requirements rather than true programming.

That they are inscrutable replacements for human labor is mostly a happy accident, though inevitable.

What they call "AI" and "ML" in the media is just this:

[–]Zapped 1 insightful - 1 fun - (0 children)

Wouldn't there need to be some sort of "evolution" event, and not just a learning curve, that allows programming to become sapient, and therefore to use imagination as a tool for advanced thought and problem solving? I was just having a conversation today with a medical doctor about this very subject. He brought up the fact that computers can perform so many more computations because they can use so much more energy than the human brain. The brain is capped at 2.5 watts due to overheating.

[–]yoke 1 insightful - 1 fun - (0 children)

well now it's more than GP though...

[–]saidittwice 1 insightful - 1 fun - (0 children)

'to learn itself' is not quite accurate, as the developer has to supply the judgement of right or wrong (correct/incorrect) for the lessons being learned (programmed). That means lessons have to be composed (training data) with many examples, which requires human interaction at some level within the knowledge domain the NN is being built for.

The more complex the knowledge is, the more easily the NN machine can be confused, and the more carefully it has to be guided so that it will operate correctly. This also includes the shape of the neural net (the layers and neuron counts), which is estimated from the kinds of inputs and outputs of the knowledge task the NN is supposed to do.
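A minimal sketch of that point, assuming a single-neuron perceptron (the simplest NN shape) trained on logical AND: the human-composed labels in `y` are what supply the judgement of right or wrong, and the weight updates are driven entirely by them.

```python
import numpy as np

# Human-supplied training data for logical AND: the label column y is
# the "judgement of correct/incorrect" the network cannot provide itself.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # one weight per input (the net's "shape")
b = 0.0
lr = 0.1

for _ in range(100):                       # repeated passes over the lessons
    for xi, target in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0
        err = target - pred                # supplied label drives the update
        w += lr * err * xi
        b += lr * err

preds = [1.0 if xi @ w + b > 0 else 0.0 for xi in X]
print(preds)  # matches the human-supplied labels [0, 0, 0, 1]
```

With linearly separable lessons like AND this converges; more complex knowledge needs more layers and more careful guidance, as the comment says.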


[–]yoke 2 insightful - 1 fun - (0 children)

yeah, Chomsky used to talk about this, that AI has no understanding. but... symbolic regression changed all that (basically guessing formulas from data points).

neural networks basically approximate very, very complex functions reasonably well. a function has inputs and outputs, and neural nets accept a very wide range of those inputs and produce something similar to what humans expect.

one such function might be: given a line of hint, produce a coherent article. another might be: given various images, identify what's in them and where those objects are.
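The "a neural net is just a function" point can be made literal with a tiny sketch. The shape here (3 inputs, 8 hidden units, 2 outputs) and the random weights are arbitrary illustration, not a real model: the net is simply a composed function mapping an input vector to an output vector.

```python
import numpy as np

def mlp_forward(x, params):
    """A neural net, viewed as a plain function: inputs -> outputs."""
    for W, b in params[:-1]:
        x = np.maximum(0.0, x @ W + b)   # hidden layers (ReLU nonlinearity)
    W, b = params[-1]
    return x @ W + b                     # linear output layer

rng = np.random.default_rng(1)
# Arbitrary shape for illustration: 3 inputs -> 8 hidden -> 2 outputs
params = [(rng.normal(size=(3, 8)), np.zeros(8)),
          (rng.normal(size=(8, 2)), np.zeros(2))]

out = mlp_forward(np.ones(3), params)
print(out.shape)  # a 2-element output vector
```

Training only adjusts `params` so this function's outputs land close to what humans expect; the structure stays a function throughout.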

[–]fschmidt 2 insightful - 1 fun - (1 child)

I looked at AI a long time ago. There is no known branch of statistics/probability that describes neural networks or the brain. Neural networks aren't really "brain-like" but they are the best that AI has right now. And we should keep it that way because real AI would be the end of humanity.

[–]trident765[S] 1 insightful - 1 fun - (0 children)

I really hope that there are some laws of the universe that make it physically impossible for AI to ever surpass human intelligence.

[–]infocom6502 1 insightful - 1 fun - (0 children)

If you have a simpler way to solve the problems AI tackles, using other statistical methods, you should definitely put it out there or use it.

I don't know much about them, but there are various different types of NNs. I wrote a simple one decades ago, which would un-learn (weaken connections) by stimulating it with random noise input. Kind of like what we do when sleeping/dreaming. It is a vast field now, and not easy for the layman to follow in greater detail.
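The comment doesn't say how that un-learning worked, so this is only a guess at one possible reading: drive the net with random noise and multiplicatively weaken whichever connections the noise activates. Every name and constant here is hypothetical illustration, not the commenter's actual program.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(4, 4))          # some previously learned connection weights
before = np.abs(W).mean()

# Hypothetical "un-learning with noise": random stimuli activate some
# connections, and those connections are weakened (decayed toward zero).
for _ in range(10):
    x = rng.random(4)                      # random noise stimulus
    act = np.maximum(0.0, W @ x)           # post-synaptic activity
    W *= np.exp(-0.01 * np.outer(act, x))  # shrink active connections

print(np.abs(W).mean() < before)  # average connection strength has dropped
```

The multiplicative decay guarantees weights only shrink in magnitude, which matches the "weaken connections" description, but the real mechanism may have been quite different.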