[–]Mnemonic 3 insightful - 1 fun -  (4 children)

Would bulb it up more if I could. Great explanation, I didn't think of MRI photos or that kind of thing; I was thinking more of medical textbooks encoded in an advanced, complex if...then... system. It also explains why we don't know the reason the AI thought that way (in both the good and the bad situations).

[–]magnora7[S] 3 insightful - 1 fun -  (3 children)

Thanks, I worked on this type of research for several years so I'm very familiar with how it works.

You're exactly right that the crazy thing with neural nets is that we cannot know why they do anything. They cannot be debugged or solved. They're just a black box that tweaks itself through feedback until you get it where you want it; then you lock down the connection strengths between the neurons when you're done training, and you're left with a black box that spits out accurate predictions.
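To make that concrete, here's a toy sketch (purely illustrative, not code from any real system): a tiny network learns XOR by adjusting its connection strengths from feedback, and once training stops the weights are locked and it just spits out predictions.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)               # desired outputs (XOR)

    W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)                # connection strengths
    W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):                       # training: tweak weights via feedback
        h = np.tanh(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        delta2 = (out - y) * out * (1 - out)    # feedback signal at the output
        delta1 = (delta2 @ W2.T) * (1 - h**2)   # feedback pushed back into the hidden layer
        W2 -= 0.1 * (h.T @ delta2); b2 -= 0.1 * delta2.sum(0)
        W1 -= 0.1 * (X.T @ delta1); b1 -= 0.1 * delta1.sum(0)

    # "lock down the connection strengths": from here on nothing changes,
    # and the net is just a black box that maps inputs to predictions.
    predict = lambda x: sigmoid(np.tanh(x @ W1 + b1) @ W2 + b2)
    print(predict(X).round(2))                  # should land near [[0], [1], [1], [0]]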

That's also how self-driving cars work. So many computer functions are now becoming these trained neural nets. It's awesome but also weird, because sometimes the neural net can get into a state where it does something unexpected. Many of the self-driving car crashes have been attributed to this.

And I expect we'll begin to see something similar in the medical field as machine learning is used more to make predictions about patient outcomes. Overall accuracy will improve, but there will be these fluke predictions that no one will really understand... flukes that may not be detectable until it's too late.

Definitely a double-edged sword. We're creating technology that is "evolved" through feedback like a living being, rather than programmed. It's a wild time to be alive.

[–]Mnemonic 3 insightful - 1 fun -  (2 children)

It's a wild time to be alive.

From unreadable undocumented/uncommented code spaghetti nightmares to "This works, but might someday destroy the earth if some unknown and very specific parameters are reached."

I never liked neural networks because of the maths (not a fan) and the limited ability to reason about the outcome. Though I could see a future where the "reasoning" is output in a human-comprehensible manner. Not for the black box itself (that might take a whole library), but for the individual decisions. This would probably mean nodes (or clusters of them) would have to be labeled based on the examples the system learns from. That way you could ask questions like "Why not this?" and it could go "I dismissed that diagnosis because of [detail], and in 68% of cases that detail had nothing to do with the whole area." So people can find out what went wrong, if something went wrong.
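Maybe something along these lines (a made-up example just to show the shape of the idea; the cluster labels, activations, and weights are all invented):

    import numpy as np

    # pretend these are activations of labeled clusters of nodes for one patient scan
    cluster_labels = ["tissue density", "lesion edge sharpness", "region symmetry"]
    activations = np.array([0.9, 0.1, 0.7])

    # pretend these are the learned weights from each cluster to the "diagnosis A" output
    weights_to_diag_A = np.array([-2.0, 1.5, 0.3])

    contributions = activations * weights_to_diag_A        # how much each cluster pushed
    verdict = "dismissed" if contributions.sum() < 0 else "kept"
    top = int(np.argmax(np.abs(contributions)))

    print(f"Diagnosis A was {verdict}.")
    print(f"Biggest factor: {cluster_labels[top]} ({contributions[top]:+.2f})")
    # a crude machine answer to "Why not this?": the dominant labeled cluster and its pull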

I can't find the study, but as an example: we had a system that was trained to spot penguins in photos of their natural habitat. Once it was done and working very well on images outside its training examples, it was shown random pictures, and it did well with a few misses until they came to lion pictures, where it detected penguins almost every time. I was waiting for the explanation, but they didn't have one, beyond some hypotheses that it might be the mountains in the background or the lions' noses...

[–]magnora7[S] 3 insightful - 1 fun -  (1 child)

That's a very interesting take. I've recently been saying something related about the direction the AI field is going to take. Right now one of the biggest problems keeping it from human-level intelligence is the lack of compartmentalization. It's always trying to solve everything simultaneously. Neural nets simply lack the ability to focus on perfecting one specific sub-task, the way you might master throwing a ball before attempting to play baseball. The computer just tries to learn baseball, and walking, and throwing, all at once, as if it were one big problem.

Any smart human would see "I don't understand how this works, so let's break the task into smaller pieces, work on those skills individually, then come back to the full task." Neural networks cannot do this, and I think it's the key thing holding AI back right now.

So my proposed solution is to work on developing compartmentalized tasks. If you can solve the "throw a ball" algorithm, you can lock that in place and then call on that locked-in neural net whenever playing baseball requires throwing a ball. So the computer must develop 3 skills:

  1. Recognizing when it doesn't know something

  2. Breaking that task into smaller sub-tasks

  3. Practicing each sub-task until it can assemble them together to complete the larger task

If neural nets could learn to do these 3 things, I think AI tech would move forward 20 years. This is very similar to what you're saying, in a way. By breaking each task into a compartmentalized, separate neural net, then having a meta-neural net that controls how those smaller ones all connect to each other, not only would the intelligence perform better, but like you said it would be possible to "query" parts of the intelligence: to ask "Why did you do this?" and actually investigate the question, because the sub-nets can answer sub-questions about the choices that were made. Instead of the whole thing being essentially one giant thousand-part equation that we can't really predict or understand.
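A hand-wavy sketch of what that could look like (everything here is invented for illustration, not a real framework): a small "throw" sub-net is trained earlier and frozen, and only a tiny meta-net learns how strongly to call on it, so you can at least inspect how much the "throw" module drove a given action.

    import numpy as np

    rng = np.random.default_rng(1)

    # skill sub-net for "throw a ball": pretend it was trained earlier, now locked in place
    W_throw = rng.normal(size=(3, 4))
    def throw_skill(state):
        return np.tanh(state @ W_throw)          # frozen: these weights never change again

    # meta-net: the only trainable part; it decides how strongly to invoke the skill
    W_meta = rng.normal(size=(3, 1)) * 0.1

    def baseball_policy(state):
        gate = 1.0 / (1.0 + np.exp(-(state @ W_meta)))   # reliance on the "throw" module
        action = gate * throw_skill(state)               # composite behavior
        return action, gate

    state = rng.normal(size=(1, 3))
    action, gate = baseball_policy(state)
    print("reliance on the locked-in 'throw' skill:", round(gate.item(), 3))
    # because the sub-net is a separate, named module, "why did you do this?" can at least
    # be partially answered: report which skill modules were engaged and how strongly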

The tough question is how exactly to do this... I think you could do 1 and 2 by hand at the start. Skill 3 is where the next advancement should be. Making a neural net of neural nets, perhaps...

[–]Mnemonic 2 insightful - 1 fun -  (0 children)

The tough question is how exactly to do this... I think you could do 1 and 2 by hand at the start. Skill 3 is where the next advancement should be. Making a neural net of neural nets, perhaps...

There you might (in step 3) run into really complex problems. Throwing while running might be too different to assemble from throwing and running separately, so you'd get absurd behavior like running, dropping like a brick, standing up, and then throwing a perfect ball (kinda like children do).

Some interaction by humans within the learning process might not only speed things up but also prevent some comically disastrous solutions. But that won't be cheap, and then there are the "out of the box" solutions the system might come up with that never get reached because the humans interrupt it when it wants to do a flip {still the baseball example}.

Learning amongst humans isn't that well understood either, so hopefully these two problems will help solve each other.

And if it all works like we want it to, in a time far far away, some ***bag would go "Can we MKUltra this system?".