[–]Mnemonic 3 insightful - 1 fun (3 children)

I'd bulb it up more if I could. Great explanation. I didn't think of MRI photos or the like; I was thinking more of medical textbooks encoded in an advanced, complex if...then... system. It explains why we don't know the reason the AI thought that way (in both the good and the bad situations).

[–]magnora7[S] 3 insightful - 1 fun (2 children)

Thanks, I worked on this type of research for several years so I'm very familiar with how it works.

You're exactly right: the crazy thing with neural nets is that we cannot really know why they do anything. They can't be debugged or solved. They're just a black box that tweaks itself through feedback until you get it where you want it; when you're done training you lock down the connection strengths between the neurons, and then you have a black box that spits out accurate predictions.
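Here's a minimal sketch of that train-then-freeze loop, in plain NumPy on a toy XOR task. All the names and numbers are made up for illustration; it just shows the shape of the process, not anything from real self-driving or medical systems:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR from four examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initial connection strengths.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)                      # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))          # sigmoid output
    return h, p

# "Tweaks itself through feedback": gradient descent on squared error.
lr = 0.5
for step in range(5000):
    h, p = forward(X)
    err = p - y
    grad_out = err * p * (1 - p)                  # error signal at the output
    dW2, db2 = h.T @ grad_out, grad_out.sum(axis=0)
    grad_hidden = grad_out @ W2.T * (1 - h**2)    # error pushed back to the hidden layer
    dW1, db1 = X.T @ grad_hidden, grad_hidden.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1                # adjust the connection strengths
    W2 -= lr * dW2; b2 -= lr * db2

# "Lock down the connection strengths": stop updating, just use the frozen
# weights as a fixed input-to-output mapping.
_, predictions = forward(X)
print(np.round(predictions.ravel(), 3))           # usually close to [0, 1, 1, 0]
```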

That's also how self-driving cars work. More and more computer functions are being replaced by these trained neural nets. It's awesome but also weird, because sometimes the net can get into a state where it does something unexpected. Most self-driving car crashes are caused by this.

And I expect we'll see something similar in the medical field as machine learning is used more and more to predict patient outcomes. Overall accuracy will improve, but there will be fluke predictions that no one really understands... flukes that may not be detectable until it's too late.

Definitely a double-edged sword. We're creating technology that is "evolved" through feedback like a living being, rather than programmed. It's a wild time to be alive.

[–]Mnemonic 3 insightful - 1 fun (1 child)

> It's a wild time to be alive.

From unreadable, undocumented, uncommented spaghetti-code nightmares to "This works, but might someday destroy the earth if some unknown and very specific parameters are reached."

I never liked neural networks because of the maths (not a fan) and the limited reasoning you get about the outcome. Still, I could see a future where the "reasoning" is output in a human-comprehensible manner. Not for the black box itself (that might take a whole library), but for individual decisions. That would probably mean nodes (or clusters of them) get labeled based on the examples the network is learning from. Then you could ask questions like "Why not this?" and it could answer, 'I dismissed that diagnosis because of [detail], and in 68% of cases that detail had nothing to do with the whole area.' That way people could find out what went wrong, if something went wrong.
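Nothing like that labeled-node scheme exists off the shelf as far as I know, but here's a rough sketch of one crude thing people already do in that direction, occlusion sensitivity: blank out one input feature at a time and see how much the prediction moves. `model_predict`, `trained_model` and the feature names are hypothetical stand-ins:

```python
import numpy as np

def occlusion_attribution(model_predict, x, baseline=0.0):
    """For each input feature, how much does the prediction change when
    that feature is replaced by a neutral baseline value?"""
    base_score = model_predict(x)
    deltas = []
    for i in range(len(x)):
        occluded = x.copy()
        occluded[i] = baseline                    # blank out one feature
        deltas.append(base_score - model_predict(occluded))
    return np.array(deltas)

# Hypothetical usage with some frozen diagnostic model and one patient:
# features = np.array([blood_pressure, heart_rate, age])
# scores = occlusion_attribution(trained_model.predict_one, features)
# The feature with the biggest score is the closest this crude method gets to
# "I leaned on [detail] for this prediction."
```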

I can't find the study, but as an example: there was a system trained to spot penguins in photos of their natural habitat. Once it was done and working very well on images outside its training examples, it was shown random pictures and did fine, with a few misses, until they got to lion pictures, where it detected penguins almost every time. I was waiting for the explanation, but they didn't have one, just some hypotheses that it might be the mountains in the background or the lions' noses...

[–][deleted] 3 insightful - 1 fun (0 children)

Speaking as a programmer, neural networks will never be suitable for most purposes. For example, it is impossible to make a neural network which will give perfectly accurate results for simple integer addition or subtraction, merely one which calculates close approximations. However, for certain fields, such as image recognition, neural networks are the only way to do it well.
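A quick sketch of that point with scikit-learn's MLPRegressor on a toy setup of my own (the exact numbers will vary from run to run; the point is only that the net approximates, while integer addition is exact):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.integers(0, 100, size=(5000, 2)).astype(float)   # pairs of small integers
y = X.sum(axis=1)                                         # the exact sums as targets

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(X, y)                                             # learn addition from examples

print(net.predict([[3.0, 4.0]]))   # something near 7, but no guarantee it's exactly 7
print(3 + 4)                       # plain integer addition: exactly 7, every time
```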

Edit: "This works, but might someday destroy the earth if some unknown and very specific parameters are reached." won't ever be a thing, because we can use maths to determine whether certain outputs are even possible. Once frozen, a neural network merely takes inputs and produces corresponding outputs.
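One way to make that concrete: with the weights frozen, you can push interval bounds through each layer and get hard limits on every output the network could produce over a whole range of inputs (the idea behind interval bound propagation). The tiny ReLU network below uses made-up weights, purely for illustration:

```python
import numpy as np

def interval_linear(low, high, W, b):
    """Bounds on W @ x + b over all x with low <= x <= high (elementwise)."""
    center, radius = (low + high) / 2, (high - low) / 2
    mid = W @ center + b
    spread = np.abs(W) @ radius
    return mid - spread, mid + spread

def interval_relu(low, high):
    # ReLU is monotone, so the bounds pass straight through it.
    return np.maximum(low, 0), np.maximum(high, 0)

# Frozen, made-up weights for a tiny 2-layer network.
W1, b1 = np.array([[1.0, -2.0], [0.5, 1.0]]), np.array([0.1, -0.3])
W2, b2 = np.array([[1.5, -1.0]]), np.array([0.0])

# Consider every input whose features lie in [-1, 1].
low, high = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
low, high = interval_linear(low, high, W1, b1)
low, high = interval_relu(low, high)
low, high = interval_linear(low, high, W2, b2)
print(low, high)   # every output the frozen net can produce lies inside this interval
```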