Wednesday, April 27, 2016

Neural Network in simple words

"This both confuses what a neural network actual is, and makes some people question their merits because they expect them to act like brains, when they are really a fancy type of function.
The best way to understand a neural net is to move past the name. Don't think of it as a model of a brain... it's not... that was the intention in the 1960s, but it's 2011 and they are used all the time for machine learning and classification.
A neural network is actually just a mathematical function. You enter a vector of values, those values get multiplied by other values, and a value or vector of values is output. That is all it is.
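To make that concrete, here is a minimal sketch of a neural network as nothing more than a function: values go in, get multiplied by weights, pass through a squashing function, and a value comes out. The sizes and the sigmoid activation are illustrative assumptions, and the weights are random rather than trained.

```python
import numpy as np

# A tiny feed-forward network: 3 inputs -> 4 hidden units -> 1 output.
# The weights are random here; training would replace them with learned values.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # weights on edges from inputs to hidden units
W2 = rng.normal(size=(4, 1))   # weights on edges from hidden units to output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def network(x):
    """Just a mathematical function: multiply, squash, multiply, squash."""
    hidden = sigmoid(x @ W1)
    return sigmoid(hidden @ W2)

y = network(np.array([0.5, -1.2, 3.0]))   # a vector in, a value out
```

Calling `network` on any 3-element vector returns a single number between 0 and 1, exactly like evaluating any other equation.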
They are very useful in problem domains where there is no known function mapping the given features (or inputs) to their outputs (classification or regression). One example would be the weather - there are lots of features to the weather - type, temperature, movement, cloud cover, past events, etc. - but nobody can say exactly how to calculate what the weather will be 2 days from now. A neural network is a function structured in a way that makes it easy to alter its parameters to approximate weather prediction based on those features.
That's the thing... it's a function with a structure nicely suited to "learning". One would take the past five years of weather data - complete with the features of the weather and the condition of the weather 2 days in the future, for every day in those five years. The network weights (multiplying factors which reside in the edges) are generated randomly, and the data is run through. For each prediction, the NN will output values that are incorrect. Using a learning algorithm based on calculus, such as back-propagation, one can use the output error values to update all the weights in the network. After enough runs through the data, the error level will reach some lowest point (there is more to that, but I won't get into it here - most important is overfitting). The goal is to stop the learning algorithm when the error level is at its best point. The network is then fixed, and at this point it is just a mathematical function that maps input values to output values like any old equation. You feed new data in and trust that the output values are a good approximation.
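The training procedure described above can be sketched in a few lines: random starting weights, repeated passes over the data, and calculus-based updates driven by the output errors. This is a single-layer sketch under made-up toy data standing in for real weather records; a real network would have more layers and use full back-propagation.

```python
import numpy as np

# Toy "historical" data: 100 past days, 3 features each, and a 0/1 label
# standing in for the weather condition 2 days later. The generating rule
# (true_w) is invented purely so the sketch has something to learn.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
true_w = np.array([0.5, -1.0, 2.0])
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = rng.normal(size=3)        # weights generated randomly
w0 = w.copy()                 # kept only to compare error before/after
lr = 0.5                      # learning rate (an assumed hyperparameter)

for epoch in range(200):
    pred = sigmoid(X @ w)     # run the data through the network
    error = pred - y          # output error values for each prediction
    grad = X.T @ error / len(X)   # calculus: gradient of the loss w.r.t. w
    w = w - lr * grad         # update all the weights to reduce the error
```

After the loop, `w` is fixed and `sigmoid(x @ w)` is just a function you evaluate on new days, trusting the output as an approximation.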
To those who claim they have failed: they haven't. They are extremely useful in many domains. How do you think researchers figure out correlations between genes and diseases? NNs, as well as other learning algorithms, are used in bioinformatics and other areas, and they have been shown to produce extremely good results. NASA now uses them for space station routines, like predicting battery life. Some people will say that support vector machines, etc. are better... but there is no conclusive evidence of that; other algorithms are just newer.
It is really too bad people still claim that neural networks have failed because they are much simpler than the human brain --- neural networks are no longer used to model brains --- that was 50 years ago."

Source - http://programmers.stackexchange.com/questions/72093/what-is-a-neural-network-in-simple-words