It’s always the paperwork, like non-disclosure agreements, or the race to publish research papers and patents, behind every project of mine that restrains me from posting about it here. But the basics can always be shared and explained. Currently I am working on rebuilding and improving state-of-the-art convolutional neural networks so that they can be trained with fewer constraints and used for pattern recognition. First, let’s get to the basics of the topic; the project details may be shared if there’s some breakthrough in it.
These chapters are specially for a very dear friend of mine, a medical student, who was more than curious and excited the moment I mentioned the name neural networks. In this chapter I will get to the very basic understanding of what a neural network actually is 😉 . Then we will get to the computational and mathematical view of the algorithms, followed by network implementations in C++ for image processing in my next chapters. I struggled a lot to clearly understand the basic functionality of artificial neural networks; I read blogs and books, but there were no clear explanations, everywhere it was a mathematical approach presented in the most horrifying way possible. My understanding became much clearer after I went through this blog: http://www.ai-junkie.com/ann/evolved/nnt1.html Well, let’s get to the technicalities of the network 🙂
A lot of research has already been done in this field, from perceptrons to convolutional networks to very deep learning, and there’s a lot more potential for further development. An artificial neuron is an imitation of a biological neuron. Consider Fig. 1: we have dendrites to receive signals, a nucleus to process them, an axon for activation, and synapses to pass the information on to connected neurons. Forgive me for such an engineering-based explanation, but there’s a reason behind it.
Now consider Fig. 2,
(Fig 2: Image Courtesy: article.sapub.org)
Forget about the mathematics mentioned in the figure; can you find some resemblance between the two? Now focus on the engineering explanation mentioned earlier and observe Fig. 3.
Dendrites capture the information provided. This information has an importance specific to its nature. The importance is characterized by a weight and is multiplied with the particular input. Say we have an input x1 and its importance for this particular neuron is w1; the equivalent input then becomes x1*w1. The approach to deriving a suitable value for this importance (the weights) will become clear later, when I introduce the mathematical view. The nucleus is a summing function, a block that gathers all the equivalent inputs and sums them up. Say we have 4 inputs x1, x2, x3, x4 with their respective weights w1, w2, w3, w4, giving x1*w1+x2*w2+x3*w3+x4*w4 at the end of the nucleus. The axon now takes this sum and passes it through a transfer function (activation function) to get an output. The unit also has a sub-unit which acts as a threshold for the summed-up value coming from the nucleus. To simplify, see the example below:
x1 = 2, x2 = 3, x3 = 5, x4 = 1 and w1 = 0.1, w2 = 0.3, w3 = 0.9, w4 = 0.5, so we get the output at the nucleus as 2*0.1 + 3*0.3 + 5*0.9 + 1*0.5 = 6.1, and the designed transfer function is such that,
if the sum > threshold, the output is 1; otherwise the output is 0. The threshold of the network was set at 7, so the output in our case would be 0, since 6.1 < 7. Many such transfer functions exist, and many algorithms have been developed to set the threshold value (these will be discussed in later chapters).
Now, this neuron is the fundamental unit of what is called a neural network. A neural network is a multi-dimensional array of sets of neurons connected in a well-defined fashion. Have a look at Fig. 4, a single-layer two-dimensional network.
I like to implement networks where the input layer consists of just the information and does no further processing. The output layer here, on the other hand, is a set of four neurons. Each neuron takes four inputs, processes them in the way mentioned above, and gives an output. Now imagine a neural network with multiple layers, each functioning in its own way, spread over every known dimension… get the complexity? But why these nets, and why struggle to obtain this complexity? The reason is that these networks help solve a lot of image processing and computer vision problems: image enhancement and optimization, object recognition, pattern understanding, robotic navigation, medical image segmentation, etc. Neural networks for object recognition are trained; technically, the weights are trained based on an already known image dataset of the object. Say I have 50 images of the numeral “0” and 50 images of the numeral “1”. The output layer has two neurons, and I want the first neuron to give a value greater than the second when the input image is a “0”, and the other way round when it is a “1”. These training methodologies are complex and need to be studied and implemented carefully. Just see the images below; I trained a small neural network for this application and got results like these (click on the images to enlarge them):
The input is “0” and, as you can observe in the terminal, it says “Trained output for image 0.957776 0.048380”; the first output is way greater than the second.
Now the input is “1” and, as you can observe in the terminal, it says “Trained output for image 0.124002 0.851732”; the first output is less than the second.
This is a very small example of the numerous and humongous tasks a good neural network can perform.
The next chapter will cover the neuron in more detail, its attributes, and the history of neural nets, followed by the state-of-the-art networks.
Thank you 🙂