Op-Economica, 14-2-2016 — Neural network in plain English.

**So, what exactly is a Neural Network?**

A neural network is man's crude attempt to simulate the brain electronically. So, to understand how a neural net works, we must first look at how the old grey matter does its business…

Our brains are made up of about 100 billion tiny units called *neurons*. Each neuron is connected to thousands of other neurons and communicates with them via electro-chemical signals. Signals coming into the neuron are received via junctions called *synapses*, which in turn are located at the ends of branches of the neuron cell called *dendrites*. The neuron continuously receives signals from these inputs and then performs a little bit of magic. What the neuron does (and this is oversimplified, I might add) is sum up its inputs in some way and then, if the end result is greater than some threshold value, the neuron fires. It generates a voltage and outputs a signal along something called an *axon*. Don't worry too much about remembering all these new words, as we won't be using many of them from this moment onwards; just have a good look at the illustration and try to picture what is happening within this simple little cell.

Neural networks are made up of many artificial neurons. An artificial neuron is simply an electronically modelled biological neuron. How many neurons are used depends on the task at hand. It could be as few as three or as many as several thousand. One optimistic researcher has even hard-wired 2 million neurons together in the hope he can come up with something as intelligent as a cat, although most people in the AI community doubt he will be successful (*Update: he wasn't!*). There are many different ways of connecting artificial neurons together to create a neural network, but I shall be concentrating on the most common, which is called a *feedforward* network. So, I guess you are asking yourself, *what does an artificial neuron look like?* Well, here you go:

Each input into the neuron has its own weight associated with it, illustrated by the red circle. A weight is simply a floating point number, and it's these we adjust when we eventually come to train the network. The weights in most neural nets can be both negative and positive, therefore providing excitatory or inhibitory influences to each input. As each input enters the nucleus (blue circle) it's multiplied by its weight. The nucleus then sums all these new input values, which gives us the *activation* (again a floating point number, which can be negative or positive). If the activation is greater than a threshold value – let's use the number 1 as an example – the neuron outputs a signal. If the activation is less than 1 the neuron outputs zero. This is typically called a *step* function (take a peek at the following diagram and have a guess why).
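The artificial neuron just described can be sketched in a few lines of code. This is a minimal illustration of the idea (the function name and the example weights are my own, not anything from the minesweeper project):

```python
def step_neuron(inputs, weights, threshold=1.0):
    """Multiply each input by its weight, sum them up to get the
    activation, then fire (output 1) only if it beats the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# Two inputs, both with positive (excitatory) weights:
print(step_neuron([1.0, 1.0], [0.7, 0.6]))  # activation 1.3 > 1, so it fires: 1
print(step_neuron([1.0, 1.0], [0.3, 0.4]))  # activation 0.7 < 1, so it stays quiet: 0
```

Try flipping one weight negative to see an inhibitory input drag the activation back below the threshold.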

**Now for some maths**

I now have to introduce you to some equations. I’m going to try to keep the maths down to an absolute minimum but it will be useful for you to learn some notation. I’ll feed you the maths little by little and introduce new concepts when we get to the relevant sections. This way I hope your mind can absorb all the ideas a little more comfortably and you’ll be able to see how the maths are put to work at each stage in the development of a neural net.

A neuron can have any number of inputs from one to n, where n is the total number of inputs. The inputs may therefore be represented as x₁, x₂, x₃ … xₙ, and the corresponding weights as w₁, w₂, w₃ … wₙ. Now, the summation of the weights multiplied by the inputs that we talked about above can be written as x₁w₁ + x₂w₂ + x₃w₃ + … + xₙwₙ, which, I hope you remember, is the activation value. So…

*a = x₁w₁ + x₂w₂ + x₃w₃ + … + xₙwₙ*

In short: *a = Σ xᵢwᵢ*, the sum taken over all n inputs.

Now remember that if the activation is greater than the threshold we output a 1, and if the activation is less than the threshold we output a 0.

Let’s illustrate everything so far with a diagram.

…**but how do you actually use an artificial neuron?**

Well, we have to link several of these neurons up in some way. One way of doing this is by organising the neurons into a design called a *feedforward network*. It gets its name from the way the neurons in each layer feed their output forward to the next layer until we get the final output from the neural network. This is what a very simple feedforward network looks like:

Each input is sent to every neuron in the hidden layer, and each hidden-layer neuron's output is connected to every neuron in the next layer. There can be any number of hidden layers within a feedforward network, but one is usually enough for most problems you will tackle. Also, the number of neurons I've chosen for the above diagram was completely arbitrary. There can be any number of neurons in each layer; it all depends on the problem. By now you may be feeling a little dazed by all this information, so I think the best thing I can do at this point is give you a real-world example of how a neural net can be used, in the hope that I can get your very own brain's neurons firing!
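The layer-by-layer flow described above can be sketched directly: every neuron in a layer sees the same inputs, and the layer's outputs become the next layer's inputs. This is just an illustrative skeleton (the layer sizes and random weights are arbitrary, as in the diagram):

```python
import random

def step(activation, threshold=1.0):
    """The step function from earlier: fire only above the threshold."""
    return 1.0 if activation > threshold else 0.0

def layer_output(inputs, weight_matrix):
    """Feed the same inputs to every neuron in a layer.
    weight_matrix holds one list of weights per neuron."""
    return [step(sum(x * w for x, w in zip(inputs, weights)))
            for weights in weight_matrix]

def feedforward(inputs, layers):
    """Pass each layer's outputs forward as the next layer's inputs."""
    signal = inputs
    for weight_matrix in layers:
        signal = layer_output(signal, weight_matrix)
    return signal

# Arbitrary network: 2 inputs -> 3 hidden neurons -> 1 output neuron
random.seed(0)
hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
output = [[random.uniform(-1, 1) for _ in range(3)]]
print(feedforward([0.5, 0.9], [hidden, output]))
```

With untrained random weights the single output is meaningless, of course — that is exactly what training will fix.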

You probably know already that a popular use for neural nets is character recognition. So let’s design a neural network that will detect the number ‘4’. Given a panel made up of a grid of lights which can be either on or off, we want our neural net to let us know whenever it thinks it sees the character ‘4’. The panel is eight cells square and looks like this:

We would like to design a neural net that will accept the state of the panel as an input and will output either a 1 or a 0: a 1 to indicate that it thinks the character ‘4’ is being displayed, and a 0 if it thinks it's not. Therefore the neural net will have 64 inputs, each one representing a particular cell in the panel, and a hidden layer consisting of a number of neurons (more on this later), all feeding their output into just one neuron in the output layer. I hope you can picture this in your head, because the thought of drawing all those little circles and lines for you is not a happy one <smile>.

Once the neural network has been created it needs to be trained. One way of doing this is to initialize the neural net with random weights and then feed it a series of inputs which represent, in this example, the different panel configurations. For each configuration we check to see what its output is and adjust the weights accordingly, so that whenever it sees something looking like a number 4 it outputs a 1, and for everything else it outputs a zero. This type of training is called *supervised learning* and the data we feed it is called a *training set*. There are many different ways of adjusting the weights; the most common for this type of problem is called *backpropagation*. I will not be going into backprop in this tutorial, as I will be showing you a completely different way of training neural nets which requires no supervision whatsoever (and hardly any maths – woohoo!).
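To make the "adjust the weights accordingly" step concrete, here is the simplest supervised update there is — the classic perceptron rule for a single step neuron, nudging each weight in proportion to the error. This is purely illustrative (it is neither backprop nor the evolutionary training used later in this tutorial), and the toy training set learns a logical AND of two on/off cells rather than a 64-cell ‘4’:

```python
import random

def train(samples, n_inputs, learning_rate=0.1, epochs=20):
    """Perceptron rule: whenever the output is wrong, shift each weight
    (and the bias) a little in the direction that reduces the error."""
    random.seed(1)
    weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            activation = sum(x * w for x, w in zip(inputs, weights)) + bias
            output = 1 if activation > 0 else 0
            error = target - output  # +1, 0 or -1
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

# Toy training set: output 1 only when both cells are lit
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(samples, n_inputs=2)
```

The real character-recognition net would work the same way in spirit, just with 64 inputs and many more training examples.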

If you think about it, you could increase the outputs of this neural net to 10. This way the network can be trained to recognize all the digits 0 through to 9. Increase them further and it could be trained to recognize the alphabet too!

Are you starting to get a feel for neural nets now? I hope so. But even if you’re not all that will hopefully change in a moment when you start to see some code.

**So, what’s our project going to be?**

We are going to evolve virtual minesweepers to find and collect land-mines scattered about a very simple 2D world. This is a screenshot of the application:

As you can see, it’s a very simple display. The minesweepers are the things that look like tanks and the land-mines are represented by the green dots. Whenever a minesweeper finds a mine, the mine is removed and another is randomly positioned somewhere else in the world, thereby ensuring there is always a constant number of land-mines on display. The minesweepers drawn in red are the best-performing minesweepers the program has evolved so far.

How is a neural net going to control the movement of a minesweeper? Well, just like the control of a real tank, the minesweepers are controlled by adjusting the speed of a left track and a right track. By applying various forces to the left and right side of a minesweeper we can give it a full range of movement. So the network requires two outputs, one to designate the speed of the left track, and the other to designate the speed of the right track.
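One plausible way to turn those two track-speed outputs into movement looks like this — the difference between the tracks turns the minesweeper, and their sum drives it forward. This is a sketch of the idea only; the function and the exact physics are my own simplification, not necessarily what the demo application does:

```python
import math

def update_sweeper(x, y, heading, left_track, right_track, dt=1.0):
    """Tank-style (differential) steering: unequal track speeds rotate
    the minesweeper, equal speeds drive it straight ahead."""
    rotation = left_track - right_track
    speed = left_track + right_track
    heading += rotation * dt
    x += math.cos(heading) * speed * dt
    y += math.sin(heading) * speed * dt
    return x, y, heading

# Equal track speeds: straight ahead along the current heading
x, y, h = update_sweeper(0.0, 0.0, 0.0, 0.5, 0.5)
print(x, y, h)  # -> 1.0 0.0 0.0
```

Push one track harder than the other and the heading changes, which is all the network needs to steer towards a mine.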

The more thoughtful of you may be wondering how on earth we can apply varying forces when all we’ve discussed so far are binary networks outputting 1’s and 0’s. The secret is that instead of using a simple step (threshold) activation function, we use one which softens the output of each neuron to produce a symmetrical curve. There are several functions which will do this, and we are going to use one called the *sigmoid* function. (Sigmoid, or sigmoidal, is just a posh way of saying something is S-shaped.)

*output = 1 / (1 + e^(−a/p))*

This equation may look intimidating to some of you but it’s very simple really. The *e* is a mathematical constant which approximates to 2.7183, the *a* is the activation into the neuron and *p* is a number which controls the shape of the curve. *p* is usually set to 1.0.
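In code the sigmoid is a one-liner. A quick sketch, using the constants just described:

```python
import math

def sigmoid(activation, p=1.0):
    """The S-shaped squashing function: 1 / (1 + e^(-a/p)).
    p controls the shape of the curve; p = 1.0 is the usual choice."""
    return 1.0 / (1.0 + math.exp(-activation / p))

print(sigmoid(0.0))   # zero activation sits exactly at the centre: 0.5
print(sigmoid(2.0))   # positive activation -> above 0.5
print(sigmoid(-2.0))  # negative activation -> below 0.5
```

Note that shrinking *p* steepens the curve towards the step function we started with, which matches the description of the graph below.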

This function is terrific and comes in handy for all sorts of different uses because it produces an output like this:

The lower the value of *p* the more the curve begins to look like a step function. Also please note this curve is always centred around 0.5. Negative activation values produce a result less than 0.5, positive activation values produce a result greater than 0.5.

Therefore, to obtain a continuously graded output between 0 and 1 from our neurons, we just have to put the sum of all the inputs × weights through the sigmoid function and Bob’s your uncle! So that’s our outputs dealt with; what about the inputs?

I have chosen to have four inputs. Two of them represent a vector pointing to the closest land-mine, and the other two represent the direction the minesweeper is pointing. I call this second vector the minesweeper’s look-at vector. These four inputs give the minesweeper’s brain – its neural network – everything it needs to know to figure out how to orient itself towards the mines.
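Computing those four inputs might look something like the sketch below. The helper names are hypothetical (they are not from the demo’s source), but the geometry is exactly what the text describes: a normalised vector to the nearest mine, plus a unit look-at vector for the heading:

```python
import math

def closest_mine_vector(sweeper_pos, mines):
    """Normalised vector from the minesweeper to its nearest mine."""
    mx, my = min(mines, key=lambda m: math.dist(sweeper_pos, m))
    dx, dy = mx - sweeper_pos[0], my - sweeper_pos[1]
    length = math.hypot(dx, dy)
    return (dx / length, dy / length)

def look_at_vector(heading):
    """Unit vector in the direction the minesweeper is facing."""
    return (math.cos(heading), math.sin(heading))

# The network's four inputs: (to_mine_x, to_mine_y, look_x, look_y)
to_mine = closest_mine_vector((0.0, 0.0), [(3.0, 4.0), (10.0, 0.0)])
look = look_at_vector(0.0)
print(to_mine + look)  # -> (0.6, 0.8, 1.0, 0.0)
```

Normalising both vectors keeps the inputs in a small, consistent range, which makes life easier for the network regardless of how far away the mine actually is.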

Now that we have defined our inputs and outputs, what about the hidden layer(s)? How do we decide how many layers we should have, and how many neurons in each? Well, this is a matter of guesswork and something you will develop a ‘feel’ for. There is no known rule of thumb, although plenty of researchers have tried to come up with one. By default the simulation uses one hidden layer containing six neurons, although please spend some time experimenting with different numbers to see what effect they may have.


Posted on February 14, 2016 by Già Bản