I want you all to consider something for a second: what role do patterns play in your life? The question seems deceptively simple, and I promise you it is. Without your brain’s ability to recognize patterns you would not be the thinking, breathing, bipedal person you are today.

So this week let’s teach computers how to do it.

An Artificial Neural Network is nothing more than a simulation of an actual biological network, although several thousand times smaller. Let’s imagine the following situation:

Here you have a single neuron with two inputs (folks that did well in bio can think of these as dendrites) and one output (again, an axon). What is in the middle right now is a little bit of a black box, but imagine it takes some *n* number of inputs and delivers a result to some *m* other neurons by some equation we haven’t defined nor particularly care about right now. In computer science we talk about this output as either a *0* or a *1*, or in biology the creation (or lack thereof) of an action potential. Additionally, some neurons’ inputs can be worth more than others, and their output can be weighted depending on their reliability. Now let’s collect a couple layers of these things:
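Before we stack layers, that single neuron can be sketched in a few lines of Python. Everything here (the function name, the particular weights and threshold) is made up for illustration; the idea is just a weighted sum that either clears a firing threshold or doesn’t:

```python
# A minimal sketch of one artificial neuron: weight each input, sum them,
# and "fire" (output 1) only if the sum clears a threshold.
def neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two inputs, like the two "dendrites" above; the second input counts more.
print(neuron([1, 0], [0.4, 0.9], 0.5))  # 0 -- weighted sum 0.4 is below threshold
print(neuron([1, 1], [0.4, 0.9], 0.5))  # 1 -- weighted sum 1.3 clears it
```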

So here we have created a slightly more complex network (a collection of nodes/neurons). Here we have three neurons taking two inputs each, delivering them to four *hidden* (in the middle) neurons, which in turn collect into three neurons, which collapse the signal and give it to one output node. In short, we give it six inputs, and it gives us one output.
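That six-inputs-in, one-output-out flow can be sketched as a forward pass, one layer at a time. This is a hypothetical implementation, not anything from the post: the weights are random placeholders, and the squashing function (a sigmoid here) is one common choice for turning a weighted sum into an output between 0 and 1:

```python
import math
import random

def sigmoid(z):
    # Squash a weighted sum into the range (0, 1).
    return 1 / (1 + math.exp(-z))

def layer(inputs, weights):
    # Each row of `weights` belongs to one neuron in the layer.
    return [sigmoid(sum(x * w for x, w in zip(inputs, row))) for row in weights]

def forward(inputs, all_weights):
    # Feed the outputs of each layer into the next.
    for weights in all_weights:
        inputs = layer(inputs, weights)
    return inputs

random.seed(0)
network = [
    [[random.uniform(-1, 1) for _ in range(6)] for _ in range(4)],  # 6 inputs -> 4 hidden
    [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)],  # 4 hidden -> 3
    [[random.uniform(-1, 1) for _ in range(3)] for _ in range(1)],  # 3 -> 1 output
]
output = forward([1, 0, 1, 1, 0, 1], network)  # six inputs in, one number out
```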

So let’s break it down with an example. Imagine we wanted to build a network that predicts your score on a test based on:

- Number of Hours studied
- Number of hours slept
- Last test score
- Second to last test score
- Third to last test score
- Fourth to last test score

We could of course do this with an equation: run a linear or quadratic regression and get an equation in terms of x, y, z, w, and so on. However, this requires committing to the assumption that the data is in fact linear or quadratic, which in real life it often isn’t. This is exactly the kind of situation where a neural network can be helpful.

So to start, let’s randomize the weights of each node (i.e. the threshold at which its *n* inputs make the neuron ‘fire’). We can give it an input set, for example {2, 12, 94, 92, 91, 45}, and have it give us an output based on our random weights. This isn’t particularly useful in itself, but if we have enough data to know what the output should be for a given input, we can measure how incorrect the random output is and correct toward the right answer. We can give the network inputs and correct the output by shifting the weights node by node until we have an accurate prediction.

This is to say, with enough data we can actually train the computer to *guess*, based on {x1, x2 … xn}, an almost exact approximation of what our output should be. As we increase the amount of data we train the network with, this approximation gets increasingly precise. This process of input, output, and correction is called backpropagation.
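The correction step above can be sketched for the simplest possible case: a single linear neuron (full backpropagation applies the same nudge layer by layer through the whole network). The data, learning rate, and epoch count here are all made up for illustration; each sample is [hours studied, hours slept] and the targets were generated from the rule 10 points per hour studied plus 5 per hour slept:

```python
# Toy "predict the test score" data (made up for this sketch).
samples = [[2, 8], [4, 7], [6, 6], [8, 5]]
targets = [60, 75, 90, 105]

def train(samples, targets, lr=0.001, epochs=1000):
    weights = [0.0] * len(samples[0])  # start with random-ish (here zero) weights
    for _ in range(epochs):
        for x, target in zip(samples, targets):
            prediction = sum(xi * wi for xi, wi in zip(x, weights))
            error = target - prediction
            # Shift each weight a little in the direction that shrinks the error.
            weights = [wi + lr * error * xi for xi, wi in zip(x, weights)]
    return weights

weights = train(samples, targets)
# Once trained, the network can predict a score for an input it never saw,
# e.g. 5 hours studied and 7 hours slept:
prediction = sum(xi * wi for xi, wi in zip([5, 7], weights))
```

Run it and the learned weights settle near 10 and 5, the rule the toy data was generated from, so the prediction for a new student comes out sensible.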

Now let’s imagine that instead of a handful of neurons, we simulate and train hundreds of neurons with thousands of inputs and connections. Assuming we have enough data to train this network, we can get it to make informed guesses at problems we could not possibly dream of deriving equations for.

This is an incredible power to possess in nearly every application. In the following weeks we will be diving into how we used a 100-node neural network to improve the lives of Parkinson’s patients, but in the meantime here are some of the most amazing applications:

- Here’s a neural network that can show you how computers ‘see’
- Here’s a neural network that can turn a photo into a Picasso painting
- Here’s a neural network that can outperform human traders
- Here’s a neural network that can predict Notre Dame football game scores
- Here’s a neural network that can colorize Black and White photos

rickyyu1999: Interesting theory. It was cool to see that you broke down the term “thinking” into different inputs and outputs, but isn’t thinking unique to humans because we don’t necessarily need data to formulate our thoughts from?

willmanidis (post author): There are definitely more qualified people to answer this – but the opinion of most people studying cognition right now is that thought is just data processing. Nothing your brain does would happen unless that particular set of stimuli triggered it. There is no ‘you’ without that input data.