Making Computers Think | Will


I want you all to consider something for a second: what role do patterns play in your life? The question seems deceptively simple, and I promise you it is. Without your brain’s ability to recognize patterns, you would not be the thinking, breathing, bipedal person you are today.

So this week let’s teach computers how to do it.


An Artificial Neural Network is nothing more than a simulation of an actual biological network, albeit several thousand times smaller. Let’s imagine the following situation:

Slice 1.png

Here you have a single neuron with two inputs (folks who did well in bio can think of these as dendrites) and one output (again, an axon). What’s in the middle is a bit of a black box right now, but imagine it takes some n inputs and delivers an output to some other neurons by some equation we haven’t defined and don’t particularly care about yet. In computer science we talk about this output as either a 0 or a 1; in biology, as the creation (or lack thereof) of an action potential. Additionally, some neurons’ inputs can be worth more than others, and their output can be weighted depending on their reliability. Now let’s collect a couple of layers of these things:

Slice 2.png

So here we have created a slightly more complex network (a collection of nodes/neurons). We have three neurons taking two inputs each, delivering them to four hidden (middle) neurons, which in turn feed into three neurons that collapse the signal and hand it to a single output node. In short, we give it six inputs, and it gives us one output.
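If it helps to see that picture as code, here’s a rough Python sketch of the same idea (my own toy code, not from any library); I’ve flattened the first layer into six plain inputs feeding the four hidden neurons, and the squashing function stands in for the fire/don’t-fire black box:

    import math
    import random

    def neuron(inputs, weights):
        # Weight each input by how much we trust it, sum, then squash the
        # result into (0, 1), a smooth stand-in for fire / don't fire.
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 / (1 + math.exp(-total))

    def layer(inputs, weight_rows):
        # One row of weights per neuron; every neuron sees the same inputs.
        return [neuron(inputs, row) for row in weight_rows]

    def forward(inputs, network):
        signal = inputs
        for weight_rows in network:   # hand the signal from layer to layer
            signal = layer(signal, weight_rows)
        return signal[0]              # the single output node

    # Six inputs -> four hidden neurons -> three neurons -> one output node,
    # wired up with random weights for now.
    random.seed(0)
    sizes = [6, 4, 3, 1]
    network = [[[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
               for n_in, n_out in zip(sizes, sizes[1:])]
    print(forward([0.5, 0.8, 0.9, 0.9, 0.9, 0.4], network))

With random weights the output is meaningless, of course; the interesting part is how we choose those weights, which is what the rest of this post is about.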

So let’s break it down with an example. Imagine we wanted to build a network that predicts your score on a test based on:

  1. Number of hours studied
  2. Number of hours slept
  3. Last test score
  4. Second-to-last test score
  5. Third-to-last test score
  6. Fourth-to-last test score

We could of course do this with a linear or quadratic regression and get an equation in terms of x, y, z, w, and so on. However, that requires committing to the assumption that the data really is linear or quadratic, which in real life it often isn’t. This is exactly the kind of situation where a neural network can be helpful.
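For contrast, here’s what the “commit to a line” option looks like for just one of those inputs (hours studied vs. score), with completely made-up numbers; ordinary least squares forces the answer into the form score = a × hours + b, whether or not the real relationship is a line:

    # Made-up data: hours studied and the resulting test score.
    hours = [1, 2, 3, 5, 8]
    scores = [60, 68, 74, 83, 95]

    # Ordinary least squares for a single variable: score = a * hours + b.
    n = len(hours)
    mean_x = sum(hours) / n
    mean_y = sum(scores) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, scores)) \
        / sum((x - mean_x) ** 2 for x in hours)
    b = mean_y - a * mean_x
    print(f"score is roughly {a:.1f} * hours + {b:.1f}")

A neural network skips that commitment: it learns whatever shape the data actually has, provided we can train it.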

So to start, let’s randomize the weights of each node (i.e., the threshold any n inputs must cross to make the neuron ‘fire’). We can give the network an input set, for example {2, 12, 94, 92, 91, 45}, and have it produce an output based on those random weights. This isn’t particularly useful in itself, but if we have enough data to know what the output should be for a given input, we can measure how incorrect the random output is and correct it to be closer. We give the network inputs and correct the output by shifting the weights node by node until we have an accurate prediction.

That is to say, with enough data we can actually train the computer to guess, based on {x1, x2, …, xn}, an almost exact approximation of what our output should be. As we increase the amount of data we train the network with, the approximation becomes increasingly precise. This process of input, output, and correction is called backpropagation.
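If you’d like to see that loop spelled out, here’s a toy version in Python with NumPy for a tiny 6-4-1 network; the data rows (hours studied, hours slept, and the last four scores) and the targets are invented, and the learning rate and layer sizes are arbitrary choices of mine, so treat it as a sketch of the idea rather than a rigorous implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[2, 12, 94, 92, 91, 45],     # our example input set
                  [8,  7, 70, 75, 72, 80],
                  [5,  8, 85, 88, 90, 86]], dtype=float) / 100.0
    y = np.array([[0.90], [0.78], [0.88]])     # made-up "next score" / 100

    sigmoid = lambda z: 1 / (1 + np.exp(-z))
    W1 = rng.normal(size=(6, 4))               # random starting weights
    W2 = rng.normal(size=(4, 1))

    for step in range(5000):
        hidden = sigmoid(X @ W1)               # forward pass, layer by layer
        out = sigmoid(hidden @ W2)
        error = out - y                        # how wrong the guess is
        # Backpropagation: push the error back through each layer and nudge
        # the weights so the next guess is a little less wrong.
        d_out = error * out * (1 - out)
        d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
        W2 -= 0.5 * hidden.T @ d_out
        W1 -= 0.5 * X.T @ d_hidden

    print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # now close to y

After a few thousand passes the network’s guesses should land very close to the targets, even though we never wrote down an equation relating study hours and sleep to test scores.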


Now let’s imagine that instead of the ten or so neurons above, we simulate and train hundreds of neurons with thousands of inputs and connections. Assuming we have enough data to train this network, we can get it to make informed guesses at problems we could not possibly dream of deriving equations for.

This is an incredible power to possess in nearly every application. In the following weeks we will be diving into how we used a 100-node neural network to improve the lives of Parkinson’s patients, but in the meantime here are some of the most amazing applications:

Here’s a neural network that can show you how computers ‘see’

Here’s a neural network that can turn a photo into a Picasso painting

Here’s a neural network that can outperform human traders

Here’s a neural network that can predict Notre Dame football game scores

Here’s a neural network that can colorize Black and White photos

Here’s a neural network that can add sound to silent movies

2 thoughts on “Making Computers Think | Will”

  1. rickyyu1999

    Interesting theory. It was cool to see that you broke down the term “thinking” into different inputs and outputs, but isn’t thinking unique to humans because we don’t necessarily need data to formulate our thoughts from?

    1. willmanidis Post author

      There are definitely more qualified people to answer this, but the opinion of most people studying cognition right now is that thought is just data processing. Your brain wouldn’t be doing anything it does unless that particular set of stimuli had triggered it. There is no ‘you’ without that input data.

