I have spent a large part of the last few months haunting the same couple of restaurants in blind meeting after blind meeting. It's part of playing the game: shaking hands and networking occasionally pays off. I often find myself explaining what I do for TrackYourDisease, and after months of hushed explanations over the same Baja tacos, I have given up on any truly satisfactory answer besides:
“I make computers think for me”
Like most things discussed over bland ethnic food, the answer ignores the complexity and meaning that, at least, I like to assume my work has. However, it isn't wrong.
For months we have been trying to solve the same problem: how do we make a computer do the same thing a human doctor could, if not better? This isn't to say that our model is better than a trained physician (which it definitely isn't), or that we assume our platform can replace regular doctor's visits (which it also cannot do); instead, we just hope to fill in the cracks between visits.
Our particular brand of model relies heavily on little statistical contraptions called artificial neural networks, which essentially grant computers the ability to perform some elementary thought through a process mirroring natural selection.
In the right application these ANNs can be frame-shattering; outside of one, however, they can be obtuse and confusing. This week we will dive into an actual problem and use an ANN to solve it.
Tetris with Cyborgs
Tetris, in some form or another, has been adapted to play on almost every computing device man has ever built. It is a game that, not for lack of trying, I am admittedly horrible at. It also happens to be a game that neural networks can solve relatively easily.
Let's assume we have a pretty generic Tetris board of, say, ten by sixteen squares, where pieces drop from the top, completed lines clear from the board, and stacking to the top is a loss.
In order to apply a neural network to the board, we need a way to feed the board into the network. To do this, let's place an input node in each of these squares, where the node returns a value indicating whether the square is occupied and, if it is, how high it sits. If there is a hole (meaning an empty square surrounded by filled squares), we will assign it some arbitrary value corresponding to "not good."
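The encoding above can be sketched in a few lines of Python. This is a minimal illustration, not the production code: the particular values (normalized heights, a `-1.0` "not good" marker) are assumptions, and it treats a hole as any empty square underneath a filled one in its column, a common simplification of the definition above.

```python
# A sketch of the board-to-input encoding described above. The exact
# values, including the "not good" hole marker, are assumptions.
BOARD_W, BOARD_H = 10, 16
HOLE_VALUE = -1.0  # arbitrary "not good" value for holes

def encode_board(board):
    """Flatten a 16x10 grid of 0/1 cells into one input value per square.

    board[row][col] is 1 if the square is occupied, 0 if empty.
    Row 0 is the top of the board.
    """
    inputs = []
    for col in range(BOARD_W):
        covered = False  # True once we pass a filled square in this column
        for row in range(BOARD_H):
            if board[row][col]:
                covered = True
                # height of this square, measured up from the board's floor
                inputs.append((BOARD_H - row) / BOARD_H)
            elif covered:
                inputs.append(HOLE_VALUE)  # empty square under a filled one
            else:
                inputs.append(0.0)
    return inputs
```

An empty board encodes to 160 zeros; a filled square encodes to its height, and anything trapped beneath it gets the hole value.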
If you remember from the introduction to ANNs a few weeks ago, we have now built the input layer. These are our sensory neurons, and their only function is to take inputs. Next we need some deep learning, so let's set up two deep learning layers. Normally these would be depicted in 3D, but I think it is much easier if we visualize them as projected onto a 2D plane.
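As a concrete sketch of that structure: 160 input nodes (one per square) feeding two deep layers and a single output that scores a board state. The layer sizes and the `tanh` activation are my assumptions here, purely for illustration.

```python
import math
import random

# A minimal forward pass for the network sketched above: 160 inputs,
# two hidden ("deep") layers, and one output scoring a board state.
# Layer sizes and the tanh activation are illustrative assumptions.
LAYER_SIZES = [160, 32, 32, 1]

def random_weights(sizes=LAYER_SIZES):
    # One weight per edge between consecutive layers, randomized to start.
    return [[[random.uniform(-1, 1) for _ in range(n_in)]
             for _ in range(n_out)]
            for n_in, n_out in zip(sizes, sizes[1:])]

def forward(inputs, weights):
    activations = inputs
    for layer in weights:
        activations = [math.tanh(sum(w * a for w, a in zip(node_w, activations)))
                       for node_w in layer]
    return activations[0]  # a single score for this board state
```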
So here is our network. Remember that each node is connected by edges to every node in the adjacent layers, and each edge currently has a randomized weight. Set this aside for a second; we are going to come back to it.
We need to define a fitness function in order to train our network. Just for the sake of simplicity let’s define our algorithm as:
where c1, c2, and c3 are nonzero real-number weights, with c1 positive and c2 and c3 negative
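The formula itself isn't reproduced here, so the following is just one plausible reading of the constraints above: a positive term rewarding cleared lines (c1) and negative terms penalizing stack height and holes (c2, c3). Both the choice of terms and the coefficient values are assumptions for illustration.

```python
# One plausible fitness function matching the stated constraints:
# c1 > 0 rewards cleared lines; c2, c3 < 0 penalize height and holes.
# The terms and coefficient values are assumptions, not the author's.
C1, C2, C3 = 1.0, -0.5, -0.75

def fitness(lines_cleared, aggregate_height, holes):
    return C1 * lines_cleared + C2 * aggregate_height + C3 * holes
```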
Now that we have our fitness algorithm, we can go back to our network. Let's begin by assigning random weights to each edge, letting the network play, say, ten games, and then figuring out an average score for those ten games. Next, let's shift some of the weights around, run the same test, and observe how this changes our average score.
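The loop described above can be sketched as simple hill climbing: evaluate the current weights over ten games, randomly shift some of them, and keep the shift if the average score improves. The `play_game` function is a stand-in for a real Tetris simulation and, along with the mutation rates, is an assumption here.

```python
import random

# A sketch of the training loop described above. play_game is a
# hypothetical stand-in for a full Tetris simulation that returns
# one game's score for a given set of weights.

def average_score(weights, play_game, games=10):
    return sum(play_game(weights) for _ in range(games)) / games

def mutate(weights, rate=0.05, scale=0.1):
    # Randomly shift a small fraction of the edge weights.
    return [[[w + random.gauss(0, scale) if random.random() < rate else w
              for w in node]
             for node in layer]
            for layer in weights]

def train(weights, play_game, steps=1000):
    best = average_score(weights, play_game)
    for _ in range(steps):
        candidate = mutate(weights)
        score = average_score(candidate, play_game)
        if score > best:  # keep the shift only if it helped
            weights, best = candidate, score
    return weights, best
```

In practice, `steps` would be the "couple hundred thousand" iterations mentioned below, and the evaluation games dominate the training time.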
We have begun to train our network; all that is left is to perform these steps a few hundred thousand times. Training a network of complexity similar to the one defined above can take a few days on our infrastructure. The network of the complexity we are actually using for TrackYourDisease has been training for weeks now.
This concludes Part 1 of the Tetris exercise. Next week, once the training is done, we will dive into how well our algorithm did and consider how it informs the work happening at TrackYourDisease.
Until then, here are some particularly interesting implementations of ANN-played games: