I am a horrendous cook. Despite classes, Anthony Bourdain’s entire body of work, and repeated attempts at cooking family dinner, I am still banned from cooking at home. However, if Julia Child taught me anything, it’s that following a recipe can get you an (almost) edible meal. In computer science we talk about *algorithms* as our recipe cards: sets of instructions (often equations) that take a bunch of ingredients and give us our meal.

Just as a recipe card has a prep time, algorithms have Big-O notation, a measure of how long it takes an algorithm to work through *n* inputs. Big-O notation can be as simple as O(n), where every input takes exactly one processor instruction (*just think of this as some small amount of time*), or as complex as O(log(n) + n/2), where for *n* inputs we will need log(n) + n/2 instructions to get our output. This, in itself, is really just a measure of complexity. Baked chicken is going to be a whole lot less complicated than Chicken Francesca.
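To make the prep-time metaphor concrete, here is a minimal sketch (the function names and the step-counting model are my own illustration) of how those two growth rates compare as *n* grows:

```python
import math

def linear_steps(n):
    """O(n): exactly one instruction per input."""
    return n

def mixed_steps(n):
    """O(log(n) + n/2): the more complex recipe from above."""
    return math.log2(n) + n / 2

# Step counts for increasingly large inputs.
for n in (8, 1024, 1_000_000):
    print(n, linear_steps(n), round(mixed_steps(n)))
```

Notice that for large *n* the n/2 term dwarfs the logarithm, which is why Big-O analysis usually keeps only the fastest-growing term.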

Additionally, we can use this neat little trick to group algorithms. It turns out that algorithms with the same Big-O notation can be generalized as one equation. So if we can prove that some function *f(n)* is O(n/2) and another function *k(n)* is also O(n/2), we can assume they act in similar ways. In Julia’s terms: if you can cook a hen, you can probably cook a pheasant.

Now, Julia Child’s estate was worth around $38 million at the time of her death. I am not suggesting you could make that as a tenured mathematician (although David Cheriton seems to do okay), but if you follow the instructions in the next couple of paragraphs you can at least make 1/38 of it.

**Making Money with Math – The Millennium Prizes**

The Clay Mathematics Institute funds what for many is the highest honor in the field of mathematics: the Millennium Prize Problems. These are seven (now six) famously unsolved problems in mathematics, each carrying a million-dollar award. Only one has ever been solved, the Poincaré conjecture, by the brilliant and troubled Russian mathematician Grigori Perelman. Perelman, who is also a Fields Medal laureate (*the other most prestigious prize*), has shunned publicity and refused to accept any of his accolades.

However, the six remaining problems are left unsolved, and the most infamous, *P vs. NP*, continues to taunt mathematicians all over the world. To up the stakes further, it just so happens that *P vs. NP* is the thing that makes all modern cryptography viable. If it were resolved with P = NP, all your secrets could be just a click away.

Most formally the problem is defined in hundreds of pages of axioms and equations, but it is pretty intuitive. We can group problems by how hard they are: a problem is in *P* (*polynomial*) if it can be solved in O(n^k) steps for some constant k, and in *NP* (*nondeterministic polynomial*, despite the name, not “not polynomial”) if a proposed solution can at least be *checked* that quickly. Every problem in P is automatically in NP; the million-dollar question is whether the reverse is true.

To generalize even further, the hardest NP problems tend to be dramatically more expensive to compute and include operations like factoring. P problems are quick, just polynomial equations, and processors can tear through them very quickly.

Your bank information is likely protected by a huge number that is the product of two enormous primes, and those two prime factors are the keys. This encryption scheme, RSA encryption, is based on the fact that factoring this giant number would take impossibly long (no polynomial-time method is known) and thus the only way in is to actually have the factors. On the flip side, checking whether two numbers are the factors is easy: just multiply them (a P operation). This means it’s almost impossible to factor (break the lock) but very easy to check if the key is correct (unlock it).
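A toy sketch of that asymmetry (the numbers and function names are my own; real RSA moduli are hundreds of digits long, where trial division becomes hopeless):

```python
def factor(n):
    """Break the lock: trial division, the slow direction.
    No polynomial-time factoring algorithm is known."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return p, n // p
    return None  # n is prime

def check_key(p, q, n):
    """Unlock: one multiplication, clearly a P operation."""
    return p * q == n

n = 61 * 53                   # a toy "public lock": 3233
print(factor(n))              # finding the factors: (53, 61)
print(check_key(61, 53, n))   # checking a key: True
```

The two functions answer the same question, but one searches while the other merely verifies; that gap is the entire security argument.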

This scheme would be incredibly secure if P and NP are truly separate; it would be almost totally unbreakable. However, we have yet to prove this, and it might be true that every single NP problem can be solved in polynomial time. While the mathematics community has yet to find a polynomial-time algorithm for any NP-complete problem, if you can produce one, say hello to your new million dollars and goodbye to modern security.

**The Burden of Proof and why Proof Matters**

It’s pretty easy to conceptualize the solution: take an NP-complete problem and prove it cannot be solved in polynomial time. However, we are not even close to a solution. All the methods we use to settle other impossibility questions simply do not work here; we are impossibly far from an answer.

A classic technique from complexity theory is a concept called diagonalization. Essentially, we build a matrix and rearrange it until we can definitively prove there is an element missing from the set, and thus the problem is unsolvable. Turing famously used this to prove the halting problem undecidable, and Cantor used it to prove the real numbers cannot be matched one-to-one with the integers. Diagonalization, for reasons outside the scope of this post, simply does not work for P vs. NP, nor does any other tool we currently have.

Every so often papers, mostly from unknowns, come out claiming to have finally proved that P does or does not equal NP, but they are quickly refuted. Most famously, Vinay Deolalikar, a prominent researcher at HP Labs, published a sprawling sixty-six-page proof that P ≠ NP. It was quickly disproved.

All it would take is a single NP-complete problem solved in polynomial time, a single counter-example to settle this problem that has stood unconquered since 1956, when Gödel first raised the question in a letter to von Neumann. To put it in Julia’s language: figure out how to make Boeuf Bourguignon in the same time as a Caesar salad and the prize is yours.

**Unsolved Problems and the Traveling Salesman**

Admittedly, this is all a little abstract; pure mathematics often is. However, in the past few weeks this has become my life. As part of our algorithm for TrackYourDisease we are working intensely with networks: collections of objects called nodes, joined by connections called edges, each with a distance. In order to perform some calculations we need to make our way through the network, and in the interest of efficiency we are trying to find the shortest path.

This is called the Traveling Salesman Problem, and it is a famous example of an NP-hard problem. It can be understood through an example:

Imagine a German salesman needs to visit the fifteen largest cities in his country to see a bunch of clients. As he drives from city to city he would like to minimize the total distance he has to drive, and it doesn’t matter what order he visits the cities in.

This problem is NP-hard. There is no known way to calculate the shortest route through the nodes, in general, without checking the possible orderings and taking the shortest. This isn’t that bad when it’s just a few cities; for ten cities we are comparing 3,628,800 routes. However, in real applications we often work with networks much larger than ten nodes, say a thousand or a million. For one million nodes, the number of possible orderings is vastly larger than the number of atoms in the observable universe. We do have some neat little tricks to tame this, most simply heuristics like the Nearest Neighbor Algorithm, which builds a reasonable (though not necessarily optimal) route by always driving to the closest unvisited city first.
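Here is a small sketch of both approaches on a made-up five-city map (the coordinates are hypothetical): brute force tries every ordering, while Nearest Neighbor greedily drives to the closest unvisited city.

```python
import itertools
import math

# Hypothetical city coordinates, standing in for the salesman's map.
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 4), "E": (2, 2)}

def dist(a, b):
    (x1, y1), (x2, y2) = cities[a], cities[b]
    return math.hypot(x1 - x2, y1 - y2)

def route_length(route):
    return sum(dist(route[i], route[i + 1]) for i in range(len(route) - 1))

# Brute force: n! orderings -- exact, but explodes as n grows.
best = min(itertools.permutations(cities), key=route_length)

def nearest_neighbor(start):
    """Heuristic: always visit the closest unvisited city next."""
    route, unvisited = [start], set(cities) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(route[-1], c))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

print(route_length(best))                   # optimal tour length
print(route_length(nearest_neighbor("A")))  # heuristic tour length
```

The heuristic needs roughly O(n^2) steps instead of O(n!), but its route can come out longer than the true optimum; that gap is the price of escaping the factorial blow-up.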

However, as of now there is no way to simply feed in our network and, in a feasibly small number of operations, find the optimal path. It’s problems like this that justify the continued work on these great unsolved problems.

These problems, while totally abstract when first described, are quickly finding applications as our understanding of the world increases. P vs. NP for most of its existence had no real application, until RSA encryption came along.

There are many bright minds working through these kinds of questions and enabling people like me to put the results to use. If you are interested in becoming your own Julia Child of unsolved math problems, I would recommend a cursory glance through what kinds of questions we are still trying to solve. If you are an aspiring computer scientist looking to improve your algorithmic work, Knuth’s The Art of Computer Programming is still your best bet.

If you are none of the above, are otherwise scared of numbers, or have blocked out all memory of high school calculus, but are still interested in the history of thinking around these kinds of big-picture questions, the following reading list is filled with wonderful and intriguing works. As someone who isn’t fantastic at math, I found these fantastic (and approachable) introductions to the subject:

- Gödel, Escher, Bach: An Eternal Golden Braid
- Flatland: A Romance of Many Dimensions
- The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography
