We Could Solve The Biggest Problem in Maths in The Next Decade

P is not NP? That is the question

One of the biggest open problems in mathematics may be solved within the next decade, according to a poll of computer scientists. A solution to the so-called P versus NP problem is worth $1 million and could have a profound effect on computing, and perhaps even the entire world.

The problem is a question about how long algorithms take to run and whether some hard mathematical problems are actually easy to solve.

P and NP both represent groups of mathematical problems, but it isn’t known if these groups are actually identical.

P, which stands for polynomial time, consists of problems that can be solved by an algorithm in a relatively short time. NP, which stands for nondeterministic polynomial time, comprises problems whose answers are easy to check: given a potential solution, you can quickly verify whether it is correct, even though finding a solution in the first place might be difficult.

NP problems include a number of important real-world tasks, such as the travelling salesman problem, which involves finding a route between a list of cities that is shorter than a certain limit. Given such a route, you can easily check if it fits the limit, but finding one might be more difficult.
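The check/find asymmetry can be made concrete with a minimal Python sketch (the city names, distance table and helper names such as `check_route` are invented here for illustration, not taken from the article): verifying that a proposed round trip fits a length limit takes a single pass over the route, whereas finding a short route may require examining many orderings.

```python
# Hypothetical symmetric distance table between four cities (made-up numbers).
DIST = {
    ("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
    ("B", "C"): 35, ("B", "D"): 25, ("C", "D"): 30,
}

def distance(a, b):
    # Distances are symmetric, so look up the pair in either order.
    return DIST.get((a, b)) or DIST[(b, a)]

def route_length(route):
    # Total length of a round trip visiting each city once and returning home.
    return sum(distance(route[i], route[(i + 1) % len(route)])
               for i in range(len(route)))

def check_route(route, limit):
    # Verification is fast: one pass over the route, then one comparison.
    return route_length(route) <= limit

print(check_route(["A", "B", "D", "C"], 80))  # 10+25+30+15 = 80, so True
```

The verification step runs in time proportional to the number of cities, which is exactly the property that places the travelling salesman problem in NP.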

Equal or not

The P versus NP problem asks whether these two collections of problems are actually the same. If they are, and P = NP, the implications are potentially world-changing, because it could become much easier to solve these tasks. If they aren’t, and P doesn’t equal NP, or P ≠ NP, a proof would still answer fundamental questions about the nature of computation.

The problem was first stated in 1971 and has since become one of the most important open questions in mathematics – anyone who can find the answer either way will receive $1 million from the Clay Mathematics Institute in Cambridge, Massachusetts.

William Gasarch, a computer scientist at the University of Maryland in College Park, conducts polls of his fellow researchers to gauge the current state of the problem. His first poll, in 2002, found that just 61 per cent of respondents thought P ≠ NP. In 2012, that rose to 83 per cent, and in 2019 it edged up to 88 per cent. Support for P = NP has also risen, from 9 per cent in 2002 to 12 per cent in 2019 – both camps could grow because the 2002 poll had a large number of “don’t knows”.

Confidence that we might soon have an answer is also rising. In 2002, just 5 per cent thought the problem would be resolved in the next decade, falling to 1 per cent in 2012, but now the figure sits at 22 per cent. “This is very surprising since there has not been any progress on it,” says Gasarch. “If anything, I think that as the problem remains open longer, it seems harder.” More broadly, 66 per cent believe it will be solved before the end of the century.

There was little agreement on the kind of mathematics that would ultimately be used to solve the problem, although a number of respondents suggested that artificial intelligence, not humans, could play a significant role.

“I can see this happening to some extent, but the new idea needed will, I think, come from a human,” says Gasarch. “I hope so, not for any reason of philosophy, but just because if a computer did it we might know that (say) P ≠ NP, but not really know why.”

Neil Immerman at the University of Massachusetts Amherst thinks that this kind of polling is interesting, but ultimately can’t tell us much about the P versus NP problem.

“As this poll demonstrates, there is no consensus on how this problem will be eventually solved,” he says. “For that reason, it is hard to measure the progress we have made since 1971 when the question was first stated.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jacob Aron*


What The Mathematics of Knots Reveals About The Shape of The Universe

Knot theory is linked to many other branches of science, including those that tell us about the cosmos.

The mathematical study of knots started with a mistake. In the 1800s, mathematician and physicist William Thomson, also known as Lord Kelvin, suggested that the elemental building blocks of matter were knotted vortices in the ether: invisible microscopic currents in the background material of the universe. His theory fell by the wayside fairly quickly, but this first attempt to classify how curves could be knotted grew into the modern mathematical field of knot theory. Today, knot theory is not only connected to many branches of theoretical mathematics but also to other parts of science, like physics and molecular biology. It’s not obvious what your shoelace has to do with the shape of the universe, but the two may be more closely related than you think.

As it turns out, a tangled necklace offers a better model of a knot than a shoelace: to a mathematician, a knot is a loop in three-dimensional space rather than a string with loose ends. Just as a physical loop of string can stretch and twist and rotate, so can a mathematical knot – these loops are floppy rather than fixed. If we studied strings with free ends, they could wiggle around and untie themselves, but a loop stays knotted unless it’s cut.

Most questions in knot theory come in two varieties: sorting knots into classes and using knots to study other mathematical objects. I’ll try to give a flavour of both, starting with the simplest possible example: the unknot.

Draw a circle on a piece of paper. Congratulations, you’ve just constructed an unknot! This is the name for any loop in three-dimensional space that is the boundary of a disc. When you draw a circle on a piece of paper, you can see this disc as the space inside the circle, and your curve continues to be an unknot if you crumple the paper up, toss it through the air, flatten it out and then do some origami. As long as the disc is intact, no matter how distorted, the boundary is always an unknot.

Things get more interesting when you start with just the curve. How can you tell if it’s an unknot? There may secretly be a disc that can fill in the loop, but with no limits on how deformed the disc could be, it’s not clear how you can figure this out.

Two unknots

It turns out that this question is both hard and important: the first step in studying complicated objects is distinguishing them from simple ones. It’s also a question that gets answered inside certain bacterial cells each time they replicate. In the nuclei of these cells, the DNA forms a loop, rather than a strand with loose ends, and sometimes these loops end up knotted. However, the DNA can replicate only when the loop is an unknot, so the basic life processes of the cell require a process for turning a potentially complicated loop into an unknotted one.

A class of proteins called topoisomerases unknot tangled loops of DNA by cutting a strand, moving the free ends and then reattaching them. In a mathematical context, this operation is called a “crossing change”, and it’s known that any loop can be turned into the unknot by some number of crossing changes. However, there’s a puzzle in this process, since random crossing changes are unlikely to simplify a knot. Each topoisomerase operates locally, but collectively they’re able to reliably unknot the DNA for replication. Topoisomerases were discovered more than 50 years ago, but biologists are still studying how they unknot DNA so effectively.

When mathematicians want to identify a knot, they don’t turn to a protein to unknot it for them. Instead, they rely on invariants, mathematical objects associated with knots. Some invariants are familiar things like numbers, while others are elaborate algebraic structures. The best invariants have two properties: they’re practical to compute, given the input of a specific knot, and they distinguish many different classes of knots from each other. It’s easy to define an invariant with only one of these properties, but a computable and effective knot invariant is a rare find.

The modern era of knot theory began with the introduction of an invariant called the Jones Polynomial in the 1980s. Vaughan Jones was studying statistical mechanics when he discovered a process that assigns a polynomial – a type of simple algebraic expression – to any knot. The method he used was technical, but the essential feature is that no amount of wiggling, stretching or twisting changes the output. The Jones Polynomial of an unknot is 1, no matter how complicated the associated disc might be.

Jones’s discovery caught the attention of other researchers, who found simpler techniques for computing the same polynomial. The result was an invariant that satisfies both the conditions listed above: the Jones Polynomial can be computed from a drawing of a knot on paper, and many thousands of knots can be distinguished by the fact that they have different Jones Polynomials.

However, there are still many things we don’t know about the Jones Polynomial, and one of the most tantalising questions is which knots it can detect. Most invariants distinguish some knots while lumping others together, and we say an invariant detects a knot if all the examples sharing a certain value are actually deformations of each other. There are certainly pairs of distinct knots with the same Jones Polynomial, but after decades of study, we still don’t know whether any knot besides the unknot has the polynomial 1. With computer assistance, experts have examined nearly 60 trillion examples of distinct knots without finding any new knots whose Jones Polynomials equal 1.

The Jones Polynomial has applications beyond knot detection. To see this, let’s return to the definition of an unknot as a loop that bounds a disc. In fact, every knot is the boundary of some surface – what distinguishes an unknot is that this surface is particularly simple. There’s a precise way to rank the complexity of surfaces, and we can use this to rank the complexity of knots. In this classification, the simplest knot is the unknot, and the second simplest is the trefoil, which is shown below.

Trefoil knot

To build a surface with a trefoil boundary, start with a strip of paper. Twist it three times and then glue the ends together. This is more complicated than a disc, but still pretty simple. It also gives us a new question to investigate: given an arbitrary knot, where does it fit in the ranking of knot complexity? What’s the simplest surface it can bound? Starting with a curve and then hunting for a surface may seem backwards, but in some settings, the Jones Polynomial answers this question: the coefficients of the knot polynomial can be used to estimate the complexity of the surfaces it bounds.

Knots also help us classify other mathematical objects. We can visually distinguish the two-dimensional surface of a sphere from the surface of a torus (the shape of a ring donut), but an ant walking on one of these surfaces might need knot theory to tell them apart. On the surface of a torus, there are loops that can’t be pulled any tighter, while any loop lying on a sphere can contract to a point.

We live inside a universe of three physical dimensions, so like the ant on a surface, we lack a bird’s eye view that could help us identify its global shape. However, we can ask the analogous question: can each loop we encounter shrink without breaking, or is there a shortest representative? Mathematicians can classify three-dimensional spaces by the existence of the shortest knots they contain. Presently, we don’t know if some knots twisting through the universe are unfathomably long or if every knot can be made as small as one of Lord Kelvin’s knotted vortices.


*Credit for article given to Joan Licata*


Deepmind Created a Maths AI That Can Add Up To 6 But Gets 7 Wrong

Artificial intelligence firm DeepMind has tackled games like Go and StarCraft, but now it is turning its attention to more sober affairs: how to solve school-level maths problems.

Researchers at the company tasked an AI with teaching itself to solve arithmetic, algebra and probability problems, among others. It didn’t do a very good job: when the neural network was tested on a maths exam taken by 16-year-olds in the UK, it got just 14 out of 40 questions correct, or the equivalent of an E grade.

There were also strange quirks in the AI’s ability. For example, it could successfully add up 1+1+1+1+1+1 to make 6, but failed when an extra 1 was added. On the other hand, it gave the correct answer for longer sequences and much bigger numbers.

Other oddities included the ability to correctly answer 68 to the question “calculate 17×4.”, but when the full stop was removed, the answer came out at 69.

Puzzling behaviour

The DeepMind researchers concede they don’t have a good explanation for this behaviour. “At the moment, learning systems like neural networks are quite bad at doing ‘algebraic reasoning’,” says David Saxton, one of the team behind the work.

Despite this, it is still worth trying to teach a machine to solve maths problems, says Marcus du Sautoy, a mathematician at the University of Oxford.

“There are already algorithms out there to do these problems much faster, much better than machine-learning algorithms, but that’s not the point,” says du Sautoy. “They are setting themselves a different target – we want to start from nothing, by being told whether you got that one wrong, that one right, whether it can build up how to do this itself. Which is fascinating.”

An AI capable of solving advanced mathematics problems could put him out of a job, says du Sautoy. “That’s my fear. It may not take too much for an AI to get maturity in this world, whereas a maturity in the musical or visual or language world might be much harder for it. So I do think my subject is vulnerable.”

However, he takes some comfort that machine learning’s general weakness in remaining coherent over a long form – such as a novel, rather than a poem – will keep mathematicians safe for now. Creating mathematical proofs, rather than solving maths problems for 16-year-olds, will be difficult for machines, he says.

Noel Sharkey at the University of Sheffield, UK, says the research is more about finding the limits of machine-learning techniques, rather than promoting advancements in mathematics.

The interesting thing, he says, will be to see how the neural networks can adapt to challenges outside of those they were trained on. “The big question is to ask how well they can generalise to novel examples that were not in the training set. This has the potential to demonstrate formal limits to what this type of learning is capable of.”

Saxton says training a neural network on maths problems could help provide AI with reasoning skills for other applications.

“Humans are good at maths, but they are using general reasoning skills that current artificial learning systems don’t possess,” he says. “If we can develop models that are good at solving these problems, then these models would likely be using general skills that would be good at solving other hard problems in AI as well.”

He hopes the work could make a small contribution towards more general mathematical AIs that could tackle things such as proving theorems.

The DeepMind team has published its data set of maths questions, and encouraged people to train their own AI.


*Credit for article given to Adam Vaughan*


Mathematicians Have Found a New Way to Multiply Two Numbers Together

It’s a bit more complicated than this

Forget your times tables – mathematicians have found a new, faster way to multiply two numbers together. The method, which works only for whole numbers, is a landmark result in computer science. “This is big news,” says Joshua Cooper at the University of South Carolina.

To understand the new technique, which was devised by David Harvey at the University of New South Wales, Australia, and Joris van der Hoeven at the Ecole Polytechnique near Paris, France, it helps to think back to the longhand multiplication you learned at school.

We write down two numbers, one on top of the other, and then painstakingly multiply each digit of one by each digit of the other, before adding all the results together. “This is an ancient algorithm,” says Cooper.

If your two numbers each have n digits, this way of multiplying will require roughly n² individual calculations. “The question is, can you do better?” says Cooper.
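The longhand method can be sketched in a few lines of Python (a toy illustration written for this article, not production code; the name `schoolbook_multiply` is invented here). The nested loop over the digits is what makes the cost grow like n².

```python
def schoolbook_multiply(x, y):
    # Multiply two non-negative integers digit by digit, as taught at school.
    xs = [int(d) for d in str(x)][::-1]  # least significant digit first
    ys = [int(d) for d in str(y)][::-1]
    partial = [0] * (len(xs) + len(ys))
    for i, a in enumerate(xs):       # n digits...
        for j, b in enumerate(ys):   # ...each multiplied by n digits: ~n² steps
            partial[i + j] += a * b
    # Recombine the partial products, handling carries via place value.
    total, place = 0, 1
    for p in partial:
        total += p * place
        place *= 10
    return total

print(schoolbook_multiply(1234, 5678))  # 7006652
```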

Lots of logs

Starting in the 1960s, mathematicians began to prove that they could. First Anatoly Karatsuba found an algorithm that could turn out an answer in no more than n to the power of 1.58 steps, and in 1971, Arnold Schönhage and Volker Strassen found a way to peg the number of steps to the complicated expression n × log(n) × log(log(n)) – here “log” is short for logarithm.
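Karatsuba’s trick is easy to sketch: split each number into a high half and a low half, then get away with three half-size multiplications where the schoolbook method would need four. Here is a hedged Python illustration (the function name and the digit-splitting details are choices made here, not taken from the article):

```python
def karatsuba(x, y):
    # Karatsuba multiplication: three recursive multiplications instead of
    # four, which is where the n-to-the-power-1.58 running time comes from.
    if x < 10 or y < 10:
        return x * y  # single-digit base case
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)                              # low halves
    z2 = karatsuba(high_x, high_y)                            # high halves
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2  # cross terms
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

print(karatsuba(1234, 5678))  # 7006652
```

The saving looks small for one split, but applied recursively at every level it compounds, which is why the exponent drops from 2 to about 1.58 (log base 2 of 3).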

These advances had a major impact on computing. Whereas a computer using the longhand multiplication method would take about six months to multiply two billion-digit numbers together, says Harvey, the Schönhage-Strassen algorithm can do it in 26 seconds.

The landmark 1971 paper also suggested a possible improvement, a tantalising prediction that multiplication might one day be possible in no more than n × log(n) steps. Now Harvey and van der Hoeven appear to have proved this is the case. “It finally appears to be possible,” says Cooper. “It passes the smell test.”

“If the result is correct, it’s a major achievement in computational complexity theory,” says Fredrik Johansson at INRIA, the French research institute for digital sciences, in Bordeaux. “The new ideas in this work are likely to inspire further research and could lead to practical improvements down the road.”

Cooper also praises the originality of the research, although stresses the complexity of the mathematics involved. “You think, jeez, I’m just multiplying two integers, how complicated can it get?” says Cooper. “But boy, it gets complicated.”

So, will this make calculating your tax returns any easier? “For human beings working with pencil and paper, absolutely not,” says Harvey. Indeed, their version of the proof only works for numbers with more than 10 to the power of 200 trillion trillion trillion digits. “The word ‘astronomical’ falls comically short in trying to describe this number,” says Harvey.

While future improvements to the algorithm may extend the proof to more humdrum numbers only a few trillion digits long, Cooper thinks its real value lies elsewhere. From a theoretical perspective, he says, this work allows programmers to provide a definitive guarantee of how long a certain algorithm will take. “We are optimistic that our new paper will allow us to achieve further practical speed-ups,” says van der Hoeven.

Harvey thinks this may well be the end of the story, with no future algorithm capable of beating n × log(n). “I would be extremely surprised if this turned out to be wrong,” he says, “but stranger things have happened.”


*Credit for article given to Gilead Amit*