Decades-Old Mathematical Mystery About The Game Of Life Finally Solved

A mathematical game governed by simple rules throws up patterns of seemingly infinite complexity – and now a question that has puzzled hobbyists for decades has a solution.

A pattern in the Game of Life that repeats after every 19 steps

A long-standing mystery about repeating patterns in a two-dimensional mathematical game has been solved after more than 50 years with the discovery of two final pieces in the puzzle.

The result is believed to have no practical application whatsoever, but will satisfy the curiosity of the coterie of hobbyists obsessed with the Game of Life.

Invented by mathematician John Conway in 1970, the Game of Life is a cellular automaton – a simplistic world simulation that consists of a grid of “live” cells and “dead” cells. Players create a starting pattern as an input and the pattern is updated generation after generation according to simple rules.

A live cell with fewer than two neighbouring live cells is dead in the next generation; a live cell with two or three neighbouring live cells remains live; and a live cell with more than three neighbouring live cells dies. A dead cell with exactly three neighbouring live cells becomes live in the next generation. Otherwise, it remains dead.

These rules create evolving patterns of seemingly infinite complexity that throw up three types of shape: static objects that don’t change; “oscillators”, which form a repeating but stationary pattern; and “spaceships”, which repeat but also move across the grid.
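The update rule translates almost directly into code. Here is a minimal sketch in Python (the set-of-live-cells representation and the function names are my own choices, not from the article), with a helper that finds an oscillator's period by stepping until the pattern first returns to its starting state:

```python
from collections import Counter

def step(live):
    """One generation: count each cell's live neighbours, then apply the rules."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A dead cell with exactly 3 live neighbours is born;
    # a live cell with 2 or 3 live neighbours survives.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

def period(pattern, max_steps=1000):
    """Generations until the pattern first recurs (None if it never does)."""
    current = pattern
    for t in range(1, max_steps + 1):
        current = step(current)
        if current == pattern:
            return t
    return None

# The "blinker", the simplest oscillator, has period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
print(period(blinker))  # 2
```

Static objects are simply oscillators of period 1 under this definition; spaceships would need a translation-aware comparison, since they recur in shape but not in position.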

One of the enduring problems in Game of Life research is whether there are oscillators with every “period”: ones that repeat every two steps, every three steps and so on, to infinity. There was a strong clue that this would be true when mathematician David Buckingham designed a technique that could create oscillators with any period above 57, but there were still missing oscillators for some smaller numbers.

Now, a team of hobbyists has filled those last remaining gaps by publishing a paper that describes oscillators with periods of 19 and 41 – the final missing shapes.

One member of the team, Mitchell Riley at New York University Abu Dhabi, works on the problem as a hobby alongside his research in a quantum computing group. He says there are lots of methods to generate new oscillators, but no way has been found to create them with specific periods, meaning that research in this area is a game of chance. “It’s just like playing darts – we’ve just never hit 19, and we’ve never hit 41,” he says.

Riley had been scouring lists of known shapes that consist of two parts, a hassler and a catalyst. Game of Life enthusiasts coined these terms for static shapes – catalysts – that contain a changing shape inside – a hassler. The interior reacts to the exterior, but leaves it unchanged, and together they form an oscillator of a certain period. Riley’s contribution was writing a computer program to discover potentially useful catalysts.

“The stars have to align,” he says. “You need the reaction in the middle to not destroy the thing on the outside, and the reaction in the middle, just by chance, to return to its original state in one of these new periods.”

Riley says that there are no applications known for this research and that he was drawn to the problem by “pure curiosity”.

Susan Stepney at the University of York, UK, says the work demonstrates some “extremely clever and creative techniques”, but it certainly isn’t the final conclusion of research on Conway’s creation.

“I don’t think work on Game of Life will ever be complete,” says Stepney. “The system is computationally universal, so there is always more behaviour to find, and it is seemingly so simple to describe, but so complex in its behaviour, that it remains fascinating to many.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Matthew Sparkes*


Dividing Polynomials – The Grid Method

Students generally learn to divide polynomials using long division or synthetic division. This post is about another method for dividing polynomials, the “grid” method. Although this method is used at some schools, I have not found it in any textbook (if you know of any, please let me know).

The grid method makes polynomial division a fun calculation – an almost Sudoku-like process. The recreational challenge here is to master the method, and to convince yourself that it actually works in all cases (which cases does it work well for, and which are more difficult?).

Before tackling polynomial grid division, you need to be familiar with polynomial grid multiplication. It is the symbolic analog to the familiar “algebra tiles” manipulative, so if you have worked with these, it should be reasonably familiar.

Polynomial Grid Multiplication

In polynomial grid multiplication, the two factors are arranged on the edges of a grid – the number of terms of the factors determine the number of horizontal and vertical lines that make up the grid.

The overall product is found by filling in the cells of the grid with the product of the terms for the row and column, and then summing up all the contents of the interior of the grid.

What polynomial grid multiplication does for us is provide an explicit way to keep track of the distributive property: each term-by-term product gets its own cell.

In the example below, the two factors are placed along the edges of the grid (1). One factor provides the row headings, the other provides the column headings. Then each cell is filled in with the product of the terms from the row and column (2). Finally, the cells are added together (like terms are combined) to find the final product (3). If the terms of the factors are placed according to the same order (descending powers of x, in our example) and there are no missing terms, then like-terms of the product are found along the diagonals of the grid.
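In coefficient form, the grid is just a multiplication table whose diagonals hold the like terms. Here is a short Python sketch of the idea (the representation and names are my own: coefficient lists run from the highest power down, with zeros for any missing terms):

```python
def grid_multiply(a, b):
    """Multiply two polynomials given as coefficient lists (highest power first)."""
    # Each grid cell holds the product of its row and column headings.
    grid = [[ra * cb for cb in b] for ra in a]
    # Like terms of the product lie along the diagonals:
    # cell (i, j) contributes to coefficient i + j of the result.
    result = [0] * (len(a) + len(b) - 1)
    for i, row in enumerate(grid):
        for j, value in enumerate(row):
            result[i + j] += value
    return result

# (3x + 2)(x^2 - x + 4) = 3x^3 - x^2 + 10x + 8
print(grid_multiply([3, 2], [1, -1, 4]))  # [3, -1, 10, 8]
```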

Polynomial Grid Division

Polynomial grid division works the same way as polynomial grid multiplication, but in reverse – we start by knowing one of the factors (placed along the edge of the grid), and by knowing what we want the product to be (without knowing exactly how it is ‘split up’ in the grid). Using this knowledge we work backwards, filling in the grid and the top edge one cell at a time until we are done.

Consider the example below. In (1) we create an empty grid with the denominator (divisor) playing the role of one of the factors. Since this question involves a degree 3 polynomial divided by a degree 1 polynomial, we know that the other factor (the quotient) must be degree 2. This allows us to create a grid with the correct size. In step (2) we use the highest power of the dividend (the numerator) to begin to fill in the grid – we know that 27x^3 must be in the top left. This in turn tells us that the first column entry must be 9x^2 (so that the row and column multiply to 27x^3). In step (3) we use this to fill in all of the first column, multiplying 9x^2 by the terms of the row entries.

In step (3) we now have a quadratic term -18x^2. But we know from looking at the dividend (numerator) that in the final answer we actually want 9x^2. Consequently, the other entry on the quadratic diagonal must be 27x^2, so that the whole diagonal sums to 9x^2. Filling this in for step (4) tells us what all the entries in the second column should be (step 5). Now that we have a linear entry -18x, we know that we need to add in a 15x so that the overall sum gives -3x (step 6).

Having a 15x tells us that the top entry must be 5 (the product of 5 and 3x gives us 15x). Filling this in at step (7) allows us to complete the grid, and we see that our final constant entry is -10, as hoped for. Now that the grid has been filled in and it matches the dividend, we can read the answer off the top edge – the factor that we have uncovered is the quotient we were hoping to calculate.

This method is actually easier than it seems at first, and when all steps are carried out on the same grid, is quite compact.
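The same column-by-column reasoning can be sketched in code: the leading term of whatever remains of the dividend forces the next quotient entry, exactly as each grid column is filled in turn. This is my own Python sketch (names and the coefficient-list representation are assumptions, not from the post), using lists with highest powers first:

```python
def grid_divide(dividend, divisor):
    """Divide polynomials given as coefficient lists (highest power first).
    Returns (quotient, remainder)."""
    remaining = list(dividend)
    quotient = []
    # One pass per grid column: the leading remaining coefficient fixes
    # the next quotient entry, then that column is subtracted off.
    for k in range(len(dividend) - len(divisor) + 1):
        coeff = remaining[k] / divisor[0]
        quotient.append(coeff)
        for j, d in enumerate(divisor):
            remaining[k + j] -= coeff * d
    remainder = remaining[len(quotient):]
    return quotient, remainder

# The worked example: (27x^3 + 9x^2 - 3x - 10) / (3x - 2)
print(grid_divide([27, 9, -3, -10], [3, -2]))  # ([9.0, 9.0, 5.0], [0.0])
```

When the division is not exact – for instance with the numerator changed to end in -9 instead of -10 – the same call returns a nonzero remainder, matching what the grid leaves over.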

Here is another example for you to try it out on.

In these two examples, the division worked well – there was no remainder. In general, when we divide f/g where g is not a factor of f and the degree of g is less than the degree of f, there is a polynomial remainder whose degree is strictly less than that of g. So, for example, when g is a linear function (degree 1), f/g can have a constant remainder (degree 0).

In this case we proceed as above, attempting to fill in the grid with the numerator (dividend). However, if when we are done the grid does not match the numerator, we have a remainder. The remainder is the additional amount that we have to add to the grid in order to arrive at the numerator.

Consider a division question almost identical to the first one that we looked at, except here we change the numerator slightly so that it doesn’t factor well.

Following the same steps as before, we end up with a grid sum that does not match our desired answer: we have -10 instead of -9 for the final constant term. This tells us that we have a remainder of +1, which we write next to the grid. In our final answer, the remainder gives the “remaining fractional part” that we have to add at the end.


*Credit for article given to dan.mackinnon*


These 7 Mathematical Facts Will Blow Your Mind

What better way is there to celebrate pi day than with a slice of mathematics? Here are 7 mathematical facts to enjoy.

There’s a mathematical trick to get out of any maze

It will soon be 14 March and that means pi day. We like to mark this annual celebration of the great mathematical constant at New Scientist by remembering some of our favourite recent stories from the world of mathematics. We have extracted a list of surprising facts from them to whet your appetite, but for the full pi day feast click through for the entire articles. These are normally only available to subscribers but to honour the world’s circumferences and diameters we have decided to make them free for a limited time.

The world’s best kitchen tile

There is a shape called “the hat” that can completely cover a surface without ever creating a repeating pattern. For decades, mathematicians had wondered whether a single tile existed that could do such a thing. Roger Penrose discovered pairs of tiles in the 1970s that could do the job but nobody could find a single tile that when laid out would have the same effect. That changed when the hat was discovered last year.

Why you’re so unique

You are one in a million. Or really, it should be 1 in 10^(10^68). This number, dubbed the doppelgängion by mathematician Antonio Padilla, is so large it is hard to wrap your head around. It is 1 followed by 100 million trillion trillion trillion trillion trillion zeroes and relates to the chances of finding an exact you somewhere else in the universe. Imagining a number of that size is so difficult that the quantum physics required to calculate it seems almost easy in comparison. There are only a finite number of quantum states that can exist in a you-sized portion of space. You reach the doppelgängion by adding them all up. Padilla also wrote about four other mind-blowing numbers for New Scientist.

An amazing trick

There is a simple mathematical trick that will get you out of any maze: always turn right. No matter how complicated the maze, how many twists, turns and dead ends there are, the method always works. Now you know the trick, can you work out why it always leads to success?

And the next number is

There is a sequence of numbers so difficult to calculate that mathematicians have only just found the ninth in the series and it may be impossible to calculate the tenth. These numbers are called Dedekind numbers after mathematician Richard Dedekind and describe the number of possible ways a set of logical operations can be combined. When the set contains just a few elements, calculating the corresponding Dedekind number is relatively straightforward, but as the number of elements increases, the Dedekind number grows at “double exponential speed”. Number nine in the series is 42 digits long and took a month of calculation to find.
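To make “relatively straightforward for a few elements” concrete: the nth Dedekind number counts the monotone Boolean functions of n variables, and the first few really can be brute-forced in a few lines. This Python sketch is my own illustration (and it becomes hopeless long before the ninth number in the series):

```python
from itertools import product

def dedekind(n):
    """Count monotone Boolean functions of n variables by brute force."""
    points = list(product((0, 1), repeat=n))
    count = 0
    # Enumerate every Boolean function as a tuple of outputs...
    for outputs in product((0, 1), repeat=len(points)):
        f = dict(zip(points, outputs))
        # ...keeping it if flipping any input from 0 to 1 never flips
        # the output from 1 to 0 (monotonicity).
        if all(f[a] <= f[b]
               for a in points for b in points
               if all(x <= y for x, y in zip(a, b))):
            count += 1
    return count

print([dedekind(n) for n in range(4)])  # [2, 3, 6, 20]
```

The search space is double exponential: for n variables there are 2^(2^n) candidate functions to check, which is why each new Dedekind number is so much harder than the last.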

Can’t see the forest for the TREE(3)

There is a number so big that it can’t fit in the universe. TREE(3) comes from a simple mathematical game. The game involves generating a forest of trees using different combinations of seeds according to a few simple rules. If you have one type of seed, the largest forest allowed can have one tree. For two types of seed, the largest forest is three trees. But for three types of seed, well, the largest forest has TREE(3) trees, a number that is just too big for the universe.

The language of the universe

There is a system of eight-dimensional numbers called octonions that physicists are trying to use to mathematically describe the universe. The best way to understand octonions is first to consider taking the square root of -1. There is no such number that is the result of that calculation among the real numbers (which includes all the counting numbers, fractions, numbers like pi, etc.), so mathematicians add another called i. When combined with the real numbers, this gives a system called the complex numbers, which consist of a real part and an “imaginary part”, such as 3+7i. In other words, it is two-dimensional. Octonions arise by continuing to build up the system until you get to eight dimensions. It isn’t just mathematical fun and games though – there is reason to believe that octonions may be the number system we need to understand the laws of nature.

So many new solutions

Mathematicians went looking for solutions to the three-body problem and found 12,000 of them. The three-body problem is a classic astronomy problem of how three objects can form a stable orbit around each other. Such an arrangement is described by Isaac Newton’s laws of motion but actually finding permissible solutions is incredibly difficult. In 2007, mathematicians managed to find 1223 new solutions to the problem but this was greatly surpassed last year when a team found more than 12,000 more.


*Credit for article given to Timothy Revell*


Mathematicians Invent New Way to Slice Pizza into Exotic Shapes

Here’s one thing to impress your friends with the next time you order a takeaway: new and exotic ways to slice a pizza.

Most of us divide a pizza using straight cuts that all meet in the middle. But what if the centre of the pizza has a topping that some people would rather avoid, while others desperately want crust for dipping?

Mathematicians had previously come up with a recipe for slicing – formally known as a monohedral disc tiling – that gives you 12 identically shaped pieces, six of which form a star extending out from the centre, while the other six divide up the crusty remainder. You start by cutting curved three-sided slices across the pizza, then dividing these slices in two to get the inside and outside groups, as shown below.

Now Joel Haddley and Stephen Worsley of the University of Liverpool, UK, have generalised the technique to create even more ways to slice. The pair have proved you can create similar tilings from curved pieces with any odd number of sides – known as 5-gons, 7-gons and so on (shaded below) – then dividing them in two as before. “Mathematically there is no limit whatsoever,” says Haddley, though you might find it impractical to carry out the scheme beyond 9-gon pieces.

Haddley and Worsley went one further by cutting wedges in the corners of their shapes, creating bizarre, spikey pieces that still form a circle (the image below shows this happening with 5-gons). “It’s really surprising,” says Haddley.

 

As with many mathematical results, its usefulness isn’t immediately obvious. The same is true of another pizza theorem, which looks at what happens when a pizza is haphazardly cut off-centre.

“I’ve no idea whether there are any applications at all to our work outside of pizza-cutting,” says Haddley, who has actually tried slicing a pizza in this way for real (see below). But the results are “interesting mathematically, and you can produce some nice pictures”.


*Credit for article given to Jacob Aron*


How The History of Maths is Much More Diverse Than You Think

There is more to where maths came from than the ancient Greeks. From calculus to the theorem we credit to Pythagoras, so much of our knowledge comes from other places, including ancient China, India and the Arabian peninsula, says Kate Kitagawa.

The history of mathematics has an image problem. It is often presented as a meeting of minds among ancient Greeks who became masters of logic. Pythagoras, Euclid and their pals honed the tools for proving theorems and that led them to the biggest results of ancient times. Eventually, other European greats like Leonhard Euler and Isaac Newton came along and made maths modern, which is how we got to where we are today.

But, of course, this telling is greatly distorted. The history of maths is far richer, more chaotic and more diverse than it is given credit for. So much of what is now incorporated into our global knowledge comes from other places, including ancient China, India and the Arabian peninsula.

Take “Pythagoras’s” theorem. This is the one that says that in right-angled triangles, the square of the longest side is the sum of the square of the other two sides. The ancient Greeks certainly knew about this theorem, but so too did mathematicians in ancient Babylonia, Egypt, India and China.

In fact, in the 3rd century AD, Chinese mathematician Liu Hui added a proof of the theorem to the already old and influential book The Nine Chapters on the Mathematical Art. His version includes the earliest written statement of the theorem that we know of. So perhaps we should really call it Liu’s theorem or the gougu theorem as it was known in China.

The history of maths is filled with tales like this. Ideas have sprung up in multiple places at multiple times, leaving room for interpretation as to who should get the credit. As if credit is something that can’t be split.

As a researcher on the history of maths, I had come across examples of distorted views, but it was only when working on a new book, The Secret Lives of Numbers, that I found out just how pervasive they are. My co-author, New Scientist‘s Timothy Revell, and I found that the further we dug, the more of the true history of maths there was to uncover.

Another example is the origins of calculus. This is often presented as a battle between Newton and Gottfried Wilhelm Leibniz, two great 17th-century European mathematicians. They both independently developed extensive theories of calculus, but missing from the story is how an incredible school in Kerala, India, led by the mathematician Mādhava, hit upon some of the same ideas 300 years before.

The idea that the European way of doing things is superior didn’t originate in maths – it came from centuries of Western imperialism – but it has infiltrated it. Maths outside ancient Greece has often been put to one side as “ethnomathematics”, as if it were a side story to the real history.

In some cases, history has also distorted legacies. Sophie Kowalevski, who was born in Moscow in 1850, is now a relatively well-known figure. She was a fantastic mathematician, known for tackling a problem she dubbed a “mathematical mermaid” for its allure. The challenge was to describe mathematically how a spinning top moves, and she made breakthroughs where others had faltered.

During her life, she was constantly discouraged from pursuing maths and often had to work for free, collecting tuition money from her students in order to survive. After her death, biographers then tainted her life, painting her as a femme fatale who relied on her looks, implying she effectively passed off others’ work as her own. There is next to no evidence this is true.

Thankfully, historians of mathematics are re-examining and correcting the biases and stereotypes that have plagued the field. This is an ongoing process, but by embracing its diverse and chaotic roots, the next chapters for maths could be the best yet.


*Credit for article given to Kate Kitagawa*


Why Science Relies Too Much On Mathematics

Mathematics is at the heart of modern science but we shouldn’t forget other ways to reason, says author and researcher Roland Ennos.

“Science is written in the language of mathematics,” proclaimed Galileo in 1623. And over the past few centuries science has become ever more mathematical. Nowadays, mathematics seems to hold total hegemony, particularly in the fields of quantum physics and relativity – the teaching of modern physics seems to involve deriving an endless series of equations.

But though it is an important tool, mathematical analysis is not the only way of approaching scientific enquiry. Scientists also need to develop concepts on which to build the mathematics and carry out experiments to test and demonstrate their ideas. And they also need to translate the equations back into physical concepts and verbal explanations to make them comprehensible. These other aspects have long been undervalued – in both the teaching and practice of physics – and this has damaged and is continuing to damage our understanding of the world around us.

Nowhere is this better exemplified than in the science of rotation and spin, which might at first glance appear to be a shining example of the triumph of mathematics. In his 1687 magnum opus Principia, Isaac Newton laid out the mathematical workings of our solar system: he showed how the laws of motion and gravity explain how the planets orbit around the sun, and how the spin of the earth causes it to bulge, drives the tides and makes its tilted axis slowly wobble. Over the next hundred years, Newton’s analysis was extended and translated into modern mathematical language. All the problems of cosmology appeared to have been solved, the first of many occasions when scientists have mistakenly thought they had uncovered all the secrets of the universe.

Yet Newton’s triumph was only made possible by his more down-to-earth contemporary Robert Hooke. It was Hooke who made the conceptual leap that an object moving in a circle is travelling at a constant speed but is also accelerating at right angles towards the centre of the circle. He also went on to show experimentally how a universal gravity could provide the force that causes the planets to orbit around the sun and the moon around Earth. He hung a large ball, representing Earth, from the ceiling and a small ball, representing the moon, from the large ball, before pulling them away from vertical and setting them moving. The tension in the ropes, representing gravity, provided the inward force that kept them travelling around in a circle.

Unfortunately, Newton, who came to dominate world science, had little time for such conceptual and experimental approaches, insisting that equations were the only way to describe physical reality. His influence impeded further conceptual advances in mechanics and consequently progress in cosmology. For instance, it delayed our understanding of how the solar system was created.

The accepted model – the nebular hypothesis – was put forward in the 18th century by such luminaries as the philosopher Immanuel Kant and the mathematician Pierre-Simon Laplace. The hypothesis proposed that the solar system formed from a spinning ball of dust and gas. Gravity flattened the ball into a disc before the attraction between the particles pulled them together into planets and moons, all orbiting in the same plane and in the same direction.

All seemed well until the 1850s when engineers such as William Rankine finally developed a new mechanical concept – the conservation of angular momentum – 150 years after the conservation of linear momentum had been accepted. This new concept revealed a potential flaw in the nebular hypothesis that had remained hidden in Newton’s equations. To have shrunk to its size and to spin so slowly, the sun must have lost almost all its angular momentum, something that seemed to break this new law of nature.

It was only 40 years ago that a convincing explanation was proposed for how the sun lost its angular momentum. The charged particles shot out by the sun in the solar wind are channelled within magnetic fields before being flung out, slowing the spin of the material that remained and allowing gravity to draw it inwards. It was only two years ago that this explanation was finally verified by the Parker Solar Probe, which found that the solar particles were channelled up to 32 million kilometres outwards before being released. And only in October 2023 did the James Webb Space Telescope reveal the same process occurring in the newly forming solar system of the star HH212.

The overreliance on mathematics also delayed our understanding of how the spin of Earth makes it habitable. By the end of the 18th century, Laplace had derived equations describing how Earth’s spin deflects bodies of water moving over its surface. However, even he failed to observe that it would also affect solid objects and gases, so his work was ignored by the early meteorologists.

This only changed in 1851, when the French physicist Jean Foucault produced a free-hanging pendulum that demonstrated Laplace’s forces in action. The forces diverted the bob to the right during each sweep so that its plane of swing gradually rotated, like a Spirograph drawing. Not only did this prove the spin of Earth to a sceptical public, but it showed schoolteacher William Ferrel that Laplace’s forces would also deflect air masses moving around Earth’s surface. This would explain how global air currents are deflected east and west to form the three convection cells that cover each hemisphere and create the world’s climate zones, and how they divert winds into rotating weather systems, creating depressions, hurricanes and anticyclones. Modern meteorology was born.

In 1835, the French engineer Gaspard-Gustave de Coriolis produced more general equations describing the forces on bodies moving within a rotating reference frame. However, since these were in a paper examining the efficiency of water wheels, his work was largely ignored by scientists. Instead, it was a simple experiment that enabled geophysicists to understand how Earth’s spin diverts fluid movements in its interior and produces its magnetic field.

In 1911, the British physicist G. I. Taylor investigated how beakers of water behave when they are set spinning. The water quickly spins with the beaker and its surface rises in a parabola until the extra pressure counters the centrifugal force on the water. What’s interesting is how the water behaves when it is disturbed. Its movement changes the centrifugal force on it, as Coriolis’s equations predicted, so that when heated from below, it moves not in huge convection currents but up and down in narrow rotating columns. This discovery led the geophysicists Walter Elsasser and Edward Bullard to realise that the same forces would deflect convection currents in Earth’s metal outer core that are driven by radioactive decay. They are diverted into north-to-south columns of rotating metal that act like self-excited dynamos, producing the magnetic field that shields Earth from charged particles. A simple laboratory demonstration had illuminated events in Earth’s core that had been hidden in Coriolis’s equations.

Today, perhaps the most damaging failure to translate the mathematics of spin into easy-to-grasp concepts is in the fields of biomechanics and sports science. Our bodies are complex systems of rotating joints, but despite the sophistication of modern motion analysis software, few researchers realise that accelerating our joints can produce torques that actively accelerate our limbs. Biomechanics researchers are only starting to realise that accelerating our bodies upwards at the start of each step swings our arms and legs when we walk, and that a sling action straightens them at the end of each step.

In the same way, when we throw things, we use a multi-stage sling action; rotating our shoulders accelerates first our upper arm, then our forearm and finally our hands. And the reason we can wield heavy sledgehammers and swing wooden clubs to smash golf balls down the fairway is that their handles act as further sling elements; they accelerate forwards due to the centrifugal forces on them without us having to flex our wrists. Failing to articulate these simple mechanical concepts has made biomechanics ill-equipped to communicate with and help physiotherapists, sports coaches and roboticists.

And there is still confusion about the simplest aspects of rotation among physicists. Even Richard Feynman, for instance, was unable to explain the so-called Dzhanibekov effect – why spinning wing nuts on the International Space Station flip every few seconds. This was despite the fact that the mathematician Leonhard Euler had shown this should happen almost 300 years ago. The same is also true of more down-to-earth events: how children power playground swings and how cats land on their feet, for example.

The truth is that the basics of physics, despite involving simple mathematics, are harder to grasp than we tend to think. It took me two years, for instance, to master just the science of spin and rotation for my latest book. We need to spend more time thinking about, visualising and demonstrating basic physical concepts. If we do, we could produce a generation of physicists who can communicate better with everyone else and discover more about the world around us. The answers are probably already there, hidden in the equations.

The Science of Spin by Roland Ennos is out now.


*Credit for article given to Roland Ennos*


Mathematicians Shocked to Find Pattern in ‘Random’ Prime Numbers

Mathematicians are stunned by the discovery that prime numbers are pickier than previously thought. The find suggests number theorists need to be a little more careful when exploring the vast infinity of primes.

Primes, the numbers divisible only by themselves and 1, are the building blocks from which the rest of the number line is constructed, as all other numbers are created by multiplying primes together. That makes deciphering their mysteries key to understanding the fundamentals of arithmetic.

Although whether a number is prime or not is pre-determined, mathematicians don’t have a way to predict which numbers are prime, and so tend to treat them as if they occur randomly. Now Kannan Soundararajan and Robert Lemke Oliver of Stanford University in California have discovered that isn’t quite right.

“It was very weird,” says Soundararajan. “It’s like some painting you are very familiar with, and then suddenly you realise there is a figure in the painting you’ve never seen before.”

Surprising order

So just what has got mathematicians spooked? Apart from 2 and 5, all prime numbers end in 1, 3, 7 or 9 – they have to, else they would be divisible by 2 or 5 – and each of the four endings is equally likely. But while searching through the primes, the pair noticed that primes ending in 1 were less likely to be followed by another prime ending in 1. That shouldn’t happen if the primes were truly random – consecutive primes shouldn’t care about their neighbour’s digits.

“In ignorance, we thought things would be roughly equal,” says Andrew Granville of the University of Montreal, Canada. “One certainly believed that in a question like this we had a very strong understanding of what was going on.”

The pair found that in the first hundred million primes, a prime ending in 1 is followed by another ending in 1 just 18.5 per cent of the time. If the primes were distributed randomly, you’d expect to see two 1s next to each other 25 per cent of the time. Primes ending in 3 and 7 take up the slack, each following a 1 in 30 per cent of primes, while a 9 follows a 1 in around 22 per cent of occurrences.
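The bias is easy to reproduce. The sketch below is a rough Python illustration, not the authors’ code; the one-million cutoff is an arbitrary choice made so it runs quickly. It sieves the primes and tallies the last digits of consecutive pairs:

```python
# Rough sketch: tally last-digit transitions between consecutive primes.
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i, flag in enumerate(sieve) if flag]

def last_digit_pairs(limit):
    """Count (last digit, next prime's last digit) pairs, skipping 2 and 5."""
    ps = [p for p in primes_up_to(limit) if p not in (2, 5)]
    counts = {}
    for a, b in zip(ps, ps[1:]):
        key = (a % 10, b % 10)
        counts[key] = counts.get(key, 0) + 1
    return counts

counts = last_digit_pairs(1_000_000)
from_1 = sum(v for (d, _), v in counts.items() if d == 1)
frac_11 = counts[(1, 1)] / from_1
# Noticeably below the 0.25 that truly random last digits would give.
print(f"share of 1 -> 1 among pairs starting with 1: {frac_11:.3f}")
```

Running the same tally for the other digit pairs shows the matching surpluses for 3 and 7 following a 1.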

Similar patterns showed up for the other combinations of endings, all deviating from the expected random values. The pair also found them in other bases, where numbers are counted in units other than 10s. That means the patterns aren’t a result of our base-10 numbering system, but something inherent to the primes themselves. The patterns become more in line with randomness as you count higher – the pair have checked up to a few trillion – but still persist.

“I was very surprised,” says James Maynard of the University of Oxford, UK, who on hearing of the work immediately performed his own calculations to check the pattern was there. “I somehow needed to see it for myself to really believe it.”

Stretching to infinity

Thankfully, Soundararajan and Lemke Oliver think they have an explanation. Much of the modern research into primes is underpinned by the work of G H Hardy and John Littlewood, two mathematicians who worked together at the University of Cambridge in the early 20th century. They came up with a way to estimate how often pairs, triples and larger groupings of primes will appear, known as the k-tuple conjecture.

Just as Einstein’s theory of relativity is an advance on Newton’s theory of gravity, the Hardy-Littlewood conjecture is essentially a more complicated version of the assumption that primes are random – and this latest find demonstrates how the two assumptions differ. “Mathematicians go around assuming primes are random, and 99 per cent of the time this is correct, but you need to remember the 1 per cent of the time it isn’t,” says Maynard.

The pair used Hardy and Littlewood’s work to show that the groupings given by the conjecture are responsible for introducing this last-digit pattern, as they place restrictions on where the last digit of each prime can fall. What’s more, as the primes stretch to infinity, they do eventually shake off the pattern and give the random distribution mathematicians are used to expecting.

“Our initial thought was if there was an explanation to be found, we have to find it using the k-tuple conjecture,” says Soundararajan. “We felt that we would be able to understand it, but it was a real puzzle to figure out.”

The k-tuple conjecture is yet to be proven, but mathematicians strongly suspect it is correct because it is so useful in predicting the behaviour of the primes. “It is the most accurate conjecture we have, it passes every single test with flying colours,” says Maynard. “If anything I view this result as even more confirmation of the k-tuple conjecture.”

Although the new result won’t have any immediate applications to long-standing problems about primes like the twin-prime conjecture or the Riemann hypothesis, it has given the field a bit of a shake-up. “It gives us more of an understanding, every little bit helps,” says Granville. “If what you take for granted is wrong, that makes you rethink some other things you know.”

*Credit for article given to Jacob Aron*


The Mathematician Who Worked Out How To Time Travel

Mathematics suggested that time travel is physically possible – and Kurt Gödel proved it. Mathematician Karl Sigmund explains how the polymath did it.

The following is an extract from our Lost in Space-Time newsletter. Each month, we hand over the keyboard to a physicist or mathematician to tell you about fascinating ideas from their corner of the universe. You can sign up for Lost in Space-Time for free here.

There may be no better way to get truly lost in space-time than to travel to the past and fiddle around with causality. Polymath Kurt Gödel suggested that you could, for instance, land near your younger self and “do something” to that person. If your action was drastic enough, like murder (or is it suicide?), then you could neither have embarked on your time trip, nor perpetrated the dark deed. But then no one would have stopped you from going back in time and so you can commit your crime after all. You are lost in a loop. It’s no longer where you are, but whether you are.

Gödel was the first to prove that, according to general relativity, this sort of time travel can be done. While it seems logically impossible, the equations say it is physically possible. How can that actually be the case?

Widely hailed as “the greatest logician since Aristotle”, Gödel is mainly known for his mathematical and philosophical work. By age 25, while at the University of Vienna, he developed his notorious incompleteness theorems. These basically say that there is no finite set of assumptions that can underpin all of mathematics. This was quickly perceived as a turning point in the subject.

In 1934, Gödel, now 28, was among the first to be invited to the newly founded Institute for Advanced Study in Princeton, New Jersey. During the following years, he commuted between Princeton and Vienna.

After a traumatic journey around a war-torn globe, Gödel settled in Princeton for good in 1940. This is when his friendship with Albert Einstein developed. Their daily walks became legendary. Einstein quipped: “I come to my office just for the privilege to escort Gödel back home.”  The two strollers seemed eerily out of their time. The atomic bomb was built without Einstein, and the computer without Gödel.

When Einstein’s 70th birthday approached, Gödel was asked to contribute to the impending Festschrift a philosophical chapter on German philosopher Immanuel Kant and relativity – a well-grazed field. To his mother, he wrote: “I was asked to write a paper for a volume on the philosophical meaning of Einstein and his theory; of course, I could not very well refuse.”

Gödel began to reflect on Kant’s view that time was not, as Newton would have it, an absolute, objective part of the world, but an a priori form of intuition constraining our cognition. As Kant said: “What we represent ourselves as changes would, in beings with other forms of cognition, give rise to a perception in which… change would not occur at all.” Such “beings” would experience the world as timeless.

In his special relativity, Einstein had famously shown that different observers can have different notions of “now”. Hence, no absolute time. (“Newton, forgive me!” sighed Einstein.) However, this theory does not include gravitation. Add mass, and a kind of absolute time seems to sneak back! At least, it does so in the standard model of cosmology. There, the overall flow of matter works as a universal clock. Space-time is sliced in an infinity of layers, each representing a “now”, one succeeding another. Is this a necessary feature of general relativity? Gödel had found a mathematical kernel in a philosophical problem. That was his trademark.

At this stage, according to cosmologist Wolfgang Rindler, serendipity stepped in: Gödel stumbled across a letter to the journal Nature by physicist George Gamow, entitled “Rotating universe?”. It points out that apparently most objects in the sky spin like tops. Stars do it, planets do it, even spiral galaxies do it. They rotate. But why?

Gamow suggested that the whole universe rotates, and that this rotation trickles down, so to speak, to smaller and smaller structures: from universe to galaxies, from galaxies to stars, from stars to planets. The idea was ingenious, but extremely vague. No equations, no measurements. However, the paper ended with a friendly nudge for someone to start calculating.

With typical thoroughness, Gödel took up the gauntlet. He had always been a hard worker, who used an alarm clock not for waking up but for going to bed. He confided to his mother that his cosmology absorbed him so much that even when he tried to listen to the radio or to movies, he could do so “only with half an ear”. Eventually, Gödel discovered exact solutions of Einstein’s equations, which described a rotating universe.

However, while Gamow had imagined that the centre of rotation of our world is somewhere far away, beyond the reach of the strongest telescopes, Gödel’s universe rotates at every point. This does not solve Gamow’s quest for the cause of galactic rotations, but it yields another, amazing result. In contrast to all then-known cosmological models, Gödel’s findings showed that there is no “now” that’s valid everywhere. This was exactly what he had set out to achieve: to vindicate Kant (and Einstein) by showing that there is no absolute time.

“Talked a lot with Gödel,” wrote his friend Oskar Morgenstern, the economist who, together with John von Neumann, had founded game theory. He knew Gödel from former Viennese days and reported all their meetings in his diary. “His cosmological work makes good progress. Now one can travel into the past, or reach arbitrarily distant places in arbitrarily short time. This will cause a nice stir.” Time travel had been invented.

In Gödel’s universe, you don’t have to flip the arrow of time to go back to the past. Your time runs as usual. No need to shift entropy in return gear. You just step into a rocket and take off, to fly in a very wide curve (very wide!) at a very high speed (but less than the speed of light). The rocket’s trajectory weaves between light cones, never leaving them but exploiting the fact that in a rotating universe, they are not arrayed in parallel. The trip would consume an awful amount of energy.

Gödel just managed to meet the editorial timeline. On his 70th birthday, Einstein got Gödel’s manuscript for a present (and a sweater knitted by Kurt’s wife Adele). He thanked him for the gifts and confessed that the spectre of time travel had worried him for decades. Now the spectre had materialised. Einstein declared Gödel’s paper “one of the most important since my own”, and stated his hope that time travel could be excluded by some as yet unknown physical law. Soon after, Gödel received the first Albert Einstein award. It went with a modest amount of money which Gödel, as it turned out, could use well.

Next, according to philosopher Palle Yourgrau, “something extraordinary happened: nothing”.

For several decades, the mind-bending discovery of Gödel, far from causing “a nice stir”, got very little attention. When Harry Woolf, the director of the Institute for Advanced Study, arranged the eulogies to be given at Gödel’s funeral in 1978, he listed the topics to be covered: set theory and logic, followed by relativity, which he noted was “not worth a talk”.

Only by and by did eminent cosmologists, such as Stephen Hawking, Kip Thorne or John Barrow, convey an aura of respectability to the field. Today, it is mainstream. With time, it transpired that, years before Gödel’s breakthrough, several other cosmological models had exhibited both rotation and the possibility of time travel. However, this aspect had never been noticed, not even by the engineers of these universes.

Many physicists are happy to leave the paradoxical aspects of time travel to philosophers. They invoke a “chronology protection law” that would step in to prevent the worst. It sounds like whistling in the dark but helps to overcome the problem of haunting your own present as a revenant from the future.

And does our universe rotate? Gödel was equivocal on that issue. Sometimes he claimed that his model only served as a thought experiment, to display the illusionary character of time, which cannot depend on accidental features of the place we happen to inhabit. Physicist Freeman Dyson, however, reported that Gödel, near the end of his life, had shown dismay when told that evidence for a rotating universe is lacking.

*Credit for article given to Karl Sigmund*


Win $12k By Rediscovering The Secret Phrases That Secure The Internet

Five secret phrases used to create the encryption algorithms that secure everything from online banking to email have been lost to history – but now cryptographers are offering a bounty to rediscover them.

Could you solve a cryptography mystery?

Secret phrases that lie at the heart of modern data encryption standards were accidentally forgotten decades ago – but now cryptographers are offering a cash bounty for anyone who can figure them out. While this won’t allow anyone to break these encryption methods, it could solve a long-standing puzzle in the history of cryptography.

“This thing is used everywhere, and it’s an interesting question; what’s the full story? Where did they come from?” says cryptographer Filippo Valsorda. “Let’s help the trust in this important tool of cryptography, and let’s fill out this page of history that got torn off.”

The tool in question is a set of widely used encryption algorithms that rely on mathematical objects called elliptic curves. In theory, any of an infinite number of curves can be used in the algorithms, but in the late 1990s the US National Security Agency (NSA), which is devoted to protecting domestic communications and cracking foreign transmissions, chose five specific curves it recommended for use. These were then included in official US encryption standards laid down in 2000, which are still used worldwide today.

Exactly why the NSA chose these particular curves is unclear, with the agency saying only that they were chosen at random. This led some people to believe that the NSA had secretly selected curves that were weak in some way, allowing the agency to crack them. Although there is no evidence that the elliptic curves in use today have been cracked, the story persists.

In the intervening years, it has been confirmed that the curves were chosen by an NSA cryptographer named Jerry Solinas, who died earlier this year. Anonymous sources have suggested that Solinas chose the curves by transforming English phrases into strings of numbers, or hashes, that served as parameters in the curves.

It is thought the phrases were along the lines of “Jerry deserves a raise”. But rumours suggest Solinas’s computer was replaced shortly after he made the choice and, with no record kept, he couldn’t work out which specific phrases had produced the hashes used in the curves. Turning a phrase into a hash is a one-way process, meaning that recovering the phrases was impossible with the computing power available at the time.

Dustin Moody at the US National Institute of Standards and Technology, which sets US encryption standards, confirmed the stories to New Scientist: “I asked Jerry Solinas once, and he said he didn’t remember what they were. Jerry did seem to wish he remembered, as he could tell it would be useful for people to know exactly how the generation had gone. I think that when they were created, nobody [thought] that the provenance was a big deal.”

Now, Valsorda and other backers have offered a $12,288 bounty for cracking these five hashes – which will be tripled if the recipient chooses to donate it to charity. Half of the sum will go to the person who finds the first seed phrase, and the other half to whoever can find the remaining four.

Valsorda says that finding the hashes won’t weaken elliptic curve cryptography – because it is the nature of the curves that protects data, not the mathematical description of those curves – but that doing so will “help fill in a page of cryptographic history”. He believes that nobody in the 1990s considered that the phrases would be of interest in the future, and that the NSA couldn’t have released them anyway once they discovered that they were jokey phrases about one of their staff wanting a raise.

There are two main ways someone could claim the prize. The first is brute force – simply trying vast numbers of possible seeds, and checking the values created by hashing them against the known curves, which is more feasible than in the 1990s because of advances in computing power. 
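As a toy illustration of the brute-force route, the Python sketch below hashes candidate phrases and compares the digests against a target. The target here is built from a made-up phrase, so the “search” is guaranteed to succeed; a real attempt would compare digests of candidates against the seeds published in the standard, over an astronomically larger candidate space.

```python
# Toy illustration of the brute-force approach: hash candidate phrases
# and compare each digest against a known target. The target below is
# derived from a hypothetical phrase, purely for demonstration.
import hashlib

target = hashlib.sha1(b"Jerry deserves a raise.").hexdigest()  # hypothetical

candidates = [
    "Jerry deserves a raise",
    "Give Jerry a raise",
    "Jerry deserves a raise.",
]

# Return the first candidate whose hash matches the target, if any.
match = next(
    (p for p in candidates
     if hashlib.sha1(p.encode()).hexdigest() == target),
    None,
)
print(match)
```

The real difficulty is entirely in the size of the candidate list: with no record of the phrasing, punctuation or wording, the space of plausible English sentences to try is enormous.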

But Valsorda says someone may already have the phrases written down. “Some of the people who did this work, or were in the same office as the people who did this work, probably are still around and remember some details,” he says. “The people who are involved in history sometimes don’t realise the importance of what they remember. But I’m not actually suggesting anybody, like, goes stalking NSA analysts.”

Keith Martin at Royal Holloway, University of London, says that the NSA itself would be best-equipped to crack the problem, but probably has other priorities, and anybody else will struggle to find the resources.

“I would be surprised if they’re successful,” he says. “But on the other hand, I can’t say for sure what hardware is out there and what hardware will be devoted to this problem. If someone does find the [phrases], what would be really interesting is how did they do it, rather than that they’ve done it.”

*Credit for article given to Matthew Sparkes*


Mathematicians Calculate 42-Digit Number After Decades Of Trying

Dedekind numbers describe the number of ways sets of logical operations can be combined, and are fiendishly difficult to calculate, with only eight known since 1991 – and now mathematicians have calculated the ninth in the series.

The ninth Dedekind number was calculated using the Noctua 2 supercomputer at Paderborn University in Germany

A 42-digit-long number that mathematicians have been hunting for decades, owing to the sheer difficulty of calculating it, has suddenly been found by two separate groups at the same time. This ninth Dedekind number, as it is known, may be the last in the sequence that is feasible to discover.

Dedekind numbers describe the number of ways a set of logical operations can be combined. For sets of just two or three elements, the total number is easy to calculate by hand, but for larger sets it rapidly becomes impossible because the number grows so quickly, at what is known as a double exponential speed.

“You’ve got two to the power two to the power n, as a very rough estimate of the complexity of this system,” says Patrick de Causmaecker at KU Leuven in Belgium. “If you want to find the Dedekind numbers, that is the kind of magnitude of counting that you will have to face.”
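To make that double-exponential growth concrete, here is a minimal Python sketch (not from either team’s work) that counts monotone Boolean functions of n variables – one standard way of defining the Dedekind numbers – by brute force over all 2^(2^n) candidate functions:

```python
from itertools import product

def dedekind(n):
    """Count monotone Boolean functions of n variables by brute force.

    There are 2 ** (2 ** n) candidate functions, so this is only
    feasible for tiny n - exactly the growth described above.
    """
    points = range(2 ** n)  # inputs encoded as bitmasks
    count = 0
    for values in product((0, 1), repeat=2 ** n):
        # Monotone: if x's bits are a subset of y's, then f(x) <= f(y).
        if all(values[x] <= values[y]
               for x in points for y in points if x & y == x):
            count += 1
    return count

print([dedekind(n) for n in range(4)])  # [2, 3, 6, 20]
```

The known values run 2, 3, 6, 20, 168, … – past the first few, brute force is hopeless, which is why both teams instead built their calculations on top of smaller Dedekind numbers.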

The challenge of calculating higher Dedekind numbers has attracted researchers in many disciplines, from pure mathematicians to computer scientists, over the years. “It’s an old, famous problem and, because it’s hard to crack, it’s interesting,” says Christian Jäkel at Dresden University of Technology in Germany.

In 1991, mathematician Doug Wiedemann found the eighth Dedekind number using 200 hours of number crunching on the Cray-2 supercomputer, one of the most powerful machines at the time. No one could do any better, until now.

After working on the problem on and off for six years, Jäkel published his calculation for the ninth Dedekind number in early April. Coincidentally, de Causmaecker and Lennart van Hirtum, also at KU Leuven, published their work three days later, having produced the same result. Both groups were unaware of one another. “I was shocked, I didn’t know about their work. I thought it would take at least 10 years or whatever to recompute it,” says Jäkel.

The resulting number is 286,386,577,668,298,411,128,469,151,667,598,498,812,366, which is 42 digits long.

Jäkel’s calculation took 28 days on eight graphical processing units (GPUs). To reduce the number of calculations required, he multiplied together elements from the much smaller fifth Dedekind number.

Causmaecker and van Hirtum instead used a processor called a field-programmable gate array (FPGA) for their work. Unlike a CPU or a GPU, these can perform many different kinds of interrelated calculations at the same time. “In an FPGA, everything is always happening all at once,” says van Hirtum. “You can compare it to a car assembly line.”

Like Jäkel, the team used elements from a smaller Dedekind number, in their case the sixth, but this still required 5.5 quadrillion operations and more than four months of computing time using the Noctua 2 supercomputer at Paderborn University, says van Hirtum.

People are divided on whether another Dedekind number will ever be found. “The tenth Dedekind number will be in the realm of 10 to the power of 82, which puts you at the number of atoms in the visible universe, so you can imagine you need something big in technical advancement that also grows exponentially,” says Jäkel.

Van Hirtum also thinks the amount of computing power needed makes the next number impractical: it would take trillions of times more computations, demanding something like the power output of the entire sun. “This jump in complexity remains absolutely astronomical,” he says.

De Causmaecker, however, is more optimistic, as he thinks new ways of calculating could bring that requirement down. “The combination of exponential growth of computing power, and the power of the mathematical algorithms, will go together and maybe in 20 or 30 years we can compute [Dedekind number] 10.”

*Credit for article given to Alex Wilkins*