Dividing Polynomials – The Grid Method

Students generally learn to divide polynomials using long division or synthetic division. This post is about another method for dividing polynomials: the “grid” method. Although this method is used at some schools, I have not found it in any textbook (if you know of one, please let me know).

The grid method makes polynomial division a fun calculation – an almost SUDOKU-like process. The recreational challenge here is to master the method, and to convince yourself that it actually works in all cases (which cases does it work well for, and which are more difficult?).

Before tackling polynomial grid division, you need to be familiar with polynomial grid multiplication. It is the symbolic analog to the familiar “algebra tiles” manipulative, so if you have worked with these, it should be reasonably familiar.

Polynomial Grid Multiplication

In polynomial grid multiplication, the two factors are arranged on the edges of a grid – the number of terms in each factor determines the number of horizontal and vertical lines that make up the grid.

The overall product is found by filling in the cells of the grid with the product of the terms for the row and column, and then summing up all the contents of the interior of the grid.

What polynomial grid multiplication does for us is provide an explicit way to keep track of the distributive property: each term-by-term product gets its own cell.

In the example below, the two factors are placed along the edges of the grid (1). One factor provides the row headings, the other provides the column headings. Then each cell is filled in with the product of the terms from the row and column (2). Finally, the cells are added together (like terms are combined) to find the final product (3). If the terms of the factors are placed in the same order (descending powers of x, in our example) and there are no missing terms, then like terms of the product are found along the diagonals of the grid.
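
To make the bookkeeping concrete, here is a minimal Python sketch (my own illustration, not part of the original post) that builds the multiplication grid from two factors given as coefficient lists and sums the diagonals to combine like terms:

```python
def grid_multiply(f, g):
    """Multiply two polynomials given as coefficient lists.

    f and g list coefficients from the highest power down,
    e.g. 3x + 2 -> [3, 2]. Each grid cell holds the product of one
    term of f and one term of g; cells with the same i + j lie on
    the same diagonal, so they are like terms.
    """
    grid = [[a * b for b in g] for a in f]
    product = [0] * (len(f) + len(g) - 1)
    for i, row in enumerate(grid):
        for j, cell in enumerate(row):
            product[i + j] += cell  # sum each diagonal
    return grid, product

# (3x + 2)(2x + 1) = 6x^2 + 7x + 2
grid, product = grid_multiply([3, 2], [2, 1])
print(grid)     # [[6, 3], [4, 2]]
print(product)  # [6, 7, 2]
```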

Polynomial Grid Division

Polynomial grid division works the same way as polynomial grid multiplication, but in reverse – we start by knowing one of the factors (placed along the edge of the grid), and by knowing what we want the product to be (without knowing exactly how it is ‘split up’ in the grid). Using this knowledge we work backwards, filling in the grid and the top edge one cell at a time until we are done.

Consider the example below. In (1) we create an empty grid with the denominator (divisor) playing the role of one of the factors. Since this question involves a degree 3 polynomial divided by a degree 1 polynomial, we know that the other factor (the quotient) must be degree 2. This allows us to create a grid with the correct size. In step (2) we use the highest power of the dividend (the numerator) to begin to fill in the grid – we know that 27x^3 must be in the top left. This in turn tells us that the first column entry must be 9x^2 (so that the row and column multiply to 27x^3). In step (3) we use this to fill in all of the first column, multiplying 9x^2 by the terms of the row entries.

In step (3) we now have a quadratic term -18x^2. But we know from looking at the dividend (numerator) that in the final answer we actually want 9x^2. Consequently, the other entry on the quadratic diagonal must be 27x^2, so that the whole diagonal sums to 9x^2. Filling this in for step (4) tells us what all the entries in the second column should be (step 5). Now that we have a linear entry -18x, we know that we need to add in a 15x so that the overall sum gives -3x (step 6).

Having a 15x tells us that the top entry must be 5 (the product of 5 and 3x gives us 15x). Filling this in at step (7) allows us to complete the table, and we see that our final constant entry is -10, as hoped for. Now that the grid has been filled in and it matches the dividend, we can read the answer off the top edge – the factor that we have uncovered is the quotient we were hoping to calculate.
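
The whole column-by-column procedure can also be written down compactly. Here is a Python sketch (my own illustration; reconstructing from the steps described, the example divides 27x^3 + 9x^2 - 3x - 10 by 3x - 2):

```python
def grid_divide(dividend, divisor):
    """Divide polynomials given as coefficient lists (highest power first).

    Mirrors the grid method: each new top-edge entry is chosen so that
    the grid's next diagonal sums to the matching dividend coefficient.
    Returns (quotient, remainder), with deg(remainder) < deg(divisor).
    """
    work = list(dividend)                  # running diagonal sums
    deg_q = len(dividend) - len(divisor)   # degree of the quotient
    quotient = []
    for i in range(deg_q + 1):
        q = work[i] / divisor[0]           # top-edge entry for this column
        quotient.append(q)
        for j, d in enumerate(divisor):    # fill in this column of the grid
            work[i + j] -= q * d
    return quotient, work[deg_q + 1:]

# (27x^3 + 9x^2 - 3x - 10) / (3x - 2) = 9x^2 + 9x + 5
print(grid_divide([27, 9, -3, -10], [3, -2]))  # ([9.0, 9.0, 5.0], [0.0])
```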

This method is actually easier than it seems at first, and when all the steps are carried out on the same grid, it is quite compact.

Here is another example for you to try it out on.

In these two examples, the division worked out perfectly – there was no remainder. In general, when we divide f/g, where g is not a factor of f and the degree of g is less than the degree of f, there is a polynomial remainder whose degree is strictly less than that of g. So, for example, when g is a linear function (degree 1), f/g can have a constant remainder (degree 0).

In this case we proceed as above, attempting to fill in the grid so that it sums to the numerator (dividend). However, if, when we are done, the grid does not match the numerator, we have a remainder. The remainder is the additional amount that we have to add to the grid in order to arrive at the numerator.

Consider a division question almost identical to the first one that we looked at, except here we change the numerator slightly so that it doesn’t factor well.

Following the same steps as before, we end up with a grid sum that does not match our desired answer: we have -10 instead of -9 for the final constant term. This tells us that we have a remainder of +1, that we choose to write next to the grid. In our final answer, the remainder tells us the “remaining fractional part” that we have to add at the end.
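
Running the grid_divide sketch from earlier on this modified example reproduces that remainder:

```python
# (27x^3 + 9x^2 - 3x - 9) / (3x - 2): quotient 9x^2 + 9x + 5, remainder +1
print(grid_divide([27, 9, -3, -9], [3, -2]))  # ([9.0, 9.0, 5.0], [1.0])
```

In other words, (27x^3 + 9x^2 - 3x - 9)/(3x - 2) = 9x^2 + 9x + 5 plus the fractional part 1/(3x - 2).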

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*


These 7 Mathematical Facts Will Blow Your Mind

What better way is there to celebrate pi day than with a slice of mathematics? Here are 7 mathematical facts to enjoy.

There’s a mathematical trick to get out of any maze

It will soon be 14 March and that means pi day. At New Scientist, we like to mark this annual celebration of the great mathematical constant by remembering some of our favourite recent stories from the world of mathematics. We have extracted a list of surprising facts from them to whet your appetite, but for the full pi day feast, click through to the entire articles. These are normally only available to subscribers, but to honour the world’s circumferences and diameters we have decided to make them free for a limited time.

The world’s best kitchen tile

There is a shape called “the hat” that can completely cover a surface without ever creating a repeating pattern. For decades, mathematicians had wondered whether a single tile existed that could do such a thing. Roger Penrose discovered pairs of tiles in the 1970s that could do the job but nobody could find a single tile that when laid out would have the same effect. That changed when the hat was discovered last year.

Why you’re so unique

You are one in a million. Or really, it should be 1 in 10^10^68. This number, dubbed the doppelgängion by mathematician Antonio Padilla, is so large it is hard to wrap your head around. It is 1 followed by 100 million trillion trillion trillion trillion trillion zeroes and relates to the chances of finding an exact you somewhere else in the universe. Imagining a number of that size is so difficult that the quantum physics required to calculate it seems almost easy in comparison. There are only a finite number of quantum states that can exist in a you-sized portion of space. You reach the doppelgängion by adding them all up. Padilla also wrote about four other mind-blowing numbers for New Scientist. Here they all are.

An amazing trick

There is a simple mathematical trick that will get you out of any maze: always turn right. No matter how complicated the maze, how many twists, turns and dead ends there are, the method always works. Now you know the trick, can you work out why it always leads to success?

And the next number is

There is a sequence of numbers so difficult to calculate that mathematicians have only just found the ninth in the series and it may be impossible to calculate the tenth. These numbers are called Dedekind numbers after mathematician Richard Dedekind and describe the number of possible ways a set of logical operations can be combined. When the set contains just a few elements, calculating the corresponding Dedekind number is relatively straightforward, but as the number of elements increases, the Dedekind number grows at “double exponential speed”. Number nine in the series is 42 digits long and took a month of calculation to find.

Can’t see the forest for the TREE(3)

There is a number so big that it can’t fit in the universe. TREE(3) comes from a simple mathematical game. The game involves generating a forest of trees using different combinations of seeds according to a few simple rules. If you have one type of seed, the largest forest allowed can have one tree. For two types of seed, the largest forest is three trees. But for three types of seed, well, the largest forest has TREE(3) trees, a number that is just too big for the universe.

The language of the universe

There is a system of eight-dimensional numbers called octonions that physicists are trying to use to mathematically describe the universe. The best way to understand octonions is first to consider taking the square root of -1. There is no such number that is the result of that calculation among the real numbers (which includes all the counting numbers, fractions, numbers like pi, etc.), so mathematicians add another called i. When combined with the real numbers, this gives a system called the complex numbers, which consist of a real part and an “imaginary part”, such as 3+7i. In other words, it is two-dimensional. Octonions arise by continuing to build up the system until you get to eight dimensions. It isn’t just mathematical fun and games though – there is reason to believe that octonions may be the number system we need to understand the laws of nature.
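
The doubling step described here – reals to complex numbers to quaternions to octonions – is known as the Cayley-Dickson construction. Below is a short Python sketch of it (my own illustration; sign conventions for the product vary, and this is one common choice). A complex number is modelled as a pair of reals, a quaternion as a pair of complex numbers, and an octonion as a pair of quaternions:

```python
def neg(x):
    return tuple(neg(c) for c in x) if isinstance(x, tuple) else -x

def conj(x):
    """Conjugation: negate the 'imaginary' half at every level."""
    if isinstance(x, tuple):
        a, b = x
        return (conj(a), neg(b))
    return x

def add(x, y):
    return tuple(add(a, b) for a, b in zip(x, y)) if isinstance(x, tuple) else x + y

def mul(x, y):
    """Cayley-Dickson product: (a, b)(c, d) = (ac - d*b, da + bc*)."""
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

# Quaternions are pairs of complex numbers; octonions are one more pairing.
i = ((0.0, 1.0), (0.0, 0.0))   # the complex i, viewed as a quaternion
j = ((0.0, 0.0), (1.0, 0.0))
print(mul(i, j))               # ((0.0, 0.0), (0.0, 1.0)), i.e. k
```

One more pairing gives eight real components – the octonions – at the price of losing associativity.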

So many new solutions

Mathematicians went looking for solutions to the three-body problem and found 12,000 of them. The three-body problem is a classic astronomy problem of how three objects can form a stable orbit around each other. Such an arrangement is described by Isaac Newton’s laws of motion but actually finding permissible solutions is incredibly difficult. In 2017, mathematicians managed to find 1223 new solutions to the problem but this was greatly surpassed last year when a team found more than 12,000 more.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Timothy Revell*


Polygonal Number Formulas

Polygonal numbers are a mainstay of recreational and school mathematics, providing a nice bridge between numbers and shapes. The diagrams above show some of the hexagonal numbers.

Some examples of two-dimensional polygonal numbers are:

the triangular numbers: 1, 3, 6, 10, 15, …
the square numbers: 1, 4, 9, 16, 25, …
the pentagonal numbers: 1, 5, 12, 22, 35,…
the hexagonal numbers: 1, 6, 15, 28, 45, …

Comparing the listing for the hexagonal numbers with the diagrams above, you can see how the sequences are built diagrammatically. In general, beginning with a single dot, k-sided polygons are built by adding layers (called gnomons) consisting of k-2 segments, with each segment of the gnomon having one more dot than the segments of the previous layer. In this way, the nth gnomon consists of segments each n dots long, but with k-3 dots shared by adjoining segments (the corners).

The description above can lead you to a recursive formula for k-polygonals, writing p_k,n for the nth k-polygonal number (each new gnomon contributes (k-2)(n-1) + 1 dots):

p_k,1 = 1,    p_k,n = p_k,n-1 + (k-2)(n-1) + 1

Unwinding the recursion gives you a summation formula for k-polygonals:

p_k,n = Σ_{i=1..n} [(k-2)(i-1) + 1]

Knowing a little about sums (1 + 2 + … + (n-1) = n(n-1)/2) gives you the direct formula for k-polygonals:

p_k,n = ((k-2)n^2 - (k-4)n) / 2

Coming a little out of left-field is this combinatorial formula for k-polygonals, where C(n,r) denotes a binomial coefficient:

p_k,n = C(n,1) + (k-2)·C(n,2)

This last formula expresses two ideas: that the triangular numbers T_n = C(n+1,2) form a column of Pascal’s triangle, and that every polygonal number can be “triangulated” as p_k,n = T_n + (k-3)·T_{n-1}.

The combinatorial formula for p_k,n can be generalized to higher-dimensional polygonal numbers (pyramidal numbers, etc.).

The recreation here lies in showing that the various formulas for p_k,n are really the same, and then exploring the relationships between the different k-polygonals. A great resource is J.H. Conway and R.K. Guy’s The Book of Numbers.
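
If you want to check that the formulas above really do agree, a few lines of Python will do it (my own sketch, using the formulas as reconstructed above):

```python
from math import comb

def p_direct(k, n):
    """Direct formula: p_k,n = ((k-2)n^2 - (k-4)n) / 2."""
    return ((k - 2) * n * n - (k - 4) * n) // 2

def p_recursive(k, n):
    """Recursive formula: the nth gnomon adds (k-2)(n-1) + 1 dots."""
    return 1 if n == 1 else p_recursive(k, n - 1) + (k - 2) * (n - 1) + 1

def p_binomial(k, n):
    """Combinatorial formula: C(n,1) + (k-2)C(n,2)."""
    return comb(n, 1) + (k - 2) * comb(n, 2)

for k in (3, 4, 5, 6):
    row = [p_direct(k, n) for n in range(1, 6)]
    assert row == [p_recursive(k, n) for n in range(1, 6)]
    assert row == [p_binomial(k, n) for n in range(1, 6)]
    print(k, row)  # k=6 gives [1, 6, 15, 28, 45], the hexagonal numbers
```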

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*


How The History of Maths is Much More Diverse Than You Think

There is more to where maths came from than the ancient Greeks. From calculus to the theorem we credit to Pythagoras, so much of our knowledge comes from other places, including ancient China, India and the Arabian peninsula, says Kate Kitagawa.

The history of mathematics has an image problem. It is often presented as a meeting of minds among ancient Greeks who became masters of logic. Pythagoras, Euclid and their pals honed the tools for proving theorems and that led them to the biggest results of ancient times. Eventually, other European greats like Leonhard Euler and Isaac Newton came along and made maths modern, which is how we got to where we are today.

But, of course, this telling is greatly distorted. The history of maths is far richer, more chaotic and more diverse than it is given credit for. So much of what is now incorporated into our global knowledge comes from other places, including ancient China, India and the Arabian peninsula.

Take “Pythagoras’s” theorem. This is the one that says that in right-angled triangles, the square of the longest side is the sum of the squares of the other two sides. The ancient Greeks certainly knew about this theorem, but so too did mathematicians in ancient Babylonia, Egypt, India and China.

In fact, in the 3rd century AD, Chinese mathematician Liu Hui added a proof of the theorem to the already old and influential book The Nine Chapters on the Mathematical Art. His version includes the earliest written statement of the theorem that we know of. So perhaps we should really call it Liu’s theorem or the gougu theorem as it was known in China.

The history of maths is filled with tales like this. Ideas have sprung up in multiple places at multiple times, leaving room for interpretation as to who should get the credit. As if credit is something that can’t be split.

As a researcher on the history of maths, I had come across examples of distorted views, but it was only when working on a new book, The Secret Lives of Numbers, that I found out just how pervasive they are. My co-author, New Scientist’s Timothy Revell, and I found that the further we dug, the more of the true history of maths there was to uncover.

Another example is the origins of calculus. This is often presented as a battle between Newton and Gottfried Wilhelm Leibniz, two great 17th-century European mathematicians. They both independently developed extensive theories of calculus, but missing from the story is how an incredible school in Kerala, India, led by the mathematician Mādhava, hit upon some of the same ideas 300 years before.

The idea that the European way of doing things is superior didn’t originate in maths – it came from centuries of Western imperialism – but it has infiltrated it. Maths outside ancient Greece has often been put to one side as “ethnomathematics”, as if it were a side story to the real history.

In some cases, history has also distorted legacies. Sophie Kowalevski, who was born in Moscow in 1850, is now a relatively well-known figure. She was a fantastic mathematician, known for tackling a problem she dubbed a “mathematical mermaid” for its allure. The challenge was to describe mathematically how a spinning top moves, and she made breakthroughs where others had faltered.

During her life, she was constantly discouraged from pursuing maths and often had to work for free, collecting tuition money from her students in order to survive. After her death, biographers then tainted her life, painting her as a femme fatale who relied on her looks, implying she effectively passed off others’ work as her own. There is next to no evidence this is true.

Thankfully, historians of mathematics are re-examining and correcting the biases and stereotypes that have plagued the field. This is an ongoing process, but by embracing its diverse and chaotic roots, the next chapters for maths could be the best yet.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Kate Kitagawa*


Why Science Relies Too Much On Mathematics

Mathematics is at the heart of modern science but we shouldn’t forget other ways to reason, says author and researcher Roland Ennos.

“Science is written in the language of mathematics,” proclaimed Galileo in 1623. And over the past few centuries science has become ever more mathematical. Nowadays, mathematics seems to hold total hegemony, particularly in the fields of quantum physics and relativity – the teaching of modern physics seems to involve deriving an endless series of equations.

But though it is an important tool, mathematical analysis is not the only way of approaching scientific enquiry. Scientists also need to develop concepts on which to build the mathematics and carry out experiments to test and demonstrate their ideas. And they also need to translate the equations back into physical concepts and verbal explanations to make them comprehensible. These other aspects have long been undervalued – in both the teaching and practice of physics – and this has damaged and is continuing to damage our understanding of the world around us.

Nowhere is this better exemplified than in the science of rotation and spin, which might at first glance appear to be a shining example of the triumph of mathematics. In his 1687 magnum opus Principia, Isaac Newton laid out the mathematical workings of our solar system: he showed how the laws of motion and gravity explain how the planets orbit around the sun, and how the spin of the earth causes it to bulge, drives the tides and makes its tilted axis slowly wobble. Over the next hundred years, Newton’s analysis was extended and translated into modern mathematical language. All the problems of cosmology appeared to have been solved, the first of many occasions when scientists have mistakenly thought they had uncovered all the secrets of the universe.

Yet Newton’s triumph was only made possible by his more down-to-earth contemporary Robert Hooke. It was Hooke who made the conceptual leap that an object moving in a circle is travelling at a constant speed but is also accelerating at right angles towards the centre of the circle. He also went on to show experimentally how a universal gravity could provide the force that causes the planets to orbit around the sun and the moon around Earth. He hung a large ball, representing Earth, from the ceiling and a small ball, representing the moon, from the large ball, before pulling them away from vertical and setting them moving. The tension in the ropes, representing gravity, provided the inward force that kept them travelling around in a circle.

Unfortunately, Newton, who came to dominate world science, had little time for such conceptual and experimental approaches, insisting that equations were the only way to describe physical reality. His influence impeded further conceptual advances in mechanics and consequently progress in cosmology. For instance, it delayed our understanding of how the solar system was created.

The accepted model – the nebular hypothesis – was put forward in the 18th century by such luminaries as the philosopher Immanuel Kant and the mathematician Pierre-Simon Laplace. The hypothesis proposed that the solar system formed from a spinning ball of dust and gas. Gravity flattened the ball into a disc before the attraction between the particles pulled them together into planets and moons, all orbiting in the same plane and in the same direction.

All seemed well until the 1850s when engineers such as William Rankine finally developed a new mechanical concept – the conservation of angular momentum – 150 years after the conservation of linear momentum had been accepted. This new concept revealed a potential flaw in the nebular hypothesis that had remained hidden in Newton’s equations. To have shrunk to its size and to spin so slowly, the sun must have lost almost all its angular momentum, something that seemed to break this new law of nature.

It was only 40 years ago that a convincing explanation was proposed for how the sun lost its angular momentum. The charged particles shot out by the sun in the solar wind are channelled within magnetic fields before being flung out, slowing the spin of the material that remained and allowing gravity to draw it inwards. It was only two years ago that this explanation was finally verified by the Parker Solar Probe, which found that the solar particles were channelled up to 32 million kilometres outwards before being released. And only in October 2023 did the James Webb Space Telescope reveal the same process occurring in the newly forming solar system of the star HH212.

The overreliance on mathematics also delayed our understanding of how the spin of Earth makes it habitable. By the end of the 18th century, Laplace had derived equations describing how Earth’s spin deflects bodies of water moving over its surface. However, even he failed to observe that it would also affect solid objects and gases, so his work was ignored by the early meteorologists.

This only changed in 1851, when the French physicist Jean Foucault produced a free-hanging pendulum that demonstrated Laplace’s forces in action. The forces diverted the bob to the right during each sweep so that its plane of swing gradually rotated, like a Spirograph drawing. Not only did this prove the spin of Earth to a sceptical public, but it showed schoolteacher William Ferrel that Laplace’s forces would also deflect air masses moving around Earth’s surface. This would explain how global air currents are deflected east and west to form the three convection cells that cover each hemisphere and create the world’s climate zones, and how they divert winds into rotating weather systems, creating depressions, hurricanes and anticyclones. Modern meteorology was born.

In 1835, the French engineer Gaspard-Gustave de Coriolis produced more general equations describing the forces on bodies moving within a rotating reference frame. However, since these were in a paper examining the efficiency of water wheels, his work was largely ignored by scientists. Instead, it was a simple experiment that enabled geophysicists to understand how Earth’s spin diverts fluid movements in its interior and produces its magnetic field.

In 1911, the British physicist G. I. Taylor investigated how beakers of water behave when they are set spinning. The water quickly spins with the beaker and its surface rises in a parabola until the extra pressure counters the centrifugal force on the water. What’s interesting is how the water behaves when it is disturbed. Its movement changes the centrifugal force on it, as Coriolis’s equations predicted, so that when heated from below, it moves not in huge convection currents but up and down in narrow rotating columns. This discovery led the geophysicists Walter Elsasser and Edward Bullard to realise that the same forces would deflect convection currents in Earth’s metal outer core that are driven by radioactive decay. They are diverted into north-to-south columns of rotating metal that act like self-excited dynamos, producing the magnetic field that shields Earth from charged particles. A simple laboratory demonstration had illuminated events in Earth’s core that had been hidden in Coriolis’s equations.

Today, perhaps the most damaging failure to translate the mathematics of spin into easy-to-grasp concepts is in the fields of biomechanics and sports science. Our bodies are complex systems of rotating joints, but despite the sophistication of modern motion analysis software, few researchers realise that accelerating our joints can produce torques that actively accelerate our limbs. Biomechanics researchers are only starting to realise that accelerating our bodies upwards at the start of each step swings our arms and legs when we walk, and that a sling action straightens them at the end of each step.

In the same way, when we throw things, we use a multi-stage sling action; rotating our shoulders accelerates first our upper arm, then our forearm and finally our hands. And the reason we can wield heavy sledgehammers and swing wooden clubs to smash golf balls down the fairway is that their handles act as further sling elements; they accelerate forwards due to the centrifugal forces on them without us having to flex our wrists. Failing to articulate these simple mechanical concepts has made biomechanics ill-equipped to communicate with and help physiotherapists, sports coaches and roboticists.

And there is still confusion about the simplest aspects of rotation among physicists. Even Richard Feynman, for instance, was unable to explain the so-called Dzhanibekov effect – why spinning wing nuts on the International Space Station flip every few seconds. This was despite the fact that the mathematician Leonhard Euler had shown this should happen almost 300 years ago. The same is also true of more down-to-earth events: how children power playground swings and how cats land on their feet, for example.

The truth is that the basics of physics, despite involving simple mathematics, are harder to grasp than we tend to think. It took me two years, for instance, to master just the science of spin and rotation for my latest book. We need to spend more time thinking about, visualising and demonstrating basic physical concepts. If we do, we could produce a generation of physicists who can communicate better with everyone else and discover more about the world around us. The answers are probably already there, hidden in the equations.

The Science of Spin by Roland Ennos is out now.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Roland Ennos*


Mathematicians Shocked to Find Pattern in ‘Random’ Prime Numbers

Mathematicians are stunned by the discovery that prime numbers are pickier than previously thought. The find suggests number theorists need to be a little more careful when exploring the vast infinity of primes.

Primes, the numbers divisible only by themselves and 1, are the building blocks from which the rest of the number line is constructed, as all other numbers are created by multiplying primes together. That makes deciphering their mysteries key to understanding the fundamentals of arithmetic.

Although whether a number is prime or not is pre-determined, mathematicians don’t have a way to predict which numbers are prime, and so tend to treat them as if they occur randomly. Now Kannan Soundararajan and Robert Lemke Oliver of Stanford University in California have discovered that isn’t quite right.

“It was very weird,” says Soundararajan. “It’s like some painting you are very familiar with, and then suddenly you realise there is a figure in the painting you’ve never seen before.”

Surprising order

So just what has got mathematicians spooked? Apart from 2 and 5, all prime numbers end in 1, 3, 7 or 9 – they have to, else they would be divisible by 2 or 5 – and each of the four endings is equally likely. But while searching through the primes, the pair noticed that primes ending in 1 were less likely to be followed by another prime ending in 1. That shouldn’t happen if the primes were truly random – consecutive primes shouldn’t care about their neighbour’s digits.

“In ignorance, we thought things would be roughly equal,” says Andrew Granville of the University of Montreal, Canada. “One certainly believed that in a question like this we had a very strong understanding of what was going on.”

The pair found that in the first hundred million primes, a prime ending in 1 is followed by another ending in 1 just 18.5 per cent of the time. If the primes were distributed randomly, you’d expect to see two 1s next to each other 25 per cent of the time. Primes ending in 3 and 7 take up the slack, each following a 1 in 30 per cent of primes, while a 9 follows a 1 in around 22 per cent of occurrences.

Similar patterns showed up for the other combinations of endings, all deviating from the expected random values. The pair also found them in other bases, where numbers are counted in units other than 10s. That means the patterns aren’t a result of our base-10 numbering system, but something inherent to the primes themselves. The patterns become more in line with randomness as you count higher – the pair have checked up to a few trillion – but they still persist.
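
These counts are easy to reproduce at a small scale. Here is a minimal Python sketch (my own illustration; it sieves only up to ten million, far short of the paper’s first hundred million primes, so the exact percentages differ, but the bias is plainly visible):

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

def primes_up_to(limit):
    """Sieve of Eratosthenes, returning all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, flag in enumerate(sieve) if flag]

# Tally the last digits of consecutive prime pairs (ignoring 2 and 5).
primes = [p for p in primes_up_to(10_000_000) if p not in (2, 5)]
pairs = Counter((a % 10, b % 10) for a, b in pairwise(primes))
after_1 = sum(v for (d, _), v in pairs.items() if d == 1)
for d in (1, 3, 7, 9):
    # A 1 followed by another 1 comes out well below the naive 25 per cent.
    print(f"1 -> {d}: {pairs[(1, d)] / after_1:.1%}")
```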

“I was very surprised,” says James Maynard of the University of Oxford, UK, who on hearing of the work immediately performed his own calculations to check the pattern was there. “I somehow needed to see it for myself to really believe it.”

Stretching to infinity

Thankfully, Soundararajan and Lemke Oliver think they have an explanation. Much of the modern research into primes is underpinned by the work of G. H. Hardy and John Littlewood, two mathematicians who worked together at the University of Cambridge in the early 20th century. They came up with a way to estimate how often pairs, triples and larger groupings of primes will appear, known as the k-tuple conjecture.

Just as Einstein’s theory of relativity is an advance on Newton’s theory of gravity, the Hardy-Littlewood conjecture is essentially a more complicated version of the assumption that primes are random – and this latest find demonstrates how the two assumptions differ. “Mathematicians go around assuming primes are random, and 99 per cent of the time this is correct, but you need to remember the 1 per cent of the time it isn’t,” says Maynard.

The pair used Hardy and Littlewood’s work to show that the groupings given by the conjecture are responsible for introducing this last-digit pattern, as they place restrictions on where the last digit of each prime can fall. What’s more, as the primes stretch to infinity, they do eventually shake off the pattern and give the random distribution mathematicians are used to expecting.

“Our initial thought was if there was an explanation to be found, we have to find it using the k-tuple conjecture,” says Soundararajan. “We felt that we would be able to understand it, but it was a real puzzle to figure out.”

The k-tuple conjecture is yet to be proven, but mathematicians strongly suspect it is correct because it is so useful in predicting the behaviour of the primes. “It is the most accurate conjecture we have, it passes every single test with flying colours,” says Maynard. “If anything I view this result as even more confirmation of the k-tuple conjecture.”

Although the new result won’t have any immediate applications to long-standing problems about primes like the twin-prime conjecture or the Riemann hypothesis, it has given the field a bit of a shake-up. “It gives us more of an understanding, every little bit helps,” says Granville. “If what you take for granted is wrong, that makes you rethink some other things you know.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jacob Aron*


The Mathematician Who Worked Out How To Time Travel

Mathematics suggested that time travel is physically possible – and Kurt Gödel proved it. Mathematician Karl Sigmund explains how the polymath did it.

The following is an extract from our Lost in Space-Time newsletter. Each month, we hand over the keyboard to a physicist or mathematician to tell you about fascinating ideas from their corner of the universe. You can sign up for Lost in Space-Time for free here.

There may be no better way to get truly lost in space-time than to travel to the past and fiddle around with causality. Polymath Kurt Gödel suggested that you could, for instance, land near your younger self and “do something” to that person. If your action was drastic enough, like murder (or is it suicide?), then you could neither have embarked on your time trip, nor perpetrated the dark deed. But then no one would have stopped you from going back in time and so you can commit your crime after all. You are lost in a loop. It’s no longer where you are, but whether you are.

Gödel was the first to prove that, according to general relativity, this sort of time travel can be done. While logically impossible, the equations say it is physically possible. How can that actually be the case?

Widely hailed as “the greatest logician since Aristotle”, Gödel is mainly known for his mathematical and philosophical work. By age 25, while at the University of Vienna, he developed his notorious incompleteness theorems. These basically say that there is no finite set of assumptions that can underpin all of mathematics. This was quickly perceived as a turning point in the subject.

In 1934, Gödel, now 28, was among the first to be invited to the newly founded Institute for Advanced Study in Princeton, New Jersey. During the following years, he commuted between Princeton and Vienna.

After a traumatic journey around a war-torn globe, Gödel settled in Princeton for good in 1940. This is when his friendship with Albert Einstein developed. Their daily walks became legendary. Einstein quipped: “I come to my office just for the privilege to escort Gödel back home.” The two strollers seemed eerily out of their time. The atomic bomb was built without Einstein, and the computer without Gödel.

When Einstein’s 70th birthday approached, Gödel was asked to contribute to the impending Festschrift a philosophical chapter on German philosopher Immanuel Kant and relativity – a well-grazed field. To his mother, he wrote: “I was asked to write a paper for a volume on the philosophical meaning of Einstein and his theory; of course, I could not very well refuse.”

Gödel began to reflect on Kant’s view that time was not, as Newton would have it, an absolute, objective part of the world, but an a priori form of intuition constraining our cognition. As Kant said: “What we represent ourselves as changes would, in beings with other forms of cognition, give rise to a perception in which… change would not occur at all.” Such “beings” would experience the world as timeless.

In his special relativity, Einstein had famously shown that different observers can have different notions of “now”. Hence, no absolute time. (“Newton, forgive me!” sighed Einstein.) However, this theory does not include gravitation. Add mass, and a kind of absolute time seems to sneak back! At least, it does so in the standard model of cosmology. There, the overall flow of matter works as a universal clock. Space-time is sliced in an infinity of layers, each representing a “now”, one succeeding another. Is this a necessary feature of general relativity? Gödel had found a mathematical kernel in a philosophical problem. That was his trademark.

At this stage, according to cosmologist Wolfgang Rindler, serendipity stepped in: Gödel stumbled across a letter to the journal Nature by physicist George Gamow, entitled “Rotating universe?”. It points out that apparently most objects in the sky spin like tops. Stars do it, planets do it, even spiral galaxies do it. They rotate. But why?

Gamow suggested that the whole universe rotates, and that this rotation trickles down, so to speak, to smaller and smaller structures: from universe to galaxies, from galaxies to stars, from stars to planets. The idea was ingenious, but extremely vague. No equations, no measurements. However, the paper ended with a friendly nudge for someone to start calculating.

With typical thoroughness, Gödel took up the gauntlet. He had always been a hard worker, who used an alarm clock not for waking up but for going to bed. He confided to his mother that his cosmology absorbed him so much that even when he tried to listen to the radio or to movies, he could do so “only with half an ear”. Eventually, Gödel discovered exact solutions of Einstein’s equations, which described a rotating universe.

However, while Gamow had imagined that the centre of rotation of our world is somewhere far away, beyond the reach of the strongest telescopes, Gödel’s universe rotates in every point. This does not solve Gamow’s quest for the cause of galactic rotations, but yields another, amazing result. In contrast to all then-known cosmological models, Gödel’s findings showed that there is no “now” that’s valid everywhere. This was exactly what he had set out to achieve: vindicate Kant (and Einstein) by showing that there is no absolute time.

“Talked a lot with Gödel,” wrote his friend Oskar Morgenstern, the economist who, together with John von Neumann, had founded game theory. He knew Gödel from former Viennese days and reported all their meetings in his diary. “His cosmological work makes good progress. Now one can travel into the past, or reach arbitrarily distant places in arbitrarily short time. This will cause a nice stir.” Time travel had been invented.

In Gödel’s universe, you don’t have to flip the arrow of time to go back to the past. Your time runs as usual. No need to shift entropy in return gear. You just step into a rocket and take off, to fly in a very wide curve (very wide!) at a very high speed (but less than the speed of light). The rocket’s trajectory weaves between light cones, never leaving them but exploiting the fact that in a rotating universe, they are not arrayed in parallel. The trip would consume an awful amount of energy.

Gödel just managed to meet the editorial deadline. On his 70th birthday, Einstein got Gödel’s manuscript for a present (and a sweater knitted by Kurt’s wife Adele). He thanked him for the gifts and confessed that the spectre of time travel had worried him for decades. Now the spectre had materialised. Einstein declared Gödel’s paper “one of the most important since my own”, and stated his hope that time travel could be excluded by some as yet unknown physical law. Soon after, Gödel received the first Albert Einstein award. It went with a modest amount of money which Gödel, as it turned out, could use well.

Next, according to philosopher Palle Yourgrau, “something extraordinary happened: nothing”.

For several decades, the mind-bending discovery of Gödel, far from causing “a nice stir”, got very little attention. When Harry Woolf, the director of the Institute for Advanced Study, arranged the eulogies to be given at Gödel’s funeral in 1978, he listed the topics to be covered: set theory and logic, followed by relativity, which he noted was “not worth a talk”.

Only by and by did eminent cosmologists, such as Stephen Hawking, Kip Thorne and John Barrow, lend an air of respectability to the field. Today, it is mainstream. With time, it transpired that, years before Gödel’s breakthrough, several other cosmological models had exhibited both rotation and the possibility of time travel. However, this aspect had never been noticed, not even by the engineers of these universes.

Many physicists are happy to leave the paradoxical aspects of time travel to philosophers. They invoke a “chronology protection law” that would step in to prevent the worst. It sounds like whistling in the dark but helps to overcome the problem of haunting your own present as a revenant from the future.

And does our universe rotate? Gödel was equivocal on that issue. Sometimes he claimed that his model only served as a thought experiment, to display the illusionary character of time, which cannot depend on accidental features of the place we happen to inhabit. Cosmologist Freeman Dyson, however, reported that Gödel, near the end of his life, had shown dismay when told that evidence for a rotating universe is lacking.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Karl Sigmund*


Mathematicians Calculate 42-Digit Number After Decades Of Trying

Dedekind numbers describe the number of ways sets of logical operations can be combined, and are fiendishly difficult to calculate, with only eight known since 1991 – and now mathematicians have calculated the ninth in the series.

The ninth Dedekind number was calculated using the Noctua 2 supercomputer at Paderborn University in Germany

A 42-digit-long number that mathematicians have been hunting for decades, thanks to the sheer difficulty of calculating it, has suddenly been found by two separate groups at the same time. This ninth Dedekind number, as it is known, may be the last in the sequence that is feasible to discover.

Dedekind numbers describe the number of ways a set of logical operations can be combined. For sets of just two or three elements, the total number is easy to calculate by hand, but for larger sets it rapidly becomes impossible because the number grows so quickly, at what is known as a double exponential speed.

“You’ve got two to the power two to the power n, as a very rough estimate of the complexity of this system,” says Patrick de Causmaecker at KU Leuven in Belgium. “If you want to find the Dedekind numbers, that is the kind of magnitude of counting that you will have to face.”
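To make that concrete: one standard way to define the nth Dedekind number is as the count of monotone Boolean functions on n variables (flipping an input from 0 to 1 can never flip the output from 1 to 0). A brute-force Python sketch (my own illustration) can count them for tiny n, and the double-exponential wall shows up immediately:

```python
from itertools import product

def dedekind(n):
    """Count monotone Boolean functions on n variables by brute force."""
    points = list(product((0, 1), repeat=n))  # all 2^n input vectors
    # Pairs of inputs ordered componentwise: a <= b in every coordinate.
    below = [(i, j) for i, a in enumerate(points)
                    for j, b in enumerate(points)
                    if all(x <= y for x, y in zip(a, b))]
    count = 0
    for table in product((0, 1), repeat=len(points)):  # every truth table
        if all(table[i] <= table[j] for i, j in below):
            count += 1
    return count

print([dedekind(n) for n in range(5)])  # [2, 3, 6, 20, 168]; takes a few seconds
# n = 5 already means checking 2^32 truth tables; n = 9 took supercomputers.
```
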

The challenge of calculating higher Dedekind numbers has attracted researchers in many disciplines, from pure mathematicians to computer scientists, over the years. “It’s an old, famous problem and, because it’s hard to crack, it’s interesting,” says Christian Jäkel at Dresden University of Technology in Germany.

In 1991, mathematician Doug Wiedemann found the eighth Dedekind number using 200 hours of number crunching on the Cray-2 supercomputer, one of the most powerful machines at the time. No one could do any better, until now.

After working on the problem on and off for six years, Jäkel published his calculation for the ninth Dedekind number in early April. Coincidentally, de Causmaecker and Lennart van Hirtum, also at KU Leuven, published their work three days later, having produced the same result. Both groups were unaware of one another. “I was shocked, I didn’t know about their work. I thought it would take at least 10 years or whatever to recompute it,” says Jäkel.

The resulting number is 286,386,577,668,298,411,128,469,151,667,598,498,812,366, which is 42 digits long.

Jäkel’s calculation took 28 days on eight graphics processing units (GPUs). To reduce the number of calculations required, he multiplied together elements from the much smaller fifth Dedekind number.

De Causmaecker and van Hirtum instead used a processor called a field-programmable gate array (FPGA) for their work. Unlike a CPU or a GPU, these can perform many different kinds of interrelated calculations at the same time. “In an FPGA, everything is always happening all at once,” says van Hirtum. “You can compare it to a car assembly line.”

Like Jäkel, the team used elements from a smaller Dedekind number, in their case the sixth, but this still required 5.5 quadrillion operations and more than four months of computing time using the Noctua 2 supercomputer at Paderborn University, says van Hirtum.

People are divided on whether another Dedekind number will ever be found. “The tenth Dedekind number will be in the realm of 10 to the power of 82, which puts you at the number of atoms in the visible universe, so you can imagine you need something big in technical advancement that also grows exponentially,” says Jäkel.

Van Hirtum also thinks the amount of computing power becomes impractical for the next number: the trillions of times more computations required would demand something like the power output of the entire sun. “This jump in complexity remains absolutely astronomical,” he says.

De Causmaecker, however, is more positive, as he thinks new ways of calculating could bring that requirement down. “The combination of exponential growth of computing power, and the power of the mathematical algorithms, will go together and maybe in 20 or 30 years we can compute [Dedekind number] 10.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Alex Wilkins*


Mathematicians Plan Computer Proof Of Fermat’s Last Theorem

Fermat’s last theorem puzzled mathematicians for centuries until it was finally proven in 1993. Now, researchers want to create a version of the proof that can be formally checked by a computer for any errors in logic.

Mathematicians hope to develop a computerised proof of Fermat’s last theorem, an infamous statement about numbers that has beguiled them for centuries, in an ambitious, multi-year project that aims to demonstrate the potential of computer-assisted mathematical proofs.

Pierre de Fermat’s theorem, which he first proposed around 1640, states that there are no positive whole numbers a, b and c that satisfy the equation a^n + b^n = c^n for any integer n greater than 2. Fermat scribbled the claim in a book, famously writing: “I have discovered a truly marvellous proof of this, which this margin is too narrow to contain.”

It wasn’t until 1993 that Andrew Wiles, then at Princeton University, set the mathematical world alight by announcing he had a proof. Spanning more than 100 pages, the proof contained such advanced mathematics that it took more than two years for his colleagues to verify it didn’t contain any errors.

Many mathematicians hope that this work of checking, and eventually writing, proofs can be sped up by translating them into a computer-readable language. This process of formalisation would let computers instantly spot logical mistakes and, potentially, use the theorems as building blocks for other proofs.

But formalising modern proofs can itself be tricky and time-consuming, as much of the modern maths they rely on is yet to be made machine-readable. For this reason, formalising Fermat’s last theorem has long been considered far out of reach. “It was regarded as a tremendously ambitious proof just to prove it in the first place,” says Lawrence Paulson at the University of Cambridge.

Now, Kevin Buzzard at Imperial College London and his colleagues have announced plans to take on the challenge, attempting to formalise Fermat’s last theorem in a programming language called Lean.
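
To give a flavour of what formalisation means in practice, here is a sketch (my own illustration, not the project’s actual blueprint) of how the bare statement might be written in Lean 4, with the sorry placeholder marking the proof the project hopes to construct:

```lean
-- One way to state Fermat's last theorem; the real project builds on
-- Mathlib's definitions, so names and phrasing here are illustrative.
theorem fermat_last_theorem (n : Nat) (hn : 2 < n) :
    ¬ ∃ a b c : Nat, 0 < a ∧ 0 < b ∧ 0 < c ∧ a ^ n + b ^ n = c ^ n := by
  sorry -- centuries of number theory go here
```

A computer can already check that everything except the sorry is well-formed; filling that gap is the multi-year project.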

“There’s no point in Fermat’s last theorem, it’s completely pointless. It doesn’t have any applications – either theoretical or practical – in the real world,” says Buzzard. “But it’s also a really hard question that’s become infamous because, for centuries, people have generated loads of brilliant new ideas in an attempt to solve it.”

He hopes that by formalising many of these ideas, which now include routine mathematical tools in number theory such as modular forms and Galois representations, it will help other researchers whose work is currently too far beyond the scope of computer assistants.

“It’s the kind of project that could have quite far-reaching and unexpected benefits and consequences,” says Chris Williams at the University of Nottingham, UK.

The proof itself will loosely follow Wiles’s, with slight modifications. A blueprint will be publicly available online once the project goes live in April, so that anyone from Lean’s fast-growing community can contribute to formalising sections of the proof.

“Ten years ago, this would have taken an infinite amount of time,” says Buzzard. Even so, he will be concentrating on the project full-time from October, putting his teaching responsibilities on hold for five years in an effort to complete it.

“I think it’s unlikely he’ll be able to formalise the entire proof in the next five years, that would be a staggering achievement,” says Williams. “But because a lot of the tools that go into it are so ubiquitous now in number theory and arithmetic geometry, I’d expect any substantial progress towards it would be very useful in the future.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Alex Wilkins*


How Maths Can Help You Pack Your Shopping More Efficiently

How can you ensure you use the fewest bags when loading your shopping? A dash of maths will help, says Peter Rowlett.

You have heaped your shopping on the supermarket conveyor belt and a friendly member of the checkout staff is scanning it through. Items are coming thick and fast and you would like to get them in as few bags as possible. What is your strategy?

This is an example of an optimisation problem, from an area of maths called operational research. One important question is, what are you trying to optimise? Are you thinking about the weight of the items, or how much space they will take up? Do you guess how many bags you might need and start filling that many, or put everything in one until you need to start another?

We design algorithms to solve packing problems when they come up at a larger scale than your weekly shop, like making better use of warehouse space or fitting boxes into delivery vans. Similar algorithms are used for cutting raw materials with minimal waste and storing data on servers.

Bag-packing algorithms generally involve placing items into a single bag until you get to one that won’t fit because you have hit a maximum weight or size. When necessary, you open a second bag, and each time you reach an item that won’t fit in an existing bag, you start a new one.

If you are filling multiple bags at once, it is likely you will come across an item that could fit in more than one bag. Which do you choose? There is no clear best answer, but different algorithms give different ways to make this decision. We are looking for rules that can be applied without detailed thought. You might have more subtle requirements, like putting two items in the same bag because they go in the same cupboard at home, but here we want the kind of simple rule a computer program can mindlessly apply to get the most efficient outcomes, using the fewest bags, every time.

One algorithm we could employ is called first fit. For each new item, you look through the bags in the order you opened them, placing the item in the first one it fits in. An advantage is that this is quick to implement, but it can overlook options and end up using more bags than needed.

An alternative that often uses fewer bags overall is called worst fit. When faced with a choice, you look through the currently open bags for the one with the most space and place the item there.

These algorithms work more effectively if you handle the objects in decreasing order – packing the largest or heaviest first will usually need fewer bags.
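
Here is how those rules look as code – a Python sketch (my own illustration; sizes and capacity are in arbitrary units, and real shopping has constraints this ignores):

```python
def pack(sizes, capacity, rule="worst", decreasing=True):
    """Pack item sizes into bags of a fixed capacity.

    rule="first": first fit - scan bags in opening order, use the first with room.
    rule="worst": worst fit - use the open bag with the most free space.
    decreasing=True: handle the largest items first (usually fewer bags).
    """
    if decreasing:
        sizes = sorted(sizes, reverse=True)
    bags = []  # each bag is a list of item sizes
    for size in sizes:
        fits = [b for b in bags if capacity - sum(b) >= size]
        if not fits:
            bags.append([size])  # no bag has room: open a new one
        elif rule == "first":
            fits[0].append(size)
        else:
            max(fits, key=lambda b: capacity - sum(b)).append(size)
    return bags

items = [3, 7, 2, 5, 4, 2, 1]  # conveyor-belt order
print(pack(items, 10, rule="first", decreasing=False))  # [[3, 7], [2, 5, 2, 1], [4]]
print(pack(items, 10))  # worst-fit decreasing: [[7, 3], [5, 4], [2, 2, 1]]
```

Both rules land on three bags for this small example; the differences between them show up on larger, messier loads.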

So now you are armed with a secret weapon for packing: the worst-fit decreasing algorithm. The next time you are in the checkout line, load your bulkiest shopping onto the conveyor belt first, and always put items in the bag with the most space available – it might just help you use fewer bags overall.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Peter Rowlett*