We Could Solve The Biggest Problem in Maths in The Next Decade

P is not NP? That is the question

One of the biggest open problems in mathematics may be solved within the next decade, according to a poll of computer scientists. A solution to the so-called P versus NP problem is worth $1 million and could have a profound effect on computing, and perhaps even the entire world.

The problem is a question about how long algorithms take to run and whether some hard mathematical problems are actually easy to solve.

P and NP both represent groups of mathematical problems, but it isn’t known if these groups are actually identical.

P, which stands for polynomial time, consists of problems that can be solved by an algorithm in a relatively short time. NP, which stands for nondeterministic polynomial time, comprises problems whose answers are easy to check once you are handed a candidate solution, even though finding that solution in the first place might be difficult.

NP problems include a number of important real-world tasks, such as the travelling salesman problem, which involves finding a route through a list of cities that is shorter than a certain limit. Given such a route, you can easily check whether it fits the limit, but finding one in the first place might be much harder.
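To see why checking is the easy half, here is a minimal sketch in Python (the city coordinates, function names and distance limit are all invented for illustration): verifying a proposed tour takes a single pass over the route, whereas finding the best tour in general means sifting through an exponentially growing number of orderings.

```python
import math

def route_length(cities, route):
    # Total length of a closed tour visiting the cities in the given order
    return sum(math.dist(cities[route[i]], cities[route[(i + 1) % len(route)]])
               for i in range(len(route)))

def check_route(cities, route, limit):
    # Verification is quick: one pass over the proposed route
    return route_length(cities, route) <= limit

cities = [(0, 0), (3, 0), (3, 4), (0, 4)]            # four cities on a rectangle
print(check_route(cities, [0, 1, 2, 3], limit=15))   # True: this tour is 14 units long
```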

Equal or not

The P versus NP problem asks whether these two collections of problems are actually the same. If they are, and P = NP, the implications are potentially world-changing, because it could become much easier to solve these tasks. If they aren’t, and P doesn’t equal NP, or P ≠ NP, a proof would still answer fundamental questions about the nature of computation.

The problem was first stated in 1971 and has since become one of the most important open questions in mathematics – anyone who can find the answer either way will receive $1 million from the Clay Mathematics Institute in Cambridge, Massachusetts.

William Gasarch, a computer scientist at the University of Maryland in College Park, conducts polls of his fellow researchers to gauge the current state of the problem. His first poll, in 2002, found that just 61 per cent of respondents thought P ≠ NP. In 2012, that rose to 83 per cent, and now in 2019 it has slightly increased to 88 per cent. Support for P = NP has also risen, however, from 9 per cent in 2002 to 12 per cent in 2019, because the 2002 poll had a large number of “don’t knows”.

Confidence that we might soon have an answer is also rising. In 2002, just 5 per cent thought the problem would be resolved in the next decade, falling to 1 per cent in 2012, but now the figure sits at 22 per cent. “This is very surprising since there has not been any progress on it,” says Gasarch. “If anything, I think that as the problem remains open longer, it seems harder.” More broadly, 66 per cent believe it will be solved before the end of the century.

There was little agreement on the kind of mathematics that would ultimately be used to solve the problem, although a number of respondents suggested that artificial intelligence, not humans, could play a significant role.

“I can see this happening to some extent, but the new idea needed will, I think, come from a human,” says Gasarch. “I hope so, not for any reason of philosophy, but just because if a computer did it we might know that (say) P ≠ NP, but not really know why.”

Neil Immerman at the University of Massachusetts Amherst thinks that this kind of polling is interesting, but ultimately can’t tell us much about the P versus NP problem.

“As this poll demonstrates, there is no consensus on how this problem will be eventually solved,” he says. “For that reason, it is hard to measure the progress we have made since 1971 when the question was first stated.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jacob Aron*


Pi Day: How To Calculate Pi Using A Cardboard Tube And A Load Of Balls

Grab a few balls and get calculating pi

Pi Day, which occurs every 14 March – or 3/14, in the US date format – celebrates the world’s favourite mathematical constant. This year, why not try an experiment to calculate its value? All you will need is a cardboard tube and a series of balls, each 100 times lighter than the last. You have those lying around the house, right?

This experiment was first formulated by mathematician Gregory Galperin in 2001. It works because of a mathematical trick involving the masses of a pair of balls and the law of conservation of energy.

First, take the tube and place one end up against a wall. Place two balls of equal mass in the tube. Let’s say that the ball closer to the wall is red, and the other is blue.

Next, bounce the blue ball off the red ball. If you have rolled the blue ball hard enough, there should be three collisions: the blue ball hits the red one, the red ball hits the wall, and the red ball bounces back to hit the blue ball once more. Not-so-coincidentally, three is also the first digit of pi.

To calculate pi a little bit more precisely, replace the red ball with one that is 100 times less massive than the blue ball – a ping pong ball might work, so we will call this the white ball.

When you perform the experiment again, you will find that the blue ball hits the white ball, the white ball hits the wall and then the white ball continues to bounce back and forth between the blue ball and the wall as it slows down. If you count the bounces, you’ll find that there are 31 collisions. That gives you the first two digits of pi: 3.1.

Galperin calculated that if you continue the same way, you will keep getting more digits of pi. If you replace the white ball with another one that is 10,000 times less massive than the blue ball, you will find that there are 314 collisions, and so on. If you have enough balls, you can count as many digits of pi as you like.
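You don’t need a physical tube to watch the pattern emerge. Below is a minimal simulation sketch in Python, assuming perfectly elastic collisions and a frictionless tube (the function name and starting velocities are illustrative choices): it updates the two velocities with the standard elastic-collision formula and counts ball–ball and ball–wall impacts until no further collision is possible.

```python
def count_collisions(mass_ratio):
    # Heavy ball (mass M) rolls toward a light ball (mass m) resting near the wall.
    m, M = 1.0, float(mass_ratio)
    v_light, v_heavy = 0.0, -1.0          # negative velocity = moving toward the wall
    collisions = 0
    while True:
        if v_light < 0:                   # light ball reaches the wall and rebounds
            v_light = -v_light
            collisions += 1
        elif v_heavy < v_light:           # heavy ball catches the light one: elastic collision
            v_light, v_heavy = (
                ((m - M) * v_light + 2 * M * v_heavy) / (m + M),
                ((M - m) * v_heavy + 2 * m * v_light) / (m + M),
            )
            collisions += 1
        else:                             # the balls are separating: no more collisions
            break
    return collisions

print(count_collisions(1))        # 3
print(count_collisions(100))      # 31
print(count_collisions(10_000))   # 314
```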

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Leah Crane*


Millennium Prize: The Poincaré Conjecture

The problem’s been solved … but the sweet treats were declined.

In 1904, French mathematician Henri Poincaré asked a key question about three-dimensional spaces (“manifolds”).

Imagine a piece of rope: first tie a knot in it, then glue the two ends together. The resulting closed, knotted loop is what mathematicians call a knot. A link is a collection of knots that are tangled together.

It has been observed that DNA, which is coiled up within cells, occurs in closed knotted form.

Complex molecules such as polymers are tangled in knotted forms. There are deep connections between knot theory and ideas in mathematical physics. The outsides of a knot or link in space give important examples of three-dimensional spaces.


Back to Poincaré and his conjecture. He asked if the 3-sphere (which can be formed by either adding a point at infinity to ordinary three-dimensional Euclidean space or by gluing two solid three-dimensional balls together along their boundary 2-spheres) was the only three-dimensional space in which every loop can be continuously shrunk to a point.

Poincaré had introduced important ideas in the structure and classification of surfaces and their higher dimensional analogues (“manifolds”), arising from his work on dynamical systems.

Donuts to go, please

A good way to visualise Poincaré’s conjecture is to examine the boundary of a ball (a two-dimensional sphere) and the boundary of a donut (called a torus). Any loop of string on a 2-sphere can be shrunk to a point while keeping it on the sphere, whereas if a loop goes around the hole in the donut, it cannot be shrunk without leaving the surface of the donut.

Many attempts were made on the Poincaré conjecture, until in 2003 a wonderful solution was announced by a young Russian mathematician, Grigori “Grisha” Perelman.

This is a brief account of the ideas used by Perelman, which built on work of two other outstanding mathematicians, Bill Thurston and Richard Hamilton.

3D spaces

Thurston made enormous strides in our understanding of three-dimensional spaces in the late 1970s. In particular, he realised that essentially all the work that had been done since Poincaré fitted into a single theme.

He observed that known three-dimensional spaces could be divided into pieces in a natural way, so that each piece had a uniform geometry, similar to the flat plane and the round sphere. (To see this geometry on a torus, one must embed it into four-dimensional space!).

Thurston made a bold “geometrisation conjecture” that this should be true for all three-dimensional spaces. He had many brilliant students who further developed his theories, not least by producing powerful computer programs that could test any given space to try to find its geometric structure.

Thurston made spectacular progress on the geometrisation conjecture, which includes the Poincaré conjecture as a special case. The geometrisation conjecture predicts that any three-dimensional space in which every loop shrinks to a point should have a round metric – it would be a 3-sphere and Poincaré’s conjecture would follow.

In 1982, Richard Hamilton published a beautiful paper introducing a new technique in geometric analysis which he called Ricci flow. Hamilton had been looking for an analogue of flows of functions, in which the energy of a function decreases until it reaches a minimum. This type of flow is closely related to the way heat spreads through a material.

Hamilton reasoned that there should be a similar flow for the geometric shape of a space, rather than a function between spaces. He used the Ricci tensor, a key feature of Einstein’s field equations for general relativity, as the driving force for his flow.

He showed that, for three-dimensional spaces where the Ricci curvature is positive, the flow gradually changes the shape until the metric satisfies Thurston’s geometrisation conjecture.

Hamilton attracted many outstanding young mathematicians to work in this area. Ricci flow and other similar flows have become a huge area of research with applications in areas such as moving interfaces, fluid mechanics and computer graphics.


Hamilton outlined a marvellous program to use Ricci flow to attack Thurston’s geometrisation conjecture. The idea was to keep evolving the shape of a space under Ricci flow.

Hamilton and his collaborators found the space might form a singularity, where a narrow neck became thinner and thinner until the space splits into two smaller spaces.

Hamilton worked hard to try to fully understand this phenomenon and to allow the pieces to keep evolving under Ricci flow until the geometric structure predicted by Thurston could be found.

Perelman

This is when Perelman burst on to the scene. He had produced some brilliant results at a very young age and was a researcher at the famous Steklov Institute in St Petersburg. Perelman got a Miller fellowship to visit UC Berkeley for three years in the early 1990s.

I met him there around 1992. He then “disappeared” from the mathematical scene for nearly ten years, re-emerging to announce that he had completed Hamilton’s Ricci flow program in a series of papers posted on the electronic repository arXiv.

His papers created enormous excitement and within several months a number of groups had started to work through Perelman’s strategy.

Eventually everyone was convinced that Perelman had indeed succeeded and both the geometrisation and Poincaré conjecture had been solved.

Perelman was awarded a Fields medal (the mathematical equivalent of a Nobel prize) and offered a million dollars for solving one of the Millennium Prize problems set by the Clay Institute.

He turned down both these awards, preferring to live a quiet life in St Petersburg. Mathematicians are still finding new ways to use the solution to the geometrisation conjecture, which is one of the outstanding mathematical results of this era.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Hyam Rubinstein*

 


Octonions: The Strange Maths That Could Unite The Laws Of Nature

Could a system of eight-dimensional numbers help physicists find a single mathematical framework that describes the entire universe?

Words can be slippery. That is perhaps even more true in physics than it is in the rest of life. Think of a “particle”, for instance, and we might conjure an image of a tiny sphere. In truth, “particle” is just a poetic term for something far removed from our everyday experience – which is why our best descriptions of reality make use of the cold precision of mathematics.

But just as there are many human languages, so there is more than one type of number system. Most of us deal with only the familiar number line that begins 1, 2, 3. But other, more exotic systems are available. Recently, physicists have been asking a profound question: what if we are trying to describe reality with the wrong type of numbers?

Each mathematical system has its own special disposition, just like languages. Love poems sound better in French. German has that knack of expressing sophisticated concepts – like schadenfreude – in a few syllables. Now, in the wake of a fresh breakthrough revealing tantalising connections between models of how matter works at different energy scales, it seems increasingly likely that an exotic set of numbers known as the octonions might have what it takes to capture the truth about reality.

Mathematicians are excited because they reckon that by translating our theories of reality into the language of the octonions, it could tidy up some of the deepest problems in physics and clear a path to a “grand unified theory” that can describe the universe in one statement. “This feels like a very promising direction,” says Latham Boyle at the Perimeter Institute in Waterloo, Canada. “I find it irresistible to think about.”

Many physicists dream of finding a grand unified theory, a single mathematical framework that tells us where the forces of nature come from and how they act on matter. Critically, such a theory would also capture how and why these properties changed over the life of the universe, as we know they have.

So far, the closest we have come is the standard model of particle physics, which details the universe’s fundamental particles and forces: electrons, quarks, photons and the rest. The trouble is, the standard model has its shortcomings. To make it work, we must feed in around 20 measured numbers, such as the masses of particles. We don’t know why these numbers are what they are. Worse, the standard model has little to say about space-time, the canvas in which particles live. We seem to live in a four-dimensional space-time, but the standard model doesn’t specify that this must be so. “Why not, say, seven-dimensional space-time?” Boyle wonders.

Real and imaginary numbers

Many think the solution to these woes will come when experiments uncover a missing piece of the standard model. But after years of effort, this hasn’t happened, and some are wondering if the problem is the maths itself.

Mathematicians have known for centuries that there are numbers other than the ones we can count on our fingers. Take the square root of -1, known as i. No real number squared gives -1, since both 1 × 1 and -1 × -1 equal 1, so i is called an “imaginary number”. Mathematicians found that by combining i with real numbers – which include all the numbers you could place on a number line, including negative numbers and decimals – they could fashion a new system called the complex numbers.

Think of complex numbers as being two-dimensional; the two parts of each number can record unrelated properties of the same object. This turns out to be extremely handy. All our electronic infrastructure relies on complex numbers. And quantum theory, our hugely successful description of the small-scale world, doesn’t work without them.

In 1843, Irish mathematician William Rowan Hamilton took things a step further. Supplementing the real and the imaginary numbers with two more sets of imaginary numbers called j and k, he gave us the quaternions, a set of four-dimensional numbers. Within a few months, Hamilton’s friend John Graves had found another system with eight dimensions called the octonions.

Real numbers, complex numbers, quaternions and octonions are collectively known as the normed division algebras. They are the only sets of numbers with which you can perform addition, subtraction, multiplication and division. Wilder systems are possible – the 16-dimensional sedenions, for example – but there the normal rules break down.
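You can watch those rules break down one by one with a short computational sketch. The Python below is a minimal, illustrative implementation of the standard Cayley–Dickson doubling construction (the function names and the random test are my own choices, not anything from the research described here): it builds each system from pairs of numbers in the previous one, then checks commutativity, associativity and whether the norm of a product equals the product of the norms.

```python
import random

def conj(x):
    # Cayley–Dickson conjugate: negate every component except the first
    return [x[0]] + [-c for c in x[1:]]

def add(x, y):
    return [a + b for a, b in zip(x, y)]

def neg(x):
    return [-a for a in x]

def mult(x, y):
    # Doubling formula: (a, b)(c, d) = (ac - conj(d) b, d a + b conj(c))
    n = len(x)
    if n == 1:
        return [x[0] * y[0]]
    a, b = x[:n // 2], x[n // 2:]
    c, d = y[:n // 2], y[n // 2:]
    return add(mult(a, c), neg(mult(conj(d), b))) + add(mult(d, a), mult(b, conj(c)))

def norm2(x):
    return sum(a * a for a in x)

def rand(n):
    return [random.uniform(-1, 1) for _ in range(n)]

for dim, name in [(1, "reals"), (2, "complexes"), (4, "quaternions"),
                  (8, "octonions"), (16, "sedenions")]:
    x, y, z = rand(dim), rand(dim), rand(dim)
    commutative = all(abs(p - q) < 1e-9 for p, q in zip(mult(x, y), mult(y, x)))
    associative = all(abs(p - q) < 1e-9
                      for p, q in zip(mult(mult(x, y), z), mult(x, mult(y, z))))
    normed = abs(norm2(mult(x, y)) - norm2(x) * norm2(y)) < 1e-9
    print(f"{name:11s} commutative={commutative} associative={associative} normed={normed}")
```

Run on random inputs, the reals and complexes pass every test, the quaternions lose commutativity, the octonions also lose associativity, and the 16-dimensional sedenions finally lose the multiplicative norm – which is why the first four are singled out as the normed division algebras.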

Today, physics makes prolific use of three of these systems. The real numbers are ubiquitous. Complex numbers are essential in particle physics as well as quantum physics. The mathematical structure of general relativity, Albert Einstein’s theory of gravity, can be expressed elegantly by the quaternions.

The octonions stand oddly apart as the only system not to tie in with a central physical law. But why would nature map onto only three of these four number systems? “This makes one suspect that the octonions – the grandest and least understood of the four – should turn out to be important too,” says Boyle.

In truth, physicists have been thinking such thoughts since the 1970s, but the octonions have yet to fulfil their promise. Michael Duff at Imperial College London was, and still is, drawn to the octonions, but he knows many have tried and failed to decipher their role in describing reality. “The octonions became known as the graveyard of theoretical physics,” he says.

That hasn’t put off a new generation of octonion wranglers, including Nichol Furey at Humboldt University of Berlin. She likes to look at questions in physics without making any assumptions. “I try to solve problems right from scratch,” she says. “In doing so, you can often find alternate paths that earlier authors may have missed.” Now, it seems she and others might be making the beginnings of an octonion breakthrough.

Internal symmetries in quantum mechanics

To get to grips with Furey’s work, it helps to understand a concept in physics called internal symmetry. This isn’t the same as the rotational or reflectional symmetry of a snowflake. Instead, it refers to a number of more abstract properties, such as the character of certain forces and the relationships between fundamental particles. All these particles are defined by a series of quantum numbers – their mass, charge and a quantum property called spin, for instance. If a particle transforms into another particle – an electron becoming a neutrino, say – some of those numbers will change while others won’t. These symmetries define the structure of the standard model.

Internal symmetries are central to the quest for a grand unified theory. Physicists have already found various mathematical models that might explain how reality worked back at the time when the universe had much more energy. At these higher energies, it is thought there would have been more symmetries, meaning that some forces we now experience as distinct would have been one and the same. None of these models have managed to rope gravity into the fold: that would require an even grander “theory of everything”. But they do show, for instance, that the electromagnetic force and weak nuclear force would have been one “electroweak” force until a fraction of a second after the big bang. As the universe cooled, some of the symmetries broke, meaning this particular model would no longer apply.

Each different epoch requires a different mathematical model with a gradually reducing number of symmetries. In a sense, these models all contain each other, like a set of Russian dolls.

One of the most popular candidates for the outermost doll – the grand unified theory that contains all the others – is known as the spin(10) model. It has a whopping 45 symmetries. In one formulation, inside this sits the Pati-Salam model, with 21 symmetries. Then comes the left-right symmetric model, with 15 symmetries, including one known as parity, the kind of left-right symmetry that we encounter when we look in a mirror. Finally, we reach the standard model, with 12 symmetries. The reason we study each of these models is that they work; their symmetries are consistent with experimental evidence. But we have never understood what determines which symmetries fall away at each stage.

In August 2022, Furey, together with Mia Hughes at Imperial College London, showed for the first time that the division algebras, including the octonions, could provide this link. To do so, they drew on ideas Furey had years ago to translate all the mathematical symmetries and particle descriptions of various models into the language of division algebras. “It took a long time,” says Furey. The task required using the Dixon algebra, a set of numbers that allow you to combine real, complex, quaternion and octonion maths. The result was a system that describes a set of octonions specified by quaternions, which are in turn specified by complex numbers that are specified by a set of real numbers. “It’s a fairly crazy beast,” says Hughes.

It is a powerful beast, too. The new formulation exposed an intriguing characteristic of the Russian doll layers. When some numbers involved in the complex, quaternion and octonion formulations are swapped from positive to negative, or vice versa, some of the symmetries change and some don’t. Only the ones that don’t are found in the next layer down. “It allowed us to see connections between these well-studied particle models that had not been picked up on before,” says Furey. This “division algebraic reflection”, as Furey calls it, could be dictating what we encounter in the real physical universe, and – perhaps – showing us the symmetry-breaking road up to the long-sought grand unified theory.

The result is new, and Furey and Hughes haven’t yet been able to see where it may lead. “It hints that there might be some physical symmetry-breaking process that somehow depends upon these division algebraic reflections, but so far the nature of that process is fairly mysterious,” says Hughes.

Furey says the result might have implications for experiments. “We are currently investigating whether the division algebras are telling us what can and cannot be directly measured at different energy scales,” she says. It is a work in progress, but analysis of the reflections seems to suggest that there are certain sets of measurements that physicists should be able to make on particles at low energies – such as the measurement of an electron’s spin – and certain things that won’t be measurable, such as the colour charge of quarks.

Among those who work on octonions, the research is making waves. Duff says that trying to fit the standard model into octonionic language is a relatively new approach: “If it paid off, it would be very significant, so it’s worth trying.” Corinne Manogue at Oregon State University has worked with octonions for decades and has seen interest ebb and flow. “This moment does seem to be a relative high,” she says, “primarily, I think, because of Furey’s strong reputation and advocacy.”

The insights from the octonions don’t stop there. Boyle has been toying with another bit of exotic maths called the “exceptional Jordan algebra”, which was invented by German physicist Pascual Jordan in the 1930s. Working with two other luminaries of quantum theory, Eugene Wigner and John von Neumann, Jordan found a set of mathematical properties of quantum theory that resisted classification and were closely related to the octonions.

Probe this exceptional Jordan algebra deeply enough and you will find it contains the mathematical structure that we use to describe Einstein’s four-dimensional space-time. What’s more, we have known for decades that within the exceptional Jordan algebra, you will find a peculiar mathematical structure that we derived through an entirely separate route and process in the early 1970s to describe the standard model’s particles and forces. In other words, this is an octonionic link between our theories of space, time, gravity and quantum theory. “I think this is a very striking, intriguing and suggestive observation,” says Boyle.

Responding to this, Boyle has dug deeper and discovered something intriguing about the way a class of particles called fermions, which includes common particles like electrons and quarks, fits into the octonion-based language. Fermions are “chiral”, meaning their mirror-image reflections – the symmetry physicists call parity – look different. This had created a problem when incorporating fermions into the octonion-based versions of the standard model. But Boyle has now found a way to fix that – and it has a fascinating spin-off. Restoring the mirror symmetry that is broken in the standard model also enables octonionic fermions to sit comfortably in the left-right symmetric model, one level further up towards the grand unified theory.

Beyond the big bang

This line of thinking might even take us beyond the grand unified theory, towards an explanation of where the universe came from. Boyle has been working with Neil Turok, his colleague at the Perimeter Institute, on what they call a “two-sheeted universe” that involves a set of symmetries known as charge, parity and time (CPT). “In this hypothesis, the big bang is a kind of mirror separating our half of the universe from its CPT mirror image on the other side of the bang,” says Boyle. The octonionic properties of fermions that sit in the left-right symmetric model are relevant in developing a coherent theory for this universe, it turns out. “I suspect that combining the octonionic picture with the two-sheeted picture of the cosmos is a further step in the direction of finding the right mathematical framework for describing nature,” says Boyle.

As with all the discoveries linking the octonions to our theories of physics so far, Boyle’s work is only suggestive. No one has yet created a fully fledged theory of physics based on octonions that makes new predictions we can test by using particle colliders, say. “There’s still nothing concrete yet: there’s nothing we can tell the experimentalists to go and look for,” says Duff. Furey agrees: “It is important to say that we are nowhere near being finished.”

But Boyle, Furey, Hughes and many others are increasingly absorbed by the possibility that this strange maths really could be our best route to understanding where the laws of nature come from. In fact, Boyle thinks that the octonion-based approach could be just as fruitful as doing new experiments to find new particles. “Most people are imagining that the next bit of progress will be from some new pieces being dropped onto the table,” he says. “That would be great, but maybe we have not yet finished the process of fitting the current pieces together.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Michael Brooks*


Crowds Beat Computers in Answer to Wikipedia-Sized Maths Problem

A maths problem previously tackled with the help of a computer, which produced a proof the size of Wikipedia, has now been cut down to size by a human. Although it is unlikely to have practical applications, the result highlights the differences between two modern approaches to mathematics: crowdsourcing and computers.

Terence Tao of the University of California, Los Angeles, has published a proof of the Erdős discrepancy problem, a puzzle about the properties of an infinite, random sequence of +1s and -1s. In the 1930s, Hungarian mathematician Paul Erdős wondered whether such a sequence would always contain patterns and structure within the randomness.

One way to measure this is by calculating a value known as the discrepancy. This involves adding up the +1s and -1s in sub-sequences formed by stepping through the sequence at regular intervals – every second entry, every third entry, and so on. You might think the pluses and minuses would cancel out to make zero, but Erdős conjectured that as your sub-sequences got longer, this sum would have to grow, revealing an unavoidable structure. In fact, he said the discrepancy would be infinite, meaning the sums grow without bound, so mathematicians started by looking at smaller cases in the hope of finding clues for attacking the problem in a different way.
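A short sketch makes the definition concrete. The Python below (the function name and toy sequences are mine, purely for illustration) computes the discrepancy of a finite ±1 sequence by walking every progression d, 2d, 3d, … and tracking the largest running total, then shows why even a perfectly alternating sequence cannot escape: the every-second-term progression picks out nothing but +1s.

```python
import itertools

def discrepancy(x):
    # Largest |x[d] + x[2d] + ... + x[nd]| over all step sizes d and lengths n.
    # x is a ±1 list indexed from 1; x[0] is an unused placeholder.
    N = len(x) - 1
    worst = 0
    for d in range(1, N + 1):
        running = 0
        for m in range(d, N + 1, d):
            running += x[m]
            worst = max(worst, abs(running))
    return worst

# The alternating sequence -1, +1, -1, +1, ... looks balanced, but the
# progression 2, 4, 6, ... contains only +1s, so its discrepancy still grows.
alternating = [0] + [(-1) ** n for n in range(1, 21)]
print(discrepancy(alternating))          # 10

# Brute force: the longest ±1 sequence whose discrepancy stays at 1
longest = max(N for N in range(1, 13)
              if any(discrepancy([0] + list(s)) <= 1
                     for s in itertools.product((1, -1), repeat=N)))
print(longest)                           # 11 – every length-12 sequence already hits 2
```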

Last year, Alexei Lisitsa and Boris Konev of the University of Liverpool, UK, used a computer to prove that the discrepancy will always be larger than two. The resulting proof was a 13-gigabyte file – around the size of the entire text of Wikipedia – that no human could ever hope to check.

Helping hands

Tao has used more traditional mathematics to prove that Erdős was right, and the discrepancy is infinite no matter the sequence you choose. He did it by combining recent results in number theory with some earlier, crowdsourced work.

In 2010, a group of mathematicians, including Tao, decided to work on the problem as the fifth Polymath project, an initiative that allows professionals and amateurs alike to contribute ideas through blogs and wikis as part of a mathematical super-brain. They made some progress, but ultimately had to give up.

“We had figured out an interesting reduction of the Erdős discrepancy problem to a seemingly simpler problem involving a special type of sequence called a completely multiplicative function,” says Tao.

Then, in January this year, a new development in the study of these functions made Tao look again at the Erdős discrepancy problem, after a commenter on his blog pointed out a possible link to the Polymath project and another problem called the Elliott conjecture.

Not just conjecture

“At first I thought the similarity was only superficial, but after thinking about it more carefully, and revisiting some of the previous partial results from Polymath5, I realised there was a link: if one could prove the Elliott conjecture completely, then one could also resolve the Erdős discrepancy problem,” says Tao.

“I have always felt that that project, despite not solving the problem, was a distinct success,” writes University of Cambridge mathematician Tim Gowers, who started the Polymath project and hopes that others will be encouraged to participate in future. “We now know that Polymath5 has accelerated the solution of a famous open problem.”

Lisitsa praises Tao for doing what his algorithm couldn’t. “It is a typical example of high-class human mathematics,” he says. But mathematicians are increasingly turning to machines for help, a trend that seems likely to continue. “Computers are not needed for this problem to be solved, but I believe they may be useful in other problems,” Lisitsa says.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jacob Aron*

 


The Mathematically Perfect Table Size For Solving Any Jigsaw Puzzle

When choosing the perfectly sized table to do your jigsaw puzzle on, work out the area of the completed puzzle and multiply it by 1.73.

People may require a larger table if they like to lay all the pieces out at the start, rather than keeping them in the box or in piles

How large does your table need to be when doing a jigsaw puzzle? The answer is the area of the puzzle when assembled multiplied by 1.73. This creates just enough space for all the pieces to be laid flat without any overlap.

“My husband and I were doing a jigsaw puzzle one day and I just wondered if you could estimate the area that the pieces take up before you put the puzzle together,” says Madeleine Bonsma-Fisher at the University of Toronto in Canada.

To uncover this, Bonsma-Fisher and her husband Kent Bonsma-Fisher, at the National Research Council Canada, turned to mathematics.

Puzzle pieces take on a range of “funky shapes” that are often a bit rectangular or square, says Madeleine Bonsma-Fisher. To get around the variation in shapes, the pair worked on the basis that all the pieces took up the surface area of a square. They then imagined each square sitting inside a circle that touches its corners.

By considering the area around each puzzle piece as a circle, a shape that can be packed in multiple ways, they found that a hexagonal lattice, similar to honeycomb, would mean the pieces could interlock with no overlap. Within each hexagon is one full circle and parts of six circles.

They then found that the area taken up by the unassembled puzzle pieces arranged in this hexagonal pattern would always be the total area of the completed puzzle – calculated by multiplying its length by its width – multiplied by the square root of 3, or roughly 1.73.
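That factor is easy to check for yourself. The sketch below assumes, as the researchers did, that each piece is roughly a unit square sitting inside the circle through its corners, and that those circles pack at the hexagonal-lattice density of π/(2√3); dividing the circle area by the packing density gives the table area each loose piece needs.

```python
import math

piece_side = 1.0                                              # treat each piece as a unit square
circle_area = math.pi * (piece_side * math.sqrt(2) / 2) ** 2  # circle through the square's corners
hex_packing_density = math.pi / (2 * math.sqrt(3))            # densest packing of circles in the plane
area_per_loose_piece = circle_area / hex_packing_density

print(area_per_loose_piece / piece_side ** 2)                 # 1.7320... = sqrt(3)
```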

This also applies to jigsaw puzzle pieces with rectangular shapes, seeing as these would similarly fit within a circle.

While doing a puzzle, some people keep pieces that haven’t yet been used in the box, while others arrange them in piles or lay them on a surface, the latter being Madeleine Bonsma-Fisher’s preferred method. “If you really want to lay all your pieces out flat and be comfortable, your table should be a little over twice as big as your sample puzzle,” she says.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Chen Ly*


Mathematician Cracks Centuries-Old Problem About The Number 33

The number 33 has surprising depth

Add three cubed numbers, and what do you get? It is a question that has puzzled mathematicians for centuries.

In 1825, a mathematician known as S. Ryley proved that any fraction could be represented as the sum of three cubes of fractions. In the 1950s, mathematician Louis Mordell asked whether the same could be done for integers, or whole numbers. In other words, are there integers k, x, y and z such that k = x³ + y³ + z³ for each possible value of k?

We still don’t know. “It’s long been clear that there are maths problems that are easy to state, but fiendishly hard to solve,” says Andrew Booker at the University of Bristol, UK – Fermat’s last theorem is a famous example.

Booker has now made another dent in the cube problem by finding a sum for the number 33, previously the lowest unsolved example. He used a computer algorithm to search for a solution:

33 = 8,866,128,975,287,528³ + (-8,778,405,442,862,239)³ + (-2,736,111,468,807,040)³

To cut down calculation time, the program eliminated certain combinations of numbers. “For instance, if x, y and z are all positive and large, then there’s no way that x³ + y³ + z³ is going to be a small number,” says Booker. Even so, it took 15 years of computer-processing time and three weeks of real time to come up with the result.
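Booker’s actual search is far more sophisticated, but a naive brute force shows the flavour of the problem and why the easy cases are easy (the function name, search bound and ordering below are my own illustrative choices): try small values of x and y, and test whether the remainder is a perfect cube.

```python
def three_cubes(k, bound=100):
    # Naive search for integers with x^3 + y^3 + z^3 = k: try small x and y,
    # then check whether k - x^3 - y^3 is a perfect cube.
    small_first = sorted(range(-bound, bound + 1), key=abs)
    for x in small_first:
        for y in small_first:
            remainder = k - x**3 - y**3
            z = round(abs(remainder) ** (1 / 3)) * (1 if remainder >= 0 else -1)
            for candidate in (z - 1, z, z + 1):   # absorb floating-point error
                if candidate**3 == remainder:
                    return x, y, candidate
    return None

print(three_cubes(29))   # (1, 1, 3): 1 + 1 + 27 = 29
print(three_cubes(33))   # None – nothing in this tiny window, as the article explains
```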

For some numbers, finding a solution to the equation k = x³ + y³ + z³ is simple, but others involve huge strings of digits. “It’s really easy to find solutions for 29, and we know a solution for 30, but that wasn’t found until 1999, and the numbers were in the millions,” says Booker.

Another example is the number 3, which has two simple solutions: 1³ + 1³ + 1³ and 4³ + 4³ + (-5)³. “But to this day, we still don’t know whether there are more,” he says.

There are certain numbers that we know definitely can’t be the sum of three cubes, including 4, 5, 13, 14 and infinitely many more.

The solution for 74 was only found in 2016, which leaves 42 as the only number less than 100 for which no solution has been found. There are still 12 unsolved numbers less than 1,000.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Donna Lu*


Facebook’s AI Mathematician Can Solve University Calculus Problems

Machines are getting better at maths – artificial intelligence has learned to solve university-level calculus problems in seconds.

François Charton and Guillaume Lample at Facebook AI Research trained an AI on tens of millions of calculus problems randomly generated by a computer. The problems were mathematical expressions that involved integration, a common technique in calculus for finding the area under a curve.
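The article doesn’t spell out how you mass-produce millions of solved integration problems, but one commonly described trick for this kind of system is to work backwards: generate a random expression, differentiate it, and use the derivative as the question and the original expression as the answer. The sketch below, using the SymPy library, is only an illustration of that idea (the helper function and the tiny pool of building blocks are invented for this example), not the researchers’ actual pipeline.

```python
import random
import sympy as sp

x = sp.symbols('x')

def random_expression(depth=2):
    # Build a small random expression in x from a pool of simple building blocks
    blocks = [x, x**2, sp.sin(x), sp.cos(x), sp.exp(x), sp.log(x + 1)]
    expr = random.choice(blocks)
    for _ in range(depth):
        other = random.choice(blocks)
        expr = expr + other if random.random() < 0.5 else expr * other
    return expr

# Backward generation: the derivative becomes the training question,
# and the original expression is its integral (the answer).
for _ in range(3):
    answer = random_expression()
    question = sp.diff(answer, x)
    print(f"integrate: {question}   ->   {answer}")
```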

To find solutions, the AI used natural language processing (NLP), a computational tool commonly used to analyse language. This works because the mathematics in each problem can be thought of as a sentence, with variables, normally denoted x, playing the role of nouns and operations, such as finding the square root, playing the role of verbs. The AI then “translates” the problem into a solution.

When the pair tested the AI on 500 calculus problems, it found solutions with an accuracy of 98 per cent. A comparable standard program for solving maths problems achieved only 85 per cent on the same problems.

The team also gave the AI differential equations, which require integration as well as other techniques to solve. Here the AI wasn’t quite as good, answering correctly 81 per cent of the time for one type of differential equation and 40 per cent of the time for a harder type.

Despite this, it could still correctly answer questions that confounded other maths programs.

Doing calculus on a computer isn’t especially useful in practice, but with further training AI might one day be able to tackle maths problems that are too hard for humans to crack, says Charton.

The efficiency of the AI could save humans time in other mathematical tasks, for example, when proving theorems, says Nikos Aletras at the University of Sheffield, UK.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Gege Li*


Magic Numbers: The Beauty Of Decimal Notation

While adding up your grocery bill in the supermarket, you’re probably not thinking how important or sophisticated our number system is.

But the discovery of the present system, by unknown mathematicians in India roughly 2,000 years ago – and shared with Europe from the 13th century onwards – was pivotal to the development of our modern world.

Now, what if our “decimal” arithmetic, often called the Indo-Arabic system, had been discovered earlier? Or what if it had been shared with the Western world earlier than the 13th century?

First, let’s define “decimal” arithmetic: we’re talking about the combination of zero, the digits one through nine, positional notation, and efficient rules for arithmetic.

“Positional notation” means that the value represented by a digit depends both on the digit itself and on its position within the string of digits.

Thus 7,654 means:

(7 × 1000) + (6 × 100) + (5 × 10) + 4 = 7,654

The benefit of this positional notation system is that we need no new symbols or calculation schemes for tens, hundreds or thousands, as was needed when manipulating Roman numerals.

While numerals for the counting numbers one, two and three were seen in all ancient civilisations – and some form of zero appeared in two or three of those civilisations (including India) – the crucial combination of zero and positional notation arose only in India and Central America.

Importantly, only the Indian system was suitable for efficient calculation.

Positional arithmetic can be in base-ten (or decimal) for humans, or in base-two (binary) for computers.

In binary, 10101 means:

(1 × 16) + (0 × 8) + (1 × 4) + (0 × 2) + 1

Which, in the more-familiar decimal notation, is 21.

The rules we learned in primary school for addition, subtraction, multiplication and division can be easily extended to binary.

The binary system has been implemented in electronic circuits on computers, mostly because the multiplication table for binary arithmetic is much simpler than the decimal system.

Of course, computers can readily convert binary results to decimal notation for us humans.
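The rule works the same in any base, which is easy to see in a few lines of code. The sketch below (the function name is my own) evaluates a positional numeral by repeatedly multiplying by the base and adding the next digit, reproducing the 7,654 and 10101 examples above.

```python
def from_digits(digits, base):
    # Positional evaluation: each step shifts the running value one place left
    value = 0
    for d in digits:
        value = value * base + d
    return value

print(from_digits([7, 6, 5, 4], 10))    # 7654 = (7 x 1000) + (6 x 100) + (5 x 10) + 4
print(from_digits([1, 0, 1, 0, 1], 2))  # 21   = (1 x 16) + (1 x 4) + 1
print(int('10101', 2))                  # 21, using Python's built-in base conversion
```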

As easy as counting from one to ten

Perhaps because we learn decimal arithmetic so early, we consider it “trivial”.

Indeed the discovery of decimal arithmetic is given disappointingly brief mention in most western histories of mathematics.

In reality, decimal arithmetic is anything but “trivial”, since it eluded the best minds of the ancient world, including the Greek mathematical super-genius Archimedes of Syracuse.

Archimedes – who lived in the 3rd century BCE – saw far beyond the mathematics of his time, even anticipating numerous key ideas of modern calculus. He also used mathematics in engineering applications.

Nonetheless, he used a cumbersome Greek numeral system that hobbled his calculations.

Imagine trying to multiply the Roman numerals XXXI (31) and XIV (14).

First, one must rewrite the second argument as XIIII, then multiply the second by each letter of the first to obtain CXXXX CXXXX CXXXX XIIII.

These numerals can then be sorted by magnitude to arrive at CCCXXXXXXXXXXXXXIIII.

This can then be rewritten to yield CDXXXIV (434).

(For a bit of fun, try adding MCMLXXXIV and MMXI. First person to comment with the correct answer and their method gets a jelly bean.)

Thus, while possible, calculation with Roman numerals is significantly more time-consuming and error prone than our decimal system (although it is harder to alter the amount payable on a Roman cheque).
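If you want to check that worked example without wrestling with the letters yourself, here is a small, illustrative Roman-numeral parser (my own helper, using the standard subtractive reading in which a smaller numeral in front of a larger one is subtracted), confirming that XXXI × XIV is indeed CDXXXIV.

```python
ROMAN = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(numeral):
    # Subtractive reading: a symbol is subtracted when a larger one follows it
    total = 0
    for symbol, following in zip(numeral, numeral[1:] + ' '):
        value = ROMAN[symbol]
        total += -value if following != ' ' and ROMAN[following] > value else value
    return total

print(roman_to_int('XXXI') * roman_to_int('XIV'))   # 31 x 14 = 434
print(roman_to_int('CDXXXIV'))                      # 434
```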

History lesson

Although decimal arithmetic was known in the Arab world by the 9th century, it took many centuries to make its way to Europe.

Italian mathematician Leonardo Fibonacci travelled the Mediterranean world in the 13th century, learning from the best Arab mathematicians of the time. Even then, it was several more centuries until decimal arithmetic was fully established in Europe.

Johannes Kepler and Isaac Newton – both giants in the world of physics – relied heavily on extensive decimal calculations (by hand) to devise their theories of planetary motion.

In a similar way, present-day scientists rely on massive computer calculations to test hypotheses and design products. Even our mobile phones do surprisingly sophisticated calculations to process voice and video.

But let us indulge in some alternate history of mathematics. What if decimal arithmetic had been discovered in India even earlier, say 300 BCE? (There are indications it was known by this date, just not well documented.)

And what if a cultural connection along the silk-road had been made between Indian mathematicians and Greek mathematicians at the time?

Such an exchange would have greatly enhanced both worlds, resulting in advances beyond the reach of each system on its own.

For example, a fusion of Indian arithmetic and Greek geometry might well have led to full-fledged trigonometry and calculus, thus enabling ancient astronomers to deduce the laws of motion and gravitation nearly two millennia before Newton.

In fact, the combination of mathematics, efficient arithmetic and physics might have accelerated the development of modern technology by more than two millennia.

It is clear from history that without mathematics, real progress in science and technology is not possible (try building a mobile phone without mathematics). But it’s also clear that mathematics alone is not sufficient.

The prodigious computational skills of ancient Indian mathematicians never flowered into advanced technology, nor did the great mathematical achievements of the Greeks, or many developments in China.

On the other hand, the Romans, who were not known for their mathematics, still managed to develop some impressive technology.

But a combination of advanced mathematics, computation, and technology makes a huge difference.

Our bodies and our brains today are virtually indistinguishable from those of ancient times.

With the earlier adoption of Indo-Arabic decimal arithmetic, the modern technological world of today might – for better or worse – have been achieved centuries ago.

And that’s something worth thinking about next time you’re out grocery shopping.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jonathan Borwein (Jon)*


If I Had a Blank Cheque I’d … Turn IBM’s Watson Into a Maths Genius

Money should be no object when it comes to the numbers game.

Mathematics has many grand challenge problems, but none that can potentially be settled by pouring in more money – unlike the case of the Large Hadron Collider, the Square Kilometre Array or other such projects.

Maths is a different beast. But, of course, you’re offering me unlimited, free dosh, so I should really think of something.

Grand Challenges in Mathematics

In his famous 1900 speech, The Problems of Mathematics, David Hilbert listed 23 problems that set the stage for 20th-century mathematics.

It was a speech full of optimism for mathematics in the coming century and Hilbert felt open (or unsolved) problems were a sign of vitality:

“The great importance of definite problems for the progress of mathematical science in general … is undeniable … [for] as long as a branch of knowledge supplies a surplus of such problems, it maintains its vitality … every mathematician certainly shares … the conviction that every mathematical problem is necessarily capable of strict resolution … we hear within ourselves the constant cry: There is the problem, seek the solution. You can find it through pure thought …”

Hilbert’s problems included the continuum hypothesis, the “well-ordering” of the reals, Goldbach’s conjecture, the transcendence of powers of algebraic numbers, the Riemann hypothesis, the extension of Dirichlet’s principle and many more.

Many were solved in subsequent decades, and each time it was a major event for mathematics.

The Riemann hypothesis (which deals with the distribution of prime numbers) remains on a list of seven “third millennium” problems.

For the solution of each millennium problem, the Clay Mathematics Institute offers – in the spirit of the times – a one million dollar prize.

This prize has already been awarded to – and refused by – Perelman for resolving the Poincaré conjecture. The solution also merited Science’s Breakthrough of the Year, the first time mathematics had been so honoured.

Certainly, given unlimited moolah, learned groups could be gathered to attack each problem and assisted in various material ways. But targeted research in mathematics has even less history of success than in the other sciences … which is saying something.

Doron Zeilberger famously said that the Riemann hypothesis is the only piece of mathematics whose proof (i.e. certainty of knowledge) merits $10 billion being spent.

As John McCarthy wrote in Science in 1997:

“In 1965 the Russian mathematician Alexander Kronrod said ‘Chess is the Drosophila [a type of fruit fly] of artificial intelligence.’

“But computer chess has developed as genetics might have if the geneticists had concentrated their efforts, starting in 1910, on breeding racing Drosophila. We would have some science, but mainly we would have very fast fruit flies.”

Unfortunately, the so-called “curse of exponentiality” – whereby the resources needed to solve a problem grow exponentially with its size – pervades all computing, and especially mathematics.

As a result, many problems – such as Ramsey’s Theorem – will likely be impossible to solve by computer brute force, regardless of advances in technology.

Money for nothing

But, of course, I must get to the point. You’re offering me a blank cheque, so what would I do? A holiday in Greece for two? No, not this time. Here’s my manifesto:

Google has transformed mathematical life (as it has with all aspects of life) but is not very good at answering mathematical questions – even if one knows precisely the question to ask and it involves no symbols.

In February, IBM’s Watson computer walloped the best human Jeopardy players in one of the most impressive displays of natural language competence by a machine.

I would pour money into developing an enhanced Watson for mathematics and would acquire the whole corpus of maths for its database.

Maths ages very well and I am certain we would discover a treasure trove. Since it’s hard to tell where maths ends and physics, computer science and other subjects begin, I would be catholic in my acquisitions.

Since I am as rich as Croesus and can buy my way out of trouble, I will not suffer the same court challenges Google Books has faced.

I should also pay to develop a comprehensive computation and publishing system with features that allow one to manipulate mathematics while reading it and which ensures published mathematics is rich and multi-textured, allowing for reading at a variety of levels.

Since I am still in a spending mood, I would endow a mathematical research institute, equipped with great collaboration tools, for roughly every ten million people on Earth.

Such institutes have greatly enhanced research in the countries that can afford them and choose to fund them.

Content with my work, I would then rest.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jonathan Borwein (Jon)*