Mathematicians Found a Guaranteed Way to Win The Lottery

A pair of mathematicians studied the UK National Lottery and figured out a combination of 27 tickets that guarantees you will always win, but they tell New Scientist they don’t bother to play.

David Cushing and David Stewart calculate a winning solution

Earlier this year, two mathematicians revealed that it is possible to guarantee a win on the UK national lottery by buying just 27 tickets, despite there being 45,057,474 possible draw combinations. The pair were shocked to see their findings make headlines around the world and inspire numerous people to play these 27 tickets – with mixed results – and say they don’t bother to play themselves.

David Cushing and David Stewart at the University of Manchester, UK, used a mathematical field called finite geometry to prove that particular sets of 27 tickets would guarantee a win.

They placed the lottery numbers from 1 to 59, in pairs or triplets, on the points of five geometrical shapes, then used the lines within those shapes to generate lottery tickets. Between them, the five shapes contain 27 such lines, meaning that 27 tickets will cover every possible winning combination of two numbers, the minimum needed to win a prize. Each ticket costs £2.
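The guarantee itself can be checked by brute force: a set of tickets works if every possible draw shares at least two numbers with some ticket. The sketch below is a toy illustration of that check, not the researchers' actual 27 tickets (those are listed in their paper). It uses the Fano plane, the simplest finite geometry, as a miniature pick-3 lottery on the numbers 1 to 7, where its seven lines form a guaranteed-win set:

```python
from itertools import combinations

def guarantees_win(tickets, numbers, draw_size, match=2):
    """Brute-force check: does every possible draw share at least
    `match` numbers with at least one ticket?"""
    for draw in combinations(numbers, draw_size):
        if not any(len(set(draw) & set(t)) >= match for t in tickets):
            return False
    return True

# The Fano plane: 7 points, 7 lines, every pair of points on exactly one line.
# Treated here as a toy "lottery" drawing 3 numbers from 1-7.
fano_lines = [
    {1, 2, 3}, {1, 4, 5}, {1, 6, 7},
    {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6},
]

print(guarantees_win(fano_lines, range(1, 8), 3))       # True
print(guarantees_win(fano_lines[:-1], range(1, 8), 3))  # False: drop a line and the draw {3, 5, 6} slips through
```

Running the same check on the real game means testing all 45,057,474 draws of six numbers from 59, which is slow in pure Python but entirely mechanical; the finite-geometry construction is what makes *finding* such a small set tractable.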

It was an elegant and intuitive solution to a tricky problem, but also an irresistible headline that attracted newspapers, radio stations and television channels from around the world. And it also led many people to chance their luck – despite the researchers always pointing out that it was, statistically speaking, a very good way to lose money, as the winnings were in no way guaranteed to even cover the cost of the tickets.

Cushing says that, since the paper was released, he has received numerous emails from people who cheerily announce that they have won tiny amounts, like two free lucky dips – essentially another free go on the lottery. “They were very happy to tell me how much they’d lost basically,” he says.

The pair did calculate that their method would have won them £1810 if they had played on one night during the writing of their research paper – 21 June. Both Cushing and Stewart had decided not to play the numbers themselves that night, but they have since found that a member of their research group “went rogue” and bought the right tickets – putting himself £1756 in profit.

“He said what convinced him to definitely put them on was that it was summer solstice. He said he had this feeling,” says Cushing, shaking his head as he speaks. “He’s a professional statistician. He is incredibly lucky with it; he claims he once found a lottery ticket in the street and it won £10.”

Cushing and Stewart say that while their winning colleague – who would prefer to remain nameless – has not even bought them lunch as a thank you for their efforts, he has continued to play the 27 lottery tickets. However, he now randomly permutes the numbers to produce alternative 27-ticket, guaranteed-win sets in case others have also been inspired by the set that was made public. Avoiding that set could avert a situation where a future jackpot win would be shared with dozens or even hundreds of mathematically inclined players.

Stewart says there is no way to know how many people are doing the same because Camelot, which runs the lottery, doesn’t release that information. “If the jackpot comes up and it happens to match exactly one of the [set of] tickets and it gets split a thousand ways, that will be some indication,” he says.

Nonetheless, Cushing says that he no longer has any interest in playing the 27 tickets. “I came to the conclusion that whenever we were involved, they didn’t make any money, and then they made money when we decided not to put them on. That’s not very mathematical, but it seemed to be what was happening,” he says.

And Stewart is keen to stress that mathematics, no matter how neat a proof, can never make the UK lottery a wise investment. “If every single man, woman and child in the UK bought a separate ticket, we’d only have a quarter chance of someone winning the jackpot,” he says.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Matthew Sparkes*


Mathematician Wins Turing Award For Harnessing Randomness

Avi Wigderson has won the 2023 Turing award for his work on understanding how randomness can shape and improve computer algorithms.

The mathematician Avi Wigderson has won the 2023 Turing award, often referred to as the Nobel prize for computing, for his work on understanding how randomness can shape and improve computer algorithms.

Wigderson, who also won the prestigious Abel prize in 2021 for his mathematical contributions to computer science, was taken aback by the award. “The [Turing] committee fooled me into believing that we were going to have some conversation about collaborating,” he says. “When I zoomed in, the whole committee was there and they told me. I was excited, surprised and happy.”

Computers work in a predictable way at the hardware level, but this can make it difficult for them to model real-world problems, which often have elements of randomness and unpredictability. Wigderson, at the Institute for Advanced Study in Princeton, New Jersey, has shown over a decades-long career that computers can also harness randomness in the algorithms that they run.

In the 1980s, Wigderson and his colleagues discovered that by inserting randomness into some algorithms, they could make certain problems easier and faster to solve, but it was unclear how general this technique was. “We were wondering whether this randomness is essential, or maybe you can always get rid of it somehow if you’re clever enough,” he says.
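A classic illustration of how randomness can speed things up – a standard textbook example, not one of Wigderson's own constructions – is Freivalds' algorithm. Verifying that two n×n matrices multiply to give a third takes on the order of n³ steps if you redo the multiplication, but a randomised check costs only n² steps per trial, with an error probability that halves with each repetition:

```python
import random

def mat_vec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def freivalds(A, B, C, trials=30, rng=random):
    """Randomised check that A times B equals C.
    If they differ, each trial misses with probability at most 1/2,
    so 30 trials give an error probability below one in a billion."""
    n = len(A)
    for _ in range(trials):
        r = [rng.randint(0, 1) for _ in range(n)]
        # Compare A(Br) with Cr: three matrix-vector products, O(n^2) work.
        if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
            return False  # definitely not equal
    return True  # equal with overwhelming probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]  # the true product of A and B
print(freivalds(A, B, C))  # True
```

Whether every such randomised speed-up can in principle be removed – derandomised – is exactly the kind of question Wigderson's work addresses.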

One of Wigderson’s most important discoveries was clarifying the relationship between randomness and how difficult problems are to solve. He also showed that certain algorithms that relied on randomness and were hard to run could be made deterministic, or non-random, and easier to run.

These findings helped computer scientists better understand one of the most famous unproven conjectures in computer science, called “P ≠ NP”, which proposes that easy and hard problems for a computer to solve are fundamentally different. Using randomness, Wigderson discovered special cases where the two classes of problem were the same.

Wigderson first started exploring the relationship between randomness and computers in the 1980s, before the internet existed, and was attracted to the ideas he worked on by intellectual curiosity, rather than how they might be used. “I’m a very impractical person,” he says. “I’m not really motivated by applications.”

However, his ideas have become important for a wide swath of modern computing applications, from cryptography to cloud computing. “Avi’s impact on the theory of computation in the last 40 years is second to none,” says Oded Goldreich at the Weizmann Institute of Science in Israel. “The diversity of the areas to which he has contributed is stunning.”

One of the unexpected ways in which Wigderson’s ideas are now widely used was his work, with Goldreich and others, on zero-knowledge proofs, which detail ways of verifying information without revealing the information itself. These methods are fundamental for cryptocurrencies and blockchains today as a way to establish trust between different users.

Although great strides in the theory of computation have been made over Wigderson’s career, he says that the field is still full of interesting and unsolved problems. “You can’t imagine how happy I am that I am where I am, in the field that I’m in,” he says. “It’s bursting with intellectual questions.”

Wigderson will receive a $1 million prize as part of the Turing award.


*Credit for article given to Alex Wilkins*


Mathematician Wins Abel Prize For Solving Equations With Geometry

Luis Caffarelli has been awarded the most prestigious prize in mathematics for his work on nonlinear partial differential equations, which have many applications in the real world.

Luis Caffarelli has won the 2023 Abel prize, unofficially called the Nobel prize for mathematics, for his work on a class of equations that describe many real-world physical systems, from melting ice to jet engines.

Caffarelli was having breakfast with his wife when he found out the news. “The breakfast was better all of a sudden,” he says. “My wife was happy, I was happy — it was an emotional moment.”

Based at the University of Texas at Austin, Caffarelli started work on partial differential equations (PDEs) in the late 1970s and has contributed to hundreds of papers since. He is known for making connections between seemingly distant mathematical concepts, such as how a theory describing the smallest possible areas that surfaces can occupy can be used to describe PDEs in extreme cases.

PDEs have been studied for hundreds of years and describe almost every sort of physical process, ranging from fluids to combustion engines to financial models. Caffarelli’s most important work concerned nonlinear PDEs, which describe complex relationships between several variables. These equations are more difficult to solve than other PDEs, and often produce solutions that don’t make sense in the physical world.

Caffarelli helped tackle these problems with regularity theory, which sets out how to deal with problematic solutions by borrowing ideas from geometry. His approach carefully elucidated the troublesome parts of the equations, solving a wide range of problems over his more than four-decade career.

“Forty years after these papers appeared, we have digested them and we know how to do some of these things more efficiently,” says Francesco Maggi at the University of Texas at Austin. “But when they appeared back in the day, in the 80s, these were alien mathematics.”

Many of the nonlinear PDEs that Caffarelli helped describe were so-called free boundary problems, which describe physical scenarios where two objects in contact share a changing surface, like ice melting into water or water seeping through a filter.

“He has used insights that combined ingenuity, and sometimes methods that are not ultra-complicated, but which are used in a manner that others could not see — and he has done that time and time again,” says Thomas Chen at the University of Texas at Austin.

These insights have also helped other researchers translate equations so that they can be solved on supercomputers. “He has been one of the most prominent people in bringing this theory to a point where it’s really useful for applications,” says Maggi.


*Credit for article given to Alex Wilkins*



AI Is Helping Mathematicians Build A Periodic Table Of Shapes

Atomic shapes are so simple that they can’t be broken down any further. Mathematicians are trying to build a “periodic table” of these shapes, and they hope artificial intelligence can help.

Mathematicians attempting to build a “periodic table” of shapes have turned to artificial intelligence for help – but say they don’t understand how it works or whether it can be 100 per cent reliable.

Tom Coates at Imperial College London and his colleagues are working to classify shapes known as Fano varieties, which are so simple that they can’t be broken down into smaller components. Just as chemists arranged elements in the periodic table by their atomic weight and group to reveal new insights, the researchers hope that organising these “atomic” shapes by their various properties will help in understanding them.

The team has assigned each atomic shape a sequence of numbers derived from features such as the number of holes it has or the extent to which it twists around itself. This acts as a bar code to identify it.

Coates and his colleagues have now created an AI that can predict certain properties of these shapes from their bar code numbers alone, with an accuracy of 98 per cent – suggesting a relationship that some mathematicians intuitively thought might be real, but have found impossible to prove.

Unfortunately, there is a vast gulf between demonstrating that something is very often true and mathematically proving that it is always so. While the team suspects a one-to-one connection between each shape and its bar code, the mathematics community is “nowhere close” to proving this, says Coates.

“In pure mathematics, we don’t regard anything as true unless we have an actual proof written down on a piece of paper, and no advances in our understanding of machine learning will get around this problem,” says team member Alexander Kasprzyk at the University of Nottingham, UK.

Even without a proven link between the Fano varieties and bar codes, Kasprzyk says that the AI has let the team organise atomic shapes in a way that begins to mimic the periodic table, so that when you read from left to right, or up and down, there seem to be generalisable patterns in the geometry of the shapes.

“We had no idea that would be true, we had no idea how to begin doing it,” says Kasprzyk. “We probably would still not have had any idea about this in 50 years’ time. Frankly, people have been trying to study these things for 40 years and failing to get to a picture like this.”

The team hopes to refine the model to the point where missing spaces in its periodic table could point to the existence of unknown shapes, or where clustering of shapes could lead to logical categorisation, resulting in a better understanding and new ideas that could create a method of proof. “It clearly knows more things than we know, but it’s so mysterious right now,” says team member Sara Veneziale at Imperial College London.

Graham Niblo at the University of Southampton, UK, who wasn’t involved in the research, says that the work is akin to forming an accurate picture of a cello or a French horn just from the sound of a G note being played – but he stresses that humans will still need to tease understanding from the results provided by AI and create robust and conclusive proofs of these ideas.

“AI has definitely got uncanny abilities. But in the same way that telescopes didn’t put astronomers out of work, AI doesn’t put mathematicians out of work,” he says. “It just gives us a new tool that allows us to explore parts of the mathematical landscape that were out of reach, or, like a microscope, that were too obscure for us to notice with our current understanding.”


*Credit for article given to Matthew Sparkes*


Should All Mathematical Proofs Be Checked By A Computer?

Proofs, the central tenet of mathematics, occasionally have errors in them. Could computers stop this from happening, asks mathematician Emily Riehl.

Computer proof assistants can verify that mathematical proofs are correct

One miserable morning in 2017, in the third year of my tenure-track job as a mathematics professor, I woke up to a worrying email. It was from a colleague who questioned the proof of a key theorem in a highly cited paper I had co-authored. “I had always kind of assumed that this was probably not true in general, though I have no proof either way. Did I miss something?” he asked. The proof, he noted, appeared to rest on a tacit assumption that was not warranted.

Much to my alarm and embarrassment, I realised immediately that my colleague was correct. After an anxious week working to get to the bottom of my mistake, it turned out I was very lucky. The theorem was true; it just needed a new proof, which my co-authors and I supplied in a follow-up paper. But if the theorem had been false, the whole edifice of consequences “proven” using it would have come crashing down.

The essence of mathematics is the concept of proof: a combination of assumed axioms and logical inferences that demonstrate the truth of a mathematical statement. Other mathematicians can then attempt to follow the argument for themselves to identify any holes or convince themselves that the statement is indeed true. Patched up in this way, theorems originally proven by the ancient Greeks about the infinitude of primes or the geometry of planar triangles remain true today – and anyone can see the arguments for why this must be.

Proofs have meant that mathematics has largely avoided the replication crises pervading other sciences, where the results of landmark studies have not held up when the experiments were conducted again. But as my experience shows, mistakes in the literature still occur. Ideally, a false claim, like the one I made, would be caught by the peer review process, where a submitted paper is sent to an expert to “referee”. In practice, however, the peer review process in mathematics is less than perfect – not just because experts can make mistakes themselves, but also because they often do not check every step in a proof.

This is not laziness: theorems at the frontiers of mathematics can be dauntingly technical, so much so that it can take years or even decades to confirm the validity of a proof. The mathematician Vladimir Voevodsky, who received a Fields medal, the discipline’s highest honour, noted that “a technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail”. After several experiences in which mistakes in his proofs took over a decade to be resolved – a long time for something to sit in logical limbo – Voevodsky’s subsequent crisis of confidence led him to take the unusual step of abandoning his “curiosity-driven research” to develop a computer program that could verify the correctness of his work.

This kind of computer program is known as a proof assistant, though it might be better called a “proof checker”. It can verify that a string of text proves the stated theorem. The proof assistant knows the methods of logical reasoning and is equipped with a library of proofs of standard results. It will accept a proof only after satisfying each step in the reasoning process, with no shortcuts of the sort that human experts often use.

For instance, a computer can verify that there are infinitely many prime numbers by validating the following proof, which is an adaptation of Greek mathematician Euclid’s argument. The human mathematician first tells the computer exactly what is being claimed – in this case that for any natural number N there is always some prime number p that is larger. The human then tells the computer the formula, defining p to be the minimum prime factor of the number formed by multiplying all the natural numbers up to N together and adding 1, represented as N! + 1.

For the computer proof assistant to make sense of this, it needs a library that contains definitions of the basic arithmetic operations. It also needs proofs of theorems, like the fundamental theorem of arithmetic, which tells us that every natural number can be factored uniquely into a product of primes. The proof assistant then demands a proof that this prime number p is greater than N. This is argued by contradiction – a technique where following an assumption to its conclusion leads to something that cannot possibly be true, demonstrating that the original assumption was false. In this case, if p were less than or equal to N, it would be a factor of both N! + 1 (by definition) and N! (as one of the numbers multiplied together to form it). Some simple mathematics says that p must then also be a factor of their difference, 1, which is absurd.
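The informal argument can also be run as a computation. The sketch below is ordinary Python, not a proof assistant, so it only checks instances rather than proving the general statement: it computes the witness p, the minimal prime factor of N! + 1, for small N and confirms that p always exceeds N:

```python
from math import factorial

def min_prime_factor(m):
    """Smallest prime factor of m (for m >= 2), found by trial division."""
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m  # m itself is prime

for N in range(1, 9):
    p = min_prime_factor(factorial(N) + 1)
    assert p > N  # Euclid's argument guarantees this for every N
    print(f"N = {N}: N! + 1 = {factorial(N) + 1}, minimal prime factor p = {p}")
```

A proof assistant differs precisely in that it accepts only the general argument, valid for every N at once, with each inference justified from definitions and library theorems.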

Computer proof assistants can be used to verify proofs that are so long that human referees are unable to check every step. In 1998, for example, Samuel Ferguson and Thomas Hales announced a proof of Johannes Kepler’s 1611 conjecture that the most efficient way to pack spheres into three-dimensional space is the familiar “cannonball” packing. When their result was accepted for publication in 2005 it came with a caveat: the journal’s reviewers attested to “a strong degree of conviction of the essential correctness of this proof approach” – they declined to certify that every step was correct.

Ferguson and Hales’s proof was based on a strategy proposed by László Fejes Tóth in 1953, which reduced the Kepler conjecture to an optimisation problem in a finite number of variables. Ferguson and Hales figured out how to subdivide this optimisation problem into a few thousand cases that could be solved by linear programming, which explains why human referees felt unable to vouch for the correctness of each calculation. In frustration, Hales launched a formalisation project, where a team of mathematicians and computer scientists meticulously verified every logical and computational step in the argument. The resulting 22-author paper was published in 2017 to as much fanfare as the original proof announcement.

Computer proof assistants can also be used to verify results in subfields that are so technical that only specialists understand the meaning of the central concepts. Fields medallist Peter Scholze spent a year working out the proof of a theorem that he wasn’t quite sure he believed and doubted anyone else would have the stamina to check. To be sure that his reasoning was correct before building further mathematics on a shaky foundation, Scholze posed a formalisation challenge in a blog post entitled the “liquid tensor experiment” in December 2020. The mathematics involved was so cutting edge that it took 60,000 lines of code to formalise the last five lines of the proof – and all the background results that those arguments relied upon – but nevertheless this project was completed and the proof confirmed this past July by a team led by Johan Commelin.

Could computers just write the proofs themselves, without involving any human mathematicians? At present, large language models like ChatGPT can fluently generate mathematical prose and even output it in LaTeX, a typesetting program for mathematical writing. However, the logic of these “proofs” tends to be nonsense. Researchers at Google and elsewhere are looking to pair large language models with automatically generated formalised proofs to guarantee the correctness of the mathematical arguments, though initial efforts are hampered by sparse training sets – libraries of formalised proofs are much smaller than the collective mathematical output. But while machine capabilities are relatively limited today, auto-formalised maths is surely on its way.

In thinking about how the human mathematics community might wish to collaborate with computers in the future, we should return to the question of what a proof is for. It’s never been solely about separating true statements from false ones, but about understanding why the mathematical world is the way it is. While computers will undoubtedly help humans check their work and learn to think more clearly – it’s a much more exacting task to explain mathematics to a computer than it is to explain it to a kindergartener – understanding what to make of it all will always remain a fundamentally human endeavour.


*Credit for article given to Emily Riehl*


DeepMind Created a Maths AI That Can Add Up To 6 But Gets 7 Wrong

Artificial intelligence firm DeepMind has tackled games like Go and Starcraft, but now it is turning its attention to more sober affairs: how to solve school-level maths problems.

Researchers at the company tasked an AI with teaching itself to solve arithmetic, algebra and probability problems, among others. It didn’t do a very good job: when the neural network was tested on a maths exam taken by 16-year-olds in the UK, it got just 14 out of 40 questions correct, or the equivalent of an E grade.

There were also strange quirks in the AI’s ability. For example, it could successfully add up 1+1+1+1+1+1 to make 6, but failed when an extra 1 was added. On the other hand, it gave the correct answer for longer sequences and much bigger numbers.

Other oddities included the ability to correctly answer 68 to the question “calculate 17×4.”, but when the full stop was removed, the answer came out at 69.

Puzzling behaviour

The DeepMind researchers concede they don’t have a good explanation for this behaviour. “At the moment, learning systems like neural networks are quite bad at doing ‘algebraic reasoning’,” says David Saxton, one of the team behind the work.

Despite this, it is still worth trying to teach a machine to solve maths problems, says Marcus du Sautoy, a mathematician at the University of Oxford.

“There are already algorithms out there to do these problems much faster, much better than machine-learning algorithms, but that’s not the point,” says du Sautoy. “They are setting themselves a different target – we want to start from nothing, by being told whether you got that one wrong, that one right, whether it can build up how to do this itself. Which is fascinating.”

An AI capable of solving advanced mathematics problems could put him out of a job, says du Sautoy. “That’s my fear. It may not take too much for an AI to get maturity in this world, whereas a maturity in the musical or visual or language world might be much harder for it. So I do think my subject is vulnerable.”

However, he takes some comfort that machine learning’s general weakness in remaining coherent over a long form – such as a novel, rather than a poem – will keep mathematicians safe for now. Creating mathematical proofs, rather than solving maths problems for 16-year-olds, will be difficult for machines, he says.

Noel Sharkey at the University of Sheffield, UK, says the research is more about finding the limits of machine-learning techniques than about promoting advancements in mathematics.

The interesting thing, he says, will be to see how the neural networks can adapt to challenges outside of those they were trained on. “The big question is to ask how well they can generalise to novel examples that were not in the training set. This has the potential to demonstrate formal limits to what this type of learning is capable of.”

Saxton says training a neural network on maths problems could help provide AI with reasoning skills for other applications.

“Humans are good at maths, but they are using general reasoning skills that current artificial learning systems don’t possess,” he says. “If we can develop models that are good at solving these problems, then these models would likely be using general skills that would be good at solving other hard problems in AI as well.”

He hopes the work could make a small contribution towards more general mathematical AIs that could tackle things such as proving theorems.

The DeepMind team has published its data set of maths questions, and encouraged people to train their own AI.


*Credit for article given to Adam Vaughan*


Decade-Long Struggle Over Maths Proof Could Be Decided By $1m Prize

Mathematician Shinichi Mochizuki’s Inter-universal Teichmüller theory has attracted controversy since it was published in 2012, with no one able to agree whether it is true. Now, a $1 million prize is being launched to settle the matter.

The Inter-Universal Geometry Center (IUGC) is overseeing the prize

Zen University

A prize of $1 million is being offered to anyone who can either prove or disprove an impenetrable mathematical theory, the veracity of which has been debated for over a decade.

Inter-universal Teichmüller theory (IUT) was created by Shinichi Mochizuki at Kyoto University, Japan, in a bid to solve a long-standing problem called the ABC conjecture, which focuses on the simple equation a + b = c. It suggests that if a and b are made up of large powers of prime numbers, then c isn’t usually divisible by large powers of primes.
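The conjecture is usually stated precisely in terms of the "radical" rad(abc), the product of the distinct primes dividing a, b and c: for coprime a + b = c, the quality q = log c / log rad(abc) should exceed 1 only in rare cases. The sketch below uses these standard definitions to compute q for a well-known high-quality triple, 2 + 3¹⁰ × 109 = 23⁵:

```python
from math import log

def radical(n):
    """Product of the distinct prime factors of n."""
    rad, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            rad *= d
            while n % d == 0:
                n //= d  # strip repeated factors; each prime counts once
        d += 1
    if n > 1:
        rad *= n  # whatever remains is a prime factor
    return rad

def quality(a, b, c):
    """q = log(c) / log(rad(abc)) for an abc triple with a + b = c."""
    assert a + b == c
    return log(c) / log(radical(a * b * c))

a, b = 2, 3**10 * 109
c = a + b                          # 6436343, which is 23**5
print(radical(a * b * c))          # 2 * 3 * 109 * 23 = 15042
print(round(quality(a, b, c), 4))  # 1.6299
```

A quality of about 1.63 means c is unusually large compared with the radical; the conjecture says triples like this can only occur finitely often above any quality threshold greater than 1.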

In 2012, Mochizuki published a series of papers, running to more than 500 pages, that appeared to be a serious attempt at tackling the problem, but his dense and unusual style baffled many experts.

His apparent proof struggled to find acceptance and attracted criticism from some of the world’s most prominent mathematicians, including two who claimed in 2018 to have found a “serious, unfixable gap” in the work. Despite this, the paper was formally published in 2020, in a journal edited by Mochizuki himself. It was reported by Nature that he had nothing to do with the journal’s decision.

Since then, the theory has remained in mathematical limbo, with some people believing it to be true, but others disagreeing. Many mathematicians contacted for this story, including Mochizuki, either didn’t respond or declined to comment on the matter.

Now, the founder of Japanese telecoms and media company Dwango, Nobuo Kawakami, hopes to settle the issue by launching a cash prize for a paper that can prove – or disprove – the theory.

Two prizes are on offer. The first will see between $20,000 and $100,000 awarded annually, for the next 10 years, to the author of the best paper on IUT and related fields. The second – worth $1 million – is reserved for the mathematician who can write a paper that “shows an inherent flaw in the theory”, according to a press release.

Dwango didn’t respond to a request for interview, but during a press conference Kawakami said he hoped that his “modest reward will help increase the number of mathematicians who decide to get involved in IUT theory”.

To be eligible for the prizes, papers will need to be published in a peer-reviewed journal selected from a list compiled by the prize organisers, according to a report in The Asahi Shimbun newspaper, and Kawakami will choose the winner.

The competition is being run by the Inter-Universal Geometry Center (IUGC), which has been founded by Kawakami specifically to promote IUT, says Fumiharu Kato, director of the IUGC.

Kato says that Kawakami isn’t a mathematician, but sees IUT as a momentous part of the history of mathematics and believes that the cash prize is a “good investment” if it can finally clear up the controversy one way or the other.

“For me, IUT theory is logically simple. Of course, I mean, technically very, very hard. But logically it’s simple,” says Kato, who estimates that fewer than 10 people in the world comprehend the concept.

Kato believes that the controversy stems from the fact that Mochizuki doesn’t want to promote his theory, talk to journalists or other mathematicians about it, or present the idea in a more easily digestible format, believing his work speaks for itself. Kato says that his current and former students are also reluctant to do so because they see him “as a god” in mathematics and don’t want to go against his wishes.

Because of this, most mathematicians are “at a loss” for a way to understand IUT, says Kato, who concedes that, despite earlier optimism about the idea, it is possible that the theory will eventually be disproven.

Ivan Fesenko at the University of Nottingham, UK, who is also deputy director at the IUGC, has long been a supporter of Mochizuki. He told New Scientist that there is no doubt about the correctness of IUT and that it all hinges on a deep understanding of an existing field called anabelian geometry.

“All negative public statements about the validity of IUT have been made by people who do not have proven expertise in anabelian geometry and who have zero research track record in anabelian geometry,” he says. “The new $1m IUT Challenger Prize will challenge every mathematician who has ever publicly criticised IUT to produce a paper with full proofs and get it published in a good math journal.”


*Credit for article given to Matthew Sparkes*


Octonions: The Strange Maths That Could Unite The Laws Of Nature

Could a system of eight-dimensional numbers help physicists find a single mathematical framework that describes the entire universe?

Words can be slippery. That is perhaps even more true in physics than it is in the rest of life. Think of a “particle”, for instance, and we might conjure an image of a tiny sphere. In truth, “particle” is just a poetic term for something far removed from our everyday experience – which is why our best descriptions of reality make use of the cold precision of mathematics.

But just as there are many human languages, so there is more than one type of number system. Most of us deal with only the familiar number line that begins 1, 2, 3. But other, more exotic systems are available. Recently, physicists have been asking a profound question: what if we are trying to describe reality with the wrong type of numbers?

Each mathematical system has its own special disposition, just like languages. Love poems sound better in French. German has that knack of expressing sophisticated concepts – like schadenfreude – in a few syllables. Now, in the wake of a fresh breakthrough revealing tantalising connections between models of how matter works at different energy scales, it seems increasingly likely that an exotic set of numbers known as the octonions might have what it takes to capture the truth about reality.

Mathematicians are excited because they reckon that by translating our theories of reality into the language of the octonions, it could tidy up some of the deepest problems in physics and clear a path to a “grand unified theory” that can describe the universe in one statement. “This feels like a very promising direction,” says Latham Boyle at the Perimeter Institute in Waterloo, Canada. “I find it irresistible to think about.”

Many physicists dream of finding a grand unified theory, a single mathematical framework that tells us where the forces of nature come from and how they act on matter. Critically, such a theory would also capture how and why these properties changed over the life of the universe, as we know they have.

So far, the closest we have come is the standard model of particle physics, which details the universe’s fundamental particles and forces: electrons, quarks, photons and the rest. The trouble is, the standard model has its shortcomings. To make it work, we must feed in around 20 measured numbers, such as the masses of particles. We don’t know why these numbers are what they are. Worse, the standard model has little to say about space-time, the canvas in which particles live. We seem to live in a four-dimensional space-time, but the standard model doesn’t specify that this must be so. “Why not, say, seven-dimensional space-time?” Boyle wonders.

Real and imaginary numbers

Many think the solution to these woes will come when experiments uncover a missing piece of the standard model. But after years of effort, this hasn’t happened, and some are wondering if the problem is the maths itself.

Mathematicians have known for centuries that there are numbers other than the ones we can count on our fingers. Take the square root of -1, known as i. No real number multiplied by itself gives -1, since both 1 × 1 and -1 × -1 equal 1, so i is an “imaginary number”. Mathematicians found that by combining i with the real numbers – all the numbers you could place on a number line, including negative numbers and decimals – they could fashion a new system called the complex numbers.

Think of complex numbers as being two-dimensional; the two parts of each number can record unrelated properties of the same object. This turns out to be extremely handy. All our electronic infrastructure relies on complex numbers. And quantum theory, our hugely successful description of the small-scale world, doesn’t work without them.

In 1843, Irish mathematician William Rowan Hamilton took things a step further. Supplementing the real and the imaginary numbers with two more sets of imaginary numbers called j and k, he gave us the quaternions, a set of four-dimensional numbers. Within a few months, Hamilton’s friend John Graves had found another system with eight dimensions called the octonions.

Real numbers, complex numbers, quaternions and octonions are collectively known as the normed division algebras. They are the only sets of numbers with which you can perform addition, subtraction, multiplication and division. Wilder systems are possible – the 16-dimensional sedenions, for example – but here the normal rules break down.
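These properties are easy to check directly. Here is a minimal sketch (my own illustration in plain Python, not from the research described; a quaternion a + bi + cj + dk is stored as the tuple (a, b, c, d)) showing that quaternion multiplication gives up commutativity while keeping the “normed” property – the length of a product equals the product of the lengths:

```python
import math

def qmul(p, q):
    """Hamilton's quaternion product for p = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qnorm(q):
    """Length of a quaternion, just like the length of a 4D vector."""
    return math.sqrt(sum(x * x for x in q))

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

assert qmul(i, j) == k               # ij = k ...
assert qmul(j, i) == (0, 0, 0, -1)   # ... but ji = -k: order matters

# The "normed division algebra" property: |pq| = |p||q| exactly.
p, q = (1.0, 2.0, -3.0, 0.5), (-2.0, 1.0, 0.0, 4.0)
assert math.isclose(qnorm(qmul(p, q)), qnorm(p) * qnorm(q))
```

The octonions give up associativity as well but keep the norm property; the sedenions lose even that, which is why the list of normed division algebras stops at four.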

Today, physics makes prolific use of three of these systems. The real numbers are ubiquitous. Complex numbers are essential in particle physics as well as quantum physics. The mathematical structure of general relativity, Albert Einstein’s theory of gravity, can be expressed elegantly by the quaternions.

The octonions stand oddly apart as the only system not to tie in with a central physical law. But why would nature map onto only three of these four number systems? “This makes one suspect that the octonions – the grandest and least understood of the four – should turn out to be important too,” says Boyle.

In truth, physicists have been thinking such thoughts since the 1970s, but the octonions have yet to fulfil their promise. Michael Duff at Imperial College London was, and still is, drawn to the octonions, but he knows many have tried and failed to decipher their role in describing reality. “The octonions became known as the graveyard of theoretical physics,” he says.

That hasn’t put off a new generation of octonion wranglers, including Nichol Furey at Humboldt University of Berlin. She likes to look at questions in physics without making any assumptions. “I try to solve problems right from scratch,” she says. “In doing so, you can often find alternate paths that earlier authors may have missed.” Now, it seems she and others might be making the beginnings of an octonion breakthrough.

Internal symmetries in quantum mechanics

To get to grips with Furey’s work, it helps to understand a concept in physics called internal symmetry. This isn’t the same as the rotational or reflectional symmetry of a snowflake. Instead, it refers to a number of more abstract properties, such as the character of certain forces and the relationships between fundamental particles. All these particles are defined by a series of quantum numbers – their mass, charge and a quantum property called spin, for instance. If a particle transforms into another particle – an electron becoming a neutrino, say – some of those numbers will change while others won’t. These symmetries define the structure of the standard model.

Internal symmetries are central to the quest for a grand unified theory. Physicists have already found various mathematical models that might explain how reality worked back at the time when the universe had much more energy. At these higher energies, it is thought there would have been more symmetries, meaning that some forces we now experience as distinct would have been one and the same. None of these models have managed to rope gravity into the fold: that would require an even grander “theory of everything”. But they do show, for instance, that the electromagnetic force and weak nuclear force would have been one “electroweak” force until a fraction of a second after the big bang. As the universe cooled, some of the symmetries broke, meaning this particular model would no longer apply.

Each different epoch requires a different mathematical model with a gradually reducing number of symmetries. In a sense, these models all contain each other, like a set of Russian dolls.

One of the most popular candidates for the outermost doll – the grand unified theory that contains all the others – is known as the spin(10) model. It has a whopping 45 symmetries. In one formulation, inside this sits the Pati-Salam model, with 21 symmetries. Then comes the left-right symmetric model, with 15 symmetries, including one known as parity, the kind of left-right symmetry that we encounter when we look in a mirror. Finally, we reach the standard model, with 12 symmetries. The reason we study each of these models is that they work; their symmetries are consistent with experimental evidence. But we have never understood what determines which symmetries fall away at each stage.
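The symmetry counts quoted above are the dimensions of the Lie groups in each model, and can be checked in a few lines. This sketch assumes the standard group assignments (Spin(10); SU(4) × SU(2) × SU(2) for Pati-Salam; SU(3) × SU(2) × SU(2) × U(1) for the left-right symmetric model; SU(3) × SU(2) × U(1) for the standard model), which the article doesn’t spell out:

```python
# Number of symmetries (generators) of the relevant Lie groups:
# SU(n) has n**2 - 1, Spin(n) has n*(n-1)/2, U(1) has 1.
def dim_su(n): return n**2 - 1
def dim_spin(n): return n * (n - 1) // 2
DIM_U1 = 1

models = {
    "Spin(10) grand unified": dim_spin(10),                               # 45
    "Pati-Salam":             dim_su(4) + dim_su(2) + dim_su(2),          # 21
    "left-right symmetric":   dim_su(3) + dim_su(2) + dim_su(2) + DIM_U1, # 15
    "standard model":         dim_su(3) + dim_su(2) + DIM_U1,             # 12
}

for name, d in models.items():
    print(f"{name}: {d} symmetries")
```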

In August 2022, Furey, together with Mia Hughes at Imperial College London, showed for the first time that the division algebras, including the octonions, could provide this link. To do so, they drew on ideas Furey had years ago to translate all the mathematical symmetries and particle descriptions of various models into the language of division algebras. “It took a long time,” says Furey. The task required using the Dixon algebra, a set of numbers that allow you to combine real, complex, quaternion and octonion maths. The result was a system that describes a set of octonions specified by quaternions, which are in turn specified by complex numbers that are specified by a set of real numbers. “It’s a fairly crazy beast,” says Hughes.

It is a powerful beast, too. The new formulation exposed an intriguing characteristic of the Russian doll layers. When some numbers involved in the complex, quaternion and octonion formulations are swapped from positive to negative, or vice versa, some of the symmetries change and some don’t. Only the ones that don’t are found in the next layer down. “It allowed us to see connections between these well-studied particle models that had not been picked up on before,” says Furey. This “division algebraic reflection”, as Furey calls it, could be dictating what we encounter in the real physical universe, and – perhaps – showing us the symmetry-breaking road up to the long-sought grand unified theory.

The result is new, and Furey and Hughes haven’t yet been able to see where it may lead. “It hints that there might be some physical symmetry-breaking process that somehow depends upon these division algebraic reflections, but so far the nature of that process is fairly mysterious,” says Hughes.

Furey says the result might have implications for experiments. “We are currently investigating whether the division algebras are telling us what can and cannot be directly measured at different energy scales,” she says. It is a work in progress, but analysis of the reflections seems to suggest that there are certain sets of measurements that physicists should be able to make on particles at low energies – such as the measurement of an electron’s spin – and certain things that won’t be measurable, such as the colour charge of quarks.

Among those who work on octonions, the research is making waves. Duff says that trying to fit the standard model into octonionic language is a relatively new approach: “If it paid off, it would be very significant, so it’s worth trying.” Corinne Manogue at Oregon State University has worked with octonions for decades and has seen interest ebb and flow. “This moment does seem to be a relative high,” she says, “primarily, I think, because of Furey’s strong reputation and advocacy.”

The insights from the octonions don’t stop there. Boyle has been toying with another bit of exotic maths called the “exceptional Jordan algebra”, which was invented by German physicist Pascual Jordan in the 1930s. Working with two other luminaries of quantum theory, Eugene Wigner and John von Neumann, Jordan found a set of mathematical properties of quantum theory that resisted classification and were closely related to the octonions.

Probe this exceptional Jordan algebra deeply enough and you will find it contains the mathematical structure that we use to describe Einstein’s four-dimensional space-time. What’s more, we have known for decades that within the exceptional Jordan algebra, you will find a peculiar mathematical structure that we derived through an entirely separate route and process in the early 1970s to describe the standard model’s particles and forces. In other words, this is an octonionic link between our theories of space, time, gravity and quantum theory. “I think this is a very striking, intriguing and suggestive observation,” says Boyle.

Responding to this, Boyle has dug deeper and discovered something intriguing about the way a class of particles called fermions, which includes common particles like electrons and quarks, fits into the octonion-based language. Fermions are “chiral”, meaning their mirror-image reflections – the symmetry physicists call parity – look different. This had created a problem when incorporating fermions into the octonion-based versions of the standard model. But Boyle has now found a way to fix that – and it has a fascinating spin-off. Restoring the mirror symmetry that is broken in the standard model also enables octonionic fermions to sit comfortably in the left-right symmetric model, one level further up towards the grand unified theory.

Beyond the big bang

This line of thinking might even take us beyond the grand unified theory, towards an explanation of where the universe came from. Boyle has been working with Neil Turok, his colleague at the Perimeter Institute, on what they call a “two-sheeted universe” that involves a set of symmetries known as charge, parity and time (CPT). “In this hypothesis, the big bang is a kind of mirror separating our half of the universe from its CPT mirror image on the other side of the bang,” says Boyle. The octonionic properties of fermions that sit in the left-right symmetric model are relevant in developing a coherent theory for this universe, it turns out. “I suspect that combining the octonionic picture with the two-sheeted picture of the cosmos is a further step in the direction of finding the right mathematical framework for describing nature,” says Boyle.

As with all the discoveries linking the octonions to our theories of physics so far, Boyle’s work is only suggestive. No one has yet created a fully fledged theory of physics based on octonions that makes new predictions we can test by using particle colliders, say. “There’s still nothing concrete yet: there’s nothing we can tell the experimentalists to go and look for,” says Duff. Furey agrees: “It is important to say that we are nowhere near being finished.”

But Boyle, Furey, Hughes and many others are increasingly absorbed by the possibility that this strange maths really could be our best route to understanding where the laws of nature come from. In fact, Boyle thinks that the octonion-based approach could be just as fruitful as doing new experiments to find new particles. “Most people are imagining that the next bit of progress will be from some new pieces being dropped onto the table,” he says. “That would be great, but maybe we have not yet finished the process of fitting the current pieces together.”


*Credit for article given to Michael Brooks*


Crowds Beat Computers in Answer to Wikipedia-Sized Maths Problem

A maths problem previously tackled with the help of a computer, which produced a proof the size of Wikipedia, has now been cut down to size by a human. Although it is unlikely to have practical applications, the result highlights the differences between two modern approaches to mathematics: crowdsourcing and computers.

Terence Tao of the University of California, Los Angeles, has published a proof of the Erdős discrepancy problem, a puzzle about the properties of infinite sequences of +1s and -1s. In the 1930s, Hungarian mathematician Paul Erdős wondered whether any such sequence, no matter how carefully it is chosen to look random, must always contain patterns and structure.

One way to measure this is by calculating a value known as the discrepancy. This involves adding up the terms along every evenly spaced sub-sequence – every term, every second term, every third term and so on – and recording the largest running total that appears. You might think the pluses and minuses would cancel out and keep these totals near zero, but Erdős conjectured that as the sub-sequences get longer, the largest total must keep growing, revealing an unavoidable structure. In fact, he said the discrepancy would be infinite, meaning the totals grow without limit, so mathematicians started by looking at smaller cases in the hope of finding clues for attacking the problem in a different way.
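For a finite stretch of sequence, the discrepancy can be computed by brute force – and the smallest cases searched exhaustively. This sketch (my own illustration, not taken from the proofs discussed) treats a sub-sequence as every dth term for each spacing d, and confirms the two smallest milestones: a sequence of length 11 can keep every running total within ±1, but by length 12 a total of 2 is unavoidable.

```python
from itertools import product

def discrepancy(x):
    """Largest |x_d + x_{2d} + ... + x_{kd}| over all spacings d and
    stopping points k, for a finite +1/-1 sequence x (x[0] is x_1)."""
    n, worst = len(x), 0
    for d in range(1, n + 1):
        running = 0
        for pos in range(d, n + 1, d):   # positions d, 2d, 3d, ...
            running += x[pos - 1]
            worst = max(worst, abs(running))
    return worst

# Smallest achievable discrepancy at each length, by exhaustive search.
best = {n: min(discrepancy(x) for x in product((1, -1), repeat=n))
        for n in (11, 12)}
assert best[11] == 1   # length 11: discrepancy 1 is still possible...
assert best[12] == 2   # ...but every length-12 sequence hits 2
```

Pushing the same boundary for discrepancy 2 is what took Konev and Lisitsa’s computer search, which found that the longest sequence with discrepancy at most 2 has 1,160 terms.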

Last year, Alexei Lisitsa and Boris Konev of the University of Liverpool, UK, used a computer to prove that the discrepancy will always be larger than two. The resulting proof was a 13-gigabyte file – around the size of the entire text of Wikipedia – that no human could ever hope to check.

Helping hands

Tao has used more traditional mathematics to prove that Erdős was right, and the discrepancy is infinite no matter the sequence you choose. He did it by combining recent results in number theory with some earlier, crowdsourced work.

In 2010, a group of mathematicians, including Tao, decided to work on the problem as the fifth Polymath project, an initiative that allows professionals and amateurs alike to contribute ideas through blogs and wikis as part of a mathematical super-brain. They made some progress, but ultimately had to give up.

“We had figured out an interesting reduction of the Erdős discrepancy problem to a seemingly simpler problem involving a special type of sequence called a completely multiplicative function,” says Tao.

Then, in January this year, a new development in the study of these functions made Tao look again at the Erdős discrepancy problem, after a commenter on his blog pointed out a possible link to the Polymath project and another problem called the Elliott conjecture.

Not just conjecture

“At first I thought the similarity was only superficial, but after thinking about it more carefully, and revisiting some of the previous partial results from Polymath5, I realised there was a link: if one could prove the Elliott conjecture completely, then one could also resolve the Erdős discrepancy problem,” says Tao.

“I have always felt that that project, despite not solving the problem, was a distinct success,” writes University of Cambridge mathematician Tim Gowers, who started the Polymath project and hopes that others will be encouraged to participate in future. “We now know that Polymath5 has accelerated the solution of a famous open problem.”

Lisitsa praises Tao for doing what his algorithm couldn’t. “It is a typical example of high-class human mathematics,” he says. But mathematicians are increasingly turning to machines for help, a trend that seems likely to continue. “Computers are not needed for this problem to be solved, but I believe they may be useful in other problems,” Lisitsa says.


*Credit for article given to Jacob Aron*

 


The Mathematically Perfect Table Size For Solving Any Jigsaw Puzzle

When choosing the perfectly sized table to do your jigsaw puzzle on, work out the area of the completed puzzle and multiply it by 1.73.

People may require a larger table if they like to lay all the pieces out at the start, rather than keeping them in the box or in piles

How large does your table need to be when doing a jigsaw puzzle? The answer is the area of the puzzle when assembled multiplied by 1.73. This creates just enough space for all the pieces to be laid flat without any overlap.

“My husband and I were doing a jigsaw puzzle one day and I just wondered if you could estimate the area that the pieces take up before you put the puzzle together,” says Madeleine Bonsma-Fisher at the University of Toronto in Canada.

To uncover this, Bonsma-Fisher and her husband Kent Bonsma-Fisher, at the National Research Council Canada, turned to mathematics.

Puzzle pieces take on a range of “funky shapes” that are often a bit rectangular or square, says Madeleine Bonsma-Fisher. To get around this variation, the pair worked on the basis that each piece takes up the area of a square. They then imagined each square sitting inside a circle that touches its corners.

By considering the area around each puzzle piece as a circle, a shape that can be packed in multiple ways, they found that a hexagonal lattice, similar to honeycomb, would mean the pieces could interlock with no overlap. Within each hexagon is one full circle and parts of six circles.

They then found that the area taken up by the unassembled puzzle pieces arranged in this hexagonal pattern is always the total area of the completed puzzle – calculated by multiplying its length by its width – multiplied by the square root of 3, which is approximately 1.73.
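The arithmetic behind that factor fits in a few lines. In this sketch (my own illustration of the reasoning above), the square root of 3 emerges from dividing each piece’s circumscribing-circle area by the hexagonal packing density:

```python
import math

# A circle through the corners of a unit square has diameter sqrt(2),
# so its area is pi/2 times the square's area.
circle_over_square = math.pi / 2
# Hexagonally packed circles cover a fraction pi/(2*sqrt(3)) of the plane.
packing_density = math.pi / (2 * math.sqrt(3))
# Area per piece laid out, relative to the piece itself:
factor = circle_over_square / packing_density
assert math.isclose(factor, math.sqrt(3))   # ~1.73

def layout_area(width, height):
    """Area needed to lay every piece flat, given the assembled size."""
    return width * height * factor

# e.g. a 0.5 m x 0.7 m puzzle needs about 0.61 square metres of pieces
print(round(layout_area(0.5, 0.7), 2))
```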

This also applies to jigsaw pieces that are more rectangular than square, since a rectangle similarly fits inside a circle that touches its corners.

While doing a puzzle, some people keep pieces that haven’t yet been used in the box, while others arrange them in piles or lay them on a surface, the latter being Madeleine Bonsma-Fisher’s preferred method. “If you really want to lay all your pieces out flat and be comfortable, your table should be a little over twice as big as your sample puzzle,” she says.


*Credit for article given to Chen Ly*