Why 2024 Abel Prize Winner Michel Talagrand Became A Mathematician

After losing an eye at the age of 5, the 2024 Abel prize winner Michel Talagrand found comfort in mathematics.

French mathematician Michel Talagrand has won the 2024 Abel prize for his work on probability theory and the description of randomness. Shortly after he heard the news, New Scientist spoke with Talagrand to learn more about his mathematical journey.

Alex Wilkins: What does it mean to win the Abel prize?

Michel Talagrand: I think everybody would agree that the Abel prize is considered the equivalent of the Nobel prize in mathematics. So it’s something totally unexpected for me – I never, ever dreamed I would receive this prize. And actually, it’s not such an easy thing to take in, because there is this list of people who already received it, and on that list there are true giants of mathematics. It’s not such a comfortable feeling to sit with them, let me tell you, because it’s clear that their achievements are on an entirely different scale from mine.

What are your attributes as a mathematician?

I’m not able to learn mathematics easily. I have to work. It takes a very long time and I have a terrible memory. I forget things. So I worked despite these handicaps, and the way I worked was to try to understand the simple things really, really well, in complete detail. And that turned out to be a successful approach.

Why does maths appeal to you?

Once you are in mathematics, and you start to understand how it works, it’s completely fascinating and very attractive. There are all kinds of levels; you are an explorer. First, you have to understand what people before you did, and that’s pretty challenging, and then you are on your own to explore, and soon you love it. Of course, it is extremely frustrating at the same time, so you have to have the kind of personality that will accept being frustrated.

My solution, when I’m frustrated with something and it’s obvious that I’m not going to make any more progress, is to put it aside and do something else, then come back to it at a later date. I have used that strategy with great efficiency. The reason it succeeds is a function of the human brain: things mature when you don’t look at them. There are questions I have literally worked on, on and off, for 30 years – and at the end of those 30 years, I was still making progress. That’s what is incredible.

How did you get your start?

Now, that’s a very personal story. My father was a maths teacher, and of course that helped. But really, the determining factor is that I was unlucky enough to be born with a deficiency in my retinas. I lost my right eye when I was 5 years old, and I had multiple retinal detachments when I was 15. I stayed in the hospital a long time and missed school for six months. It was extremely traumatic; I lived in constant terror that there would be another retinal detachment.

To escape that, I started to study. My father helped me immensely: he knew how hard it was, and when I was in hospital he came to see me every day and talked about some simple mathematics, just to keep my brain functioning. I started studying mathematics and physics hard – really, as I say, to fight the terror. And of course, when you start studying, you become good at it, and once you become good, it’s very appealing.

What is it like to be a professional mathematician?

Nobody tells me what I have to do and I’m completely free to use my time and do what I like. That fitted my personality well, of course, and it’s helped me to devote myself totally to my work.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Alex Wilkins*


Mathematicians Found a Guaranteed Way to Win The Lottery

A pair of mathematicians studied the UK National Lottery and figured out a combination of 27 tickets that guarantees you will always win, but they tell New Scientist they don’t bother to play.

David Cushing and David Stewart calculate a winning solution

Earlier this year, two mathematicians revealed that it is possible to guarantee a win on the UK national lottery by buying just 27 tickets, despite there being 45,057,474 possible draw combinations. The pair were shocked to see their findings make headlines around the world and inspire numerous people to play these 27 tickets – with mixed results – and say they don’t bother to play themselves.

David Cushing and David Stewart at the University of Manchester, UK, used a mathematical field called finite geometry to prove that particular sets of 27 tickets would guarantee a win.

They placed the lottery numbers from 1 to 59, in pairs or triplets, on the points of five geometrical shapes, then used the lines within those shapes to generate lottery tickets. The five shapes offer 27 such lines, meaning that 27 tickets – at £2 each – cover every possible pair of numbers, and matching two numbers is the minimum needed to win a prize.
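
Readers can check the pair-coverage claim for themselves. Below is a minimal sketch in Python (the 27 tickets constructed in the paper are not reproduced here; the random tickets in the demo are purely illustrative and will almost never achieve full coverage):

```python
from itertools import combinations
import random

def uncovered_pairs(tickets, n=59):
    """Return the pairs from 1..n that appear together on no ticket.

    Matching two numbers is the minimum needed to win a UK National
    Lottery prize, so an empty result means a guaranteed win."""
    covered = set()
    for ticket in tickets:
        covered.update(combinations(sorted(ticket), 2))
    return set(combinations(range(1, n + 1), 2)) - covered

# 27 random tickets of 6 numbers each: almost certainly NOT a covering
# set, unlike the carefully constructed one described above.
tickets = [random.sample(range(1, 60), 6) for _ in range(27)]
missing = uncovered_pairs(tickets)
print(f"{len(missing)} of {59 * 58 // 2} pairs uncovered")
```

Feeding the paper’s published 27 tickets into `uncovered_pairs` should return an empty set, confirming the guarantee.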

It was an elegant and intuitive solution to a tricky problem, and an irresistible headline that attracted newspapers, radio stations and television channels from around the world. It also led many people to chance their luck – despite the researchers always pointing out that it was, statistically speaking, a very good way to lose money, as the winnings were in no way guaranteed to even cover the cost of the tickets.

Cushing says that since the paper was released he has received numerous emails from people cheerily announcing that they have won tiny amounts, such as two free lucky dips – essentially another free go on the lottery. “They were very happy to tell me how much they’d lost basically,” he says.

The pair did calculate that their method would have won them £1810 if they had played on one night during the writing of their research paper – 21 June. Both Cushing and Stewart had decided not to play the numbers themselves that night, but they have since found that a member of their research group “went rogue” and bought the right tickets – putting himself £1756 in profit.

“He said what convinced him to definitely put them on was that it was summer solstice. He said he had this feeling,” says Cushing, shaking his head as he speaks. “He’s a professional statistician. He is incredibly lucky with it; he claims he once found a lottery ticket in the street and it won £10.”

Cushing and Stewart say that while their winning colleague – who would prefer to remain nameless – has not even bought them lunch as a thank you for their efforts, he has continued to play the 27 lottery tickets. However, he now randomly permutes the numbers, producing alternative 27-ticket, guaranteed-win sets, in case others have also been inspired by the set that was made public. Avoiding that set could avert a situation where a future jackpot win would be shared with dozens or even hundreds of mathematically inclined players.

Stewart says there is no way to know how many people are doing the same because Camelot, which runs the lottery, doesn’t release that information. “If the jackpot comes up and it happens to match exactly one of the [set of] tickets and it gets split a thousand ways, that will be some indication,” he says.

Nonetheless, Cushing says that he no longer has any interest in playing the 27 tickets. “I came to the conclusion that whenever we were involved, they didn’t make any money, and then they made money when we decided not to put them on. That’s not very mathematical, but it seemed to be what was happening,” he says.

And Stewart is keen to stress that mathematics, no matter how neat a proof, can never make the UK lottery a wise investment. “If every single man, woman and child in the UK bought a separate ticket, we’d only have a quarter chance of someone winning the jackpot,” he says.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Matthew Sparkes*


Mathematician Wins Turing Award For Harnessing Randomness

Avi Wigderson has won the 2023 Turing award for his work on understanding how randomness can shape and improve computer algorithms.

The mathematician Avi Wigderson has won the 2023 Turing award, often referred to as the Nobel prize for computing, for his work on understanding how randomness can shape and improve computer algorithms.

Wigderson, who also won the prestigious Abel prize in 2021 for his mathematical contributions to computer science, was taken aback by the award. “The [Turing] committee fooled me into believing that we were going to have some conversation about collaborating,” he says. “When I zoomed in, the whole committee was there and they told me. I was excited, surprised and happy.”

Computers work in a predictable way at the hardware level, but this can make it difficult for them to model real-world problems, which often have elements of randomness and unpredictability. Wigderson, at the Institute for Advanced Study in Princeton, New Jersey, has shown over a decades-long career that computers can also harness randomness in the algorithms that they run.

In the 1980s, Wigderson and his colleagues discovered that inserting randomness into some algorithms could make the underlying problems easier and faster to solve, but it was unclear how general the technique was. “We were wondering whether this randomness is essential, or maybe you can always get rid of it somehow if you’re clever enough,” he says.
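
A classic textbook illustration of this phenomenon – not one of Wigderson’s own constructions – is Freivalds’ algorithm, which uses random vectors to check whether a matrix product is correct far faster than recomputing it. A sketch in Python:

```python
import numpy as np

def freivalds(A, B, C, rounds=20):
    """Randomised check that A @ B == C (Freivalds' algorithm).

    Each round costs O(n^2) via three matrix-vector products, versus
    O(n^3) to recompute A @ B directly. An incorrect C survives a
    round with probability at most 1/2, so the error probability
    after `rounds` trials is at most 2**-rounds."""
    rng = np.random.default_rng()
    for _ in range(rounds):
        x = rng.integers(0, 2, size=C.shape[1])  # random 0/1 vector
        if not np.array_equal(A @ (B @ x), C @ x):
            return False  # definitely wrong
    return True  # almost certainly right

A = np.arange(16).reshape(4, 4)
B = A.T
print(freivalds(A, B, A @ B))          # True
wrong = A @ B + np.eye(4, dtype=int)   # corrupt the product
print(freivalds(A, B, wrong))          # almost surely False
```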

One of Wigderson’s most important contributions was to clarify the relationship between randomness and how difficult problems are to solve. He also showed that certain algorithms that relied on randomness and were hard to run could be made deterministic, or non-random, and easier to run.

These findings helped computer scientists better understand one of the most famous unproven conjectures in computer science, called “P ≠ NP”, which proposes that easy and hard problems for a computer to solve are fundamentally different. Using randomness, Wigderson discovered special cases where the two classes of problem were the same.

Wigderson first started exploring the relationship between randomness and computers in the 1980s, before the internet existed, and was attracted to the ideas he worked on by intellectual curiosity, rather than how they might be used. “I’m a very impractical person,” he says. “I’m not really motivated by applications.”

However, his ideas have become important for a wide swath of modern computing applications, from cryptography to cloud computing. “Avi’s impact on the theory of computation in the last 40 years is second to none,” says Oded Goldreich at the Weizmann Institute of Science in Israel. “The diversity of the areas to which he has contributed is stunning.”

One of the unexpected ways in which Wigderson’s ideas are now widely used was his work, with Goldreich and others, on zero-knowledge proofs, which detail ways of verifying information without revealing the information itself. These methods are fundamental for cryptocurrencies and blockchains today as a way to establish trust between different users.

Although great strides in the theory of computation have been made over Wigderson’s career, he says that the field is still full of interesting and unsolved problems. “You can’t imagine how happy I am that I am where I am, in the field that I’m in,” he says. “It’s bursting with intellectual questions.”

Wigderson will receive a $1 million prize as part of the Turing award.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Alex Wilkins*


Mathematician Wins Abel Prize For Solving Equations With Geometry

Luis Caffarelli has been awarded the most prestigious prize in mathematics for his work on nonlinear partial differential equations, which have many applications in the real world.

Luis Caffarelli has won the 2023 Abel prize, unofficially called the Nobel prize for mathematics, for his work on a class of equations that describe many real-world physical systems, from melting ice to jet engines.

Caffarelli was having breakfast with his wife when he found out the news. “The breakfast was better all of a sudden,” he says. “My wife was happy, I was happy — it was an emotional moment.”

Based at the University of Texas at Austin, Caffarelli started work on partial differential equations (PDEs) in the late 1970s and has contributed to hundreds of papers since. He is known for making connections between seemingly distant mathematical concepts, such as how the theory of minimal surfaces – surfaces that occupy the smallest possible area for a given boundary – can be used to describe PDEs in extreme cases.

PDEs have been studied for hundreds of years and describe almost every sort of physical process, ranging from fluids to combustion engines to financial models. Caffarelli’s most important work concerned nonlinear PDEs, which describe complex relationships between several variables. These equations are more difficult to solve than other PDEs, and often produce solutions that don’t make sense in the physical world.

Caffarelli helped tackle these problems with regularity theory, which sets out how to deal with problematic solutions by borrowing ideas from geometry. His approach carefully elucidated the troublesome parts of the equations, solving a wide range of problems over his more than four-decade career.

“Forty years after these papers appeared, we have digested them and we know how to do some of these things more efficiently,” says Francesco Maggi at the University of Texas at Austin. “But when they appeared back in the day, in the 80s, these were alien mathematics.”

Many of the nonlinear PDEs that Caffarelli helped describe were so-called free boundary problems, which describe physical scenarios where two objects in contact share a changing surface, like ice melting into water or water seeping through a filter.
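
A textbook example of such a problem – not one of Caffarelli’s own formulations – is the one-phase Stefan problem for ice melting into water. In one dimension, with physical constants normalised to 1, it can be written as:

```latex
% u(x,t): water temperature; s(t): position of the moving
% ice-water interface (the free boundary).
\begin{align*}
  u_t &= u_{xx} && \text{for } 0 < x < s(t) && \text{(heat flow in the water)},\\
  u(s(t),\, t) &= 0 && && \text{(melting temperature at the interface)},\\
  \dot{s}(t) &= -\,u_x(s(t),\, t) && && \text{(the interface advances as heat arrives)}.
\end{align*}
```

The unknown here is not just the temperature u but the boundary s(t) itself, which is what makes such problems hard: the domain on which the equation holds is part of the solution.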

“He has used insights that combined ingenuity, and sometimes methods that are not ultra-complicated, but which are used in a manner that others could not see — and he has done that time and time again,” says Thomas Chen at the University of Texas at Austin.

These insights have also helped other researchers translate equations so that they can be solved on supercomputers. “He has been one of the most prominent people in bringing this theory to a point where it’s really useful for applications,” says Maggi.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Alex Wilkins*

 


AI Is Helping Mathematicians Build A Periodic Table Of Shapes

Atomic shapes are so simple that they can’t be broken down any further. Mathematicians are trying to build a “periodic table” of these shapes, and they hope artificial intelligence can help.

Mathematicians attempting to build a “periodic table” of shapes have turned to artificial intelligence for help – but say they don’t understand how it works or whether it can be 100 per cent reliable.

Tom Coates at Imperial College London and his colleagues are working to classify shapes known as Fano varieties, which are so simple that they can’t be broken down into smaller components. Just as chemists arranged elements in the periodic table by their atomic weight and group to reveal new insights, the researchers hope that organising these “atomic” shapes by their various properties will help in understanding them.

The team has assigned each atomic shape a sequence of numbers derived from features such as the number of holes it has or the extent to which it twists around itself. This acts as a bar code to identify it.

Coates and his colleagues have now created an AI that can predict certain properties of these shapes from their bar code numbers alone, with an accuracy of 98 per cent – suggesting a relationship that some mathematicians intuitively thought might be real, but have found impossible to prove.
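
As an illustration of the general shape of such a pipeline – not the team’s actual model, data or code, none of which are reproduced here – a property-predicting classifier over numerical “bar codes” might be set up like this in Python:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical stand-in data: each row is a fixed-length "bar code"
# of numerical invariants; y is a made-up binary property, used only
# so the script runs end to end.
rng = np.random.default_rng(seed=0)
X = rng.integers(0, 50, size=(10_000, 20))
y = (X[:, :5].sum(axis=1) > 120).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300)
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.1%}")
```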

Unfortunately, there is a vast gulf between demonstrating that something is very often true and mathematically proving that it is always so. While the team suspects a one-to-one connection between each shape and its bar code, the mathematics community is “nowhere close” to proving this, says Coates.

“In pure mathematics, we don’t regard anything as true unless we have an actual proof written down on a piece of paper, and no advances in our understanding of machine learning will get around this problem,” says team member Alexander Kasprzyk at the University of Nottingham, UK.

Even without a proven link between the Fano varieties and bar codes, Kasprzyk says that the AI has let the team organise atomic shapes in a way that begins to mimic the periodic table, so that when you read from left to right, or up and down, there seem to be generalisable patterns in the geometry of the shapes.

“We had no idea that would be true, we had no idea how to begin doing it,” says Kasprzyk. “We probably would still not have had any idea about this in 50 years’ time. Frankly, people have been trying to study these things for 40 years and failing to get to a picture like this.”

The team hopes to refine the model to the point where missing spaces in its periodic table could point to the existence of unknown shapes, or where clustering of shapes could lead to logical categorisation, resulting in a better understanding and new ideas that could create a method of proof. “It clearly knows more things than we know, but it’s so mysterious right now,” says team member Sara Veneziale at Imperial College London.

Graham Niblo at the University of Southampton, UK, who wasn’t involved in the research, says that the work is akin to forming an accurate picture of a cello or a French horn just from the sound of a G note being played – but he stresses that humans will still need to tease understanding from the results provided by AI and create robust and conclusive proofs of these ideas.

“AI has definitely got uncanny abilities. But in the same way that telescopes didn’t put astronomers out of work, AI doesn’t put mathematicians out of work,” he says. “It just gives us a new tool that allows us to explore parts of the mathematical landscape that were out of reach, or, like a microscope, that were too obscure for us to notice with our current understanding.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Matthew Sparkes*


Deepmind AI Finds New Way To Multiply Numbers And Speed Up Computers

Matrix multiplication – where two grids of numbers are multiplied together – forms the basis of many computing tasks, and an improved technique discovered by an artificial intelligence could boost computation speeds by up to 20 per cent.

Multiplying numbers is a fundamental task for computers

An artificial intelligence created by the firm DeepMind has discovered a new way to multiply numbers, the first such advance in over 50 years. The find could boost some computation speeds by up to 20 per cent, as a range of software relies on carrying out the task at great scale.

Matrix multiplication – where two grids of numbers are multiplied together – is a fundamental computing task used in virtually all software to some extent, but particularly so in graphics, AI and scientific simulations. Even a small improvement in the efficiency of these algorithms could bring large performance gains, or significant energy savings.

For more than a century, it was assumed that the schoolbook method – in which the number of multiplications required grows with the cube of the matrix size – was the most efficient approach possible, meaning that the task becomes disproportionately harder for larger and larger matrices.

But the mathematician Volker Strassen proved in 1969 that multiplying a matrix of two rows of two numbers with another of the same size doesn’t necessarily involve eight multiplications and that, with a clever trick, it can be reduced to seven. This approach, called the Strassen algorithm, requires some extra addition, but this is acceptable because additions in a computer take far less time than multiplications.
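
For concreteness, here are Strassen’s standard seven products for the 2 × 2 case (the textbook formulas, not DeepMind’s code), as a short Python sketch:

```python
def strassen_2x2(X, Y):
    """Multiply 2x2 matrices with 7 multiplications instead of 8,
    trading them for extra (cheaper) additions and subtractions."""
    (a, b), (c, d) = X
    (e, f), (g, h) = Y
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]] - matches the schoolbook result
```

Because the entries a to h can themselves be matrix blocks, the trick applies recursively, cutting the cost of multiplying large n × n matrices from n³ to roughly n^2.81 operations.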

The algorithm has stood as the most efficient approach on most matrix sizes for more than 50 years, although some slight improvements that aren’t easily adapted to computer code have been found. But DeepMind’s AI has now discovered a faster technique that works perfectly on current hardware. The company’s new AI, AlphaTensor, started with no knowledge of any solutions and was presented with the problem of creating a working algorithm that completed the task with the minimum number of steps.

It found an algorithm for multiplying two matrices of four rows of four numbers, working in modular arithmetic, using just 47 multiplications – beating the 49 required by applying Strassen’s algorithm recursively. It also developed improved techniques for multiplying matrices of other sizes, 70 in total.

AlphaTensor discovered thousands of functional algorithms for each size of matrix, including 14,000 for 4×4 matrices alone. But only a small minority were better than the state of the art. The research builds on AlphaZero, DeepMind’s game-playing model, and has been two years in the making.

Hussein Fawzi at DeepMind says the results are mathematically sound, but are far from intuitive for humans. “We don’t really know why the system came up with this, essentially,” he says. “Why is it the best way of multiplying matrices? It’s unclear.”

“Somehow, the neural networks get an intuition of what looks good and what looks bad. I honestly can’t tell you exactly how that works. I think there is some theoretical work to be done there on how exactly deep learning manages to do these kinds of things,” says Fawzi.

DeepMind found that the algorithms could boost computation speed by between 10 and 20 per cent on certain hardware such as an Nvidia V100 graphics processing unit (GPU) and a Google tensor processing unit (TPU) v2, but there is no guarantee that those gains would also be seen on common devices like a smartphone or laptop.

James Knight at the University of Sussex, UK, says that a range of software run on supercomputers and powerful hardware, like AI research and weather simulation, is effectively large-scale matrix multiplication.

“If this type of approach was actually implemented there, then it could be a sort of universal speed-up,” he says. “If Nvidia implemented this in their CUDA library [a tool that allows GPUs to work together], it would knock some percentage off most deep-learning workloads, I’d say.”

Oded Lachish at Birkbeck, University of London, says the new algorithms could boost the efficiency of a wide range of software, because matrix multiplication is such a common problem – and more algorithms are likely to follow.

“I believe we’ll be seeing AI-generated results for other problems of a similar nature, albeit rarely something as central as matrix multiplication. There’s significant motivation for such technology, since fewer operations in an algorithm doesn’t just mean faster results, it also means less energy spent,” he says. If a task can be completed slightly more efficiently, then it can be run on less powerful, less power-intensive hardware, or on the same hardware in less time, using less energy.

But DeepMind’s advances don’t necessarily mean human coders are out of a job. “Should programmers be worried? Maybe in the far future. Automatic optimisation has been done for decades in the microchip design industry and this is just another important tool in the coder’s arsenal,” says Lachish.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Matthew Sparkes*


Fields Medal 2022: Work On Prime Numbers And Spheres Wins Maths Prize

Mathematicians who have studied the most efficient way to pack spheres in eight-dimensional space and the spacing of prime numbers are among this year’s recipients of the highest award in mathematics, the Fields medal.

The winners for 2022 are James Maynard at the University of Oxford; Maryna Viazovska at the Swiss Federal Institute of Technology in Lausanne (EPFL); Hugo Duminil-Copin at the University of Geneva, Switzerland; and June Huh at Princeton University in New Jersey.

Kyiv-born Viazovska is only the second female recipient among the 64 mathematicians to have received the award.

“Sphere packing is a very natural geometric problem. You have a big box, and you have an infinite collection of equal balls, and you’re trying to put as many balls into the box as you can,” says Viazovska. Her contribution was to provide an explicit formula to prove the most efficient stacking pattern for spheres in eight dimensions – a problem she says took 13 years to solve.

Maynard’s work involved understanding the gaps between prime numbers, while Duminil-Copin’s contribution was in the theory of phase transitions – such as water turning to ice, or evaporating into steam – in statistical physics.

June Huh, who dropped out of high school aged 16 to become a poet, was recognised for a range of work including the innovative use of geometry in the field of combinatorics, the mathematics of counting and arranging.

The medal, which is considered to be as prestigious as the Nobel prize, is given to two, three or four mathematicians under the age of 40 every four years.

The awards were first given out in 1936 and are named in honour of Canadian mathematician John Charles Fields. This year’s awards were due to be presented at the International Congress of Mathematicians in Saint Petersburg, Russia, but the ceremony was relocated to Helsinki, Finland.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Matthew Sparkes*


Should All Mathematical Proofs Be Checked By A Computer?

Proofs, the bedrock of mathematics, occasionally contain errors. Could computers stop this from happening, asks mathematician Emily Riehl.

Computer proof assistants can verify that mathematical proofs are correct

One miserable morning in 2017, in the third year of my tenure-track job as a mathematics professor, I woke up to a worrying email. It was from a colleague who questioned the proof of a key theorem in a highly cited paper I had co-authored. “I had always kind of assumed that this was probably not true in general, though I have no proof either way. Did I miss something?” he asked. The proof, he noted, appeared to rest on a tacit assumption that was not warranted.

Much to my alarm and embarrassment, I realised immediately that my colleague was correct. After an anxious week working to get to the bottom of my mistake, it turned out I was very lucky. The theorem was true; it just needed a new proof, which my co-authors and I supplied in a follow-up paper. But if the theorem had been false, the whole edifice of consequences “proven” using it would have come crashing down.

The essence of mathematics is the concept of proof: a combination of assumed axioms and logical inferences that demonstrate the truth of a mathematical statement. Other mathematicians can then attempt to follow the argument for themselves to identify any holes or convince themselves that the statement is indeed true. Patched up in this way, theorems originally proven by the ancient Greeks about the infinitude of primes or the geometry of planar triangles remain true today – and anyone can see the arguments for why this must be.

Proofs have meant that mathematics has largely avoided the replication crises pervading other sciences, where the results of landmark studies have not held up when the experiments were conducted again. But as my experience shows, mistakes in the literature still occur. Ideally, a false claim, like the one I made, would be caught by the peer review process, where a submitted paper is sent to an expert to “referee”. In practice, however, the peer review process in mathematics is less than perfect – not just because experts can make mistakes themselves, but also because they often do not check every step in a proof.

This is not laziness: theorems at the frontiers of mathematics can be dauntingly technical, so much so that it can take years or even decades to confirm the validity of a proof. The mathematician Vladimir Voevodsky, who received a Fields medal, the discipline’s highest honour, noted that “a technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail”. After several experiences in which mistakes in his proofs took over a decade to be resolved – a long time for something to sit in logical limbo – Voevodsky’s subsequent crisis of confidence led him to take the unusual step of abandoning his “curiosity-driven research” to develop a computer program that could verify the correctness of his work.

This kind of computer program is known as a proof assistant, though it might be better called a “proof checker”. It can verify that a string of text proves the stated theorem. The proof assistant knows the methods of logical reasoning and is equipped with a library of proofs of standard results. It will accept a proof only after satisfying each step in the reasoning process, with no shortcuts of the sort that human experts often use.

For instance, a computer can verify that there are infinitely many prime numbers by validating the following proof, which is an adaptation of Greek mathematician Euclid’s argument. The human mathematician first tells the computer exactly what is being claimed – in this case that for any natural number N there is always some prime number p that is larger. The human then tells the computer the formula, defining p to be the minimum prime factor of the number formed by multiplying all the natural numbers up to N together and adding 1, represented as N! + 1.

For the computer proof assistant to make sense of this, it needs a library that contains definitions of the basic arithmetic operations. It also needs proofs of theorems, like the fundamental theorem of arithmetic, which tells us that every natural number can be factored uniquely into a product of primes. The proof assistant then demands a proof that this prime number p is greater than N. This is argued by contradiction – a technique where following an assumption to its conclusion leads to something that cannot possibly be true, demonstrating that the original assumption was false. In this case, if p is less than or equal to N, it should be a factor of both N! + 1 and N!. Some simple mathematics says this means that p must also be a factor of 1, which is absurd.
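
Here is roughly how that statement and argument look to a proof assistant – a sketch in Lean 4 using the Mathlib library (lemma names can drift between Mathlib versions; the library also ships this result ready-made as Nat.exists_infinite_primes):

```lean
import Mathlib

/-- Euclid's argument as described above: for every natural number `N`
there is a prime `p` strictly larger than `N`. -/
theorem exists_larger_prime (N : ℕ) : ∃ p, N < p ∧ p.Prime := by
  -- Take p to be the least prime factor of N! + 1, which is prime
  -- because N! + 1 ≠ 1.
  have h1 : N.factorial + 1 ≠ 1 := by
    have := N.factorial_pos
    omega
  have hp : (N.factorial + 1).minFac.Prime := Nat.minFac_prime h1
  refine ⟨(N.factorial + 1).minFac, ?_, hp⟩
  -- By contradiction: if p ≤ N then p divides N!; since p also
  -- divides N! + 1, it would divide 1, which is absurd.
  by_contra hle
  push_neg at hle
  have hdvd : (N.factorial + 1).minFac ∣ N.factorial :=
    Nat.dvd_factorial (Nat.minFac_pos _) hle
  have hone : (N.factorial + 1).minFac ∣ 1 :=
    (Nat.dvd_add_right hdvd).mp (Nat.minFac_dvd _)
  exact hp.ne_one (Nat.dvd_one.mp hone)
```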

Computer proof assistants can be used to verify proofs that are so long that human referees are unable to check every step. In 1998, for example, Samuel Ferguson and Thomas Hales announced a proof of Johannes Kepler’s 1611 conjecture that the most efficient way to pack spheres into three-dimensional space is the familiar “cannonball” packing. When their result was accepted for publication in 2005 it came with a caveat: the journal’s reviewers attested to “a strong degree of conviction of the essential correctness of this proof approach” – they declined to certify that every step was correct.

Ferguson and Hales’s proof was based on a strategy proposed by László Fejes Tóth in 1953, which reduced the Kepler conjecture to an optimisation problem in a finite number of variables. Ferguson and Hales figured out how to subdivide this optimisation problem into a few thousand cases that could be solved by linear programming, which explains why human referees felt unable to vouch for the correctness of each calculation. In frustration, Hales launched a formalisation project, where a team of mathematicians and computer scientists meticulously verified every logical and computational step in the argument. The resulting 22-author paper was published in 2017 to as much fanfare as the original proof announcement.

Computer proof assistants can also be used to verify results in subfields that are so technical that only specialists understand the meaning of the central concepts. Fields medallist Peter Scholze spent a year working out the proof of a theorem that he wasn’t quite sure he believed and doubted anyone else would have the stamina to check. To be sure that his reasoning was correct before building further mathematics on a shaky foundation, Scholze posed a formalisation challenge in a blog post entitled the “liquid tensor experiment” in December 2020. The mathematics involved was so cutting edge that it took 60,000 lines of code to formalise the last five lines of the proof – and all the background results that those arguments relied upon – but the project was nevertheless completed, and the proof confirmed, this past July by a team led by Johan Commelin.

Could computers just write the proofs themselves, without involving any human mathematicians? At present, large language models like ChatGPT can fluently generate mathematical prose and even output it in LaTeX, a typesetting program for mathematical writing. However, the logic of these “proofs” tends to be nonsense. Researchers at Google and elsewhere are looking to pair large language models with automatically generated formalised proofs to guarantee the correctness of the mathematical arguments, though initial efforts are hampered by sparse training sets – libraries of formalised proofs are much smaller than the collective mathematical output. But while machine capabilities are relatively limited today, auto-formalised maths is surely on its way.

In thinking about how the human mathematics community might wish to collaborate with computers in the future, we should return to the question of what a proof is for. It’s never been solely about separating true statements from false ones, but about understanding why the mathematical world is the way it is. While computers will undoubtedly help humans check their work and learn to think more clearly – it’s a much more exacting task to explain mathematics to a computer than it is to explain it to a kindergartener – understanding what to make of it all will always remain a fundamentally human endeavour.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Emily Riehl*


Deepmind Created a Maths AI That Can Add Up To 6 But Gets 7 Wrong

Artificial intelligence firm DeepMind has tackled games like Go and Starcraft, but now it is turning its attention to more sober affairs: how to solve school-level maths problems.

Researchers at the company tasked an AI with teaching itself to solve arithmetic, algebra and probability problems, among others. It didn’t do a very good job: when the neural network was tested on a maths exam taken by 16-year-olds in the UK, it got just 14 out of 40 questions correct, or the equivalent of an E grade.

There were also strange quirks in the AI’s ability. For example, it could successfully add up 1+1+1+1+1+1 to make 6, but failed when an extra 1 was added. On the other hand, it gave the correct answer for longer sequences and much bigger numbers.

Other oddities included the ability to correctly answer 68 to the question “calculate 17×4.”, but when the full stop was removed, the answer came out at 69.

Puzzling behaviour

The DeepMind researchers concede they don’t have a good explanation for this behaviour. “At the moment, learning systems like neural networks are quite bad at doing ‘algebraic reasoning’,” says David Saxton, one of the team behind the work.

Despite this, it is still worth trying to teach a machine to solve maths problems, says Marcus du Sautoy, a mathematician at the University of Oxford.

“There are already algorithms out there to do these problems much faster, much better than machine-learning algorithms, but that’s not the point,” says du Sautoy. “They are setting themselves a different target – to start from nothing, being told only whether each answer was right or wrong, and to see whether it can build up how to do this itself. Which is fascinating.”

An AI capable of solving advanced mathematics problems could put him out of a job, says du Sautoy. “That’s my fear. It may not take too much for an AI to get maturity in this world, whereas a maturity in the musical or visual or language world might be much harder for it. So I do think my subject is vulnerable.”

However, he takes some comfort that machine learning’s general weakness in remaining coherent over a long form – such as a novel, rather than a poem – will keep mathematicians safe for now. Creating mathematical proofs, rather than solving maths problems for 16-year-olds, will be difficult for machines, he says.

Noel Sharkey at the University of Sheffield, UK, says the research is more about finding the limits of machine-learning techniques, rather than promoting advancements in mathematics.

The interesting thing, he says, will be to see how the neural networks can adapt to challenges outside of those they were trained on. “The big question is to ask how well they can generalise to novel examples that were not in the training set. This has the potential to demonstrate formal limits to what this type of learning is capable of.”

Saxton says training a neural network on maths problems could help provide AI with reasoning skills for other applications.

“Humans are good at maths, but they are using general reasoning skills that current artificial learning systems don’t possess,” he says. “If we can develop models that are good at solving these problems, then these models would likely be using general skills that would be good at solving other hard problems in AI as well.”

He hopes the work could make a small contribution towards more general mathematical AIs that could tackle things such as proving theorems.

The DeepMind team has published its data set of maths questions, and encouraged people to train their own AI.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Adam Vaughan*


Decade-Long Struggle Over Maths Proof Could Be Decided By $1m Prize

Mathematician Shinichi Mochizuki’s Inter-universal Teichmüller theory has attracted controversy since it was published in 2012, with no one able to agree whether it is true. Now, a $1 million prize is being launched to settle the matter.

The Inter-Universal Geometry Center (IUGC) is overseeing the prize

A prize of $1 million is being offered to anyone who can either prove or disprove an impenetrable mathematical theory, the veracity of which has been debated for over a decade.

Inter-universal Teichmüller theory (IUT) was created by Shinichi Mochizuki at Kyoto University, Japan, in a bid to solve a long-standing problem called the ABC conjecture, which focuses on the simple equation a + b = c. It suggests that if a and b are made up of large powers of prime numbers, then c isn’t usually divisible by large powers of primes.
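
In its standard precise form, the conjecture states:

```latex
% rad(n) denotes the product of the distinct primes dividing n.
\text{For every } \varepsilon > 0, \text{ only finitely many coprime triples }
a + b = c \ \text{ satisfy } \ c > \operatorname{rad}(abc)^{1 + \varepsilon}.
```

If a and b are built from large powers of a few primes, then rad(abc) is comparatively small, so the conjecture says triples in which c is also highly divisible must be rare – the loose statement given above.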

In 2012, Mochizuki published a series of papers, running to more than 500 pages, that appeared to be a serious attempt at tackling the problem, but his dense and unusual style baffled many experts.

His apparent proof struggled to find acceptance and attracted criticism from some of the world’s most prominent mathematicians, including two who claimed in 2018 to have found a “serious, unfixable gap” in the work. Despite this, the paper was formally published in 2020 in a journal edited by Mochizuki himself, although Nature reported that he had nothing to do with the journal’s decision to accept it.

Since then, the theory has remained in mathematical limbo, with some people believing it to be true, but others disagreeing. Many mathematicians contacted for this story, including Mochizuki, either didn’t respond or declined to comment on the matter.

Now, the founder of Japanese telecoms and media company Dwango, Nobuo Kawakami, hopes to settle the issue by launching a cash prize for a paper that can prove – or disprove – the theory.

Two prizes are on offer. The first will see between $20,000 and $100,000 awarded annually, for the next 10 years, to the author of the best paper on IUT and related fields. The second – worth $1 million – is reserved for the mathematician who can write a paper that “shows an inherent flaw in the theory”, according to a press release.

Dwango didn’t respond to a request for interview, but during a press conference Kawakami said he hoped that his “modest reward will help increase the number of mathematicians who decide to get involved in IUT theory”.

To be eligible for the prizes, papers will need to be published in a peer-reviewed journal selected from a list compiled by the prize organisers, according to a report in The Asahi Shimbun newspaper, and Kawakami will choose the winner.

The competition is being run by the Inter-Universal Geometry Center (IUGC), which has been founded by Kawakami specifically to promote IUT, says Fumiharu Kato, director of the IUGC.

Kato says that Kawakami isn’t a mathematician, but sees IUT as a momentous part of the history of mathematics and believes that the cash prize is a “good investment” if it can finally clear up the controversy one way or the other.

“For me, IUT theory is logically simple. Of course, I mean, technically very, very hard. But logically it’s simple,” says Kato, who estimates that fewer than 10 people in the world comprehend the concept.

Kato believes that the controversy stems from the fact that Mochizuki doesn’t want to promote his theory, talk to journalists or other mathematicians about it, or present the idea in a more easily digestible format, believing that his work speaks for itself. Kato says that Mochizuki’s current and former students are also reluctant to do so, because they see him “as a god” in mathematics and don’t want to go against his wishes.

Because of this, most mathematicians are “at a loss” for a way to understand IUT, says Kato, who concedes that, despite earlier optimism about the idea, it is possible that the theory will eventually be disproven.

Ivan Fesenko at the University of Nottingham, UK, who is also deputy director at the IUGC, has long been a supporter of Mochizuki. He told New Scientist that there is no doubt about the correctness of IUT and that it all hinges on a deep understanding of an existing field called anabelian geometry.

“All negative public statements about the validity of IUT have been made by people who do not have proven expertise in anabelian geometry and who have zero research track record in anabelian geometry,” he says. “The new $1m IUT Challenger Prize will challenge every mathematician who has ever publicly criticised IUT to produce a paper with full proofs and get it published in a good math journal.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Matthew Sparkes*