Mathematicians Find 12,000 Solutions For Fiendish Three-Body Problem

Until recently, working out how three objects can stably orbit each other was nearly impossible, but now mathematicians have found a record number of solutions.

The motion of three objects is more complex than you might think

The question of how three objects can form a stable orbit around each other has troubled mathematicians for more than 300 years, but now researchers have found a record 12,000 orbital arrangements permitted by Isaac Newton’s laws of motion.

While mathematically describing the movement of two orbiting bodies and how each one’s gravity affects the other is relatively simple, the problem becomes vastly more complex once a third object is added. In 2017, researchers found 1223 new solutions to the three-body problem, doubling the number of possibilities then known. Now, Ivan Hristov at Sofia University in Bulgaria and his colleagues have unearthed more than 12,000 further orbits that work.

The team used a supercomputer to run an optimised version of the algorithm used in the 2017 work, discovering 12,392 new solutions. Hristov says that if he repeated the search with even more powerful hardware he could find “five times more”.

All the solutions found by the researchers start with all three bodies being stationary, before entering freefall as they are pulled towards each other by gravity. Their momentum then carries them past each other before they slow down, stop and are attracted together once more. The team found that, assuming there is no friction, this pattern would repeat infinitely.
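The freefall pattern described above is easy to sketch numerically. The following is a toy illustration, not the researchers' method: a velocity Verlet integrator for three equal unit masses (with the gravitational constant set to 1) released from rest at hypothetical starting positions. Real searches for periodic orbits use far more precise, specialised integrators.

```python
# Toy planar three-body integrator (velocity Verlet), G = 1,
# three equal unit masses. Starting positions are illustrative only.

def accelerations(pos):
    """Gravitational acceleration on each body from the other two."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += dx / r3
            acc[i][1] += dy / r3
    return acc

def simulate(pos, steps=3000, dt=1e-4):
    """Integrate the bodies, all released from rest (freefall)."""
    vel = [[0.0, 0.0] for _ in pos]
    acc = accelerations(pos)
    for _ in range(steps):
        for i in range(len(pos)):
            for k in range(2):
                pos[i][k] += vel[i][k] * dt + 0.5 * acc[i][k] * dt * dt
        new_acc = accelerations(pos)
        for i in range(len(pos)):
            for k in range(2):
                vel[i][k] += 0.5 * (acc[i][k] + new_acc[i][k]) * dt
        acc = new_acc
    return pos, vel

# Three bodies released from rest at the corners of a triangle:
# they fall towards each other, as the article describes.
positions, velocities = simulate([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])
```

Because the bodies start at rest and the gravitational forces are pairwise equal and opposite, the total momentum should remain (numerically) zero throughout the simulation.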

Solutions to the three-body problem are of interest to astronomers, as they can describe how any three celestial objects – be they stars, planets or moons – can maintain a stable orbit. But it remains to be seen how stable the new solutions are when the tiny influences of additional, distant bodies and other real-world noise are taken into account.

“Their physical and astronomical relevance will be better known after the study of stability – it’s very important,” says Hristov. “But, nevertheless – stable or unstable – they are of great theoretical interest. They have a very beautiful spatial and temporal structure.”

Juhan Frank at Louisiana State University says that finding so many solutions in a precise set of conditions will be of interest to mathematicians, but of limited application in the real world.

“Most, if not all, require such precise initial conditions that they are probably never realised in nature,” says Frank. “After a complex and yet predictable orbital interaction, such three-body systems tend to break into a binary and an escaping third body, usually the least massive of the three.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Matthew Sparkes*


Exciting the brain could be key to boosting math learning, says new study

Exciting a brain region using electrical noise stimulation can help improve mathematical learning in those who struggle with the subject, according to a new study from the Universities of Surrey and Oxford, Loughborough University, and Radboud University in The Netherlands.

During this unique study, published in PLOS Biology, researchers investigated the impact of neurostimulation on learning. Despite the growing interest in this non-invasive technique, little is known about the neurophysiological changes induced and the effect it has on learning.

Researchers found that electrical noise stimulation over the frontal part of the brain improved the mathematical ability of people whose brain was less excited (by mathematics) before the application of stimulation. No improvement in mathematical scores was identified in those who had a high level of brain excitation during the initial assessment or in the placebo groups. Researchers believe that electrical noise stimulation acts on the sodium channels in the brain, interfering with the cell membrane of the neurons, which increases cortical excitability.

Professor Roi Cohen Kadosh, Professor of Cognitive Neuroscience and Head of the School of Psychology at the University of Surrey who led this project, said, “Learning is key to everything we do in life—from developing new skills, such as driving a car, to learning how to code. Our brains are constantly absorbing and acquiring new knowledge.

“Previously, we have shown that a person’s ability to learn is associated with neuronal excitation in their brains. What we wanted to discover in this case is if our novel stimulation protocol could boost, in other words excite, this activity and improve mathematical skills.”

For the study, 102 participants were recruited, and their mathematical skills were assessed through a series of multiplication problems. Participants were then split into four groups including a learning group exposed to high-frequency random electrical noise stimulation and an overlearning group in which participants practiced the multiplication beyond the point of mastery with high-frequency random electrical noise stimulation.

The remaining two groups consisted of a learning and overlearning group but they were exposed to a sham (i.e., placebo) condition, an experience akin to real stimulation without applying significant electrical currents. EEG recordings were taken at the beginning and at the end of the stimulation to measure brain activity.

Dr. Nienke van Bueren, from Radboud University, who led this work under Professor Cohen Kadosh’s supervision, said, “These findings highlight that individuals with lower brain excitability may be more receptive to noise stimulation, leading to enhanced learning outcomes, while those with high brain excitability might not experience the same benefits in their mathematical abilities.”

Professor Cohen Kadosh adds, “What we have found is how this promising neurostimulation works and under which conditions the stimulation protocol is most effective. This discovery could not only pave the way for a more tailored approach in a person’s learning journey but also shed light on the optimal timing and duration of its application.”


Credit of the article given to University of Surrey


Statistics can help us figure out how historic battles could have turned out differently, say experts

Statistical methods can evaluate whether pivotal military events, like the Battle of Jutland, American involvement in the Vietnam war or the nuclear arms race, could’ve turned out otherwise, according to a new book.

Military historical narratives and statistical modeling bring fresh perspectives to the fore in a ground-breaking text, “Quantifying Counterfactual Military History,” by Brennen Fagan, Ian Horwood, Niall MacKay, Christopher Price and Jamie Wood, a team of historians and mathematicians.

The authors explain, “In writing history, it must always be remembered that a historical fact is simply one of numberless possibilities until the historical actor moves or an event occurs, at which point it becomes real. To understand the one-time possibility that became evidence we must also understand the possibilities that remained unrealized.”

Re-examining the battlefield

Midway through the First World War, Britain and Germany were locked in a technologically driven arms race which culminated in the Battle of Jutland in 1916. By that time, both nations had built over 50 dreadnoughts—speedy, heavily-armored, turbine-powered “all-big-gun” warships.

In the context of a battle that Britain couldn’t afford to lose, Winston Churchill was quoted as saying that the British commander John Jellicoe was “the only man on either side who could lose the war in an afternoon.”

Both nations, at various points in time, have claimed victory in the Battle of Jutland, and there is no consensus on who “won.” Using mathematical modeling, “Quantifying Counterfactual Military History” probes whether the Germans could have achieved a decisive victory.

The five scholars note, “This reconstructive battling enables us to put some level of statistical insight into multiple realizations of a key phase of Jutland. The model is crude and laden with assumptions—as are all wargames—but, unlike in a wargame, our goal is simply to understand what is plausible and what is not.”

Understanding nuclear deterrence

Counterfactual reasoning is positioned centrally when it comes to the fraught history of nuclear deterrence. By the 1980s, the nuclear arms race between the US and the Soviet Union had already spanned three decades and 1983 would bring a crisis less well-known than the Cuban missile crisis of 1962.

The authors draw attention to the peak of intensity in November 1983 during the so-called “Second Cold War.” A NATO “command post” exercise in Western Europe—known as Able Archer—was created to test communications in the event of nuclear war.

The Soviets, however—likely the result of faulty intelligence gathering—believed that an attack was imminent with the NATO exercise interpreted as the first phase. “Quantifying Counterfactual Military History” highlights the example as one where each side placed themselves in dangerous counterfactual mindsets.

“Mutual misapprehension in 1983 continued a long tradition of misunderstanding which had always created catastrophic potential for war, now based consciously and unconsciously on game theory and its erroneous assumption that rational actors were guided by accurate information,” the authors explain. “In this way they stumbled towards a war that neither had willed.”

Embrace the alternative

“Quantifying Counterfactual Military History” uses case studies of Jutland, Able Archer, the Battle of Britain and the Vietnam War to appraise long-established narratives around military events and examine the probabilities of the events that took place alongside the potential for alternative outcomes.

The book’s authors, however, take a restrained approach to counterfactual theory, one that acknowledges and considers why some events—including the actions of individuals or the rise of institutions—are more important than others and can be considered “critical junctures.” They understand this as very different from the arbitrary, loosely substantiated suppositions made by “exuberant” counterfactuals.

They say, “We can never be certain of the existence of critical junctures, or of the grounds of their criticality, but ‘restrained’ counterfactuals, if done with multiple perspectives and sufficient thoroughness, can surely make a distinctive contribution to the literature.”

The book is underpinned by an inter-disciplinary method which combines historical narrative and statistical data and analysis, offering both quantitative and qualitative rigor.

They explain, “This study has taken us in directions which are not common in academic collaboration, but which we hope demonstrate that collaborative research exploring what had been dead ground between the sciences and the humanities is long overdue.”

Rather than attempt to merely reinvent the past, “Quantifying Counterfactual Military History” calls attention to the dynamism inherent in historical practice and offers another tool for understanding historical actors, the decisions they made and the futures they shaped.


Credit of the article given to Taylor & Francis

 


How The History of Maths is Much More Diverse Than You Think

There is more to where maths came from than the ancient Greeks. From calculus to the theorem we credit to Pythagoras, so much of our knowledge comes from other places, including ancient China, India and the Arabian peninsula, says Kate Kitagawa.

The history of mathematics has an image problem. It is often presented as a meeting of minds among ancient Greeks who became masters of logic. Pythagoras, Euclid and their pals honed the tools for proving theorems and that led them to the biggest results of ancient times. Eventually, other European greats like Leonhard Euler and Isaac Newton came along and made maths modern, which is how we got to where we are today.

But, of course, this telling is greatly distorted. The history of maths is far richer, more chaotic and more diverse than it is given credit for. So much of what is now incorporated into our global knowledge comes from other places, including ancient China, India and the Arabian peninsula.

Take “Pythagoras’s” theorem. This is the one that says that in right-angled triangles, the square of the longest side is the sum of the squares of the other two sides. The ancient Greeks certainly knew about this theorem, but so too did mathematicians in ancient Babylonia, Egypt, India and China.

In fact, in the 3rd century AD, Chinese mathematician Liu Hui added a proof of the theorem to the already old and influential book The Nine Chapters on the Mathematical Art. His version includes the earliest written statement of the theorem that we know of. So perhaps we should really call it Liu’s theorem or the gougu theorem as it was known in China.

The history of maths is filled with tales like this. Ideas have sprung up in multiple places at multiple times, leaving room for interpretation as to who should get the credit. As if credit is something that can’t be split.

As a researcher on the history of maths, I had come across examples of distorted views, but it was only when working on a new book, The Secret Lives of Numbers, that I found out just how pervasive they are. My co-author, New Scientist‘s Timothy Revell, and I found that the further we dug, the more of the true history of maths there was to uncover.

Another example is the origins of calculus. This is often presented as a battle between Newton and Gottfried Wilhelm Leibniz, two great 17th-century European mathematicians. They both independently developed extensive theories of calculus, but missing from the story is how an incredible school in Kerala, India, led by the mathematician Mādhava, hit upon some of the same ideas 300 years before.

The idea that the European way of doing things is superior didn’t originate in maths – it came from centuries of Western imperialism – but it has infiltrated it. Maths outside ancient Greece has often been put to one side as “ethnomathematics”, as if it were a side story to the real history.

In some cases, history has also distorted legacies. Sophie Kowalevski, who was born in Moscow in 1850, is now a relatively well-known figure. She was a fantastic mathematician, known for tackling a problem she dubbed a “mathematical mermaid” for its allure. The challenge was to describe mathematically how a spinning top moves, and she made breakthroughs where others had faltered.

During her life, she was constantly discouraged from pursuing maths and often had to work for free, collecting tuition money from her students in order to survive. After her death, biographers then tainted her life, painting her as a femme fatale who relied on her looks, implying she effectively passed off others’ work as her own. There is next to no evidence this is true.

Thankfully, historians of mathematics are re-examining and correcting the biases and stereotypes that have plagued the field. This is an ongoing process, but by embracing its diverse and chaotic roots, the next chapters for maths could be the best yet.


*Credit for article given to Kate Kitagawa*


Mathematicians Find 27 Tickets That Guarantee UK National Lottery Win

Buying a specific set of 27 tickets for the UK National Lottery will mathematically guarantee that you win something.

Buying 27 tickets ensures a win in the UK National Lottery

You can guarantee a win in every draw of the UK National Lottery by buying just 27 tickets, say a pair of mathematicians – but you won’t necessarily make a profit.

While there are many variations of lottery in the UK, players in the standard “Lotto” choose six numbers from 1 to 59, paying £2 per ticket. Six numbers are randomly drawn and prizes are awarded for tickets matching two or more of them.

David Cushing and David Stewart at the University of Manchester, UK, claim that despite there being 45,057,474 combinations of draws, it is possible to guarantee a win with just 27 specific tickets. They say this is the optimal number, as the same can’t be guaranteed with 26.
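The number of possible draws quoted above is simply the number of ways to choose six numbers from 59, which is easy to confirm with Python's standard library:

```python
import math

# Number of distinct ways to draw 6 balls from 59
draws = math.comb(59, 6)
print(draws)  # 45057474 possible draws

# Cost of the guaranteed-win strategy: 27 tickets at £2 each
cost = 27 * 2
print(cost)  # £54
```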

The proof of their idea relies on a mathematical field called finite geometry and involves placing each of the numbers from 1 to 59 in pairs or triplets on a point within one of five geometrical shapes, then using these to generate lottery tickets based on the lines within the shapes. The five shapes offer 27 such lines, meaning that 27 tickets bought using those numbers, at a cost of £54, will hit every possible winning combination of two numbers.

The 27 tickets that guarantee a win on the UK National Lottery

Their research yielded a specific list of 27 tickets (see above), but they say subsequent work has shown that there are two other combinations of 27 tickets that will also guarantee a win.

“We’ve been thinking about this problem for a few months. I can’t really explain the thought process behind it,” says Cushing. “I was on a train to Manchester and saw this [shape] and that’s the best logical [explanation] I can give.”

Looking at the winning numbers from the 21 June Lotto draw, the pair found their method would have won £1810. But the same numbers played on 1 July would have matched just two balls on three of the tickets – still a technical win, but giving a prize of just three “lucky dip” tries on a subsequent lottery, each of which came to nothing.

Stewart says proving that 27 tickets could guarantee a win was the easiest part of the research, while proving it is impossible to guarantee a win with 26 was far trickier. He estimates that the number of calculations needed to verify that would be 10^165, far more than the number of atoms in the universe. “There’d be absolutely no way to brute force this,” he says.

The solution was a computer programming language called Prolog, developed in France in 1971, which Stewart says is the “hero of the story”. Unlike traditional computer languages where a coder sets out precisely what a machine should do, step by step, Prolog instead takes a list of known facts surrounding a problem and works on its own to deduce whether or not a solution is possible. It takes these facts and builds on them or combines them in order to slowly understand the problem and whittle down the array of possible solutions.

“You end up with very, very elegant-looking programs,” says Stewart. “But they are quite temperamental.”

Cushing says the research shouldn’t be taken as a reason to gamble more, particularly as it doesn’t guarantee a profit, but hopes instead that it encourages other researchers to delve into using Prolog on thorny mathematical problems.

A spokesperson from Camelot, the company that operates the lottery, told New Scientist that the paper made for “interesting reading”.

“Our approach has always been to have lots of people playing a little, with players individually spending small amounts on our games,” they say. “It’s also important to bear in mind that, ultimately, Lotto is a lottery. Like all other National Lottery draw-based games, all of the winning Lotto numbers are chosen at random – any one number has the same and equal chance of being drawn as any other, and every line of numbers entered into a draw has the same and equal chance of winning as any other.”


*Credit for article given to Matthew Sparkes*


Why putting off college math can be a good idea

Putting off college math could improve the likelihood that students remain in college. But that may only be true as long as students don’t procrastinate more than one year. This is what my colleagues and I found in a 2023 study of 1,119 students at a public university, none of whom required remedial coursework during their first year.

Enrolling in a math course during the first semester of college resulted in students being four times more likely to drop out than those who waited. Although delayed enrollment in a math course had benefits in the first year, its advantages vanished by the end of the second year. In our study, almost 40% of students who postponed the course beyond a year did not attempt it at all and failed to obtain a degree within six years.

Why it matters

Nearly 1.7 million students who recently graduated from high school will immediately enroll in college. Math is a requirement for most degrees, but students aren’t always ready to do college-level math. Putting off college math for a year gives students time to adjust to college and prepare for more challenging coursework.

Approximately 40% of four-year college students must first take a remedial math course. This can extend the time it takes to graduate and increase the likelihood of dropping out. Our study did not apply to students who need remedial math.

For students who do not require remedial courses, some delay can be beneficial, but students’ past experiences in math can lead them to avoid math courses. Many students experience math anxiety, and procrastination can be an avoidance strategy for managing fears about math. Students’ fear of math may be a more significant barrier than their actual performance.

It is estimated that at least 17% of the population will likely experience high levels of math anxiety. Math anxiety can lead to a drop in math performance. It can also lead to avoiding majors and career paths involving math.

Our study fills the void in research on the effects of how soon students take college-level math courses. It also supports prior evidence that students benefit from a mix of coursework that is challenging yet not overwhelming as they transition to college.

What still isn’t known

We believe colleges need to better promote student confidence in math by examining how student success courses can reduce math anxiety. Student success courses provide students with study skills, note-taking skills, goal setting, time management and stress management, as well as career and financial decision-making, to support the transition to college. Although student success courses are a proven practice that helps students stick with college, these courses rarely address students’ fear of math.

Students are at the greatest risk of dropping out of college during their first year. Advisors play a crucial role in providing students with resources for success. This includes recommendations on what courses to take and when to take them. More research is also needed about how advisors can effectively communicate the impact of when math is taken by students.


Credit of the article given to Forrest Lane, The Conversation


Should All Mathematical Proofs Be Checked By A Computer?

Proofs, the central tenet of mathematics, occasionally have errors in them. Could computers stop this from happening, asks mathematician Emily Riehl.

Computer proof assistants can verify that mathematical proofs are correct

One miserable morning in 2017, in the third year of my tenure-track job as a mathematics professor, I woke up to a worrying email. It was from a colleague and he questioned the proof of a key theorem in a highly cited paper I had co-authored. “I had always kind of assumed that this was probably not true in general, though I have no proof either way. Did I miss something?” he asked. The proof, he noted, appeared to rest on a tacit assumption that was not warranted.

Much to my alarm and embarrassment, I realised immediately that my colleague was correct. After an anxious week working to get to the bottom of my mistake, it turned out I was very lucky. The theorem was true; it just needed a new proof, which my co-authors and I supplied in a follow-up paper. But if the theorem had been false, the whole edifice of consequences “proven” using it would have come crashing down.

The essence of mathematics is the concept of proof: a combination of assumed axioms and logical inferences that demonstrate the truth of a mathematical statement. Other mathematicians can then attempt to follow the argument for themselves to identify any holes or convince themselves that the statement is indeed true. Patched up in this way, theorems originally proven by the ancient Greeks about the infinitude of primes or the geometry of planar triangles remain true today – and anyone can see the arguments for why this must be.

Proofs have meant that mathematics has largely avoided the replication crises pervading other sciences, where the results of landmark studies have not held up when the experiments were conducted again. But as my experience shows, mistakes in the literature still occur. Ideally, a false claim, like the one I made, would be caught by the peer review process, where a submitted paper is sent to an expert to “referee”. In practice, however, the peer review process in mathematics is less than perfect – not just because experts can make mistakes themselves, but also because they often do not check every step in a proof.

This is not laziness: theorems at the frontiers of mathematics can be dauntingly technical, so much so that it can take years or even decades to confirm the validity of a proof. The mathematician Vladimir Voevodsky, who received a Fields medal, the discipline’s highest honour, noted that “a technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail”. After several experiences in which mistakes in his proofs took over a decade to be resolved – a long time for something to sit in logical limbo – Voevodsky’s subsequent crisis of confidence led him to take the unusual step of abandoning his “curiosity-driven research” to develop a computer program that could verify the correctness of his work.

This kind of computer program is known as a proof assistant, though it might be better called a “proof checker”. It can verify that a string of text proves the stated theorem. The proof assistant knows the methods of logical reasoning and is equipped with a library of proofs of standard results. It will accept a proof only after satisfying each step in the reasoning process, with no shortcuts of the sort that human experts often use.

For instance, a computer can verify that there are infinitely many prime numbers by validating the following proof, which is an adaptation of Greek mathematician Euclid’s argument. The human mathematician first tells the computer exactly what is being claimed – in this case that for any natural number N there is always some prime number p that is larger. The human then tells the computer the formula, defining p to be the minimum prime factor of the number formed by multiplying all the natural numbers up to N together and adding 1, represented as N! + 1.

For the computer proof assistant to make sense of this, it needs a library that contains definitions of the basic arithmetic operations. It also needs proofs of theorems, like the fundamental theorem of arithmetic, which tells us that every natural number can be factored uniquely into a product of primes. The proof assistant then demands a proof that this prime number p is greater than N. This is argued by contradiction – a technique where following an assumption to its conclusion leads to something that cannot possibly be true, demonstrating that the original assumption was false. In this case, if p is less than or equal to N, it should be a factor of both N! + 1 and N!. Some simple mathematics says this means that p must also be a factor of 1, which is absurd.
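Euclid's argument above can also be checked by direct computation, though a real proof assistant works symbolically rather than numerically. The following is a plain computational sanity check of the same claim, not proof-assistant code:

```python
import math

def smallest_prime_factor(m):
    """Return the least prime factor of m (m >= 2) by trial division."""
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m  # m itself is prime

# For each N, the least prime factor p of N! + 1 must exceed N:
# every number from 2 to N divides N!, so none of them divides N! + 1.
for N in range(1, 11):
    p = smallest_prime_factor(math.factorial(N) + 1)
    assert p > N
    print(N, p)
```

Note that `N! + 1` need not be prime itself: for N = 4 it is 25, whose least prime factor is 5, which is still greater than 4, exactly as the argument requires.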

Computer proof assistants can be used to verify proofs that are so long that human referees are unable to check every step. In 1998, for example, Samuel Ferguson and Thomas Hales announced a proof of Johannes Kepler’s 1611 conjecture that the most efficient way to pack spheres into three-dimensional space is the familiar “cannonball” packing. When their result was accepted for publication in 2005 it came with a caveat: the journal’s reviewers attested to “a strong degree of conviction of the essential correctness of this proof approach” – they declined to certify that every step was correct.

Ferguson and Hales’s proof was based on a strategy proposed by László Fejes Tóth in 1953, which reduced the Kepler conjecture to an optimisation problem in a finite number of variables. Ferguson and Hales figured out how to subdivide this optimisation problem into a few thousand cases that could be solved by linear programming, which explains why human referees felt unable to vouch for the correctness of each calculation. In frustration, Hales launched a formalisation project, where a team of mathematicians and computer scientists meticulously verified every logical and computational step in the argument. The resulting 22-author paper was published in 2017 to as much fanfare as the original proof announcement.

Computer proof assistants can also be used to verify results in subfields that are so technical that only specialists understand the meaning of the central concepts. Fields medallist Peter Scholze spent a year working out the proof of a theorem that he wasn’t quite sure he believed and doubted anyone else would have the stamina to check. To be sure that his reasoning was correct before building further mathematics on a shaky foundation, Scholze posed a formalisation challenge in a blog post entitled the “liquid tensor experiment” in December 2020. The mathematics involved was so cutting edge that it took 60,000 lines of code to formalise the last five lines of the proof – and all the background results that those arguments relied upon – but nevertheless this project was completed and the proof confirmed this past July by a team led by Johan Commelin.

Could computers just write the proofs themselves, without involving any human mathematicians? At present, large language models like ChatGPT can fluently generate mathematical prose and even output it in LaTeX, a typesetting program for mathematical writing. However, the logic of these “proofs” tends to be nonsense. Researchers at Google and elsewhere are looking to pair large language models with automatically generated formalised proofs to guarantee the correctness of the mathematical arguments, though initial efforts are hampered by sparse training sets – libraries of formalised proofs are much smaller than the collective mathematical output. But while machine capabilities are relatively limited today, auto-formalised maths is surely on its way.

In thinking about how the human mathematics community might wish to collaborate with computers in the future, we should return to the question of what a proof is for. It’s never been solely about separating true statements from false ones, but about understanding why the mathematical world is the way it is. While computers will undoubtedly help humans check their work and learn to think more clearly – it’s a much more exacting task to explain mathematics to a computer than it is to explain it to a kindergartener – understanding what to make of it all will always remain a fundamentally human endeavour.


*Credit for article given to Emily Riehl*


Mathematicians Calculate 42-Digit Number After Decades Of Trying

Dedekind numbers describe the number of ways sets of logical operations can be combined, and are fiendishly difficult to calculate – only eight were known, the eighth found in 1991 – and now mathematicians have calculated the ninth in the series.

The ninth Dedekind number was calculated using the Noctua 2 supercomputer at Paderborn University in Germany

A 42-digit number that mathematicians have been hunting for decades, thanks to the sheer difficulty of calculating it, has suddenly been found by two separate groups at the same time. This ninth Dedekind number, as it is known, may be the last in the sequence that is feasible to discover.

Dedekind numbers describe the number of ways a set of logical operations can be combined. For sets of just two or three elements, the total is easy to calculate by hand, but for larger sets it rapidly becomes impossible because the number grows at what is known as a double-exponential rate.

“You’ve got two to the power two to the power n, as a very rough estimate of the complexity of this system,” says Patrick de Causmaecker at KU Leuven in Belgium. “If you want to find the Dedekind numbers, that is the kind of magnitude of counting that you will have to face.”
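
The n-th Dedekind number counts the monotone Boolean functions of n variables, and the two-to-the-power-two-to-the-power-n blow-up is easy to see in a brute-force count. The following minimal Python sketch (an illustration only – nothing like the methods the researchers actually used) enumerates every candidate function and keeps the monotone ones, which is feasible only for very small n:

```python
def dedekind(n: int) -> int:
    """Count monotone Boolean functions of n variables by brute force.

    Each candidate function is encoded as an integer f whose bit x holds
    the function's value on input x (x itself encodes the n input bits).
    There are 2**(2**n) candidates, so this only works for tiny n.
    """
    m = 1 << n  # number of possible inputs
    count = 0
    for f in range(1 << m):
        # Monotonicity: flipping any input bit from 0 to 1 must never
        # flip the output from 1 to 0.
        if all(((f >> x) & 1) <= ((f >> (x | (1 << i))) & 1)
               for x in range(m) for i in range(n)):
            count += 1
    return count

print([dedekind(n) for n in range(4)])  # the sequence starts 2, 3, 6, 20
```

Already at n = 5 this loop would need 2**32 iterations, and at n = 9 the search space is astronomically beyond any computer – which is why the actual computations relied on far cleverer decompositions.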

The challenge of calculating higher Dedekind numbers has attracted researchers in many disciplines, from pure mathematicians to computer scientists, over the years. “It’s an old, famous problem and, because it’s hard to crack, it’s interesting,” says Christian Jäkel at Dresden University of Technology in Germany.

In 1991, mathematician Doug Wiedemann found the eighth Dedekind number using 200 hours of number crunching on the Cray-2 supercomputer, one of the most powerful machines at the time. No one could do any better, until now.

After working on the problem on and off for six years, Jäkel published his calculation for the ninth Dedekind number in early April. Coincidentally, De Causmaecker and Lennart van Hirtum, also at KU Leuven, published their work three days later, having produced the same result. Each group was unaware of the other. “I was shocked, I didn’t know about their work. I thought it would take at least 10 years or whatever to recompute it,” says Jäkel.

The resulting number is 286,386,577,668,298,411,128,469,151,667,598,498,812,366, which is 42 digits long.

Jäkel’s calculation took 28 days on eight graphics processing units (GPUs). To reduce the number of calculations required, he multiplied together elements from the much smaller fifth Dedekind number.

De Causmaecker and van Hirtum instead used a processor called a field-programmable gate array (FPGA) for their work. Unlike a CPU or a GPU, these can perform many different kinds of interrelated calculations at the same time. “In an FPGA, everything is always happening all at once,” says van Hirtum. “You can compare it to a car assembly line.”

Like Jäkel, the team used elements from a smaller Dedekind number, in their case the sixth, but this still required 5.5 quadrillion operations and more than four months of computing time using the Noctua 2 supercomputer at Paderborn University, says van Hirtum.

People are divided on whether another Dedekind number will ever be found. “The tenth Dedekind number will be in the realm of 10 to the power of 82, which puts you at the number of atoms in the visible universe, so you can imagine you need something big in technical advancement that also grows exponentially,” says Jäkel.

Van Hirtum also thinks the amount of computing power needed becomes impractical for the next number: it would take trillions more calculations, demanding something like the entire power output of the sun. “This jump in complexity remains absolutely astronomical,” he says.

De Causmaecker, however, is more optimistic, thinking that new ways of calculating could bring that requirement down. “The combination of exponential growth of computing power, and the power of the mathematical algorithms, will go together and maybe in 20 or 30 years we can compute [Dedekind number] 10.”


*Credit for article given to Alex Wilkins*


How Maths Can Help You Pack Your Shopping More Efficiently

How can you ensure you use the fewest bags when loading your shopping? A dash of maths will help, says Peter Rowlett.

You have heaped your shopping on the supermarket conveyor belt and a friendly member of the checkout staff is scanning it through. Items are coming thick and fast and you would like to get them in as few bags as possible. What is your strategy?

This is an example of an optimisation problem, from an area of maths called operational research. One important question is, what are you trying to optimise? Are you thinking about the weight of the items, or how much space they will take up? Do you guess how many bags you might need and start filling that many, or put everything in one until you need to start another?

We design algorithms to solve packing problems when they come up at a larger scale than your weekly shop, like making better use of warehouse space or fitting boxes into delivery vans. Similar algorithms are used for cutting raw materials with minimal waste and storing data on servers.

Bag-packing algorithms generally involve placing items into a single bag until you get to one that won’t fit because you have hit a maximum weight or size. When necessary, you open a second bag, and each time you reach an item that won’t fit in an existing bag, you start a new one.

If you are filling multiple bags at once, it is likely you will come across an item that could fit in more than one bag. Which do you choose? There is no clear best answer, but different algorithms give different ways to make this decision. We are looking for rules that can be applied without detailed thought. You might have more subtle requirements, like putting two items in the same bag because they go in the same cupboard at home, but here we want the kind of simple rule a computer program can mindlessly apply to get the most efficient outcomes, using the fewest bags, every time.

One algorithm we could employ is called first fit. For each new item, you look through the bags in the order you opened them, placing the item in the first one it fits in. An advantage is that this is quick to implement, but it can overlook options and end up using more bags than needed.

An alternative that often uses fewer bags overall is called worst fit. When faced with a choice, you look through the currently open bags for the one with the most space and place the item there.

These algorithms work more effectively if you handle the objects in decreasing order – packing the largest or heaviest first will usually need fewer bags.
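
As a rough illustration – the item weights here are hypothetical, not from the column – the first-fit and worst-fit-decreasing strategies described above can be sketched in Python, packing items by weight into bags with a fixed capacity:

```python
def first_fit(items, capacity):
    """Place each item in the first open bag it fits in, in arrival order."""
    bags = []
    for item in items:
        for bag in bags:
            if sum(bag) + item <= capacity:
                bag.append(item)
                break
        else:  # no open bag had room, so start a new one
            bags.append([item])
    return bags


def worst_fit_decreasing(items, capacity):
    """Sort heaviest-first, then place each item in the emptiest bag that fits it."""
    bags = []
    for item in sorted(items, reverse=True):
        fitting = [bag for bag in bags if sum(bag) + item <= capacity]
        if fitting:
            # "worst fit": among the bags with room, pick the one with the
            # most space left, i.e. the smallest current total
            min(fitting, key=sum).append(item)
        else:
            bags.append([item])
    return bags


shopping = [7, 5, 6, 4, 2, 3, 8]  # hypothetical item weights; bags hold 10
print(first_fit(shopping, 10))
print(worst_fit_decreasing(shopping, 10))
```

Note the trade-off the column describes: first fit makes one quick pass in arrival order, while worst-fit decreasing pays for a sort up front in exchange for typically better-balanced bags.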

So now you are armed with a secret weapon for packing: the worst-fit decreasing algorithm. The next time you are in the checkout line, load your bulkiest shopping onto the conveyor belt first, and always put items in the bag with the most space available – it might just help you use fewer bags overall.


*Credit for article given to Peter Rowlett*


Decade-Long Struggle Over Maths Proof Could Be Decided By $1m Prize

Mathematician Shinichi Mochizuki’s Inter-universal Teichmüller theory has attracted controversy since it was published in 2012, with no one able to agree whether it is true. Now, a $1 million prize is being launched to settle the matter.

The Inter-Universal Geometry Center (IUGC) is overseeing the prize (Image: Zen University)

A prize of $1 million is being offered to anyone who can either prove or disprove an impenetrable mathematical theory, the veracity of which has been debated for over a decade.

Inter-universal Teichmüller theory (IUT) was created by Shinichi Mochizuki at Kyoto University, Japan, in a bid to solve a long-standing problem called the ABC conjecture, which focuses on the simple equation a + b = c. It suggests that if a and b are made up of large powers of prime numbers, then c isn’t usually divisible by large powers of primes.
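
More precisely, the conjecture is usually stated in terms of the radical rad(abc) – the product of the distinct prime factors of a, b and c: for coprime a + b = c, the value of c should only rarely exceed rad(abc) by much. A minimal Python sketch (illustrative only, not part of the original article) checks this for the classic triple 1 + 8 = 9, where both sides are built from prime powers:

```python
def radical(n: int) -> int:
    """Product of the distinct prime factors of n (rad(1) = 1)."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p  # strip this prime entirely; count it only once
        p += 1
    if n > 1:  # whatever remains is a single prime factor
        r *= n
    return r

a, b, c = 1, 8, 9
print(radical(a * b * c))  # rad(72) = 2 * 3 = 6, which is less than c = 9
```

Triples like this one, where c exceeds the radical, are the "unusual" cases the conjecture says must be rare.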

In 2012, Mochizuki published a series of papers, running to more than 500 pages, that appeared to be a serious attempt at tackling the problem, but his dense and unusual style baffled many experts.

His apparent proof struggled to find acceptance and attracted criticism from some of the world’s most prominent mathematicians, including two who claimed in 2018 to have found a “serious, unfixable gap” in the work. Despite this, the paper was formally published in 2020 in a journal edited by Mochizuki himself, though Nature reported that he had no involvement in the journal’s decision to accept it.

Since then, the theory has remained in mathematical limbo, with some people believing it to be true, but others disagreeing. Many mathematicians contacted for this story, including Mochizuki, either didn’t respond or declined to comment on the matter.

Now, the founder of Japanese telecoms and media company Dwango, Nobuo Kawakami, hopes to settle the issue by launching a cash prize for a paper that can prove – or disprove – the theory.

Two prizes are on offer. The first will see between $20,000 and $100,000 awarded annually, for the next 10 years, to the author of the best paper on IUT and related fields. The second – worth $1 million – is reserved for the mathematician who can write a paper that “shows an inherent flaw in the theory”, according to a press release.

Dwango didn’t respond to a request for interview, but during a press conference Kawakami said he hoped that his “modest reward will help increase the number of mathematicians who decide to get involved in IUT theory”.

To be eligible for the prizes, papers will need to be published in a peer-reviewed journal selected from a list compiled by the prize organisers, according to a report in The Asahi Shimbun newspaper, and Kawakami will choose the winner.

The competition is being run by the Inter-Universal Geometry Center (IUGC), which has been founded by Kawakami specifically to promote IUT, says Fumiharu Kato, director of the IUGC.

Kato says that Kawakami isn’t a mathematician, but sees IUT as a momentous part of the history of mathematics and believes that the cash prize is a “good investment” if it can finally clear up the controversy one way or the other.

“For me, IUT theory is logically simple. Of course, I mean, technically very, very hard. But logically it’s simple,” says Kato, who estimates that fewer than 10 people in the world comprehend the concept.

Kato believes that the controversy stems from the fact that Mochizuki doesn’t want to promote his theory, talk to journalists or other mathematicians about it, or present the idea in a more easily digestible format, believing that his work speaks for itself. Kato says that Mochizuki’s current and former students are also reluctant to do so because they see him “as a god” in mathematics and don’t want to go against his wishes.

Because of this, most mathematicians are “at a loss” for a way to understand IUT, says Kato, who concedes that, despite earlier optimism about the idea, it is possible that the theory will eventually be disproven.

Ivan Fesenko at the University of Nottingham, UK, who is also deputy director at the IUGC, has long been a supporter of Mochizuki. He told New Scientist that there is no doubt about the correctness of IUT and that it all hinges on a deep understanding of an existing field called anabelian geometry.

“All negative public statements about the validity of IUT have been made by people who do not have proven expertise in anabelian geometry and who have zero research track record in anabelian geometry,” he says. “The new $1m IUT Challenger Prize will challenge every mathematician who has ever publicly criticised IUT to produce a paper with full proofs and get it published in a good math journal.”


*Credit for article given to Matthew Sparkes*