Should All Mathematical Proofs Be Checked By A Computer?

Proofs, the bedrock of mathematics, occasionally contain errors. Could computers stop this from happening, asks mathematician Emily Riehl.

Computer proof assistants can verify that mathematical proofs are correct

One miserable morning in 2017, in the third year of my tenure-track job as a mathematics professor, I woke up to a worrying email. It was from a colleague who questioned the proof of a key theorem in a highly cited paper I had co-authored. “I had always kind of assumed that this was probably not true in general, though I have no proof either way. Did I miss something?” he asked. The proof, he noted, appeared to rest on a tacit assumption that was not warranted.

Much to my alarm and embarrassment, I realised immediately that my colleague was correct. After an anxious week working to get to the bottom of my mistake, it turned out I was very lucky. The theorem was true; it just needed a new proof, which my co-authors and I supplied in a follow-up paper. But if the theorem had been false, the whole edifice of consequences “proven” using it would have come crashing down.

The essence of mathematics is the concept of proof: a combination of assumed axioms and logical inferences that demonstrate the truth of a mathematical statement. Other mathematicians can then attempt to follow the argument for themselves to identify any holes or convince themselves that the statement is indeed true. Patched up in this way, theorems originally proven by the ancient Greeks about the infinitude of primes or the geometry of planar triangles remain true today – and anyone can see the arguments for why this must be.

Proofs have meant that mathematics has largely avoided the replication crises pervading other sciences, where the results of landmark studies have not held up when the experiments were conducted again. But as my experience shows, mistakes in the literature still occur. Ideally, a false claim, like the one I made, would be caught by the peer review process, where a submitted paper is sent to an expert to “referee”. In practice, however, the peer review process in mathematics is less than perfect – not just because experts can make mistakes themselves, but also because they often do not check every step in a proof.

This is not laziness: theorems at the frontiers of mathematics can be dauntingly technical, so much so that it can take years or even decades to confirm the validity of a proof. The mathematician Vladimir Voevodsky, who received a Fields medal, the discipline’s highest honour, noted that “a technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail”. After several experiences in which mistakes in his proofs took over a decade to resolve – a long time for something to sit in logical limbo – Voevodsky suffered a crisis of confidence that led him to take the unusual step of abandoning his “curiosity-driven research” to develop a computer program that could verify the correctness of his work.

This kind of computer program is known as a proof assistant, though it might be better called a “proof checker”. It can verify that a string of text proves the stated theorem. The proof assistant knows the methods of logical reasoning and is equipped with a library of proofs of standard results. It will accept a proof only after satisfying each step in the reasoning process, with no shortcuts of the sort that human experts often use.

For instance, a computer can verify that there are infinitely many prime numbers by validating the following proof, which is an adaptation of Greek mathematician Euclid’s argument. The human mathematician first tells the computer exactly what is being claimed – in this case that for any natural number N there is always some prime number p that is larger. The human then tells the computer the formula, defining p to be the minimum prime factor of the number formed by multiplying all the natural numbers up to N together and adding 1, represented as N! + 1.

For the computer proof assistant to make sense of this, it needs a library that contains definitions of the basic arithmetic operations. It also needs proofs of theorems, like the fundamental theorem of arithmetic, which tells us that every natural number can be factored uniquely into a product of primes. The proof assistant then demands a proof that this prime number p is greater than N. This is argued by contradiction – a technique where following an assumption to its conclusion leads to something that cannot possibly be true, demonstrating that the original assumption was false. In this case, if p is less than or equal to N, it should be a factor of both N! + 1 and N!. Some simple mathematics says this means that p must also be a factor of 1, which is absurd.
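
To give a flavour of what this looks like in practice, here is a rough sketch of that argument in the Lean proof assistant with its Mathlib library. This is my own illustrative version, not code from the article: the theorem name exists_prime_gt and the overall shape are mine, while the lemma names (Nat.minFac_prime, Nat.dvd_factorial and so on) are Mathlib’s and may need adjusting between versions.

```lean
import Mathlib

-- Sketch of Euclid's argument: for every natural number N there is a prime p with N < p.
-- Lemma names follow Mathlib and may need adjusting for your version.
theorem exists_prime_gt (N : ℕ) : ∃ p, N < p ∧ Nat.Prime p := by
  -- N! + 1 is bigger than 1, so it has a least prime factor.
  have hne : Nat.factorial N + 1 ≠ 1 := by
    have := Nat.factorial_pos N
    omega
  have hp : Nat.Prime (Nat.minFac (Nat.factorial N + 1)) := Nat.minFac_prime hne
  refine ⟨Nat.minFac (Nat.factorial N + 1), ?_, hp⟩
  -- Suppose, for contradiction, that this prime factor were at most N...
  by_contra hle
  push_neg at hle
  -- ...then it would divide N! as well as N! + 1...
  have h1 : Nat.minFac (Nat.factorial N + 1) ∣ Nat.factorial N :=
    Nat.dvd_factorial hp.pos hle
  have h2 : Nat.minFac (Nat.factorial N + 1) ∣ Nat.factorial N + 1 :=
    Nat.minFac_dvd _
  -- ...and hence divide their difference, 1, which no prime does.
  have h3 : Nat.minFac (Nat.factorial N + 1) ∣ 1 := by
    have := Nat.dvd_sub' h2 h1
    simpa using this
  exact hp.one_lt.ne' (Nat.dvd_one.mp h3)
```

Every intermediate step must itself be justified from the library, which is exactly the sense in which the proof assistant refuses the shortcuts a human referee might take.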

Computer proof assistants can be used to verify proofs that are so long that human referees are unable to check every step. In 1998, for example, Samuel Ferguson and Thomas Hales announced a proof of Johannes Kepler’s 1611 conjecture that the most efficient way to pack spheres into three-dimensional space is the familiar “cannonball” packing. When their result was accepted for publication in 2005 it came with a caveat: the journal’s reviewers attested to “a strong degree of conviction of the essential correctness of this proof approach” – they declined to certify that every step was correct.

Ferguson and Hales’s proof was based on a strategy proposed by László Fejes Tóth in 1953, which reduced the Kepler conjecture to an optimisation problem in a finite number of variables. Ferguson and Hales figured out how to subdivide this optimisation problem into a few thousand cases that could be solved by linear programming, which explains why human referees felt unable to vouch for the correctness of each calculation. In frustration, Hales launched a formalisation project, where a team of mathematicians and computer scientists meticulously verified every logical and computational step in the argument. The resulting 22-author paper was published in 2017 to as much fanfare as the original proof announcement.

Computer proof assistants can also be used to verify results in subfields that are so technical that only specialists understand the meaning of the central concepts. Fields medallist Peter Scholze spent a year working out the proof of a theorem that he wasn’t quite sure he believed and doubted anyone else would have the stamina to check. To be sure that his reasoning was correct before building further mathematics on a shaky foundation, Scholze posed a formalisation challenge in a blog post entitled the “liquid tensor experiment” in December 2020. The mathematics involved was so cutting edge that it took 60,000 lines of code to formalise the last five lines of the proof – and all the background results that those arguments relied upon – but nevertheless this project was completed and the proof confirmed this past July by a team led by Johan Commelin.

Could computers just write the proofs themselves, without involving any human mathematicians? At present, large language models like ChatGPT can fluently generate mathematical prose and even output it in LaTeX, a typesetting program for mathematical writing. However, the logic of these “proofs” tends to be nonsense. Researchers at Google and elsewhere are looking to pair large language models with automatically generated formalised proofs to guarantee the correctness of the mathematical arguments, though initial efforts are hampered by sparse training sets – libraries of formalised proofs are much smaller than the collective mathematical output. But while machine capabilities are relatively limited today, auto-formalised maths is surely on its way.

In thinking about how the human mathematics community might wish to collaborate with computers in the future, we should return to the question of what a proof is for. It’s never been solely about separating true statements from false ones, but about understanding why the mathematical world is the way it is. While computers will undoubtedly help humans check their work and learn to think more clearly – it’s a much more exacting task to explain mathematics to a computer than it is to explain it to a kindergartener – understanding what to make of it all will always remain a fundamentally human endeavour.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Emily Riehl*


Mathematicians Calculate 42-Digit Number After Decades Of Trying

Dedekind numbers describe the number of ways sets of logical operations can be combined, and are fiendishly difficult to calculate, with only eight known since 1991 – and now mathematicians have calculated the ninth in the series.

The ninth Dedekind number was calculated using the Noctua 2 supercomputer at Paderborn University in Germany

A 42-digit number that mathematicians have been hunting for decades, owing to the sheer difficulty of calculating it, has suddenly been found by two separate groups at the same time. This ninth Dedekind number, as it is known, may be the last in the sequence that is feasible to discover.

Dedekind numbers describe the number of ways a set of logical operations can be combined. For sets of just two or three elements, the total number is easy to calculate by hand, but for larger sets it rapidly becomes impossible because the number grows so quickly, at what is known as a double exponential speed.

“You’ve got two to the power two to the power n, as a very rough estimate of the complexity of this system,” says Patrick de Causmaecker at KU Leuven in Belgium. “If you want to find the Dedekind numbers, that is the kind of magnitude of counting that you will have to face.”
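
To make that growth concrete: the nth Dedekind number counts the monotone true/false functions of n inputs, and for very small n you can count them by sheer brute force. The Python sketch below (the function name dedekind and the approach are mine, purely for illustration, and nothing like the methods either team used) already becomes hopeless beyond n = 4.

```python
from itertools import product

def dedekind(n: int) -> int:
    """Count monotone Boolean functions of n variables by brute force.

    This is the nth Dedekind number. There are 2**(2**n) candidate truth
    tables to examine, so the approach is only feasible for n <= 4.
    """
    points = list(product((0, 1), repeat=n))           # all 2**n input vectors
    # Pairs (i, j) where input i is <= input j in every coordinate.
    below = [(i, j) for i, x in enumerate(points)
                    for j, y in enumerate(points)
                    if all(a <= b for a, b in zip(x, y))]
    count = 0
    for table in product((0, 1), repeat=len(points)):  # each candidate truth table
        # Monotone: switching inputs on never switches the output off.
        if all(table[i] <= table[j] for i, j in below):
            count += 1
    return count

if __name__ == "__main__":
    for n in range(5):
        print(n, dedekind(n))   # 2, 3, 6, 20, 168
```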

The challenge of calculating higher Dedekind numbers has attracted researchers in many disciplines, from pure mathematicians to computer scientists, over the years. “It’s an old, famous problem and, because it’s hard to crack, it’s interesting,” says Christian Jäkel at Dresden University of Technology in Germany.

In 1991, mathematician Doug Wiedemann found the eighth Dedekind number using 200 hours of number crunching on the Cray-2 supercomputer, one of the most powerful machines at the time. No one could do any better, until now.

After working on the problem on and off for six years, Jäkel published his calculation for the ninth Dedekind number in early April. Coincidentally, de Causmaecker and Lennart van Hirtum, also at KU Leuven, published their work three days later, having produced the same result. Both groups were unaware of one another. “I was shocked, I didn’t know about their work. I thought it would take at least 10 years or whatever to recompute it,” says Jäkel.

The resulting number is 286,386,577,668,298,411,128,469,151,667,598,498,812,366, which is 42 digits long.

Jäkel’s calculation took 28 days on eight graphical processing units (GPUs). To reduce the number of calculations required, he multiplied together elements from the much smaller fifth Dedekind number.

Causmaecker and van Hirtum instead used a processor called a field-programmable gate array (FPGA) for their work. Unlike a CPU or a GPU, these can perform many different kinds of interrelated calculations at the same time. “In an FPGA, everything is always happening all at once,” says van Hirtum. “You can compare it to a car assembly line.”

Like Jäkel, the team used elements from a smaller Dedekind number, in their case the sixth, but this still required 5.5 quadrillion operations and more than four months of computing time using the Noctua 2 supercomputer at Paderborn University, says van Hirtum.

People are divided on whether another Dedekind number will ever be found. “The tenth Dedekind number will be in the realm of 10 to the power of 82, which puts you at the number of atoms in the visible universe, so you can imagine you need something big in technical advancement that also grows exponentially,” says Jäkel.

Van Hirtum also thinks the amount of computing power needed becomes impractical for the next number, which would require trillions more computations and capturing the power output of the entire sun. “This jump in complexity remains absolutely astronomical,” he says.

De Causmaecker, however, is more positive, as he thinks new ways of calculating could bring that requirement down. “The combination of exponential growth of computing power, and the power of the mathematical algorithms, will go together and maybe in 20 or 30 years we can compute [Dedekind number] 10.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Alex Wilkins*


How Maths Can Help You Pack Your Shopping More Efficiently

How can you ensure you use the fewest bags when loading your shopping? A dash of maths will help, says Peter Rowlett.

You have heaped your shopping on the supermarket conveyor belt and a friendly member of the checkout staff is scanning it through. Items are coming thick and fast and you would like to get them in as few bags as possible. What is your strategy?

This is an example of an optimisation problem, from an area of maths called operational research. One important question is, what are you trying to optimise? Are you thinking about the weight of the items, or how much space they will take up? Do you guess how many bags you might need and start filling that many, or put everything in one until you need to start another?

We design algorithms to solve packing problems when they come up at a larger scale than your weekly shop, like making better use of warehouse space or fitting boxes into delivery vans. Similar algorithms are used for cutting raw materials with minimal waste and storing data on servers.

Bag-packing algorithms generally involve placing items into a single bag until you get to one that won’t fit because you have hit a maximum weight or size. When necessary, you open a second bag, and each time you reach an item that won’t fit in an existing bag, you start a new one.

If you are filling multiple bags at once, it is likely you will come across an item that could fit in more than one bag. Which do you choose? There is no clear best answer, but different algorithms give different ways to make this decision. We are looking for rules that can be applied without detailed thought. You might have more subtle requirements, like putting two items in the same bag because they go in the same cupboard at home, but here we want the kind of simple rule a computer program can mindlessly apply to get the most efficient outcomes, using the fewest bags, every time.

One algorithm we could employ is called first fit. For each new item, you look through the bags in the order you opened them, placing the item in the first one it fits in. An advantage is that this is quick to implement, but it can overlook options and end up using more bags than needed.

An alternative that often uses fewer bags overall is called worst fit. When faced with a choice, you look through the currently open bags for the one with the most space and place the item there.

These algorithms work more effectively if you handle the objects in decreasing order – packing the largest or heaviest first will usually need fewer bags.
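
Here is a minimal Python sketch of these rules, treating each item as a weight and each bag as having a fixed capacity. The function names and the sample shopping list are my own illustrative assumptions, not part of the original column.

```python
def first_fit(items, capacity):
    """Put each item in the first open bag with room; open a new bag if none fits."""
    bags = []
    for item in items:
        for bag in bags:
            if sum(bag) + item <= capacity:
                bag.append(item)
                break
        else:                                           # no existing bag could take it
            bags.append([item])
    return bags

def worst_fit(items, capacity):
    """Put each item in the open bag with the most spare room; open a new bag if none fits."""
    bags = []
    for item in items:
        candidates = [bag for bag in bags if sum(bag) + item <= capacity]
        if candidates:
            min(candidates, key=sum).append(item)       # lightest bag = most spare room
        else:
            bags.append([item])
    return bags

def worst_fit_decreasing(items, capacity):
    """Handle the heaviest items first, then apply worst fit."""
    return worst_fit(sorted(items, reverse=True), capacity)

shopping = [6, 3, 8, 2, 5, 4, 7, 1, 2, 4]               # item weights, same units as capacity
for algorithm in (first_fit, worst_fit, worst_fit_decreasing):
    print(algorithm.__name__, algorithm(shopping, capacity=10))
```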

So now you are armed with a secret weapon for packing: the worst-fit decreasing algorithm. The next time you are in the checkout line, load your bulkiest shopping onto the conveyor belt first, and always put items in the bag with the most space available – it might just help you use fewer bags overall.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Peter Rowlett*


Decade-Long Struggle Over Maths Proof Could Be Decided By $1m Prize

Mathematician Shinichi Mochizuki’s Inter-universal Teichmüller theory has attracted controversy since it was published in 2012, with no one able to agree whether it is true. Now, a $1 million prize is being launched to settle the matter.

The Inter-Universal Geometry Center (IUGC) is overseeing the prize (Image: Zen University)

A prize of $1 million is being offered to anyone who can either prove or disprove an impenetrable mathematical theory, the veracity of which has been debated for over a decade.

Inter-universal Teichmüller theory (IUT) was created by Shinichi Mochizuki at Kyoto University, Japan, in a bid to solve a long-standing problem called the ABC conjecture, which focuses on the simple equation a + b = c. It suggests that if a and b are made up of large powers of prime numbers, then c isn’t usually divisible by large powers of primes.
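
For readers who want the precise statement, which the article only paraphrases: writing rad(n) for the product of the distinct prime factors of n, the ABC conjecture says that for every ε > 0 there are only finitely many triples of coprime positive integers with a + b = c that satisfy

```latex
c > \operatorname{rad}(abc)^{\,1+\varepsilon}.
```

Loosely, c can only rarely be much bigger than the product of the primes dividing a, b and c, which is the sense in which c “isn’t usually divisible by large powers of primes”.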

In 2012, Mochizuki published a series of papers, running to more than 500 pages, that appeared to be a serious attempt at tackling the problem, but his dense and unusual style baffled many experts.

His apparent proof struggled to find acceptance and attracted criticism from some of the world’s most prominent mathematicians, including two who claimed in 2018 to have found a “serious, unfixable gap” in the work. Despite this, the paper was formally published in 2020, in a journal edited by Mochizuki himself. It was reported by Nature that he had nothing to do with the journal’s decision.

Since then, the theory has remained in mathematical limbo, with some people believing it to be true, but others disagreeing. Many mathematicians contacted for this story, including Mochizuki, either didn’t respond or declined to comment on the matter.

Now, the founder of Japanese telecoms and media company Dwango, Nobuo Kawakami, hopes to settle the issue by launching a cash prize for a paper that can prove – or disprove – the theory.

Two prizes are on offer. The first will see between $20,000 and $100,000 awarded annually, for the next 10 years, to the author of the best paper on IUT and related fields. The second – worth $1 million – is reserved for the mathematician who can write a paper that “shows an inherent flaw in the theory”, according to a press release.

Dwango didn’t respond to a request for interview, but during a press conference Kawakami said he hoped that his “modest reward will help increase the number of mathematicians who decide to get involved in IUT theory”.

To be eligible for the prizes, papers will need to be published in a peer-reviewed journal selected from a list compiled by the prize organisers, according to a report in The Asahi Shimbun newspaper, and Kawakami will choose the winner.

The competition is being run by the Inter-Universal Geometry Center (IUGC), which has been founded by Kawakami specifically to promote IUT, says Fumiharu Kato, director of the IUGC.

Kato says that Kawakami isn’t a mathematician, but sees IUT as a momentous part of the history of mathematics and believes that the cash prize is a “good investment” if it can finally clear up the controversy one way or the other.

“For me, IUT theory is logically simple. Of course, I mean, technically very, very hard. But logically it’s simple,” says Kato, who estimates that fewer than 10 people in the world comprehend the concept.

Kato believes that the controversy stems from the fact that Mochizuki doesn’t want to promote his theory, talk to journalists or other mathematicians about it or present the idea in a more easily digestible format, believing his work speaks for itself. Kato says that his current and former students are also reluctant to do so because they see him “as a god” in mathematics and don’t want to go against his wishes.

Because of this, most mathematicians are “at a loss” for a way to understand IUT, says Kato, who concedes that, despite earlier optimism about the idea, it is possible that the theory will eventually be disproven.

Ivan Fesenko at the University of Nottingham, UK, who is also deputy director at the IUGC, has long been a supporter of Mochizuki. He told New Scientist that there is no doubt about the correctness of IUT and that it all hinges on a deep understanding of an existing field called anabelian geometry.

“All negative public statements about the validity of IUT have been made by people who do not have proven expertise in anabelian geometry and who have zero research track record in anabelian geometry,” he says. “The new $1m IUT Challenger Prize will challenge every mathematician who has ever publicly criticised IUT to produce a paper with full proofs and get it published in a good math journal.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Matthew Sparkes*


Mathematicians Make Even Better Never-Repeating Tile Discovery

An unsatisfying caveat in a mathematical breakthrough discovery of a single tile shape that can cover a surface without ever creating a repeating pattern has been eradicated. The newly discovered “spectre” shape can cover a surface without repeating and without mirror images.

The pattern on the left side is made up of the “hat” shape, including reflections. The pattern on the right is made up of round-edged “spectre” shapes that repeat infinitely without reflections (Image: David Smith et al.)

Mathematicians solved a decades-long mystery earlier this year when they discovered a shape that can cover a surface completely without ever creating a repeating pattern. But the breakthrough had come with a caveat: both the shape and its mirror image were required. Now the same team has discovered that a tweaked version of the original shape can complete the task without its mirror.

Simple shapes such as squares and equilateral triangles can tile a surface without gaps in a repeating pattern. Mathematicians have long been interested in a more complex version of tiling, known as aperiodic tiling, which involves using more complex shapes that never form such a repeating pattern.

The most famous aperiodic tiles were created by mathematician Roger Penrose, who in the 1970s discovered that two different shapes could be combined to create an infinite, never-repeating tiling. In March, Chaim Goodman-Strauss at the University of Arkansas and his colleagues found the “hat”, a shape that could technically do it alone, but only by using both a left-handed and a right-handed version. This was a slightly unsatisfying solution and left open the question of whether a single shape could achieve the same thing with no reflections.

The researchers have now tweaked the equilateral polygon from their previous research to create a new family of shapes called spectres. These shapes allow non-repeating pattern tiling using no reflections at all.

Until now, it wasn’t clear whether such a single shape, known as an einstein (from the German “ein stein” or “one stone”), could even exist. The researchers say in their paper that the previous discovery of the hat was a reminder of how little understood tiling patterns are, and that they were surprised to make another breakthrough so soon.

“Certainly there is no evidence to suggest that the hat (and the continuum of shapes to which it belongs) is somehow unique, and we might therefore hope that a zoo of interesting new monotiles will emerge in its wake,” the researchers write in their new paper. “Nonetheless, we did not expect to find one so close at hand.”

Sarah Hart at Birkbeck, University of London, says the new result is even more impressive than the original finding. “It’s very intellectually satisfying to have a solution that doesn’t need the mirror image because if you actually had real tiles then a tile and its mirror image are not the same,” she says. “With this new tile there are no such caveats.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Matthew Sparkes*


How Maths Reveals The Best Time to Add Milk For Hotter Tea

If you want your cup of tea to stay as hot as possible, should you put milk in immediately, or wait until you are ready to drink it? Katie Steckles does the sums.

Picture the scene: you are making a cup of tea for a friend who is on their way and won’t be arriving for a little while. But – disaster – you have already poured hot water onto a teabag! The question is, if you don’t want their tea to be too cold when they come to drink it, do you add the cold milk straight away or wait until your friend arrives?

Luckily, maths has the answer. When a hot object like a cup of tea is exposed to cooler air, it will cool down by losing heat. This is the kind of situation we can describe using a mathematical model – in this case, one that represents cooling. The rate at which heat is lost depends on many factors, but since most have only a small effect, for simplicity we can base our model on the difference in temperature between the cup of tea and the cool air around it.

A bigger difference between these temperatures results in a much faster rate of cooling. So, as the tea and the surrounding air approach the same temperature, the heat transfer between them, and therefore cooling of the tea, slows down. This means that the crucial factor in this situation is the starting condition. In other words, the initial temperature of the tea relative to the temperature of the room will determine exactly how the cooling plays out.
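
Written out, this model is Newton’s law of cooling. With T(t) for the tea’s temperature, T_room for the air and T_0 for the starting temperature (my notation, not the article’s), it says

```latex
\frac{dT}{dt} = -k\,\bigl(T - T_{\text{room}}\bigr)
\quad\Longrightarrow\quad
T(t) = T_{\text{room}} + \bigl(T_0 - T_{\text{room}}\bigr)\,e^{-kt},
```

where k > 0 is a constant set by the cup, the liquid and the surroundings.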

When you put cold milk into the hot tea, it will also cause a drop in temperature. Your instinct might be to hold off putting milk into the tea, because that will cool it down and you want it to stay as hot as possible until your friend comes to drink it. But does this fit with the model?

Let’s say your tea starts off at around 80°C (176°F): if you put milk in straight away, the tea will drop to around 60°C (140°F), which is closer in temperature to the surrounding air. This means the rate of cooling will be much slower for the milky tea when compared with a cup of non-milky tea, which would have continued to lose heat at a faster rate. In either situation, a graph of temperature against time shows exponential decay, but adding milk at different times leads to differences in the steepness of the curve.

Once your friend arrives, if you didn’t put milk in initially, their tea may well have cooled to about 55°C (131°F) – and now adding milk will cause another temperature drop, to around 45°C (113°F). By contrast, the tea that had milk put in straight away will have cooled much more slowly and will generally be hotter than if the milk had been added at a later stage.
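
A small Python sketch makes the comparison concrete. The function names and the numbers (room at 20°C, milk at 5°C, a 10 per cent milk fraction, a made-up cooling constant) are purely illustrative assumptions, and adding milk is modelled as a simple weighted average of temperatures.

```python
import math

def cool(temp, ambient, minutes, k=0.03):
    """Newton's law of cooling: the gap to room temperature decays exponentially."""
    return ambient + (temp - ambient) * math.exp(-k * minutes)

def add_milk(tea_temp, milk_temp=5.0, milk_fraction=0.10):
    """Model adding milk as a weighted average of tea and milk temperatures."""
    return (1 - milk_fraction) * tea_temp + milk_fraction * milk_temp

ambient, start, wait = 20.0, 80.0, 15.0     # degrees C and minutes, illustrative only

milk_first = cool(add_milk(start), ambient, wait)
milk_later = add_milk(cool(start, ambient, wait))
print(f"milk first: {milk_first:.1f} C   milk later: {milk_later:.1f} C")
# With these numbers the milk-first cup ends up slightly hotter, as the model predicts.
```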

Mathematicians use their knowledge of the rate at which objects cool to study the heat from stars, planets and even the human body, and there are further applications of this in chemistry, geology and architecture. But the same mathematical principles apply to them as to a cup of tea cooling on your table. Listening to the model will mean your friend’s tea stays as hot as possible.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Katie Steckles*


Mathematician Wins Abel Prize For Solving Equations With Geometry

Luis Caffarelli has been awarded the most prestigious prize in mathematics for his work on nonlinear partial differential equations, which have many applications in the real world.

Luis Caffarelli has won the 2023 Abel prize, unofficially called the Nobel prize for mathematics, for his work on a class of equations that describe many real-world physical systems, from melting ice to jet engines.

Caffarelli was having breakfast with his wife when he found out the news. “The breakfast was better all of a sudden,” he says. “My wife was happy, I was happy — it was an emotional moment.”

Based at the University of Texas at Austin, Caffarelli started work on partial differential equations (PDEs) in the late 1970s and has contributed to hundreds of papers since. He is known for making connections between seemingly distant mathematical concepts, such as how a theory describing the smallest possible areas that surfaces can occupy can be used to describe PDEs in extreme cases.

PDEs have been studied for hundreds of years and describe almost every sort of physical process, ranging from fluids to combustion engines to financial models. Caffarelli’s most important work concerned nonlinear PDEs, which describe complex relationships between several variables. These equations are more difficult to solve than other PDEs, and often produce solutions that don’t make sense in the physical world.

Caffarelli helped tackle these problems with regularity theory, which sets out how to deal with problematic solutions by borrowing ideas from geometry. His approach carefully elucidated the troublesome parts of the equations, solving a wide range of problems over his more than four-decade career.

“Forty years after these papers appeared, we have digested them and we know how to do some of these things more efficiently,” says Francesco Maggi at the University of Texas at Austin. “But when they appeared back in the day, in the 80s, these were alien mathematics.”

Many of the nonlinear PDEs that Caffarelli helped describe were so-called free boundary problems, which describe physical scenarios where two objects in contact share a changing surface, like ice melting into water or water seeping through a filter.

“He has used insights that combined ingenuity, and sometimes methods that are not ultra-complicated, but which are used in a manner that others could not see — and he has done that time and time again,” says Thomas Chen at the University of Texas at Austin.

These insights have also helped other researchers translate equations so that they can be solved on supercomputers. “He has been one of the most prominent people in bringing this theory to a point where it’s really useful for applications,” says Maggi.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Alex Wilkins*


Why Maths, Our Best Tool To Describe The Universe, May Be Fallible

Our laws of nature are written in the language of mathematics. But maths itself is only as dependable as the axioms it is built on, and we have to assume those axioms are true.

You might think that mathematics is the most trustworthy thing humans have ever come up with. It is the basis of scientific rigour and the bedrock of much of our other knowledge too. And you might be right. But be careful: maths isn’t all it seems. “The trustworthiness of mathematics is limited,” says Penelope Maddy, a philosopher of mathematics at the University of California, Irvine.

Maddy is no conspiracy theorist. All mathematicians know her statement to be true because their subject is built on “axioms” – and try as they might, they can never prove these axioms to be true.

An axiom is essentially an assumption based on observations of how things are. Scientists observe a phenomenon, formalise it and write down a law of nature. In a similar way, mathematicians use their observations to create an axiom. One example is the observation that there always seems to be a unique straight line that can be drawn between two points. Assume this to be universally true and you can build up the rules of Euclidean geometry. Another is that 1 + 2 is the same as 2 + 1, an assumption that allows us to do arithmetic. “The fact that maths is built on unprovable axioms is not that surprising,” says mathematician Vera Fischer at the University of Vienna in Austria.

These axioms might seem self-evident, but maths goes a lot further than arithmetic. Mathematicians aim to uncover things like the properties of numbers, the ways in which they are all related to one another and how they can be used to model the real world. These more complex tasks are still worked out through theorems and proofs built on axioms, but the relevant axioms might have to change. Lines between points have different properties on curved surfaces than flat ones, for example, which means the underlying axioms have to be different in different geometries. We always have to be careful that our axioms are reliable and reflect the world we are trying to model with our maths.

Set theory

The gold standard for mathematical reliability is set theory, which describes the properties of collections of things, including numbers themselves. Beginning in the early 1900s, mathematicians developed a set of underpinning axioms for set theory known as ZFC (for “Zermelo-Fraenkel”, from two of its initiators, Ernst Zermelo and Abraham Fraenkel, plus something called the “axiom of choice”).

ZFC is a powerful foundation. “If it could be guaranteed that ZFC is consistent, all uncertainty about mathematics could be dispelled,” says Maddy. But, brutally, that is impossible. “Alas, it soon became clear that the consistency of those axioms could be proved only by assuming even stronger axioms,” she says, “which obviously defeats the purpose.”

Maddy is untroubled by the limits: “Set theorists have been proving theorems from ZFC for 100 years with no hint of a contradiction.” It has been hugely productive, she says, allowing mathematicians to create no end of interesting results, and they have even been able to develop mathematically precise measures of just how much trust we can put in theories derived from ZFC.

In the end, then, mathematicians might be providing the bedrock on which much scientific knowledge is built, but they can’t offer cast-iron guarantees that it won’t ever shift or change. In general, they don’t worry about it: they shrug their shoulders and turn up to work like everybody else. “The aim of obtaining a perfect axiomatic system is exactly as feasible as the aim of obtaining a perfect understanding of our physical universe,” says Fischer.

At least mathematicians are fully aware of the futility of seeking perfection, thanks to the “incompleteness” theorems laid out by Kurt Gödel in the 1930s. These show that, in any domain of mathematics, a useful theory will generate statements about this domain that can’t be proved true or false. A limit to reliable knowledge is therefore inescapable. “This is a fact of life mathematicians have learned to live with,” says David Aspero at the University of East Anglia, UK.

All in all, maths is in pretty good shape despite this – and nobody is too bothered. “Go to any mathematics department and talk to anyone who’s not a logician, and they’ll say, ‘Oh, the axioms are just there’. That’s it. And that’s how it should be. It’s a very healthy approach,” says Fischer. In fact, the limits are in some ways what makes it fun, she says. “The possibility of development, of getting better, is exactly what makes mathematics an absolutely fascinating subject.”

HOW BIG IS INFINITY?

Infinity is infinitely big, right? Sadly, it isn’t that simple. We have long known that there are different sizes of infinity. In the 19th century, mathematician Georg Cantor showed that there are at least two sizes of infinity. The “natural numbers” (1, 2, 3 and so on forever) form a countable infinity. But between the natural numbers there is a continuum of “real numbers” (such as 1.234567… with digits that go on forever). The infinity of the real numbers turns out not to be countable. And so Cantor concluded that these two infinities genuinely differ in size.
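
The classic reason the real numbers cannot be counted is Cantor’s diagonal argument, which the toy Python sketch below illustrates (the names diagonal_escape and toy_list are my own, not from the article): given any list of infinite 0/1 sequences, you can build a sequence that differs from the kth listed one in its kth digit, so it cannot appear anywhere in the list.

```python
def diagonal_escape(listed, n):
    """First n digits of a sequence that differs from the k-th listed sequence at position k."""
    return [1 - listed(k)(k) for k in range(n)]

def toy_list(k):
    """A toy enumeration: the k-th sequence has a 1 in position k and 0 everywhere else."""
    return lambda i: 1 if i == k else 0

# The escaping sequence disagrees with every listed sequence somewhere,
# so no list of this kind can ever contain all infinite digit sequences.
print(diagonal_escape(toy_list, 8))   # -> [0, 0, 0, 0, 0, 0, 0, 0]
```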

In the everyday world, we never encounter anything infinite. We have to content ourselves with saying that the infinite “goes on forever” without truly grasping conceptually what that means. This matters, of course, because infinities crop up all the time in physics equations, most notably in those that describe the big bang and black holes. You might have expected mathematicians to have a better grasp of this concept, then – but it remains tricky.

This is especially true when you consider that Cantor suggested there might be another size of infinity nestled between the two he identified, an idea known as the continuum hypothesis. Traditionally, mathematicians thought that it would be impossible to decide whether this was true, but work on the foundations of mathematics has recently shown that there may be hope of finding out either way after all.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Michael Brooks*


The Mathematician Who Worked Out How To Time Travel

Mathematics suggested that time travel is physically possible – and Kurt Gödel proved it. Mathematician Karl Sigmund explains how the polymath did it.

The following is an extract from our Lost in Space-Time newsletter. Each month, we hand over the keyboard to a physicist or mathematician to tell you about fascinating ideas from their corner of the universe. You can sign up for Lost in Space-Time for free here.

There may be no better way to get truly lost in space-time than to travel to the past and fiddle around with causality. Polymath Kurt Gödel suggested that you could, for instance, land near your younger self and “do something” to that person. If your action was drastic enough, like murder (or is it suicide?), then you could neither have embarked on your time trip, nor perpetrated the dark deed. But then no one would have stopped you from going back in time and so you can commit your crime after all. You are lost in a loop. It’s no longer where you are, but whether you are.

Gödel was the first to prove that, according to general relativity, this sort of time travel can be done. While it seems logically impossible, the equations say it is physically possible. How can that actually be the case?

Widely hailed as “the greatest logician since Aristotle”, Gödel is mainly known for his mathematical and philosophical work. By age 25, while at the University of Vienna, he developed his notorious incompleteness theorems. These basically say that there is no finite set of assumptions that can underpin all of mathematics. This was quickly perceived as a turning point in the subject.

In 1934, Gödel, now 28, was among the first to be invited to the newly founded Institute for Advanced Study in Princeton, New Jersey. During the following years, he commuted between Princeton and Vienna.

After a traumatic journey around a war-torn globe, Gödel settled in Princeton for good in 1940. This is when his friendship with Albert Einstein developed. Their daily walks became legendary. Einstein quipped: “I come to my office just for the privilege to escort Gödel back home.”  The two strollers seemed eerily out of their time. The atomic bomb was built without Einstein, and the computer without Gödel.

When Einstein’s 70th birthday approached, Gödel was asked to contribute to the impending Festschrift a philosophical chapter on German philosopher Immanuel Kant and relativity – a well-grazed field. To his mother, he wrote: “I was asked to write a paper for a volume on the philosophical meaning of Einstein and his theory; of course, I could not very well refuse.”

Gödel began to reflect on Kant’s view that time was not, as Newton would have it, an absolute, objective part of the world, but an a priori form of intuition constraining our cognition. As Kant said: “What we represent ourselves as changes would, in beings with other forms of cognition, give rise to a perception in which… change would not occur at all.” Such “beings” would experience the world as timeless.

In his special relativity, Einstein had famously shown that different observers can have different notions of “now”. Hence, no absolute time. (“Newton, forgive me!” sighed Einstein.) However, this theory does not include gravitation. Add mass, and a kind of absolute time seems to sneak back! At least, it does so in the standard model of cosmology. There, the overall flow of matter works as a universal clock. Space-time is sliced in an infinity of layers, each representing a “now”, one succeeding another. Is this a necessary feature of general relativity? Gödel had found a mathematical kernel in a philosophical problem. That was his trademark.

At this stage, according to cosmologist Wolfgang Rindler, serendipity stepped in: Gödel stumbled across a letter to the journal Nature by physicist George Gamow, entitled “Rotating universe?”. It points out that apparently most objects in the sky spin like tops. Stars do it, planets do it, even spiral galaxies do it. They rotate. But why?

Gamow suggested that the whole universe rotates, and that this rotation trickles down, so to speak, to smaller and smaller structures: from universe to galaxies, from galaxies to stars, from stars to planets. The idea was ingenious, but extremely vague. No equations, no measurements. However, the paper ended with a friendly nudge for someone to start calculating.

With typical thoroughness, Gödel took up the gauntlet. He had always been a hard worker, who used an alarm clock not for waking up but for going to bed. He confided to his mother that his cosmology absorbed him so much that even when he tried to listen to the radio or to movies, he could do so “only with half an ear”. Eventually, Gödel discovered exact solutions of Einstein’s equations, which described a rotating universe.

However, while Gamow had imagined that the centre of rotation of our world is somewhere far away, beyond the reach of the strongest telescopes, Gödel’s universe rotates in every point. This does not solve Gamow’s quest for the cause of galactic rotations, but yields another, amazing result. In contrast to all then-known cosmological models, Gödel’s findings showed that there is no “now” that’s valid everywhere. This was exactly what he had set out to achieve: vindicate Kant (and Einstein) by showing that there is no absolute time.

“Talked a lot with Gödel,” wrote his friend Oskar Morgenstern, the economist who, together with John von Neumann, had founded game theory. He knew Gödel from former Viennese days and reported all their meetings in his diary. “His cosmological work makes good progress. Now one can travel into the past, or reach arbitrarily distant places in arbitrarily short time. This will cause a nice stir.” Time travel had been invented.

In Gödel’s universe, you don’t have to flip the arrow of time to go back to the past. Your time runs as usual. No need to shift entropy into reverse gear. You just step into a rocket and take off, to fly in a very wide curve (very wide!) at a very high speed (but less than the speed of light). The rocket’s trajectory weaves between light cones, never leaving them but exploiting the fact that in a rotating universe, they are not arrayed in parallel. The trip would consume an awful amount of energy.

Gödel just managed to meet the editorial deadline. On his 70th birthday, Einstein got Gödel’s manuscript for a present (and a sweater knitted by Kurt’s wife Adele). He thanked him for the gifts and confessed that the spectre of time travel had worried him for decades. Now the spectre had materialised. Einstein declared Gödel’s paper “one of the most important since my own”, and stated his hope that time travel could be excluded by some as yet unknown physical law. Soon after, Gödel received the first Albert Einstein award. It went with a modest amount of money which Gödel, as it turned out, could use well.

Next, according to philosopher Palle Yourgrau, “something extraordinary happened: nothing”.

For several decades, the mind-bending discovery of Gödel, far from causing “a nice stir”, got very little attention. When Harry Woolf, the director of the Institute for Advanced Study, arranged the eulogies to be given at Gödel’s funeral in 1978, he listed the topics to be covered: set theory and logic, followed by relativity, which he noted was “not worth a talk”.

Only gradually did eminent cosmologists, such as Stephen Hawking, Kip Thorne and John Barrow, lend the field an air of respectability. Today, it is mainstream. With time, it transpired that, years before Gödel’s breakthrough, several other cosmological models had exhibited both rotation and the possibility of time travel. However, this aspect had never been noticed, not even by the engineers of these universes.

Many physicists are happy to leave the paradoxical aspects of time travel to philosophers. They invoke a “chronology protection law” that would step in to prevent the worst. It sounds like whistling in the dark but helps to overcome the problem of haunting your own present as a revenant from the future.

And does our universe rotate? Gödel was equivocal on that issue. Sometimes he claimed that his model only served as a thought experiment, to display the illusory character of time, which cannot depend on accidental features of the place we happen to inhabit. Physicist Freeman Dyson, however, reported that Gödel, near the end of his life, had shown dismay when told that evidence for a rotating universe is lacking.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Karl Sigmund*


To Make Maths Classes Sizzle, Inject Some Politics And Social Justice

Relating mathematics to questions that are relevant to many students can help address its image problem, argues Eugenia Cheng.

Mathematics has an image problem: far too many people are put off it and conclude that the subject just isn’t for them. There are many issues, including the curriculum, standardised tests and constraints placed on teachers. But one of the biggest problems is how maths is presented, as cold and dry.

Attempts at “real-life” applications are often detached from our daily lives, such as arithmetic problems involving a ludicrous number of watermelons or using a differential equation to calculate how long a hypothetical cup of coffee will take to cool.

I have a different approach, which is to relate abstract maths to questions of politics and social justice. I have taught fairly maths-phobic art students in this way for the past seven years and have seen their attitudes transformed. They now believe maths is relevant to them and can genuinely help them in their everyday lives.

At a basic level, maths is founded on logic, so when I am teaching the principles of logic, I use examples from current events rather than the old-fashioned, detached type of problem. Instead of studying the logic of a statement like “all dogs have four legs”, I might discuss the (also erroneous) statement “all immigrants are illegal”.

But I do this with specific mathematical structures, too. For example, I teach a type of structure called an ordered set, which is a set of objects subject to an order relation such as “is less than”. We then study functions that map members of one ordered set to members of another, and ask which functions are “order-preserving”. A typical example might be the function that takes an ordinary number and maps it to the number obtained by multiplying it by 2. We would then say that if x < y then also 2x < 2y, so the function is order-preserving. By contrast, the function that squares numbers isn’t order-preserving because, for example, -2 < -1, but (-2)² > (-1)²: working through the squaring operations gives 4 and 1.
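
A few lines of Python (my own illustration, with hypothetical names) make the check concrete, at least over a finite sample of integers:

```python
def is_order_preserving(f, domain):
    """Check that f(x) <= f(y) whenever x <= y, for every pair in the sample domain."""
    return all(f(x) <= f(y) for x in domain for y in domain if x <= y)

sample = range(-5, 6)                                  # a small slice of the integers
print(is_order_preserving(lambda x: 2 * x, sample))    # True: doubling preserves order
print(is_order_preserving(lambda x: x * x, sample))    # False: -2 < -1 but 4 > 1
```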

However, rather than sticking to this type of dry mathematical example, I introduce ones about issues like privilege and wealth. If we think of one ordered set with people ordered by privilege, we can make a function to another set where the people are now ordered by wealth instead. What does it mean for that to be order-preserving, and do we expect it to be so? Which is to say, if someone is more privileged than someone else, are they automatically more wealthy? We can also ask about hours worked and income: if someone works more hours, do they necessarily earn more? The answer there is clearly no, but then we go on to discuss whether we think this function should be order-preserving or not, and why.

My approach is contentious because, traditionally, maths is supposed to be neutral and apolitical. I have been criticised by people who think my approach will be off-putting to those who don’t care about social justice; however, the dry approach is off-putting to those who do care about social justice. In fact, I believe that all academic disciplines should address our most important issues in whatever way they can. Abstract maths is about making rigorous logical arguments, which is relevant to everything. I don’t demand that students agree with me about politics, but I do ask that they construct rigorous arguments to back up their thoughts and develop the crucial ability to analyse the logic of people they disagree with.

Maths isn’t just about numbers and equations, it is about studying different logical systems in which different arguments are valid. We can apply it to balls rolling down different hills, but we can also apply it to pressing social issues. I think we should do both, for the sake of society and to be more inclusive towards different types of student in maths education.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Eugenia Cheng*