How To Perfectly Wrap Gifts Of All Shapes And Sizes Using Maths

Reduce wastage and enjoy deeply satisfying neat folds by applying a little geometry to your gift-wrapping, says Katie Steckles.

Wrapping gifts in paper involves turning a flat 2D sheet into a 3D surface, which presents plenty of geometrical challenges. Mathematics can help with this, in particular by making sure that you are using just the right amount of paper, with no wastage.

When you are dealing with a box-shaped gift, you might already wrap the paper around it to make a rectangular tube, then fold in the ends. With a little measuring, though, you can figure out precisely how much paper you will need to wrap a gift using this method, keeping the ends nice and neat.

For example, if your gift is a box with a square cross-section, you will need to measure the length of the long side, L, and the thickness, T, which is the length of one side of the square. Then, you will need a piece of paper measuring a little over 4 × T (to wrap around the four sides with a small overlap) by L + T. Once wrapped around the shape, a strip of paper half the height of the square will stick out at each end, and if you push the four sides in carefully, you can create diagonal folds to make four points that meet neatly in the middle. The square ends of the gift make this possible (and deeply satisfying).

Similarly, if you are wrapping a cylindrical gift with diameter D (such as a candle), mathematics tells us you need your paper to be just more than π × D wide, and L + D long. This means the ends can be folded in – possibly less neatly – to also meet exactly in the middle (sticky bows are your friend here).

How about if your gift is an equilateral triangular prism? Here, the length of one side of the triangle gives the thickness T, and your paper should be a little over 3 × T wide and L + (2 × T) long. The extra length is needed because it is harder to fold the excess end bits to make the points meet in the middle. Instead, you can fold the paper to cover the end triangle exactly, by pushing it in from one side at a time and creating a three-layered triangle of paper that sits exactly over the end.
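If you want to check these numbers before cutting, the three formulas above are easy to put into a few lines of Python – a rough sketch, with centimetre units and a small glue overlap assumed purely for illustration:

import math

# Paper (width, length) for the three tube-style wraps described above.
# `thickness` is the side of the square, the diameter of the cylinder, or
# the side of the triangle; `overlap` is an assumed gluing allowance.
def paper_size(shape, length, thickness, overlap=2.0):
    if shape == "square":        # square cross-section box: 4T around, L + T along
        return 4 * thickness + overlap, length + thickness
    if shape == "cylinder":      # cylinder: pi * D around, L + D along
        return math.pi * thickness + overlap, length + thickness
    if shape == "triangle":      # equilateral prism: 3T around, L + 2T along
        return 3 * thickness + overlap, length + 2 * thickness
    raise ValueError(f"unknown shape: {shape}")

print(paper_size("square", 20, 8))     # (34.0, 28)
print(paper_size("cylinder", 15, 7))   # (about 24.0, 22)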

It is also possible to wrap large, flat, square-ish gifts using a diagonal method. If the diagonal of the top surface of your box is D, and the height is H, you can wrap it using a square piece of paper that measures a little over D + (√2 × H) along each side.

Place your gift in the centre of the paper, oriented diagonally, and bring the four corners to meet in the middle of your gift, securing it with one piece of tape or a sticky bow. This will cover all the faces exactly, and look pretty smart too.
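The same quick check works for the diagonal method – a minimal sketch, again assuming a small extra allowance so the corners overlap rather than just meet:

import math

def diagonal_wrap_side(diagonal, height, allowance=1.0):
    """Side of the square sheet: a little over D + sqrt(2) * H."""
    return diagonal + math.sqrt(2) * height + allowance

print(diagonal_wrap_side(30.0, 4.0))   # about 36.7, for a box with a 30 cm diagonal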

For maximum mathematical satisfaction, what you want is to get the pattern on the paper to line up exactly. This is easier for a soft gift, where you can squash it to line up the pattern, but will only work with a box if the distance around it is exactly a multiple of the width of the repeat on the pattern. Otherwise, follow my example (above) and get your own custom wrapping paper printed!

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Katie Steckles*


Millennium Prize: the Birch and Swinnerton-Dyer Conjecture

Elliptic curves have a long and distinguished history that can be traced back to antiquity. They are prevalent in many branches of modern mathematics, foremost of which is number theory.

In simplest terms, one can describe these curves by using a cubic equation of the form

y² = x³ + Ax + B,

where A and B are fixed rational numbers (to ensure the curve E is nice and smooth everywhere, one also needs to assume that its discriminant 4A³ + 27B² is non-zero).

To illustrate, let’s consider an example: choosing A = −1 and B = 0, we obtain the curve y² = x³ − x, whose graph has two pieces – a closed oval where x lies between −1 and 0, and an unbounded branch starting at x = 1.

At this point it becomes clear that, despite their name, elliptic curves have nothing whatsoever to do with ellipses! The reason for this historical confusion is that these curves have a strong connection to elliptic integrals, which arise when describing the motion of planetary bodies in space.

The ancient Greek mathematician Diophantus is considered by many to be the father of algebra. His major mathematical work was written up in the tome Arithmetica, which was essentially a school textbook for geniuses. Within it, he outlined many tools for studying solutions to polynomial equations with several variables, termed Diophantine Equations in his honour.

One of the main problems Diophantus considered was to find all solutions to a particular polynomial equation that lie in the field of rational numbers Q. For equations of “degree two” (circles, ellipses, parabolas, hyperbolas) we now have a complete answer to this problem. This answer is thanks to the late German mathematician Helmut Hasse, and allows one to find all such points, should they exist at all.

Returning to our elliptic curve E, the analogous problem is to find all the rational solutions (x,y) which satisfy the equation defining E. If we call this set of points E(Q), then we are asking if there exists an algorithm that allows us to obtain all points (x,y) belonging to E(Q).

At this juncture we need to introduce a group law on E, which gives an eccentric way of fusing together two points (p₁ and p₂) on the curve, to obtain a brand new point (p₄). This mimics the addition law for numbers we learn from childhood (i.e. the sum or difference of any two numbers is still a number). The rule works as follows: the straight line through p₁ and p₂ meets the curve in exactly one further point, p₃, and reflecting p₃ in the x-axis gives the new point, p₄.

Under this geometric model, the point p₄ is defined to be the sum of p₁ and p₂ (it’s easy to see that the addition law does not depend on the order of the points p₁, p₂). Moreover the set of rational points is preserved by this notion of addition; in other words, the sum of two rational points is again a rational point.
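For readers who like to experiment, here is a minimal Python sketch of this chord-and-tangent addition using exact rational arithmetic, hard-coding the example curve y² = x³ − x from earlier (the special-case handling is pared to the bone):

from fractions import Fraction

# The curve E: y^2 = x^3 + A*x + B, here with A = -1, B = 0.
A, B = Fraction(-1), Fraction(0)

def on_curve(P):
    """True if P lies on E; None represents the point at infinity."""
    if P is None:
        return True
    x, y = P
    return y * y == x**3 + A * x + B

def add(P, Q):
    """The line through P and Q meets E in a third point; reflecting that
    point in the x-axis gives the sum P + Q."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and y1 == -y2:
        return None                          # vertical line: the sum is infinity
    if P == Q:
        s = (3 * x1**2 + A) / (2 * y1)       # tangent slope at P
    else:
        s = (y2 - y1) / (x2 - x1)            # chord slope through P and Q
    x3 = s * s - x1 - x2
    y3 = s * (x1 - x3) - y1
    return (x3, y3)

P, Q = (Fraction(0), Fraction(0)), (Fraction(1), Fraction(0))
R = add(P, Q)
print(R, on_curve(R))   # the sum is (-1, 0), again a rational point on E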

Louis Mordell, who was Sadleirian Professor of Pure Mathematics at Cambridge University from 1945 to 1953, was the first to determine the structure of this group of rational points. In 1922 he proved

E(Q) ≅ Z ⊕ Z ⊕ ⋯ ⊕ Z ⊕ T(E(Q)),

where the number of copies of the integers Z above is called the “rank r(E) of the elliptic curve E”. The finite group T(E(Q)) on the end is uninteresting, as it never has more than 16 elements.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Daniel Delbourgo*


Super Models – Using Maths to Mitigate Natural Disasters

We can’t tame the oceans, but modelling can help us better understand them.

Last year will go on record as one of significant natural disasters both in Australia and overseas. Indeed, the flooding of the Brisbane River in January is still making news as the Queensland floods inquiry investigates whether water released from Wivenhoe Dam was responsible. Water modelling is being used to answer that question – and it raises another: could modelling have avoided the problem in the first place?

This natural disaster – as well as the Japanese tsunami in March and the flooding in Bangkok in October – involved the movement of fluids: water, mud or both. And all had a human cost – displaced persons, the spread of disease, disrupted transport, disrupted businesses, broken infrastructure and damaged or destroyed homes. With the planet now housing 7 billion people, the potential for adverse humanitarian effects from natural disasters is greater than ever.

Here in CSIRO’s division of Mathematical and Information Sciences, we’ve been working with various government agencies (in Australia and China) to model the flow of flood waters and the debris they carry. Governments are starting to realise just how powerful computational modelling is for understanding and analysing natural disasters and how to plan for them.

This power is based on two things – the power of computers and the power of the algorithms (computer processing steps) that run on the computers.

In recent years, the huge increase in computer power and speed coupled with advances in algorithm development has allowed mathematical modellers like us to make large strides in our research.

These advances have enabled us to model millions, even billions of water particles, allowing us to more accurately predict the effects of natural and man-made fluid flows, such as tsunamis, dam breaks, floods, mudslides, coastal inundation and storm surges.

So how does it work?

Well, fluids such as sea water can be represented as billions of particles moving around, filling spaces, flowing downwards, interacting with objects and in turn being acted upon. Or they can be visualised as a mesh that traces the fluid’s shape.

Let’s consider a tsunami such as the one that struck the Japanese coast in March of last year. When a tsunami first emerges as a result of an earthquake, shallow water modelling techniques give us the most accurate view of the wave’s formation and early movement.

Mesh modelling of water being poured into a glass.

Once the wave is closer to the coast however, techniques known collectively as smoothed particle hydrodynamics (SPH) are better at predicting how the wave interacts with local geography. We’ve created models of a hypothetical tsunami off the northern Californian coastline to test this.

A dam break can also be modelled using SPH. The modelling shows how fast the water moves at certain times and in certain places, where water “overtops” hills and how quickly it reaches towns or infrastructure such as power stations.

This can help town planners to build mitigating structures and emergency services to co-ordinate an efficient response. Our models have been validated using historical data from a real dam that broke in California in 1928 – the St. Francis Dam.
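The central trick in SPH is that each particle’s density (and from it, pressure and forces) is estimated as a smoothing-kernel-weighted sum over its neighbours. Here is a toy Python sketch of that density step, using the standard 2D cubic-spline kernel – an illustration of the principle, not CSIRO’s production code:

import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 2D cubic-spline smoothing kernel with support radius 2h."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)           # 2D normalisation constant
    return sigma * np.where(q < 1.0,
                            1.0 - 1.5 * q**2 + 0.75 * q**3,
                            np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

def densities(positions, masses, h):
    """Density at each particle: a kernel-weighted sum of neighbour masses
    (brute-force O(n^2) here; real codes use spatial hashing)."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(dists, h)).sum(axis=1)

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, (500, 2))    # 500 water particles in a unit box
m = np.full(500, 1.0 / 500)              # equal masses, total mass 1
print(densities(pos, m, h=0.1).mean())   # roughly 1; a little less, as edge
                                         # particles are missing neighbours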

Having established that our modelling techniques work better than others, we can apply them to a range of what-if situations.

In collaboration with the Satellite Surveying and Mapping Application Centre in China we tested scenarios such as the hypothetical collapse of the massive Geheyan Dam in China.

We combined our modelling techniques with digital terrain models to get a realistic picture of how such a disaster would unfold and, therefore, what actions could mitigate it.

Our experience in developing and using these techniques over several decades allows us to combine them in unique ways for each situation.

We’ve modelled fluids not just for natural disaster planning but also movie special effects, hot metal production, water sports and even something as everyday as insurance.

Insurance companies have been looking to us for help to understand how natural disasters unfold. They cop a lot of media flak after disasters for not covering people affected. People living in low-lying areas have traditionally had difficulty accessing flood insurance and find themselves unprotected in flood situations.

Insurers are starting to realise that the modelling of geophysical flows can provide a basis for predicting localised risk of damage due to flooding and make flood coverage a viable business proposition. One Australian insurance company has been working with us to quantify risk of inundation in particular areas.

Using data from the 1974 Brisbane floods, the floods of last year and fluid modelling data, an insurance company can reliably assess residents’ exposure to particular risks and thereby determine suitable premiums.

With evidence-based tools such as fluid modelling in their arsenal, decision-makers are better prepared for the future. That may be a future of more frequent natural disasters, a future with a more-densely-populated planet, or, more likely, a combination of both.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Mahesh Prakash*


Rubik’s Cube Solution Unlocked By Memorising 3915 Final Move Sequences

For the first time, a speedcuber has demonstrated a solution to the Rubik’s cube that combines the two final steps of the puzzle’s solution into one.

A Rubik’s cube solver has become the first person to show proof of successfully combining the final two steps of solving the mechanical puzzle into one move. The feat required the memorisation of thousands of possible sequences for the final step.

Most skilled speedcubers – people who compete to solve Rubik’s cubes with the most speed and efficiency – choose to solve the final layer of the cube with two separate moves that involve 57 possible sequences for the penultimate step and 21 possible sequences for the final move.

Combining those two separate actions into a single move requires a person to memorise 3915 possible sequences. These sequences were previously known to be possible, but nobody is reported to have successfully achieved this so-called “Full 1 Look Last Layer” (Full 1LLL) move until a speedcuber going by the online username “edmarter” shared a YouTube video demonstrating that accomplishment.

Edmarter says he decided to take up the challenge after seeing notable speedcubers try and fail. Over the course of about a year, he spent 10 hours each weekend and any free time during the week practising and memorising the necessary sequences, he told New Scientist. That often involved memorising 144 movement sequences in a single day.

All that effort paid off on 4 August 2022 when edmarter uploaded a video demonstrating the Full 1LLL over the course of 100 separate puzzle solves. He also posted his accomplishment to Reddit’s r/Cubers community.

His average solve time for each Rubik’s cube over the course of that video demonstration run was 14.79 seconds. He says he had an average solve time as low as 12.50 seconds during two practice runs before recording the video.

The Rubik’s cube community has reacted with overwhelming enthusiasm and awe. The top-voted comment on his Reddit post detailing the achievement simply reads: “This is absolutely insane.”

But he is not resting on his laurels. Next up, he plans to try practising some other methods for finishing the Rubik’s cube that have only previously been mastered by a handful of people.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jeremy Hsu*


How Many Knots Exist? A New Computing Trick Is Untangling The Answer

Finding how many knots there are for a given number of string crossings was thought to be an impossibly complex task. Now, algorithm engineering is cracking it – and showing us how to solve other fiendishly intricate maths problems.

IT USED to be one of the most frustrating parts of any journey on public transport. You squeeze your way past the other bodies, sit down and fish your earphones out of your pocket. You didn’t bother to wind up the wires into a neat loop the last time you used them, and so – sigh – you now need to spend the next 5 minutes untangling this knot. Thank goodness for the invention of wireless earbuds.

Knots aren’t just an everyday annoyance, though. They are also a source of endless inspiration for researchers. Take mathematician Benjamin Burton, who is fascinated by one simple question: how many knots are there? “There is something tantalising about problems that you can describe to a 10-year-old, but that mathematicians have not yet solved,” he says.

Taking a census of knots is one of those problems that ought to be impossible to solve because of its complexity. There are so many ways the strings can be crossed and looped that even the fastest computer could never catalogue them all. Yet Burton has been giving it a shot, and along the way he is showing that, with a few clever computational tricks, many maths problems that seem insurmountable might not be.

Knots and science have been, ahem, entangled for quite a while. In the dying decades of the 19th century, scientists were grappling with how to understand atoms. One hypothesis saw them as little vortices of fluid that became stable when knotted. Lord Kelvin, who went on to become the president of the UK’s Royal Society, was the first to suggest that each chemical element corresponded to a different type of knot.

The idea was abandoned after the discovery of the electron, but at the time it seemed vital to understand knots. Physicist Peter Guthrie Tait was the first to make a stab at creating a comprehensive list of them. Cards on the table: there are an infinite number of possible knots, because you can keep on adding extra knotty flourishes forever. The question mathematicians are interested in is more subtle. A defining feature of a knot is its crossing number, the number of times the strings cross. The question is, for a given number of crossings, how many different knots are possible? In 1885, Tait, working with mathematician Thomas Kirkman, considered all the knots with up to and including 10 crossings. Drawing them all by hand, he tabulated 364 configurations.

Try to go further and things quickly get a lot more difficult. As you allow more crossings, the number of possible knots rapidly increases. The last major extension to the knot tables was published in 1998. Mathematicians Jim Hoste, Morwen Thistlethwaite and Jeff Weeks recorded all the knots up to and including 16 crossings – all 1.7 million of them.

Going beyond this has, until recently, been unfeasible. To see why, we need to know a bit more about how mathematicians think about knots (see “When is a knot not a knot?”). Unlike real-world knots that are often pieces of string with loose ends, mathematical knots are closed. Imagine tying a knot in a piece of spaghetti, then melding the free ends.

Stretch, twist, bend

Mathematicians treat knots according to the rules of topology, a branch of geometry. These say that a knot remains fundamentally the same if you stretch, twist or bend the strands – but making cuts or new joins is a no-no. This leads to the concept of a prime knot, a knot that can’t be mathematically broken down into two or more simpler knots.

The job of tabulating knots, then, boils down to comparing elaborate tangles to see if they are really the same prime knot. Don’t underestimate how tricky this can be. For 75 years, two knots with 10 crossings were listed separately in tables. But in 1973, Kenneth Perko, a New York lawyer who had studied maths, realised that if you take the mirror image of one, you can manipulate it to become equivalent to the other. The knots are known as the “Perko pair”.

When dealing with millions of knots, the job of identifying doppelgangers becomes so time-consuming that even the fastest supercomputers shouldn’t be theoretically capable of it in a reasonable amount of time. Still, in 2019, Burton, who is based at the University of Queensland in Australia, decided the time was ripe to take a punt.

He knew there is a difference between a knot-sorting algorithm in the abstract and how that algorithm is implemented in computer code. The art of bridging this gap is known as algorithm engineering and Burton thought that with the right tricks, the problem wouldn’t be as hard as it seemed. “Part of what motivated me was creating a showpiece to see if the tools were good enough,” he says.

He began by setting a computer the task of dreaming up all the possible ways that strings can be knotted with up to and including 19 crossings. This is a comparatively simple task and the computer spat out 21 billion knot candidates after just a few days.

The job was then to check each knot was prime and distinct. This involves storing a full topological description of the knots in a computer’s working memory or RAM. A data set of 21 billion knots would require more than 1 terabyte of RAM to store, and computers with that amount are rare. To get around this, Burton made use of a software package he has developed called Regina. It can convert knots into “knot signatures”, strings of letters that capture each tangle’s defining topological properties. The signatures can be stored far more economically than a knot description. Plus, knots that are tangled differently but are really equivalent have the same signature, making it easy to weed out duplicates. Burton managed to whittle down the candidate knots to about 7 billion in a day.
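In code, the signature trick reduces duplicate-hunting to a hash-table lookup. A schematic Python sketch – the `knot_signature` function is a stand-in for what Regina computes, and the demo at the end is a toy, not a real knot encoding:

def deduplicate(candidates, knot_signature):
    """Keep one representative diagram per signature string; equivalent
    diagrams hash to the same key and are silently dropped."""
    representatives = {}
    for diagram in candidates:
        sig = knot_signature(diagram)      # compact string, cheap to store
        representatives.setdefault(sig, diagram)
    return list(representatives.values())

# Toy stand-in: pretend a diagram is a string whose "signature" is its
# sorted characters, so "abc" and "bca" count as the same object.
demo = ["abc", "bca", "abd"]
print(deduplicate(demo, lambda d: "".join(sorted(d))))   # ['abc', 'abd']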

This method wasn’t powerful enough to identify every last duplicate, however. Burton’s next tactic involved calculating what is known as a knot invariant for each candidate knot. Invariants are mathematical objects that capture the essence of a knot – if two knots have different invariants, they are different knots. Invariants are more powerful than signatures, but they are harder to compute: it would have taken Burton’s computer more than a year to calculate these for all the remaining knots. By sorting them into groups and running them on parallel supercomputers, he got through them in days. The pool was down to 370 million.

Burton also had to grapple with non-hyperbolic knots, an especially challenging category. But after several more rounds of sorting, the supposedly impossible became possible. Burton’s census extended the knot tables to cover everything up to and including 19 crossings. The final tally: 352,152,252 knots.

Ian Agol at the University of California, Berkeley, says he is impressed with the calculations. “It will likely be useful to mathematicians searching for knots with specific properties, or who have a knot that they would like to identify,” he says.

He and other mathematicians think that Burton’s work is also an impressive example of algorithm engineering. “This is a growing trend in pure mathematics, where computational methods are being used in more creative ways,” says Hans Boden at McMaster University in Ontario, Canada.

Lorries and cameras

There are plenty of practical problems where algorithm engineering is used to find efficient solutions to hard problems. Some of the most important are in logistics, where optimised routing can save time and huge sums of money. Others include working out how to cover a space with CCTV cameras. In many cases, a perfect solution is impossible to obtain in a reasonable time frame.

This is also the case when it comes to mapping complex evolutionary histories, particularly for plants and bacteria. Algorithms are used to link DNA data in phylogenetic trees, graphs that group closely related species more closely than distantly related ones. However, sometimes genes don’t pass down from parent to offspring, but rather through what is known as horizontal gene transfer from one species to another. Algorithms designed to map these transfers can be very slow.

One algorithm engineering hack to get around this involves parametrising data inputs to make them smaller. For instance, if you fix the number of potential mutations you are considering, the problem can be solved efficiently. Recently, Edwin Jacox, now at the University of California, Santa Cruz, and his colleagues applied this method to cyanobacteria phylogenetic trees. Cyanobacteria played a major role in Earth’s changing biosphere more than 2 billion years ago. The researchers developed a parametrised algorithm that rebuilds the cyanobacteria phylogenetic trees in just 15 seconds.

Whether it is reconstructing evolutionary trees or listing knots, it is clear that the toughest computing problems can be made tractable. “With algorithm engineering and heuristics, things that are slow in practice turn out to be remarkably, surprisingly fast,” says Burton. It means that even the trickiest problems don’t have to leave us in an inescapable tangle.

When is a knot not a knot?

Here is a problem that needs unpicking: if you pull at a messy tangle of wires, how do you know if it will just get more tangled or come apart into a simple loop? This is a problem mathematicians call “unknot recognition” and it is one we might soon solve. Unknot recognition fascinates mathematicians and computer scientists alike because it is part of the famous “P vs NP” question.

To see how it works, take the classic travelling salesperson problem, where we must work out the shortest route for a salesperson to visit a number of cities. If an algorithm designed to solve this problem doesn’t get exponentially harder as more cities are included, it is said to be “P”. This means it is solvable in reasonable – or polynomial, in maths speak – amounts of time. Once you have an answer, you need to check it is correct. If it is easy to check your answer, the problem is classed as “NP”. The big question for mathematicians is whether all NP problems are also P. This is one of the Millennium Prize Problems – answer it conclusively and you will win $1 million.

Unknot recognition dwells in the twilight zone of P vs NP. We already know that this problem is definitely “NP”, or easy to check. If your algorithm can produce an unknotted loop, you can immediately see it has worked. Mathematicians also think it is likely to be P, and we are tantalisingly close to proving it.

In 2014, Benjamin Burton at the University of Queensland in Australia developed an unknotting algorithm that, in practice, solves in polynomial time. His algorithm has held up “for every knot we’ve thrown at it so far”, he says. More recently, Marc Lackenby at the University of Oxford developed an unknot recognition algorithm that is “quasi-P” (which, in maths parlance, means “almost there”). It is unlikely to be converted into executable computer code because it is so complicated, but Lackenby is confident that a simplified version “is going to be genuinely practical”.

Showing that unknot recognition is both P and NP won’t solve the wider Millennium Prize Problem, though it could give us useful pointers. Still, it is an important milestone and mathematicians will be celebrating once we get there.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Larissa Fedunik*


How Spiral Search Patterns And Lateral Thinking Cracked Our Puzzle

Rob Eastaway and Brian Hobbs take over our maths column to reveal who solved their puzzle and won a copy of their New Scientist puzzle book, Headscratchers.

When solving problems in the real world, it is rare that the solution is purely mathematical, but maths is often a key ingredient. The puzzle we set a few weeks ago (New Scientist, 30 September, p 45) embraced this by encouraging readers to come up with ingenious solutions that didn’t have to be exclusively maths-based.

Here is a reminder of the problem: Prince Golightly found himself tied to a chair near the centre of a square room, in the dark, with chained monsters in the four corners and an escape door in the middle of one wall. With him, he had a broom, a dictionary, some duct tape, a kitchen clock and a bucket of water with a russet fish.

John Offord was one of several readers to spot an ambiguity in our wording. Four monsters in each corner? Did this mean 16 monsters? John suggested the dictionary might help the captors brush up on their grammar.

The russet fish was deliberately inserted as a red herring (geddit?), but we loved that some readers found ways to incorporate it, either as a way of distracting the monsters or as a source of valuable protein for a hungry prince. Dave Wilson contrived a delightful monster detector, while Glenn Reid composed a limerick with the solution of turning off the computer game and going to bed.

And so to more practical solutions. Arlo Harding and Ed Schulz both suggested ways of creating a torch by igniting assorted materials with an electric spark from the light cable. But Ben Haller and Chris Armstrong had the cleverest mathematical approach. After locating the light fitting in the room’s centre with the broom, they used duct tape and rope to circle the centre, increasing the radius until they touched the wall at what must be its centre, and then continued circling to each wall till they found the escape door. Meanwhile, the duo of Denise and Emory (aged 11) used Pythagoras’s theorem to confirm the monsters in the corners would be safely beyond reach. They, plus Ben and Chris, win a copy of our New Scientist puzzle book Headscratchers.

It is unlikely you will ever have to escape monsters in this way, but spiral search patterns when visibility is limited are employed in various real-world scenarios: rescuers probing for survivors in avalanches, divers performing underwater searches and detectives examining crime scenes, for example. Some telescopes have automated spiral search algorithms that help locate celestial objects. These patterns allow for thorough searches while ensuring you don’t stray too far from your starting point.
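As a sketch of how such a pattern can be generated: an Archimedean spiral keeps successive turns a fixed distance apart, so a searcher who can see half that distance to either side covers the whole disc without gaps. The step size and radius below are arbitrary illustrative values, not any agency’s actual procedure:

import math

def spiral_waypoints(step, max_radius):
    """Waypoints along r = step * theta / (2*pi), ending at max_radius."""
    points, theta = [], 0.0
    while True:
        r = step * theta / (2 * math.pi)
        if r > max_radius:
            return points
        points.append((r * math.cos(theta), r * math.sin(theta)))
        theta += step / max(r, step)   # keep consecutive waypoints ~step apart

path = spiral_waypoints(step=1.0, max_radius=5.0)
print(len(path), path[:3])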

Of course, like all real-world problems, mathematical nous isn’t enough. As our readers have displayed, lateral thinking and the ability to improvise are human skills that help us find the creative solutions an algorithm would miss.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Rob Eastaway and Brian Hobbs*


Algebraic Elements Are Like Limit Points!

When you hear the word closure, what do you think of? I think of wholeness – you know, tying loose ends, wrapping things up, filling in the missing parts. This same idea is behind the mathematician’s notion of closure, as in the phrase “taking the closure” of a set. Intuitively this just means adding in any missing pieces so that the result is complete, whole. For instance, a disc without its boundary is open, precisely because the boundary is missing. But when we take its closure and include the boundary circle, we say the disc is closed.

As another example, consider the set of all real numbers strictly between 0 and 1, i.e. the open interval (0,1). Notice that we can get arbitrarily close to 0, but we can’t quite reach it. In some sense, we feel that 0 might as well be included in the set, right? I mean, come on, 0.0000000000000000000000000000000000000001 is basically 0, right? So by not considering 0 as an element in our set, we feel like something’s missing. The same goes for 1.

We say an element is a limit point of a given set if that element is “close” to the set,* and we say the set’s closure is the set together with its limit points. (So 0 and 1 are both limit points of (0,1) and its closure is [0,1].) It turns out the word closure is also used in algebra, specifically in the algebraic closure of a field, but there it has a completely different definition, one involving roots of polynomials, called algebraic elements. Now why would mathematicians use the same word to describe two seemingly different things? The purpose of today’s post is to make the observation that they’re not so different after all! This may be somewhat obvious, but it wasn’t until after a recent conversation with a friend that I saw the connection:

 

algebraic elements of a field

are like

limit points of a sequence!

(Note: I’m not claiming any theorems here, this is just a student’s simple observation.)
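One way to see the parallel concretely: the polynomial that witnesses an element’s algebraicity can be computed, for instance with SymPy (the library call is standard; the juxtaposition with the limit-point picture is just the observation above, restated):

from sympy import sqrt, Symbol, minimal_polynomial

x = Symbol("x")
# sqrt(2) is "close to" Q in the algebraic sense: it is a root of a
# polynomial with rational coefficients, so it lies in Q's algebraic closure...
print(minimal_polynomial(sqrt(2), x))   # x**2 - 2

# ...much as 0 is "close to" (0,1) in the limit-point sense: points of the
# set such as 0.1, 0.01, 0.001, ... get arbitrarily near it.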

 

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*

 


Mathematicians Are Bitterly Divided Over A Controversial Proof

An attempt to settle a decade-long argument over a controversial proof by mathematician Shinichi Mochizuki has seen a war of words on both sides, with Mochizuki likening the latest effort to a “hallucination” produced by ChatGPT.

An attempt to fix problems with a controversial mathematical proof has itself become mired in controversy, in the latest twist in a saga that has been running for over a decade and has seen mathematicians trading unusually pointed barbs.

The story began in 2012, when Shinichi Mochizuki at Kyoto University, Japan, published a 500-page proof of a problem called the ABC conjecture. The conjecture concerns prime numbers involved in solutions to the equation a + b = c, and despite its seemingly simple form, it provides deep insights into the nature of numbers. Mochizuki published a series of papers claiming to have proved ABC using new mathematical tools he collectively called Inter-universal Teichmüller (IUT) theory, but many mathematicians found the initial proof baffling and incomprehensible.

While a small number of mathematicians have since accepted that Mochizuki’s papers prove the conjecture, other researchers say there are holes in his argument and it needs further work, dividing the mathematical community in two and prompting a prize of up to $1 million for a resolution to the quandary.

Now, Kirti Joshi at the University of Arizona has published a proposed proof that he says fixes the problems with IUT and proves the ABC conjecture. But Mochizuki and his supporters, as well as mathematicians who critiqued Mochizuki’s original papers, remain unconvinced, with Mochizuki declaring that Joshi’s proposal doesn’t contain “any meaningful mathematical content whatsoever”.

Central to Joshi’s work is an apparent problem, previously identified by Peter Scholze at the University of Bonn, Germany, and Jakob Stix at Goethe University Frankfurt, Germany, with a part of Mochizuki’s proof called Conjecture 3.12. The conjecture involves comparing two mathematical objects, which Scholze and Stix say Mochizuki did incorrectly. Joshi claims to have found a more satisfactory way to make the comparison.

Joshi also says that his theory goes beyond Mochizuki’s and establishes a “new and radical way of thinking about arithmetic of number fields”. The paper, which hasn’t been peer-reviewed, is the culmination of several smaller papers on ABC that Joshi has published over several years and describes as a “Rosetta Stone” for understanding Mochizuki’s impenetrable maths.

Neither Joshi nor Mochizuki responded to a request for comment on this article, and, indeed, the two seem reluctant to communicate directly with each other. In his paper, Joshi says Mochizuki hasn’t responded to his emails, calling the situation “truly unfortunate”. And yet, several days after the paper was posted online, Mochizuki published a 10-page response, saying that Joshi’s work was “mathematically meaningless” and that it reminded him of “hallucinations produced by artificial intelligence algorithms, such as ChatGPT”.

Mathematicians who support Mochizuki’s original proof express a similar sentiment. “There is nothing to talk about, since his [Joshi’s] proof is totally flawed,” says Ivan Fesenko at Westlake University in China. “He has no expertise in IUT whatsoever. No experts in IUT, and the number is in two digits, takes his preprints seriously,” he says. “It won’t pass peer review.”

And Mochizuki’s critics also disagree with Joshi. “Unfortunately, this paper and its predecessors does not introduce any powerful mathematical technology, and falls far short of giving a proof of ABC,” says Scholze, who has emailed Joshi to discuss the work further. For now, the saga continues.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Alex Wilkins*


Getting Projections Right: Predicting Future Climate

Region by region projections of how climate is likely to change over the coming decades help to make the prospect of global warming more tangible and relevant.

Picturing the climate we are likely to have with unabated increases in greenhouse gas concentrations in, say, Melbourne, Sydney, or the Murray Darling, lets us weigh up the costs and benefits of actions to reduce greenhouse gas emissions.

Regional projections also let us plan how to adapt to any unavoidable changes in our climate. Planning changes to farming practices, water supply or natural ecosystem management, for example, requires some idea of what our future regional climate is likely to be.

Here in Australia we have had a long history of national climate change projections. Since 1990, CSIRO has released five updates of projected changes in temperature, rainfall, extreme events and many other key aspects of our climate system.

CSIRO’s last release was done with the Bureau of Meteorology in 2007. It provided the most detailed product available up to that time.

This release included the innovation (a world first amongst national projections at the time) of providing probabilities for the projected changes.

Why modelling?

The complexity of the climate system means that we cannot simply extrapolate past trends to forecast future conditions. Instead, we use climate models developed and utilised extensively over recent decades.

These are mathematical representations of the climate system based on the laws of physics.

Results from all of the climate modelling centres around the world are considered in preparing Australian projections. We place greatest weight on the models that are best at representing our historical climate.
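As a cartoon of what “placing greatest weight” on the better models can mean, consider skill-weighted averaging of an ensemble. This is a toy illustration with made-up numbers, not CSIRO’s actual weighting scheme:

import numpy as np

hist_error = np.array([0.4, 0.9, 0.6])   # each model's misfit to the observed record
projection = np.array([1.8, 3.1, 2.4])   # each model's projected change

weights = 1.0 / hist_error               # better historical skill -> larger weight
weights /= weights.sum()
print(weights @ projection)              # skill-weighted ensemble projection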

Global climate modelling has continued to develop over recent years. Most of the modelling centres are now running improved versions of their models compared to what was available in 2007.

As part of an international coordinated effort, a new database of the latest climate model output is being assembled for researchers to use ahead of the next report of the Intergovernmental Panel on Climate Change (IPCC). It is many times richer than any previously available.

Analysing this massive resource will be a focus of research for a large number of scientists in CSIRO, BoM and the universities over the next few years.

Putting the models to good use

While the science has been developing, so have the demands of users of this projection information. Policymakers at all levels of government, natural resource planners, industry, non-government organisations and individuals are all placing demands on climate projection science. These demands are growing in volume and complexity.

For example, researchers want regionally specific scenarios for changes in the frequency of hot days, extreme rainfall, fire, drought, cyclones, hail, evaporation, sunshine, coral bleaching temperatures, ocean acidification and sea level rise.

This type of information is particularly useful for risk assessments that can inform policy development and implementation.

For example, assessing future climate risks to infrastructure can place quite different demands on climate projection science compared to, say, assessing risks to agricultural enterprises.

Given these developments, the time is coming for the Australian climate research community to update and expand their projections. Planning has begun for a release in 2014. This will be just after the completion of the next IPCC assessment.

At that time, Australians will have the latest climate projections for the 21st century for a range of factors, including sea levels, seasonal-average temperatures and rainfall, as well as extreme weather events.

Resources permitting, these new projections will also include online services which will enable users to generate climate scenarios to suit the specific needs of many risk assessments.

Finding out more about summer rainfall

As climate scientists start to analyse these new model data, a major focus of attention will be simulated changes to summer rainfall over Australia.

Models have consistently indicated a drying trend for the winter rainfall regions in southern Australia, a result that also aligns with other evidence such as observed trends.

On the other hand, models give inconsistent projections for summer rainfall change, ranging from large increase to large decrease. Researchers will be hoping to reduce this key uncertainty as they begin to analyse the results.

However, when it comes to projecting our future climate, there will always be some uncertainty to deal with.

Dealing with uncertainty

Climate projection scientists have to clearly convey the uncertainties while not letting these overwhelm the robust findings about regional climate change that the science provides.

Climate projection uncertainties can be presented in many different ways, such as through ranges of plausible change, as probabilistic estimates, or as alternative scenarios.

We shouldn’t necessarily be most interested in the most likely future. In some cases, it may be more prudent to plan for less likely, but higher risk, future climates.

It can be difficult to make a complex message as relevant as possible to a wide range of decision-makers. CSIRO climate scientists are tackling this by working with social scientists to help develop new and more effective communication methods. These should be ready in time for the next projections release.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Penny Whetton*


Theorem of Everything: The Secret That Links Numbers and Shapes

For millennia mathematicians have struggled to unify arithmetic and geometry. Now one young genius could have brought them in sight of the ultimate prize.

IF JOEY was Chloe’s age when he was twice as old as Zoe was, how many times older will Zoe be when Chloe is twice as old as Joey is now?

Or try this one for size. Two farmers inherit a square field containing a crop planted in a circle. Without knowing the exact size of the field or crop, or the crop’s position within the field, how can they draw a single line to divide both the crop and field equally?

You’ve either fallen into a cold sweat or you’re sharpening your pencil (if you can’t wait for the answer, you can check the bottom of this page). Either way, although both problems count as “maths” – or “math” if you insist – they are clearly very different. One is arithmetic, which deals with the properties of whole numbers: 1, 2, 3 and so on as far as you can count. It cares about how many separate things there are, but not what they look like or how they behave. The other is geometry, a discipline built on ideas of continuity: of lines, shapes and other objects that can be measured, and the spatial relationships between them.

Mathematicians have long sought to build bridges between these two ancient subjects, and construct something like a “grand unified theory” of their discipline. Just recently, one brilliant young researcher might have brought them decisively closer. His radical new geometrical insights might not only unite mathematics, but also help solve one of the deepest number problems of them all: the riddle of the primes. With the biggest prizes in mathematics, the Fields medals, to be awarded this August, he is beginning to look like a shoo-in.

The ancient Greek philosopher and mathematician Aristotle once wrote, “We cannot… prove geometrical truths by arithmetic.” He left little doubt he believed geometry couldn’t help with numbers, either. It was hardly a controversial thought for the time. The geometrical proofs of Aristotle’s near-contemporary Euclid, often called the father of geometry, relied not on numbers, but logical axioms extended into proofs by drawing lines and shapes. Numbers existed on an entirely different, more abstract plane, inaccessible to geometers’ tools.

And so it largely remained until, in the 1600s, the Frenchman René Descartes used the techniques of algebra – of equation-solving and the manipulation of abstract symbols – to put Euclid’s geometry on a completely new footing. By introducing the notion that geometrical points, lines and shapes could all be described by numerical coordinates on an underlying grid, he allowed geometers to make use of arithmetic’s toolkit, and solve problems numerically.

This was a moonshot that let us, eventually, do things like send rockets into space or pinpoint positions to needle-sharp accuracy on Earth. But to a pure mathematician it is only a halfway house. A circle, for instance, can be perfectly encapsulated by an algebraic equation. But a circle drawn on graph paper, produced by plotting out the equation’s solutions, would only ever capture a fragment of that truth. Change the system of numbers you use, for example – as a pure mathematician might do – and the equation remains valid, while the drawing may no longer be helpful.

Wind forward to 1940 and another Frenchman was deeply exercised by the divide between geometry and numbers. André Weil was being held as a conscientious objector in a prison just outside Rouen, having refused to enlist in the months preceding the German occupation of France – a lucky break, as it turned out. In a letter to his wife, he wrote: “If it’s only in prison that I work so well, will I have to arrange to spend two or three months locked up every year?”

Weil hoped to find a Rosetta stone between algebra and geometry, a reference work that would allow truths in one field to be translated into the other. While behind bars, he found a fragment.

It had to do with the Riemann hypothesis, a notorious problem concerning how those most fascinating numbers, the primes, are distributed (see below). There had already been hints that the hypothesis might have geometrical parallels. Back in the 1930s, a variant had been proved for objects known as elliptic curves. Instead of trying to work out how prime numbers are distributed, says mathematician Ana Caraiani at Imperial College London, “you can relate it to asking how many points a curve has”.
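Point counting is concrete enough to try at home. Here is a toy sketch that counts the points of the elliptic curve y² = x³ − x over the integers mod p; the 1930s result mentioned above, due to Helmut Hasse, pins each count to within 2√p of p + 1 (the brute-force loop is purely illustrative):

def count_points(p):
    """Points on y^2 = x^3 - x over the integers mod p, plus infinity."""
    points = 1                                    # the point at infinity
    for x in range(p):
        for y in range(p):
            if (y * y - (x**3 - x)) % p == 0:
                points += 1
    return points

for p in [5, 7, 11, 13]:
    print(p, count_points(p))   # each count lies within 2*sqrt(p) of p + 1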

Weil proved that this Riemann-hypothesis equivalent applied for a range of more complicated curves too. The wall that had stood between the two disciplines since Ancient Greek times finally seemed to be crumbling. “Weil’s proof marks the beginning of the science with the most un-Aristotelian name of arithmetic geometry,” says Michael Harris of Columbia University in New York.

The Riemann Hypothesis: The million-dollar question

The prime numbers are the atoms of the number system, integers indivisible into smaller whole numbers other than one. There are an infinite number of them and there is no discernible pattern to their appearance along the number line. But their frequency can be measured – and the Riemann hypothesis, formulated by Bernhard Riemann in 1859, predicts that this frequency follows a simple rule set out by a mathematical expression now known as the Riemann zeta function.

Since then, the validity of Riemann’s hypothesis has been demonstrated for the first 10 trillion non-trivial zeros of the zeta function, but an absolute proof has yet to emerge. As a mark of the problem’s importance, it was included in the list of seven Millennium Problems set by the Clay Mathematics Institute in New Hampshire in 2000. Any mathematician who can tame it stands to win $1 million.

In the post-war years, in the more comfortable setting of the University of Chicago, Weil tried to apply his insight to the broader riddle of the primes, without success. The torch was taken up by Alexander Grothendieck, a mathematician ranked as one of the greatest of the 20th century. In the 1960s, he redefined arithmetic geometry.

Among other innovations, Grothendieck gave the set of whole numbers what he called a “spectrum”, Spec(Z) for short. The points of this undrawable geometrical entity were intimately connected to the prime numbers. If you could ever work out its overall shape, you might gain insights into the prime numbers’ distribution. You would have built a bridge between arithmetic and geometry that ran straight through the Riemann hypothesis.

The shape Grothendieck was seeking for Spec(Z) was entirely different from any geometrical form we might be familiar with: Euclid’s circles and triangles, or Descartes’s parabolas and ellipses drawn on graph paper. In a Euclidean or Cartesian plane, a point is just a dot on a flat surface, says Harris, “but a Grothendieck point is more like a way of thinking about the plane”. It encompasses all the potential uses to which a plane could be put, such as the possibility of drawing a triangle or an ellipse on its surface, or even wrapping it map-like around a sphere.

If that leaves you lost, you are in good company. Even Grothendieck didn’t manage to work out the geometry of Spec(Z), let alone solve the Riemann hypothesis. That’s where Peter Scholze enters the story.

“Even the majority of mathematicians find most of the work unintelligible”

Born in Dresden in what was then East Germany in 1987, Scholze is currently, at the age of 30, a professor at the University of Bonn. He laid the first bricks for his bridge linking arithmetic and geometry in his PhD dissertation, published in 2012 when he was 24. In it, he introduced an extension of Grothendieck-style geometry, which he termed perfectoid geometry. His construction is built on a system of numbers known as the p-adics that are intimately connected with the prime numbers (see “The p-adics: A different way of doing numbers”). The key point is that in Scholze’s perfectoid geometry, a prime number, represented by its associated p-adics, can be made to behave like a variable in an equation, allowing geometrical methods to be applied in an arithmetical setting.

It’s not easy to explain much more. Scholze’s innovation represents “one of the most difficult notions ever introduced in arithmetic geometry, which has a long tradition of difficult notions”, says Harris. Even the majority of working mathematicians find most of it unintelligible, he adds.

Be that as it may, in the past few years, Scholze and a few initiates have used the approach to solve or clarify many problems in arithmetic geometry, to great acclaim. “He’s really unique as a mathematician,” says Caraiani, who has been collaborating with him. “It’s very exciting to be a mathematician working in the same field.”

This August, the world’s mathematicians are set to gather in Rio de Janeiro, Brazil, for their latest international congress, a jamboree held every four years. A centrepiece of the event is the awarding of the Fields medals. Up to four of these awards are given each time to mathematicians under the age of 40, and this time round there is one name everyone expects to be on the list. “I suspect the only way he can escape getting a Fields medal this year is if the committee decides he’s young enough to wait another four years,” says Marcus du Sautoy at the University of Oxford.

 

Peter Scholze, 30, looks like a shoo-in for mathematics’s highest accolade this summer

With so many grand vistas opening up, the question of Spec(Z) and the Riemann hypothesis almost becomes a sideshow. But Scholze’s new methods have allowed him to study the geometry, in the sense Grothendieck pioneered, that you would see if you examined the curve Spec(Z) under a microscope around the point corresponding to a prime number p. That is still a long way from understanding the curve as a whole, or proving the Riemann hypothesis, but his work has given mathematicians hope that this distant goal might yet be reached. “Even this is a huge breakthrough,” says Caraiani.

Scholze’s perfectoid spaces have enabled bridges to be built in entirely different directions, too. A half-century ago, in 1967, the then 30-year-old Princeton mathematician Robert Langlands wrote a tentative letter to Weil outlining a grand new idea. “If you are willing to read it as pure speculation I would appreciate that,” he wrote. “If not – I am sure you have a waste basket handy.”

In his letter, Langlands suggested that two entirely distinct branches of mathematics, number theory and harmonic analysis, might be related. It contained the seeds of what became known as the Langlands program, a vastly influential series of conjectures some mathematicians have taken to calling a grand unified theory capable of linking the three core mathematical disciplines: arithmetic, geometry and analysis, a broad field that we encounter in school in the form of calculus. Hundreds of mathematicians around the world, including Scholze, are committed to its completion.

The full slate of Langlands conjectures is no more likely than the original Riemann hypothesis to be proved soon. But spectacular discoveries could lie in store: Fermat’s last theorem, which took 350 years to prove before the British mathematician Andrew Wiles finally did so in 1994, represents just one particular consequence of its conjectures. Recently, the French mathematician Laurent Fargues proposed a way to build on Scholze’s work to understand aspects of the Langlands program concerned with p-adics. It is rumoured that a partial solution could appear in time for the Rio meeting.

In March, Langlands won the other great mathematical award, the Abel prize, for his lifetime’s work. “It took a long time for the importance of Langlands’s ideas to be recognised,” says Caraiani, “and they were overdue for a major award.” Scholze seems unlikely to have to wait so long.

The p-adics: A different way of doing numbers

Key to the latest work in unifying arithmetic and geometry are p-adic numbers.

These are an alternative way of representing numbers in terms of any given prime number p. To make a p-adic number from any positive integer, for example, you write that number in base p, and reverse it. So to write 20 in 2-adic form, say, you take its binary, or base-2, representation – 10100 – and write it backwards, 00101. Similarly 20’s 3-adic equivalent is 202, and its 5-adic equivalent is 04.

The rules for manipulating p-adics are a little different, too. Most notably, numbers become closer as their difference grows more divisible by whatever p is. In the 5-adic numbers, for example, the equivalents of 11 and 36 are very close because their difference is divisible by 5, whereas the equivalents of 10 and 11 are further apart.
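A short Python sketch of both conventions just described – digits written “backwards” (least significant first), and the p-adic notion of distance, in which high divisibility by p means closeness (the code is purely illustrative):

from fractions import Fraction

def padic_digits(n, p):
    """Base-p digits of n, least significant first, i.e. 'written backwards':
    padic_digits(20, 2) -> [0, 0, 1, 0, 1], matching '00101' above."""
    digits = []
    while n > 0:
        digits.append(n % p)
        n //= p
    return digits

def valuation(n, p):
    """v_p(n): how many times p divides n (infinite for n = 0)."""
    if n == 0:
        return float("inf")
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def padic_distance(a, b, p):
    """|a - b|_p = p^(-v_p(a - b)): the more divisible by p, the closer."""
    if a == b:
        return 0
    return Fraction(1, p ** valuation(a - b, p))

print(padic_digits(20, 3))          # [2, 0, 2], i.e. '202' as in the text
print(padic_distance(11, 36, 5))    # 1/25: close, since 25 = 5 * 5
print(padic_distance(10, 11, 5))    # 1: far, since 5 does not divide 1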

For decades after their invention in the 1890s, the p-adics were just a pretty mathematical toy: fun to play with, but of no practical use. But in 1920, the German mathematician Helmut Hasse came across the concept in a pamphlet in a second-hand bookshop, and became fascinated. He realised that the p-adics provided a way of harnessing the unfactorisability of the primes – the fact they can’t be divided by other numbers – that turned into a shortcut to solving complicated proofs.

Since then, p-adics have played a pivotal part in the branch of maths called number theory. When Andrew Wiles proved Fermat’s infamous last theorem (that the equation xⁿ + yⁿ = zⁿ has no solutions when x, y and z are positive integers and n is an integer greater than 2) in the early 1990s, practically every step in the proof involved p-adic numbers.

  • Answers: Zoe will be three times as old as she is now. The farmers should draw a line across the field that connects the centre points of the field and the crop.

This article appeared in print under the headline “The shape of numbers”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Gilead Amit*