Mathematician Cracks Centuries-Old Problem About The Number 33

The number 33 has surprising depth

Add three cubed numbers, and what do you get? It is a question that has puzzled mathematicians for centuries.

In 1825, a mathematician known as S. Ryley proved that any fraction could be represented as the sum of three cubes of fractions. In the 1950s, mathematician Louis Mordell asked whether the same could be done for integers, or whole numbers. In other words, are there integers k, x, y and z such that k = x³ + y³ + z³ for each possible value of k?

We still don’t know. “It’s long been clear that there are maths problems that are easy to state, but fiendishly hard to solve,” says Andrew Booker at the University of Bristol, UK – Fermat’s last theorem is a famous example.

Booker has now made another dent in the cube problem by finding a sum for the number 33, previously the lowest unsolved example. He used a computer algorithm to search for a solution:

33 = 8,866,128,975,287,528³ + (−8,778,405,442,862,239)³ + (−2,736,111,468,807,040)³

To cut down calculation time, the program eliminated certain combinations of numbers. “For instance, if x, y and z are all positive and large, then there’s no way that x³ + y³ + z³ is going to be a small number,” says Booker. Even so, it took 15 years of computer-processing time and three weeks of real time to come up with the result.
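Thanks to Python's arbitrary-precision integers, anyone can verify Booker's identity in a fraction of a second. A quick check, using the three numbers quoted above:

```python
# The three integers from Booker's solution for k = 33
x = 8_866_128_975_287_528
y = -8_778_405_442_862_239
z = -2_736_111_468_807_040

print(x ** 3 + y ** 3 + z ** 3)  # → 33
```

Finding the numbers took years of computer time; checking them takes milliseconds.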

For some numbers, finding a solution to the equation k = x³ + y³ + z³ is simple, but others involve huge strings of digits. “It’s really easy to find solutions for 29, and we know a solution for 30, but that wasn’t found until 1999, and the numbers were in the millions,” says Booker.

Another example is the number 3, which has two simple solutions: 1³ + 1³ + 1³ and 4³ + 4³ + (−5)³. “But to this day, we still don’t know whether there are more,” he says.

There are certain numbers that we know definitely can’t be the sum of three cubes, including 4, 5, 13, 14 and infinitely many more.
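The excluded numbers are exactly those that leave a remainder of 4 or 5 when divided by 9: every cube is congruent to 0, 1 or 8 modulo 9, so no sum of three cubes can land on 4 or 5 mod 9. A few lines of Python make the obstruction visible:

```python
# Every integer cube leaves remainder 0, 1 or 8 when divided by 9
cube_residues = {(n ** 3) % 9 for n in range(9)}
print(sorted(cube_residues))  # → [0, 1, 8]

# All remainders reachable by a sum of three cubes...
sums = {(a + b + c) % 9 for a in cube_residues
                        for b in cube_residues
                        for c in cube_residues}

# ...which leaves 4 and 5 permanently out of reach
print(sorted(set(range(9)) - sums))  # → [4, 5]
```

Note that 4, 5, 13 and 14 all leave remainder 4 or 5 on division by 9, which is why they can never be written as a sum of three cubes.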

The solution to 74 was only found in 2016, which leaves 42 as the only number less than 100 without a known solution. There are still 12 unsolved numbers less than 1000.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Donna Lu*


Magic Numbers: The Beauty Of Decimal Notation

While adding up your grocery bill in the supermarket, you’re probably not thinking how important or sophisticated our number system is.

But the discovery of the present system, by unknown mathematicians in India roughly 2,000 years ago – and shared with Europe from the 13th century onwards – was pivotal to the development of our modern world.

Now, what if our “decimal” arithmetic, often called the Indo-Arabic system, had been discovered earlier? Or what if it had been shared with the Western world earlier than the 13th century?

First, let’s define “decimal” arithmetic: we’re talking about the combination of zero, the digits one through nine, positional notation, and efficient rules for arithmetic.

“Positional notation” means that the value represented by a digit depends both on the digit itself and on its position within the string of digits.

Thus 7,654 means:

(7 × 1000) + (6 × 100) + (5 × 10) + 4 = 7,654

The benefit of this positional notation system is that we need no new symbols or calculation schemes for tens, hundreds or thousands, as was needed when manipulating Roman numerals.
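The positional rule works the same way in any base, which a short Python helper can demonstrate (`value` is a hypothetical name chosen for illustration, not a built-in function):

```python
def value(digits, base=10):
    """Evaluate a string of digits via positional notation in the given base."""
    total = 0
    for d in digits:
        # shift everything accumulated so far one position left, then add the digit
        total = total * base + int(d, base)
    return total

print(value("7654"))      # → 7654, i.e. (7 × 1000) + (6 × 100) + (5 × 10) + 4
print(value("10101", 2))  # → 21 in binary, as discussed for base-two arithmetic
```

The same dozen symbols and the same loop work for tens, hundreds, thousands or any other power of the base, which is precisely the economy Roman numerals lack.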

While numerals for the counting numbers one, two and three were seen in all ancient civilisations – and some form of zero appeared in two or three of those civilisations (including India) – the crucial combination of zero and positional notation arose only in India and Central America.

Importantly, only the Indian system was suitable for efficient calculation.

Positional arithmetic can be in base-ten (or decimal) for humans, or in base-two (binary) for computers.

In binary, 10101 means:

(1 × 16) + (0 × 8) + (1 × 4) + (0 × 2) + 1

Which, in the more-familiar decimal notation, is 21.

The rules we learned in primary school for addition, subtraction, multiplication and division can be easily extended to binary.

The binary system has been implemented in electronic circuits on computers, mostly because the multiplication table for binary arithmetic is much simpler than its decimal counterpart.

Of course, computers can readily convert binary results to decimal notation for us humans.

As easy as counting from one to ten

Perhaps because we learn decimal arithmetic so early, we consider it “trivial”.

Indeed the discovery of decimal arithmetic is given disappointingly brief mention in most western histories of mathematics.

In reality, decimal arithmetic is anything but “trivial” since it eluded the best minds of the ancient world including Greek mathematical super-genius Archimedes of Syracuse.

Archimedes – who lived in the 3rd century BCE – saw far beyond the mathematics of his time, even anticipating numerous key ideas of modern calculus. He also used mathematics in engineering applications.

Nonetheless, he used a cumbersome Greek numeral system that hobbled his calculations.

Imagine trying to multiply the Roman numerals XXXI (31) and XIV (14).

First, one must rewrite the second argument as XIIII, then multiply the second by each letter of the first to obtain CXXXX CXXXX CXXXX XIIII.

These numerals can then be sorted by magnitude to arrive at CCCXXXXXXXXXXXXXIIII.

This can then be rewritten to yield CDXXXIV (434).
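To check that worked example, here is a small Python sketch (the `roman_to_int` helper is hypothetical, written for this illustration) that converts Roman numerals and confirms XXXI × XIV = CDXXXIV:

```python
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s):
    """Convert a Roman numeral, honouring subtractive pairs such as IV and CD."""
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        v = VALUES[ch]
        # a smaller letter before a larger one is subtracted, otherwise added
        total += -v if nxt in VALUES and VALUES[nxt] > v else v
    return total

print(roman_to_int("XXXI"), roman_to_int("XIV"))   # → 31 14
print(roman_to_int("XXXI") * roman_to_int("XIV"))  # → 434
print(roman_to_int("CDXXXIV"))                     # → 434
```

The conversion itself is easy for a machine; the point is that a Roman scribe had no such positional shortcut and had to shuffle letters by hand.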

(For a bit of fun, try adding MCMLXXXIV and MMXI. First person to comment with the correct answer and their method gets a jelly bean.)

Thus, while possible, calculation with Roman numerals is significantly more time-consuming and error prone than our decimal system (although it is harder to alter the amount payable on a Roman cheque).

History lesson

Although decimal arithmetic was known in the Arab world by the 9th century, it took many centuries to make its way to Europe.

Italian mathematician Leonardo Fibonacci travelled the Mediterranean world in the 13th century, learning from the best Arab mathematicians of the time. Even then, it was several more centuries until decimal arithmetic was fully established in Europe.

Johannes Kepler and Isaac Newton – both giants in the world of physics – relied heavily on extensive decimal calculations (by hand) to devise their theories of planetary motion.

In a similar way, present-day scientists rely on massive computer calculations to test hypotheses and design products. Even our mobile phones do surprisingly sophisticated calculations to process voice and video.

But let us indulge in some alternate history of mathematics. What if decimal arithmetic had been discovered in India even earlier, say 300 BCE? (There are indications it was known by this date, just not well documented.)

And what if a cultural connection along the silk-road had been made between Indian mathematicians and Greek mathematicians at the time?

Such an exchange would have greatly enhanced both worlds, resulting in advances beyond the reach of each system on its own.

For example, a fusion of Indian arithmetic and Greek geometry might well have led to full-fledged trigonometry and calculus, thus enabling ancient astronomers to deduce the laws of motion and gravitation nearly two millennia before Newton.

In fact, the combination of mathematics, efficient arithmetic and physics might have accelerated the development of modern technology by more than two millennia.

It is clear from history that without mathematics, real progress in science and technology is not possible (try building a mobile phone without mathematics). But it’s also clear that mathematics alone is not sufficient.

The prodigious computational skills of ancient Indian mathematicians never flowered into advanced technology, nor did the great mathematical achievements of the Greeks, or many developments in China.

On the other hand, the Romans, who were not known for their mathematics, still managed to develop some impressive technology.

But a combination of advanced mathematics, computation, and technology makes a huge difference.

Our bodies and our brains today are virtually indistinguishable from those of ancient times.

With the earlier adoption of Indo-Arabic decimal arithmetic, the modern technological world of today might – for better or worse – have been achieved centuries ago.

And that’s something worth thinking about next time you’re out grocery shopping.


*Credit for article given to Jonathan Borwein (Jon)*


Millennium Prize: the Birch and Swinnerton-Dyer Conjecture

Elliptic curves have a long and distinguished history that can be traced back to antiquity. They are prevalent in many branches of modern mathematics, foremost of which is number theory.

In simplest terms, one can describe these curves by using a cubic equation of the form

y² = x³ + Ax + B

where A and B are fixed rational numbers (to ensure the curve E is nice and smooth everywhere, one also needs to assume that its discriminant 4A³ + 27B² is non-zero).

To illustrate, let’s consider an example: choosing A = −1 and B = 0, we obtain the curve y² = x³ − x.

At this point it becomes clear that, despite their name, elliptic curves have nothing whatsoever to do with ellipses! The reason for this historical confusion is that these curves have a strong connection to elliptic integrals, which arise when describing the motion of planetary bodies in space.

The ancient Greek mathematician Diophantus is considered by many to be the father of algebra. His major mathematical work was written up in the tome Arithmetica which was essentially a school textbook for geniuses. Within it, he outlined many tools for studying solutions to polynomial equations with several variables, termed Diophantine Equations in his honour.

One of the main problems Diophantus considered was to find all solutions to a particular polynomial equation that lie in the field of rational numbers Q. For equations of “degree two” (circles, ellipses, parabolas, hyperbolas) we now have a complete answer to this problem. This answer is thanks to the late German mathematician Helmut Hasse, and allows one to find all such points, should they exist at all.

Returning to our elliptic curve E, the analogous problem is to find all the rational solutions (x,y) which satisfy the equation defining E. If we call this set of points E(Q), then we are asking if there exists an algorithm that allows us to obtain all points (x,y) belonging to E(Q).

At this juncture we need to introduce a group law on E, which gives an eccentric way of fusing together two points on the curve to obtain a brand new point. Draw the straight line through p₁ and p₂; it meets the curve in exactly one further point, and reflecting that point in the x-axis produces a fourth point, p₄. This mimics the addition law for numbers we learn from childhood (i.e. the sum or difference of any two numbers is still a number).

Under this geometric model, the point p₄ is defined to be the sum of p₁ and p₂ (it’s easy to see that the addition law does not depend on the order of the points p₁, p₂). Moreover the set of rational points is preserved by this notion of addition; in other words, the sum of two rational points is again a rational point.
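The chord-and-tangent rule can be carried out in exact rational arithmetic. The sketch below (using the curve y² = x³ − x from the earlier example, and ignoring the special cases of the point at infinity and vertical chords) adds two rational points and lands on a third:

```python
from fractions import Fraction

A, B = Fraction(-1), Fraction(0)  # the curve y² = x³ - x

def on_curve(p):
    x, y = p
    return y * y == x ** 3 + A * x + B

def add(p1, p2):
    """Chord-and-tangent addition for distinct, non-inverse finite points."""
    x1, y1 = p1
    x2, y2 = p2
    if p1 == p2:
        slope = (3 * x1 * x1 + A) / (2 * y1)  # tangent line at p1
    else:
        slope = (y2 - y1) / (x2 - x1)         # chord through p1 and p2
    x4 = slope * slope - x1 - x2              # third intersection, reflected
    y4 = slope * (x1 - x4) - y1
    return (x4, y4)

p1 = (Fraction(0), Fraction(0))
p2 = (Fraction(1), Fraction(0))
p4 = add(p1, p2)
print(p4)  # the point (-1, 0), another rational point on the curve
print(on_curve(p4))  # → True
```

Because the slope, and hence p₄, is computed with nothing but field operations on rational coordinates, the sum of two rational points is automatically rational, just as the text asserts.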

Louis Mordell, who was Sadleirian Professor of Pure Mathematics at Cambridge University from 1945 to 1953, was the first to determine the structure of this group of rational points. In 1922 he proved

E(Q) ≅ Z ⊕ Z ⊕ … ⊕ Z ⊕ ΤE(Q)

where the number of copies of the integers Z above is called the “rank r(E) of the elliptic curve E”. The finite group ΤE(Q) on the end is uninteresting, as it never has more than 16 elements.


*Credit for article given to Daniel Delbourgo*


Mathematicians Are Bitterly Divided Over A Controversial Proof

An attempt to settle a decade-long argument over a controversial proof by mathematician Shinichi Mochizuki has seen a war of words on both sides, with Mochizuki likening the latest effort to a “hallucination” produced by ChatGPT.

An attempt to fix problems with a controversial mathematical proof has itself become mired in controversy, in the latest twist in a saga that has been running for over a decade and has seen mathematicians trading unusually pointed barbs.

The story began in 2012, when Shinichi Mochizuki at Kyoto University, Japan, published a 500-page proof of a problem called the ABC conjecture. The conjecture concerns prime numbers involved in solutions to the equation a + b = c, and despite its seemingly simple form, it provides deep insights into the nature of numbers. Mochizuki published a series of papers claiming to have proved ABC using new mathematical tools he collectively called Inter-universal Teichmüller (IUT) theory, but many mathematicians found the initial proof baffling and incomprehensible.

While a small number of mathematicians have since accepted that Mochizuki’s papers prove the conjecture, other researchers say there are holes in his argument and it needs further work, dividing the mathematical community in two and prompting a prize of up to $1 million for a resolution to the quandary.

Now, Kirti Joshi at the University of Arizona has published a proposed proof that he says fixes the problems with IUT and proves the ABC conjecture. But Mochizuki and his supporters, as well as mathematicians who critiqued Mochizuki’s original papers, remain unconvinced, with Mochizuki declaring that Joshi’s proposal doesn’t contain “any meaningful mathematical content whatsoever”.

Central to Joshi’s work is an apparent problem, previously identified by Peter Scholze at the University of Bonn, Germany, and Jakob Stix at Goethe University Frankfurt, Germany, with a part of Mochizuki’s proof called Conjecture 3.12. The conjecture involves comparing two mathematical objects, which Scholze and Stix say Mochizuki did incorrectly. Joshi claims to have found a more satisfactory way to make the comparison.

Joshi also says that his theory goes beyond Mochizuki’s and establishes a “new and radical way of thinking about arithmetic of number fields”. The paper, which hasn’t been peer-reviewed, is the culmination of several smaller papers on ABC that Joshi has published over several years and has described as a “Rosetta Stone” for understanding Mochizuki’s impenetrable maths.

Neither Joshi nor Mochizuki responded to a request for comment on this article, and, indeed, the two seem reluctant to communicate directly with each other. In his paper, Joshi says Mochizuki hasn’t responded to his emails, calling the situation “truly unfortunate”. And yet, several days after the paper was posted online, Mochizuki published a 10-page response, saying that Joshi’s work was “mathematically meaningless” and that it reminded him of “hallucinations produced by artificial intelligence algorithms, such as ChatGPT”.

Mathematicians who support Mochizuki’s original proof express a similar sentiment. “There is nothing to talk about, since his [Joshi’s] proof is totally flawed,” says Ivan Fesenko at Westlake University in China. “He has no expertise in IUT whatsoever. No experts in IUT, and the number is in two digits, takes his preprints seriously,” he says. “It won’t pass peer review.”

And Mochizuki’s critics also disagree with Joshi. “Unfortunately, this paper and its predecessors does not introduce any powerful mathematical technology, and falls far short of giving a proof of ABC,” says Scholze, who has emailed Joshi to discuss the work further. For now, the saga continues.


*Credit for article given to Alex Wilkins*


Mathematicians Have Found a New Way to Multiply Two Numbers Together

It’s a bit more complicated than this

Forget your times tables – mathematicians have found a new, faster way to multiply two numbers together. The method, which works only for whole numbers, is a landmark result in computer science. “This is big news,” says Joshua Cooper at the University of South Carolina.

To understand the new technique, which was devised by David Harvey at the University of New South Wales, Australia, and Joris van der Hoeven at the Ecole Polytechnique near Paris, France, it helps to think back to the longhand multiplication you learned at school.

We write down two numbers, one on top of the other, and then painstakingly multiply each digit of one by each digit of the other, before adding all the results together. “This is an ancient algorithm,” says Cooper.

If your two numbers each have n digits, this way of multiplying will require roughly n² individual calculations. “The question is, can you do better?” says Cooper.

Lots of logs

Starting in the 1960s, mathematicians began to prove that they could. First Anatoly Karatsuba found an algorithm that could turn out an answer in no more than n^1.58 steps, and in 1971, Arnold Schönhage and Volker Strassen found a way to peg the number of steps to the complicated expression n × log(n) × log(log(n)) – here “log” is short for logarithm.

These advances had a major impact on computing. Whereas a computer using the longhand multiplication method would take about six months to multiply two billion-digit numbers together, says Harvey, the Schönhage-Strassen algorithm can do it in 26 seconds.
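To get a feel for how such algorithms beat the schoolbook method, here is a minimal Python sketch of Karatsuba's divide-and-conquer idea – the first method to beat n² steps. It is emphatically not the far more intricate Schönhage–Strassen or Harvey–van der Hoeven construction:

```python
def karatsuba(x, y):
    """Multiply two non-negative integers via Karatsuba's divide-and-conquer."""
    if x < 10 or y < 10:  # base case: a single-digit factor, multiply directly
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    p = 10 ** half
    a, b = divmod(x, p)   # split x = a·10^half + b
    c, d = divmod(y, p)   # split y = c·10^half + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # Karatsuba's trick: one extra product replaces two, since
    # (a + b)(c + d) - ac - bd = ad + bc
    cross = karatsuba(a + b, c + d) - ac - bd
    return ac * p * p + cross * p + bd

print(karatsuba(1234, 5678))  # → 7006652
```

Each call spawns three half-size multiplications instead of four, which is where the exponent drops from 2 to log₂3 ≈ 1.58.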

The landmark 1971 paper also suggested a possible improvement, a tantalising prediction that multiplication might one day be possible in no more than n × log(n) steps. Now Harvey and van der Hoeven appear to have proved this is the case. “It finally appears to be possible,” says Cooper. “It passes the smell test.”

“If the result is correct, it’s a major achievement in computational complexity theory,” says Fredrik Johansson at INRIA, the French research institute for digital sciences, in Bordeaux. “The new ideas in this work are likely to inspire further research and could lead to practical improvements down the road.”

Cooper also praises the originality of the research, although stresses the complexity of the mathematics involved. “You think, jeez, I’m just multiplying two integers, how complicated can it get?” says Cooper. “But boy, it gets complicated.”

So, will this make calculating your tax returns any easier? “For human beings working with pencil and paper, absolutely not,” says Harvey. Indeed, their version of the proof only works for numbers with more than 10 to the power of 200 trillion trillion trillion digits. “The word ‘astronomical’ falls comically short in trying to describe this number,” says Harvey.

While future improvements to the algorithm may extend the proof to more humdrum numbers only a few trillion digits long, Cooper thinks its real value lies elsewhere. From a theoretical perspective, he says, this work allows programmers to provide a definitive guarantee of how long a certain algorithm will take. “We are optimistic that our new paper will allow us to achieve further practical speed-ups,” says van der Hoeven.

Harvey thinks this may well be the end of the story, with no future algorithm capable of beating n × log(n). “I would be extremely surprised if this turned out to be wrong,” he says, “but stranger things have happened.”


*Credit for article given to Gilead Amit*


Mathematicians Discover Impossible Problem In Super Mario Games

Using the tools of computational complexity, researchers have discovered it is impossible to figure out whether certain Super Mario Bros levels can be beaten without playing them, even if you use the world’s most powerful supercomputer.

Figuring out whether certain levels in the Super Mario Bros series of video games can be completed before you play them is mathematically impossible, even if you had several years and the world’s most powerful supercomputer to hand, researchers have found.

“We don’t know how to prove that a game is fun, we don’t know what that means mathematically, but we can prove that it’s hard and that maybe gives some insight into why it’s fun,” says Erik Demaine at the Massachusetts Institute of Technology. “I like to think of hard as a proxy for fun.”

To prove this, Demaine and his colleagues use tools from the field of computational complexity – the study of how difficult and time-consuming various problems are to solve algorithmically. They have previously proven that figuring out whether it is possible to complete certain levels in Mario games is a task that belongs to a group of problems known as NP-hard, for which the time needed to find a solution is believed to grow exponentially with the size of the problem. This category is extremely difficult to compute for all but the smallest problems.

Now, Demaine and his team have gone one step further by showing that, for certain levels in Super Mario games, answering this question is not only hard, but impossible. This is the case for several titles in the series, including New Super Mario Bros and Super Mario Maker. “You can’t get any harder than this,” he says. “Can you get to the finish? There is no algorithm that can answer that question in a finite amount of time.”

While it may seem counterintuitive, problems in this undecidable category, known as RE-complete, simply cannot be solved by a computer, no matter how powerful, no matter how long you let it work.

Demaine concedes that a small amount of trickery was needed to make Mario levels fit this category. Firstly, the research looks at custom-made levels that allowed the team to place hundreds or thousands of enemies on a single spot. To do this they had to remove the limits placed by the game publishers on the number of enemies that can be present in a level.

They were then able to use the placement of enemies within the level to create an abstract mathematical tool called a counter machine, essentially creating a functional computer within the game.

That trick allowed the team to invoke another conundrum known as the halting problem, which says that, in general, there is no way to determine if a given computer program will ever terminate, or simply run forever, other than running it and seeing what happens.

These layers of mathematical concepts finally allowed the team to prove that no analysis of the game level can say for sure whether or not it can ever be completed. “The idea is that you’ll be able to solve this Mario level only if this particular computation will terminate, and we know that there’s no way to determine that, and so there’s no way to determine whether you can solve the level,” says Demaine.


*Credit for article given to Matthew Sparkes*


Why Maths, Our Best Tool To Describe The Universe, May Be Fallible

Our laws of nature are written in the language of mathematics. But maths itself is only as dependable as the axioms it is built on, and we have to assume those axioms are true.

You might think that mathematics is the most trustworthy thing humans have ever come up with. It is the basis of scientific rigour and the bedrock of much of our other knowledge too. And you might be right. But be careful: maths isn’t all it seems. “The trustworthiness of mathematics is limited,” says Penelope Maddy, a philosopher of mathematics at the University of California, Irvine.

Maddy is no conspiracy theorist. All mathematicians know her statement to be true because their subject is built on “axioms” – and try as they might, they can never prove these axioms to be true.

An axiom is essentially an assumption based on observations of how things are. Scientists observe a phenomenon, formalise it and write down a law of nature. In a similar way, mathematicians use their observations to create an axiom. One example is the observation that there always seems to be a unique straight line that can be drawn between two points. Assume this to be universally true and you can build up the rules of Euclidean geometry. Another is that 1 + 2 is the same as 2 + 1, an assumption that allows us to do arithmetic. “The fact that maths is built on unprovable axioms is not that surprising,” says mathematician Vera Fischer at the University of Vienna in Austria.

These axioms might seem self-evident, but maths goes a lot further than arithmetic. Mathematicians aim to uncover things like the properties of numbers, the ways in which they are all related to one another and how they can be used to model the real world. These more complex tasks are still worked out through theorems and proofs built on axioms, but the relevant axioms might have to change. Lines between points have different properties on curved surfaces than flat ones, for example, which means the underlying axioms have to be different in different geometries. We always have to be careful that our axioms are reliable and reflect the world we are trying to model with our maths.

Set theory

The gold standard for mathematical reliability is set theory, which describes the properties of collections of things, including numbers themselves. Beginning in the early 1900s, mathematicians developed a set of underpinning axioms for set theory known as ZFC (for “Zermelo-Fraenkel”, from two of its initiators, Ernst Zermelo and Abraham Fraenkel, plus something called the “axiom of choice”).

ZFC is a powerful foundation. “If it could be guaranteed that ZFC is consistent, all uncertainty about mathematics could be dispelled,” says Maddy. But, brutally, that is impossible. “Alas, it soon became clear that the consistency of those axioms could be proved only by assuming even stronger axioms,” she says, “which obviously defeats the purpose.”

Maddy is untroubled by the limits: “Set theorists have been proving theorems from ZFC for 100 years with no hint of a contradiction.” It has been hugely productive, she says, allowing mathematicians to create no end of interesting results, and they have even been able to develop mathematically precise measures of just how much trust we can put in theories derived from ZFC.

In the end, then, mathematicians might be providing the bedrock on which much scientific knowledge is built, but they can’t offer cast-iron guarantees that it won’t ever shift or change. In general, they don’t worry about it: they shrug their shoulders and turn up to work like everybody else. “The aim of obtaining a perfect axiomatic system is exactly as feasible as the aim of obtaining a perfect understanding of our physical universe,” says Fischer.

At least mathematicians are fully aware of the futility of seeking perfection, thanks to the “incompleteness” theorems laid out by Kurt Gödel in the 1930s. These show that, in any domain of mathematics, a useful theory will generate statements about this domain that can’t be proved true or false. A limit to reliable knowledge is therefore inescapable. “This is a fact of life mathematicians have learned to live with,” says David Aspero at the University of East Anglia, UK.

All in all, maths is in pretty good shape despite this – and nobody is too bothered. “Go to any mathematics department and talk to anyone who’s not a logician, and they’ll say, ‘Oh, the axioms are just there’. That’s it. And that’s how it should be. It’s a very healthy approach,” says Fischer. In fact, the limits are in some ways what makes it fun, she says. “The possibility of development, of getting better, is exactly what makes mathematics an absolutely fascinating subject.”

HOW BIG IS INFINITY?

Infinity is infinitely big, right? Sadly, it isn’t that simple. We have long known that there are different sizes of infinity. In the 19th century, mathematician Georg Cantor showed that there are at least two. The “natural numbers” (1, 2, 3 and so on forever) form a countable infinity. But between each pair of natural numbers, there is a continuum of “real numbers” (such as 1.234567… with digits that go on forever), and these turn out not to be countable. Cantor concluded that the countable and continuum infinities are two infinities of genuinely different sizes.
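Cantor's proof that the real numbers are uncountable rests on diagonalisation: given any purported list of infinite digit sequences, build a new sequence that differs from the i-th entry in its i-th digit, so the new sequence cannot appear anywhere in the list. A finite Python sketch of the construction:

```python
def diagonal(rows):
    """Given a square list of 0/1 sequences, build one differing from every row."""
    # flip the i-th digit of the i-th row, so the result disagrees with row i there
    return [1 - rows[i][i] for i in range(len(rows))]

rows = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonal(rows)
print(d)  # → [1, 0, 1, 0], which differs from row i in position i
```

Here the list is finite for illustration, but the same flip works against any infinite enumeration, which is why no list can ever exhaust the reals.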

In the everyday world, we never encounter anything infinite. We have to content ourselves with saying that the infinite “goes on forever” without truly grasping conceptually what that means. This matters, of course, because infinities crop up all the time in physics equations, most notably in those that describe the big bang and black holes. You might have expected mathematicians to have a better grasp of this concept, then – but it remains tricky.

This is especially true when you consider that Cantor suggested there might be another size of infinity nestled between the two he identified, an idea known as the continuum hypothesis. Traditionally, mathematicians thought that it would be impossible to decide whether this was true, but work on the foundations of mathematics has recently shown that there may be hope of finding out either way after all.


*Credit for article given to Michael Brooks*


Mathematicians Discover ‘Soft Cell’ Shapes Behind The Natural World

The mathematical study of how repeating tiles fit together usually involves pointed shapes like triangles or squares, but these aren’t normally found in the natural world.

The chambers of a nautilus shell are an example of a soft cell in nature

A new class of mathematical shapes called soft cells can be used to describe how a remarkable variety of patterns in living organisms – such as muscle cells and nautilus shells – form and grow.

Mathematicians have long studied how tiles fit together and cover surfaces, but they have largely focused on simple shapes that fit together without gaps, such as squares and triangles, because these are easier to work with.

It is rare, however, for nature to use perfectly straight lines and sharp points. Some natural objects are similar enough to straight-edged tiles, known as polyhedrons, that they can be described by polyhedral models, such as a collection of bubbles in a foam or the cracked surface of Mars. But there are some curved shapes, such as three-dimensional polygons found in the epithelial cells that tile the lining of blood vessels and organs, that are harder to describe.

Now, Gábor Domokos at the Budapest University of Technology, Hungary, and his colleagues have discovered a class of shapes that describe tilings with curved edges, which they call soft cells. The key to these shapes is that they contain as few sharp corners as possible, while also fitting together as snugly as they can.

“These shapes emerge in art, but also in biology,” says Domokos. “If you look at sections of muscle tissue, you’ll see the cells having just two sharp corners, which is one less than the triangle – it is a very special kind of tiling.”

In two dimensions, soft cells have just two sharp points connected by curved edges and can take on an infinite number of different forms. But in three dimensions, these shapes have no sharp points, or corners, at all. It isn’t obvious how many of these 3D soft cells, which Domokos and his team call z-cells, there might be or how to easily make them, he says.

After defining soft cells mathematically, Domokos and his team looked for examples in nature and discovered they were widespread. “We found that architects have found these kinds of shapes intuitively when they wanted to avoid corners,” says Domokos. They also found z-cells were common in biological processes that grow from the tip of an object.

One of the clearest examples of z-cells was in seashells made from multiple chambers, such as the nautilus shell, which is an object of fascination for mathematicians because its structure follows a logarithmic pattern.

Domokos and his team noticed that the two-dimensional slices of each of the shell’s chambers looked like a soft cell, so they examined nautilus shells with a CT scanner to measure the chambers in three dimensions. “We saw no corners,” says Domokos, which suggested that the chambers were like the z-cells they had described mathematically.

“They’ve come up with a language for describing cellular materials that might be more physically realistic than the strict polyhedral model that mathematicians have been playing with for millennia,” says Chaim Goodman-Strauss at the University of Arkansas. These models could improve our understanding of how the geometry of biological systems, like in soft tissues, affects their material properties, says Goodman-Strauss. “The way that geometry influences the mechanical properties of tissue is really very poorly understood.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Alex Wilkins*


How Maths Reveals The Best Time to Add Milk For Hotter Tea

If you want your cup of tea to stay as hot as possible, should you put milk in immediately, or wait until you are ready to drink it? Katie Steckles does the sums.

Picture the scene: you are making a cup of tea for a friend who is on their way and won’t be arriving for a little while. But – disaster – you have already poured hot water onto a teabag! The question is, if you don’t want their tea to be too cold when they come to drink it, do you add the cold milk straight away or wait until your friend arrives?

Luckily, maths has the answer. When a hot object like a cup of tea is exposed to cooler air, it will cool down by losing heat. This is the kind of situation we can describe using a mathematical model – in this case, one that represents cooling. The rate at which heat is lost depends on many factors, but since most have only a small effect, for simplicity we can base our model on the difference in temperature between the cup of tea and the cool air around it.

A bigger difference between these temperatures results in a much faster rate of cooling. So, as the tea and the surrounding air approach the same temperature, the heat transfer between them, and therefore cooling of the tea, slows down. This means that the crucial factor in this situation is the starting condition. In other words, the initial temperature of the tea relative to the temperature of the room will determine exactly how the cooling plays out.
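This cooling model is Newton's law of cooling: the gap between the tea's temperature and the room's decays exponentially over time. As a minimal sketch, the cooling constant and temperatures below are illustrative assumptions, not measured values:

```python
import math

def tea_temperature(minutes, t_start, t_room=20.0, k=0.054):
    """Newton's law of cooling: the gap between the tea and the room
    shrinks exponentially with time. k is an assumed cooling constant,
    chosen here so a plain cup drops from 80C to about 55C in ten minutes."""
    return t_room + (t_start - t_room) * math.exp(-k * minutes)

# An 80C cup in a 20C room loses heat quickly at first,
# then ever more slowly as it approaches room temperature:
for t in (0, 5, 10, 20):
    print(f"after {t:2d} min: {tea_temperature(t, 80.0):.1f} C")
```

The steep-then-shallow shape of this curve is exactly why the starting temperature matters so much: a hotter cup sheds heat faster.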

When you put cold milk into the hot tea, it will also cause a drop in temperature. Your instinct might be to hold off putting milk into the tea, because that will cool it down and you want it to stay as hot as possible until your friend comes to drink it. But does this fit with the model?

Let’s say your tea starts off at around 80°C (176°F): if you put milk in straight away, the temperature will drop to around 60°C (140°F), closer to that of the surrounding air. This means the milky tea will cool much more slowly than a cup of non-milky tea, which continues to lose heat at a faster rate. In either case, a graph of temperature against time shows exponential decay, but adding milk at different times changes the steepness of the curve.

Once your friend arrives, if you didn’t put milk in initially, their tea may well have cooled to about 55°C (131°F) – and now adding milk will cause another temperature drop, to around 45°C (113°F). By contrast, the tea that had milk put in straight away will have cooled much more slowly and will generally be hotter than if the milk had been added at a later stage.
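Putting the two effects together, a small simulation confirms the model's verdict. The milk fraction, milk temperature, room temperature and cooling constant below are all assumed values for illustration, not figures from the article:

```python
import math

T_ROOM, T_TEA, T_MILK = 20.0, 80.0, 5.0  # degrees C (assumed values)
MILK_FRACTION = 0.25                     # milk's share of the final drink (assumed)
K = 0.054                                # cooling constant per minute (assumed)

def cool(temp, minutes):
    """Newton's law of cooling applied over a given number of minutes."""
    return T_ROOM + (temp - T_ROOM) * math.exp(-K * minutes)

def add_milk(temp):
    """Mixing tea and milk: a weighted average of their temperatures."""
    return (1 - MILK_FRACTION) * temp + MILK_FRACTION * T_MILK

wait = 10  # minutes until your friend arrives

milk_first = cool(add_milk(T_TEA), wait)  # milk in straight away, then cool
milk_last = add_milk(cool(T_TEA, wait))   # cool first, milk on arrival

print(f"milk first: {milk_first:.1f} C")
print(f"milk last:  {milk_last:.1f} C")
```

Because the early-milk cup sits closer to room temperature throughout the wait, it sheds heat more slowly and ends up a degree or two warmer than the cup that gets its milk on arrival.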

Mathematicians use their knowledge of the rate at which objects cool to study the heat from stars, planets and even the human body, and there are further applications of this in chemistry, geology and architecture. But the same mathematical principles apply to them as to a cup of tea cooling on your table. Listening to the model will mean your friend’s tea stays as hot as possible.

*Credit for article given to Katie Steckles*


Incredible Maths Proof Is So Complex That Almost No One Can Explain It

Mathematicians are celebrating a 1000-page proof of the geometric Langlands conjecture, a problem so complicated that even other mathematicians struggle to understand it. Despite that, it is hoped the proof can provide key insights across maths and physics.

The Langlands programme aims to link different areas of mathematics

Mathematicians have proved a key building block of the Langlands programme, sometimes referred to as a “grand unified theory” of maths due to the deep links it proposes between seemingly distant disciplines within the field.

While the proof is the culmination of decades of work by dozens of mathematicians and is being hailed as a dazzling achievement, it is also so obscure and complex that it is “impossible to explain the significance of the result to non-mathematicians”, says Vladimir Drinfeld at the University of Chicago. “To tell the truth, explaining this to mathematicians is also very hard, almost impossible.”

The programme has its origins in a 1967 letter from Robert Langlands to fellow mathematician André Weil that proposed the radical idea that two apparently distinct areas of mathematics, number theory and harmonic analysis, were in fact deeply linked. But Langlands couldn’t actually prove this, and was unsure whether he was right. “If you are willing to read it as pure speculation I would appreciate that,” wrote Langlands. “If not — I am sure you have a waste basket handy.”

This mysterious link promised answers to problems that mathematicians were struggling with, says Edward Frenkel at the University of California, Berkeley. “Langlands had an insight that difficult questions in number theory could be formulated as more tractable questions in harmonic analysis,” he says.

In other words, translating a problem from one area of maths to another, via Langlands’s proposed connections, could provide real breakthroughs. Such translation has a long history in maths – for example, Pythagoras’s theorem relating the three sides of a triangle can be proved using geometry, by looking at shapes, or with algebra, by manipulating equations.

As such, proving Langlands’s proposed connections has become the goal for multiple generations of researchers and led to countless discoveries, including the mathematical toolkit used by Andrew Wiles to prove the infamous Fermat’s last theorem. It has also inspired mathematicians to look elsewhere for analogous links that might help. “A lot of people would love to understand the original formulation of the Langlands programme, but it’s hard and we still don’t know how to do it,” says Frenkel.

One analogy that has yielded progress is reformulating Langlands’s idea into one written in the mathematics of geometry, called the geometric Langlands conjecture. However, even this reformulation has baffled mathematicians for decades and was itself considered fiendishly difficult to prove.

Now, Sam Raskin at Yale University and his colleagues claim to have proved the conjecture in a series of five papers that total more than 1000 pages. “It’s really a tremendous amount of work,” says Frenkel.

The conjecture concerns objects that are similar to those in one half of the original Langlands programme, harmonic analysis, which describes how complex structures can be mathematically broken down into their component parts, like picking individual instruments out of an orchestra. But instead of looking at these with harmonic analysis, it uses other mathematical ideas, such as sheaves and moduli stacks, that describe concepts relating to shapes like spheres and doughnuts.

While it wasn’t in the setting that Langlands originally envisioned, it is a sign that his original hunch was correct, says Raskin. “Something I find exciting about the work is it’s a kind of validation of the Langlands programme more broadly.”

“It’s the first time we have a really complete understanding of one corner of the Langlands programme, and that’s inspiring,” says David Ben-Zvi at the University of Texas, who wasn’t involved in the work. “That kind of gives you confidence that we understand what its main issues are. There are a lot of subtleties and bells and whistles and complications that appear, and this is the first place where they’ve all been kind of systematically resolved.”

Proving this conjecture will give confidence to other mathematicians hoping to make inroads on the original Langlands programme, says Ben-Zvi, but it might also attract the attention of theoretical physicists, he says. This is because in 2007, physicists Edward Witten and Anton Kapustin found that the geometric Langlands conjecture appeared to describe S-duality, a symmetry between certain physical forces or theories.

The most basic real-world example of this is electricity and magnetism, which are mirror images of one another and interchangeable in many scenarios. Witten also famously used S-duality to unite five competing string theory models into a single theory called M-theory.

But before anything like that, there is much more work to be done, including helping other mathematicians to actually understand the proof. “Currently, there’s a very small group of people who can really understand all the details here. But that changes the game, that changes the whole expectation and changes what you think is possible,” says Ben-Zvi.

*Credit for article given to Alex Wilkins*