Tiny Balls Fit Best Inside A Sausage, Physicists Confirm

Mathematicians have long been fascinated by the most efficient way of packing spheres in a space, and now physicists have confirmed that the best arrangement is a sausage shape, at least for small numbers of balls.

Simulations show microscopic plastic balls within a cell membrane

What is the most space-efficient way to pack tennis balls or oranges? Mathematicians have studied this “sphere-packing” problem for centuries, but surprisingly little attention has been paid to replicating the results in the real world. Now, physical experiments involving microscopic plastic balls have confirmed what mathematicians had long suspected – with a small number of balls, it is best to stick them in a sausage.

Kepler was the first person to tackle sphere packing, suggesting in 1611 that a pyramid would be the best way to pack cannonballs for long voyages, but this answer was only fully proven by mathematicians in 2014.

This proof only considers the best way of arranging an infinite number of spheres, however. For finite sphere packings, simply placing the balls in a line, or sausage, is more efficient until there are around 56 spheres. At this point, the balls experience what mathematicians call the “sausage catastrophe” and something closer to pyramid packing becomes more efficient.
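
To get a feel for why the sausage wins at first, here is a minimal back-of-the-envelope sketch (my own illustration, not taken from the research): it compares the volume of n unit spheres lined up in a row with the volume of their convex hull, a cylinder capped by two hemispheres.

```python
import math

def sausage_density(n):
    """Packing density of n unit spheres in a straight line ("sausage"):
    total sphere volume divided by the volume of the convex hull, which is
    a cylinder of length 2*(n - 1) capped by two hemispheres."""
    spheres = n * (4 / 3) * math.pi
    hull = math.pi * 2 * (n - 1) + (4 / 3) * math.pi
    return spheres / hull

for n in (2, 9, 56, 1000):
    print(n, round(sausage_density(n), 4))

# As n grows, the sausage density approaches 2/3 (about 0.667), which is below
# the ~0.7405 of infinite pyramid (FCC) packing -- consistent with clusters
# eventually overtaking the sausage, the "sausage catastrophe".
```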

But what about back in the real world? Sphere-packing theories assume that the balls are perfectly hard and don’t attract or repel each other, but this is rarely true in real life – think of the squish of a tennis ball or an orange.

One exception is microscopic polystyrene balls, which are very hard and basically inert. Hanumantha Rao Vutukuri at the University of Twente in the Netherlands and his team, who were unaware of mathematical sphere-packing theories, were experimenting with inserting these balls into empty cell membranes and were surprised to find them forming sausages.

“One of my students observed a linear packing, but it was quite puzzling,” says Vutukuri. “We thought that there was some fluke, so he repeated it a couple of times and every time he observed similar results. I was wondering, ‘why is this happening?’ It’s a bit counterintuitive.”

After reading up on sphere packing, Vutukuri and his team decided to investigate and carried out simulations for different numbers of polystyrene balls in a bag. They then compared their predictions with experiments using up to nine real polystyrene balls that had been squeezed into cell membranes immersed in a liquid solution. They could then shrink-wrap the balls by changing the concentration of the solution, causing the membranes to tighten, and see what formation the balls settled in using a microscope.

“For up to nine spheres, we showed, both experimentally and in simulations, that the sausage is the best packed,” says team member Marjolein Dijkstra at Utrecht University, the Netherlands. With more than nine balls, the membrane became deformed by the pressure of the balls. The team ran simulations for up to 150 balls and reproduced the sausage catastrophe, in which packing the balls as a polyhedral cluster suddenly becomes more efficient, at between 56 and 70 balls.

The sausage formation for a small number of balls is unintuitive, says Erich Müller at Imperial College London, but makes sense because of the large surface area of the membrane with respect to the balls at low numbers. “When dimensions become really, really small, then the wall effects become very important,” he says.

The findings could have applications in drug delivery, such as how to most efficiently fit hard antibiotic molecules, like gold, inside cell-like membranes, but the work doesn’t obviously translate at this point, says Müller.

 

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Alex Wilkins*


Make Mine a Double: Moore’s Law And The Future of Mathematics

Our present achievements will look like child’s play in a few years.

What do iPhones, Twitter, Netflix, cleaner cities, safer cars, state-of-the-art environmental management and modern medical diagnostics have in common? They are all made possible by Moore’s Law.

Moore’s Law stems from a seminal 1965 article by Intel founder Gordon Moore. He wrote:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year … Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least ten years. That means, by 1975, the number of components per integrated circuit for minimum cost will be 65,000.”

Moore noted that in 1965 engineering advances were enabling a doubling in semiconductor density every 12 months, but this rate was later modified to roughly 18 months. Informally, we may think of this as doubling computer performance.
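
As a quick illustrative sketch (my own, not from Moore's article), a fixed doubling period turns directly into a growth factor:

```python
def moores_law_factor(years, doubling_period_years=1.5):
    """Growth factor implied by a fixed doubling period (18 months by default)."""
    return 2 ** (years / doubling_period_years)

print(round(moores_law_factor(15)))  # 2**10 = 1024, roughly a thousand-fold per 15 years
print(round(moores_law_factor(45)))  # 2**30, about a billion-fold over 45 years
```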

In any event, Moore’s Law has now continued unabated for 45 years, defying several confident predictions it would soon come to a halt, and represents a sustained exponential rate of progress that is without peer in the history of human technology. Here is a graph of Moore’s Law, shown with the transistor count of various computer processors:

Where we’re at with Moore’s Law

At the present time, researchers are struggling to keep Moore’s Law on track. Processor clock rates have stalled, as chip designers have struggled to control energy costs and heat dissipation, but the industry’s response has been straightforward — simply increase the number of processor “cores” on a single chip, together with associated cache memory, so that aggregate performance continues to track or exceed Moore’s Law projections.

The capacity of leading-edge DRAM main memory chips continues to advance apace with Moore’s Law. The current state of the art in computer memory devices is a 3D design, which will be jointly produced by IBM and Micron Technology, according to a December 2011 announcement by IBM representatives.

As things stand, the best bet for the future of Moore’s Law is nanotubes — submicroscopic tubes of carbon atoms that have remarkable properties.

According to a recent New York Times article, Stanford researchers have created prototype electronic devices by first growing billions of carbon nanotubes on a quartz surface, then coating them with an extremely fine layer of gold atoms. They then used a piece of tape (literally!) to pick the gold atoms up and transfer them to a silicon wafer. The researchers believe that commercial devices could be made with these components as early as 2017.

Moore’s Law in science and maths

So what does this mean for researchers in science and mathematics?

Plenty, as it turns out. A scientific laboratory typically uses hundreds of high-precision devices that rely crucially on electronic designs, and with each step of Moore’s Law, these devices become ever cheaper and more powerful. One prominent case is DNA sequencers. When scientists first completed sequencing a human genome in 2001, at a cost of several hundred million US dollars, observers were jubilant at the advances in equipment that had made this possible.

Now, only ten years later, researchers expect to reduce this cost to only US$1,000 within two years and genome sequencing may well become a standard part of medical practice. This astounding improvement is even faster than Moore’s Law!

Applied mathematicians have benefited from Moore’s Law in the form of scientific supercomputers, which typically employ hundreds of thousands of state-of-the-art components. These systems are used for tasks such as climate modelling, product design and biological structure calculations.

Today, the world’s most powerful system is a Japanese supercomputer that recently ran the industry-standard Linpack benchmark test at more than ten “petaflops,” or, in other words, 10 quadrillion floating-point operations per second.

Below is a graph of the Linpack performance of the world’s leading-edge systems over the time period 1993-2011, courtesy of the website Top 500. Note that over this 18-year period, the performance of the world’s number one system has advanced more than five orders of magnitude. The current number one system is more powerful than the sum of the world’s top 500 supercomputers just four years ago.

 

Linpack performance over time.

Pure mathematicians have been relative latecomers to the world of high-performance computing. The present authors well remember the era, just a decade or two ago, when the prevailing opinion in the community was that “real mathematicians don’t compute.”

But thanks to a new generation of mathematical software tools, not to mention the ingenuity of thousands of young, computer-savvy mathematicians worldwide, remarkable progress has been achieved in this arena as well (see our 2011 AMS Notices article on exploratory experimentation in mathematics).

In 1963 Daniel Shanks, who had calculated pi to 100,000 digits, declared that computing one billion digits would be “forever impossible.” Yet this level was reached in 1989. That same year, famous British physicist Roger Penrose, in the first edition of his best-selling book The Emperor’s New Mind, declared that humankind would likely never know whether a string of ten consecutive sevens occurs in the decimal expansion of pi. Yet this was found just eight years later, in 1997.

Computers are certainly being used for more than just computing and analysing digits of pi. In 2003, the American mathematician Thomas Hales completed a computer-based proof of Kepler’s conjecture, namely the long-hypothesised fact that the simple way the grocer stacks oranges is in fact the optimal packing for equal-diameter spheres. Many other examples could be cited.

Future prospects

So what does the future hold? Assuming that Moore’s Law continues unabated at approximately the same rate as the present, and that obstacles in areas such as power management and system software can be overcome, we will see, by the year 2021, large-scale supercomputers that are 1,000 times more powerful and capacious than today’s state-of-the-art systems — “exaflops” computers (see NAS Report). Applied mathematicians eagerly await these systems for calculations, such as advanced climate models, that cannot be done on today’s systems.

Pure mathematicians will use these systems as well to intuit patterns, compute integrals, search the space of mathematical identities, and solve intricate symbolic equations. If, as one of us discussed in a recent Conversation article, such facilities can be combined with machine intelligence, such as a variation of the hardware and software that enabled an IBM system to defeat the top human contestants in the North American TV game show Jeopardy!, we may see a qualitative advance in mathematical discovery and even theory formation.

It is not a big leap to imagine that within the next ten years tailored and massively more powerful versions of Siri (Apple’s new iPhone assistant) will be an integral part of mathematics, not to mention medicine, law and just about every other part of human life.

Some observers, such as those in the Singularity movement, are even more expansive, predicting a time just a few decades hence when technology will advance so fast that we cannot, at present, conceive or predict the outcome.

Your present authors do not subscribe to such optimistic projections, but even if more conservative predictions are realised, it is clear that the digital future looks very bright indeed. We will likely look back at the present day with the same technological disdain with which we currently view the 1960s.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jonathan Borwein (Jon) and David H. Bailey*

 


Commutative Diagrams Explained

Have you ever come across the words “commutative diagram” before? Perhaps you’ve read or heard someone utter a sentence that went something like

“For every [bla bla] there exists a [yadda yadda] such that the following diagram commutes.”

and perhaps it left you wondering what it all meant. I was certainly mystified when I first came across a commutative diagram. And it didn’t take long to realize that the phrase “the following diagram commutes” was (and is) quite ubiquitous. It appears in theorems, propositions, lemmas and corollaries almost everywhere!

So what’s the big deal with diagrams? And what does commute mean anyway?? It turns out the answer is quite simple.

Do you know what a composition of two functions is?

Then you know what a commutative diagram is!

A commutative diagram is simply the picture behind function composition.

Truly, it is that simple. To see this, suppose A and B are sets and f is a function from A to B. Since f maps (i.e. assigns) elements in A to elements in B, it is often helpful to denote that process by an arrow.

And there you go. That’s an example of a diagram. But suppose we have another function g from sets B to C, and suppose f and g are composable. Let’s denote their composition by h = g∘f. Then both g and h can be depicted as arrows, too.

But what is the arrow A→C really? I mean, really? Really it’s just the arrows f and g lined up side-by-side.

But maybe we think that drawing h’s arrow curved upwards like that takes up too much space, so let’s bend the diagram a bit and redraw it like this:

This little triangle is the paradigm example of a commutative diagram. It’s a diagram because it’s a schematic picture of arrows that represent functions. And it commutes because the diagonal function IS EQUAL TO the composition of the vertical and horizontal functions, i.e. h(a) = g(f(a)) for every a ∈ A. So a diagram “commutes” if all paths that share a common starting and ending point are the same. In other words, your diagram commutes if it doesn’t matter how you commute from one location to another in the diagram.
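
For the programmatically inclined, here is a toy sketch of my own (not from the original post) that checks the commutativity condition h(a) = g(f(a)) pointwise for functions on a small finite set:

```python
def commutes(A, f, g, h):
    """Check that the triangle A --f--> B --g--> C with diagonal h: A -> C
    commutes, i.e. h(a) == g(f(a)) for every a in A."""
    return all(h(a) == g(f(a)) for a in A)

A = range(5)
f = lambda a: a + 1          # f: A -> B
g = lambda b: 2 * b          # g: B -> C
h = lambda a: 2 * (a + 1)    # h = g∘f, so the triangle commutes
print(commutes(A, f, g, h))      # True

bad_h = lambda a: a              # not equal to g∘f
print(commutes(A, f, g, bad_h))  # False: this diagram does not commute
```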

But be careful.

Not every diagram is a commutative diagram.

The picture on the right is a bona fide diagram of real-valued functions, but it is definitely not commutative. If we trace the number 1 around the diagram, it maps to 0 along the diagonal arrow, but it maps to 1 itself if we take the horizontal-then-vertical route. And 0 ≠ 1. So to indicate if/when a given diagram is commutative, we have to say it explicitly. Or sometimes folks will use the symbols shown below to indicate commutativity:

I think now is a good time to decode another phrase that often accompanies the commutative-diagram parlance. Returning to our f, g, h example, we assumed that f, g and h = g∘f already existed. But suppose we only knew about the existence of f: A→B and some other map, say, z: A→C. Then we might like to know, “Does there exist a map g: B→C such that z = g∘f?” Perhaps the answer is no. Or perhaps the answer is yes, but only under certain hypotheses. Well, if such a g does exist, then we’ll say “…there exists a map g such that the following diagram commutes:

but folks might also say

“…there exists a map g such that z factors through g”

The word “factors” means just what you think it means. The diagram commutes if and only if z = g∘f, and that notation suggests that we may think of g as a factor of z, analogous to how 2 is a factor of 6 since 6 = 2⋅3.

By the way, we’ve only chatted about sets and functions so far, but diagrams make sense in any context in which you have mathematical objects and arrows. So we can talk about diagrams of groups and group homomorphisms, or vector spaces and linear transformations, or topological spaces and continuous maps, or smooth manifolds and smooth maps, or functors and natural transformations and so on. Diagrams make sense in any category. And as you can imagine, there are more complicated diagrams than triangular ones. For instance, suppose we have two more maps i: A→D and j: D→C such that h is equal to not only g∘f but also j∘i. Then we can express the equality g∘f = h = j∘i by a square:

Again, commutativity simply tells us that the three ways of getting from A to C are all equivalent. And diagrams can get really crazy and involve other shapes too. They can even be three-dimensional! Here are some possibilities where I’ve used bullets in lieu of letters for the source and target of the arrows.

No matter the shape, the idea is the same: Any map can be thought of as a path or a process from A to B, from start to finish. And we use diagrams to capitalize on that by literally writing down “A” and “B” (or “∙” and “∙”) and by literally drawing a path—in the form of an arrow—between them.

 

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*

 


Peer Review: The Fallacy of Fine-Tuning

We are a product of evolution, and are not surprised that our bodies seem to be well-suited to the environment.

Our leg bones are strong enough to allow for Earth’s gravitational pull – not too weak to shatter, not so massively over-engineered as to be wasteful.

But it could also be claimed we are special and the environment was formed and shaped for us.

This, as we know, is the basis of many religious ideas.

In recent years, such ideas have been expanded beyond Earth to look at the entire universe and our place within it.

The so-called Fine-Tuning Argument – that the laws of physics have been specially-tuned, potentially by some Supreme Being, to allow human life to arise – is the focus of Victor J. Stenger’s book.

Stenger presents the mathematics underpinning cosmic evolution, the lifetime of stars, the quantum nature of atoms and so on. His central argument is that “fine-tuning” claims are fatally flawed.

He points out that some key areas of physics – such as the equality of the charges on the electron and proton – are set by conservation laws determined by symmetries in the universe, and so are not free to play with.

Some flaws in the theory, he argues, run deeper.

A key component of the fine-tuning argument is that there are many parameters governing our universe, and that changing any one of these would likely produce a sterile universe unlike our own.

But think of baking a cake. Arbitrarily doubling only the flour, or sugar or vanilla essence may end in a cooking disaster, but doubling all the ingredients results in a perfectly tasty cake.

The interrelationships between the laws of physics are somewhat more complicated, but the idea is the same.

A hypothetical universe in which gravity was stronger, the masses of the fundamental particles smaller and the electromagnetic force weaker may well result in a universe that appears a little different to our own, but is still capable of producing long-lived stars and heavy chemical elements, the basic requirements for complex life.

Stenger backs up such points with his own research, and provides access to a web-based program he wrote called MonkeyGod.

The program allows you to conjure up universes with differing underlying physics. And, as Stenger shows, randomly plucking universe parameters from thin air can still produce universes quite capable of harbouring life.

This book is a good read for those wanting to understand the fine-tuning issues in cosmology, and it’s clear Stenger really understands the science.

But while many of the discussions are robust, I felt that in places some elements of the fine-tuning argument were brushed aside with little real justification.

As a case in point, Stenger falls back on multiverse theory and the anthropic principle, whereby we occupy but one of an almost infinite sea of different universes, each with different laws of physics.

In multiverse theory, most universes would be sterile (though we should not be surprised to find ourselves in a habitable universe).

While such a multiverse – the staple of superstring and brane ideas of the cosmos – is often sold as science fact, it actually lies much closer to the world of science speculation (or, to many, fiction).

We are not out of the fine-tuning waters yet, but Stenger’s book is a good place to start getting to grips with the issues.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Geraint Lewis*


Where is Everybody? Doing the Maths on Extraterrestrial Life

Are we getting closer to solving one of life’s greatest mysteries?

During a lunch in the summer of 1950, physicists Enrico Fermi, Edward Teller and Herbert York were chatting about a recent New Yorker cartoon depicting aliens abducting trash cans in flying saucers. Suddenly, Fermi blurted out, “Where is everybody?”

He reasoned: “Since there are likely many other technological civilisations in the Milky Way galaxy, and since in a few tens of thousands of years at most they could have explored or even colonised many distant planets, why don’t we see any evidence of even a single extraterrestrial civilisation?”

This has come to be known as Fermi’s Paradox.

Clearly the question of whether other civilisations exist is one of the most important questions of modern science. Any discovery of a distant civilisation – say by analysis of microwave data – would rank as among the most far-reaching of all scientific discoveries.

Drake equation

At a 1960 conference regarding extraterrestrial intelligence, Frank Drake (1930 —) sketched out what is now the Drake equation, estimating the number of civilisations in the Milky Way with which we could potentially communicate:

N = R* × fp × ne × fl × fi × fc × L

where

N = number of civilisations in our galaxy that can communicate.

R* = average rate of star formation per year in galaxy.

fp = fraction of those stars that have planets.

ne = average number of planets that can support life, per star that has planets.

fl = fraction of the above that eventually develop life.

fi = fraction of the above that eventually develop intelligent life.

fc = fraction of civilisations that develop technology that signals existence into space.

L = length of time such civilisations release detectable signals into space.

The result? Drake estimated ten such civilisations were out there somewhere in the Milky Way.
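
For readers who want to experiment, here is a minimal sketch of the Drake equation as code. The inputs below are placeholders chosen only so the output lands near Drake's estimate of ten; they are not figures quoted in this article.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: N = R* x fp x ne x fl x fi x fc x L."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative placeholder inputs only:
N = drake(R_star=1,    # stars formed per year in the galaxy
          f_p=0.5,     # fraction of stars with planets
          n_e=2,       # habitable planets per star that has planets
          f_l=1,       # fraction of those that develop life
          f_i=0.1,     # fraction of those that develop intelligence
          f_c=0.1,     # fraction that signal their existence
          L=1000)      # years such signals remain detectable
print(N)  # 10.0 with these placeholder values
```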

This analysis led to the Search for Extraterrestrial Intelligence (SETI) project, which looks for radio transmissions in a region of the electromagnetic spectrum thought best suited for interstellar communication.

But after 50 years of searching, using increasingly powerful equipment, nothing has been found.

So where is everybody?

Proposed solutions to Fermi’s paradox

Numerous scientists have examined Fermi’s paradox and proposed solutions. The following is a list of some of the proposed solutions, and common rejoinders:

  • Such civilisations are here, or are observing us, but are under orders not to disclose their existence.

Common rejoinder: This explanation (known as the “zookeeper’s theory”) is preferred by some scientists including, for instance, the late Carl Sagan. But it falls prey to the fact that it would take just one member of an extraterrestrial society to break the pact of silence – and this would seem inevitable.

  • Such civilisations have been here and planted seeds of life, or perhaps left messages in DNA.

Common rejoinder: The notion that life began on Earth from bacterial spores or the like that originated elsewhere, known as the “panspermia theory”, only pushes the origin of life problem to some other star system – scientists see no evidence in DNA sequences of anything artificial.

  • Such civilisations exist, but are too far away.

Common rejoinder: A sufficiently advanced civilisation could send probes to distant stars, which could scout out suitable planets, land and construct copies of themselves, using the latest software beamed from home.

So the entire Milky Way galaxy could be explored within, at most, a few million years.

  • Such civilisations exist, but have lost interest in interstellar engagement.

Common rejoinder: As with the zookeeper theory, this would require each civilisation to forever lack interest in communication and transportation – and someone would most likely break the pact of silence.

  • Such civilisations are calling, but we don’t recognise the signal.

Common rejoinder: This explanation doesn’t apply to signals sent with the direct purpose of communicating to nascent technological societies. Again, it is hard to see how a galactic society could enforce a global ban.

  • Civilisations invariably self-destruct.

Common rejoinder: This contingency is already figured into the Drake equation (the L term, above). In any event, we have survived at least 100 years of technological adolescence, and have so far managed not to destroy ourselves in a nuclear or biological apocalypse.

Relatively soon we will colonise the moon and Mars, and our long-term survival will no longer rely on Earth.

  • Earth is a unique planet in fostering long-lived ecosystems resulting in intelligent life.

Common rejoinder: Perhaps, but the latest studies, in particular the detection of extrasolar planets, point in the opposite direction. Environments like ours appear quite common.

  • We are alone in the Milky Way galaxy. Some scientists further conclude we are alone in the entire observable universe.

Common rejoinder: This conclusion flies in the face of the “principle of mediocrity,” namely the presumption, popular since the time of Copernicus, that there’s nothing special about human society or environment.

Numerous other proposed solutions and rejoinders are given by Stephen Webb in his 2002 book, If the Universe Is Teeming with Aliens … Where is Everybody?.

Two of Drake’s key terms – fp (the fraction of stars that have planets) and ne (the average number of planets that can support life, per star that has planets) – are subject to measurement.

Scientists once thought stable planetary systems and Earth-like planets were a rarity. But recent evidence suggests otherwise.

Thanks to Kepler and other projects, these two terms have been found to have reasonable values, although not quite as optimistic as Drake and his colleagues first estimated.

With every new research finding in the area of extrasolar planets and possible extraterrestrial living organisms, the mystery of Fermi’s paradox deepens.

“Where is everybody?” is a question that now carries even greater resonance.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jonathan Borwein (Jon)*


Are We All Related To Henry VIII’s Master Of The Mint?

After discovering that a Ralph Rowlett was in charge of the Royal Mint in 1540, Peter Rowlett runs the genealogy calculations to find out if he could be related.

In 1540, Henry VIII’s coins were made in the Tower of London. One of the Masters of the Mint was Ralph Rowlett, a goldsmith from St Albans with six children. I wondered: am I descended from Ralph? My Rowlett ancestors were Sheffield steelworkers, ever since my three-times great grandfather moved north in search of work. The trail goes cold in a line of Bedfordshire farm labourers in the 18th century, offering no evidence of a direct relationship.

My instincts as a mathematician led me to investigate this in a more mathematical way. I have two parents. They each have two parents, so I have four grandparents. So, I have eight great-grandparents, 16 great-great-grandparents and 2^n ancestors n generations ago. This exponential growth doubles each generation and takes 20 generations to reach a million ancestors.

Ralph lived 20 to 25 generations before me in an England of about 2 million people. The exponential growth argument says I have several million ancestors in his generation, so, because we run out of people otherwise, he is one of them.

But this model is based on the assumption that everyone is equally likely to reproduce with anyone else. In reality, especially at certain points in history, people were likely to reproduce with someone from the same geographic area and demographic group as themselves.

But I am not sure this makes a huge difference here because we are dealing with something called a small-world network: most people are in highly clustered groups, tending to pair up with nearby people, but a small number are connected over greater distances. An illegitimate child of a nobleman would have a different social class to their father. A migrant seeking work could reproduce in a different geographic area.

We don’t need many of these more remote connections to allow a great amount of spread around the network. This is the origin of the six degrees of separation concept – that you can link two people through a surprisingly short chain of friend-of-a-friend relationships.

I ran a simulation with 15 towns of a thousand people, where everyone has only a 5 per cent chance of moving to another town to reproduce. It took about 20 generations for everyone to be descended from a specific person in the first generation. I ran the same simulation with 15,000 people living in one town, and the spread took about 18 generations. So the 15-town structure slowed the spread, but only slightly.
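
A stripped-down version of that kind of simulation might look like the sketch below. It follows the set-up described above (15 towns of 1,000 people, a 5 per cent chance of drawing a parent from another town), but the details are my own simplification rather than the actual code used, and the marked line can occasionally die out by chance.

```python
import random

def generations_until_universal(n_towns=15, town_size=1000, migrate=0.05, seed=1):
    """Count generations until everyone descends from one person in generation 0.

    Each town is a list of booleans (True = descended from the marked person).
    Each child draws each of its two parents from its own town with probability
    1 - migrate, otherwise from a randomly chosen town.
    """
    rng = random.Random(seed)
    towns = [[False] * town_size for _ in range(n_towns)]
    towns[0][0] = True  # the marked ancestor lives in town 0

    generation = 0
    while not all(all(town) for town in towns):
        new_towns = []
        for t in range(n_towns):
            children = []
            for _ in range(town_size):
                is_descendant = False
                for _ in range(2):  # two parents per child
                    src = t if rng.random() > migrate else rng.randrange(n_towns)
                    is_descendant = is_descendant or rng.choice(towns[src])
                children.append(is_descendant)
            new_towns.append(children)
        towns = new_towns
        generation += 1
        if not any(any(town) for town in towns):
            return None  # the marked line died out by chance; try another seed

    return generation

# Prints a generation count (often of the order of 20), or None if the line died out.
print(generations_until_universal())
```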

What does this mean for Ralph and me? There is a very good chance we are related, whether through Rowletts or another route. And if you have recent ancestors from England, there is a good chance you are too.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Peter Rowlett*


AI Translates Maths Problems Into Code To Make Them Easier To Solve

An artificial intelligence that can turn mathematical concepts written in English into a formal proving language for computers could make problems easier for other AIs to solve.

Maths can be difficult for a computer to understand

An artificial intelligence can translate maths problems written in plain English to formal code, making them easier for computers to solve in a crucial step towards building a machine capable of discovering new maths.

Computers have been used to verify mathematical proofs for some time, but they can only do it if the problems have been prepared in a specifically designed proving language, rather than for the mix of mathematical notation and written text used by mathematicians. This process, known as formalisation, can take years of work for just a single proof, so only a small fraction of mathematical knowledge has been formalised and then proved by a machine.
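
To make “formalisation” concrete, here is a deliberately tiny illustration of the idea, written in Lean rather than Isabelle and not drawn from the study itself: an informal one-line statement next to a formal version that a proof assistant can check mechanically.

```lean
-- Informal: "Show that n + 0 = n for every natural number n."
-- A formal version a proof assistant can check mechanically:
theorem add_zero_example (n : Nat) : n + 0 = n := rfl
-- `rfl` works here because addition on Nat is defined so that n + 0 reduces to n.
```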

Yuhuai Wu at Google and his colleagues used a neural network called Codex created by AI research company OpenAI. It has been trained on large amounts of text and programming data from the web and can be used by programmers to generate workable code.

Proving languages share similarities with programming languages, so the team decided to see if Codex could formalise a bank of 12,500 secondary school maths competition problems. It was able to translate a quarter of all problems into a format that was compatible with a formal proof solver program called Isabelle. Many of the unsuccessful translations were the result of the system not understanding certain mathematical concepts, says Wu. “If you show the model with an example that explains that concept, the model can then quickly pick it up.”

To test the effectiveness of this auto-formalisation process, the team then applied Codex to a set of problems that had already been formalised by humans. Codex generated its own formal versions of these problems, and the team used another AI called MiniF2F to solve both versions.

The auto-formalised problems improved MiniF2F’s success rate from 29 per cent to 35 per cent, suggesting that Codex was better at formalising these problems than the humans were.

It is a modest improvement, but Wu says the team’s work is only a proof of concept. “If the goal is to train a machine that is capable of doing the same level of mathematics as the best humans, then auto-formalisation seems to be a very crucial path towards it,” says Wu.

Improving the success rate further would allow AIs to compete with human mathematicians, says team member Albert Jiang at the University of Cambridge. “If we get to 100 per cent, we will definitely be creating an artificial intelligence agent that’s able to win an International Maths Olympiad gold medal,” he says, referring to the top prize in a leading maths competition.

While the immediate goal is to improve the auto-formalisation models, and automated proving machines, there could be larger implications. Eventually, says Wu, the models could uncover areas of mathematics currently unknown to humans.

The capacity for reasoning in such a machine could also make it well-suited for verification tasks in a wide range of fields. “You can verify whether a piece of software is doing exactly what you asked it to do, or you can verify hardware chips, so it has applications in financial trading algorithms and hardware design,” says Jiang.

It is an exciting development for using machines to find new mathematics, says Yang-Hui He at the London Institute for Mathematical Sciences, but the real challenge will be in using the model on mathematical research, much of which is written in LaTeX, a typesetting system. “We only use LaTeX because it types nicely, but it’s a natural language in some sense, it has its own rules,” says He.

Users can define their own functions and symbols in LaTeX that might only be used in a single mathematical paper, which could be tricky to tackle for a neural network that has only been trained on plain text, says He.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Alex Wilkins*


Mathematicians Invent New Way to Slice Pizza into Exotic Shapes

Here’s one thing to impress your friends with the next time you order a takeaway: new and exotic ways to slice a pizza.

Most of us divide a pizza using straight cuts that all meet in the middle. But what if the centre of the pizza has a topping that some people would rather avoid, while others desperately want crust for dipping?

Mathematicians had previously come up with a recipe for slicing – formally known as a monohedral disc tiling – that gives you 12 identically shaped pieces, six of which form a star extending out from the centre, while the other six divide up the crusty remainder. You start by cutting curved three-sided slices across the pizza, then dividing these slices in two to get the inside and outside groups, as shown below.

Now Joel Haddley and Stephen Worsley of the University of Liverpool, UK, have generalised the technique to create even more ways to slice. The pair have proved you can create similar tilings from curved pieces with any odd number of sides – known as 5-gons, 7-gons and so on (shaded below) – then dividing them in two as before. “Mathematically there is no limit whatsoever,” says Haddley, though you might find it impractical to carry out the scheme beyond 9-gon pieces.

Haddley and Worsley went one further by cutting wedges in the corners of their shapes, creating bizarre, spikey pieces that still form a circle (the image below shows this happening with 5-gons). “It’s really surprising,” says Haddley.

 

As with many mathematical results, its usefulness isn’t immediately obvious. The same is true of another pizza theorem, which looks at what happens when a pizza is haphazardly cut off-centre.

“I’ve no idea whether there are any applications at all to our work outside of pizza-cutting,” says Haddley, who has actually tried slicing a pizza in this way for real (see below). But the results are “interesting mathematically, and you can produce some nice pictures”.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jacob Aron*


Win $12k By Rediscovering The Secret Phrases That Secure The Internet

Five secret phrases used to create the encryption algorithms that secure everything from online banking to email have been lost to history – but now cryptographers are offering a bounty to rediscover them.

Could you solve a cryptography mystery?

Secret phrases that lie at the heart of modern data encryption standards were accidentally forgotten decades ago – but now cryptographers are offering a cash bounty for anyone who can figure them out. While this won’t allow anyone to break these encryption methods, it could solve a long-standing puzzle in the history of cryptography.

“This thing is used everywhere, and it’s an interesting question; what’s the full story? Where did they come from?” says cryptographer Filippo Valsorda. “Let’s help the trust in this important tool of cryptography, and let’s fill out this page of history that got torn off.”

The tool in question is a set of widely-used encryption algorithms that rely on mathematical objects called elliptic curves. In theory, any of an infinite number of curves can be used in the algorithms, but in the late 1990s the US National Security Agency (NSA), which is devoted to protecting domestic communications and cracking foreign transmissions, chose five specific curves it recommended for use. These were then included in official US encryption standards laid down in 2000, which are still used worldwide today.

Exactly why the NSA chose these particular curves is unclear, with the agency saying only that they were chosen at random. This led some people to believe that the NSA had secretly selected curves that were weak in some way, allowing the agency to crack them. Although there is no evidence that the elliptic curves in use today have been cracked, the story persists.

In the intervening years, it has been confirmed that the curves were chosen by an NSA cryptographer named Jerry Solinas, who died earlier this year. Anonymous sources have suggested that Solinas chose the curves by transforming English phrases into a string of numbers, or hashes, that served as a parameter in the curves.

It is thought the phrases were along the lines of “Jerry deserves a raise”. But rumours suggest Solinas’s computer was replaced shortly after he made the choice and, with no record kept, he couldn’t work out the specific phrases that produced the hashes used in the curves. Turning a phrase into a hash is a one-way process, meaning that recovering them was impossible with the computing power available at the time.

Dustin Moody at the US National Institute of Standards and Technology, which sets US encryption standards, confirmed the stories to New Scientist: “I asked Jerry Solinas once, and he said he didn’t remember what they were. Jerry did seem to wish he remembered, as he could tell it would be useful for people to know exactly how the generation had gone. I think that when they were created, nobody [thought] that the provenance was a big deal.”

Now, Valsorda and other backers have offered a $12,288 bounty for cracking these five hashes – which will be tripled if the recipient chooses to donate it to charity. Half of the sum will go to the person who finds the first seed phrase, and the other half to whoever can find the remaining four.

Valsorda says that finding the hashes won’t weaken elliptic curve cryptography – because it is the nature of the curves that protects data, not the mathematical description of those curves – but that doing so will “help fill in a page of cryptographic history”. He believes that nobody in the 1990s considered that the phrases would be of interest in the future, and that the NSA couldn’t have released them anyway once they discovered that they were jokey phrases about one of their staff wanting a raise.

There are two main ways someone could claim the prize. The first is brute force – simply trying vast numbers of possible seeds, and checking the values created by hashing them against the known curves, which is more feasible than in the 1990s because of advances in computing power. 
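
In outline, such a brute-force search just hashes candidate phrases and compares the results against the published seed values. The sketch below assumes SHA-1 as the hash and uses an obviously fake placeholder target rather than a real published seed:

```python
import hashlib
from itertools import product

# Placeholder only -- a real attempt would use one of the seed values published
# with the NIST curve specifications, which is not reproduced here.
TARGET_SEED_HEX = "0" * 40  # SHA-1 digests are 40 hex characters

def candidate_phrases(words, max_words=4):
    """Yield short word combinations such as 'Jerry deserves a raise'."""
    for n in range(1, max_words + 1):
        for combo in product(words, repeat=n):
            yield " ".join(combo)

def search(words, target_hex):
    """Return the first candidate phrase whose SHA-1 digest matches the target."""
    for phrase in candidate_phrases(words):
        if hashlib.sha1(phrase.encode()).hexdigest() == target_hex:
            return phrase
    return None

# With the placeholder target this prints None; the point is the shape of the
# search, not a working crack.
print(search(["Jerry", "deserves", "a", "raise"], TARGET_SEED_HEX))
```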

But Valsorda says someone may already have the phrases written down. “Some of the people who did this work, or were in the same office as the people who did this work, probably are still around and remember some details,” he says. “The people who are involved in history sometimes don’t realise the importance of what they remember. But I’m not actually suggesting anybody, like, goes stalking NSA analysts.”

Keith Martin at Royal Holloway, University of London, says that the NSA itself would be best-equipped to crack the problem, but probably has other priorities, and anybody else will struggle to find the resources.

“I would be surprised if they’re successful,” he says. “But on the other hand, I can’t say for sure what hardware is out there and what hardware will be devoted to this problem. If someone does find the [phrases], what would be really interesting is how did they do it, rather than that they’ve done it.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Matthew Sparkes*


The Number That Is Too Big For The Universe

TREE(3) is a number that turns up easily from just playing a simple mathematical game. Yet, it is so colossally large that it couldn’t conceivably fit in our universe, writes Antonio Padilla.

There are many numbers that fit quite naturally into our everyday lives. For example, the number five counts the founding members of popular UK band One Direction, the number 31 million, how many followers they have on Twitter and the number zero, the number of followers that actually have decent taste in music (sorry!).

But there are also numbers which are important to mathematicians that can never fit into our everyday lives. There are even those that could never fit into the universe. Like TREE(3). Let me explain.

TREE(3) is a colossus, a number so large that it dwarfs some of its gargantuan cousins like a googol (ten to the one hundred), or a googolplex (ten to the googol), or even the dreaded Graham’s number (too big to write). TREE(3) emerges, quite spectacularly, from a mathematical game known as the Game of Trees. The idea of the game is to build a forest of trees from different combinations of seeds. Mathematically, the trees are just coloured blobs (the seeds) connected by lines (the branches). As you build the forest, your first tree must have at most one seed, your second tree must have at most two seeds, and so on. The forest dies whenever you build a tree that contains one of the older trees. There is a precise mathematical meaning to “contains one of the older trees”, but essentially you aren’t allowed to write down any combinations of blobs and branches that have gone before.

At the turn of the 1960s, the Game of Trees had piqued the interest of the great gossiping Hungarian mathematician Paul Erdős. Erdős is known for being a prolific collaborator, writing papers with over 500 other mathematicians. He was also an eccentric who would show up at the homes of his collaborators without warning. He would expect food and lodging and dismiss their children as “epsilons”, the term mathematicians often use for something infinitesimal. But Erdős would also be armed with a compendium of interesting mathematical problems, and if he had arrived at your door, chances are he thought you could solve it. In this particular story, Erdős was asking anyone who cared to listen if the Game of Trees could last forever. At Princeton University, a young mathematician who had just completed his doctorate was keen to take on Erdős’ latest problem. His name was Joseph Kruskal and he was able to prove that the Game of Trees could never last an eternity, but it could go on for a very long time.

So how long can the game actually last? This depends on how many different types of seed you have. If you only have one seed type, the forest cannot have more than one tree. For two types of seed, you have a maximum of three trees. As soon as we add a third type of seed, the game explodes. The maximum number of trees defies all comprehension, leaping towards a true numerical leviathan known as TREE(3).

Games like the Game of Trees are important. They can often be crucial in understanding processes that involve some sort of branching, such as decision algorithms in computer science, or the evolution of viruses and antibodies in epidemiology. And yet, despite these real-world applications, they can also generate a number that is too big for the universe.

TREE(3) really is that big. To see why, imagine you sit down with a friend and decide to play the Game of Trees with three different types of seed.  You know the game can last a while so you play as fast as you can without breaking up the space-time continuum. In other words, you draw a tree every 0.00000000000000000000000000000000000000000005 seconds. That’s equivalent to the Planck time, beyond which the fabric of space and time is overwhelmed by quantum effects.

After a year you will have drawn more than a trillion trillion trillion trillion trees, but you will be nowhere near the end of the game. You play for a lifetime before each of you is replaced by state-of-the-art artificial intelligence that shares your thoughts and personality. The game goes on. The AI mind-clones, powered using solar technology, continue playing long after humanity has destroyed itself through war or climate change or some other madness we haven’t even thought of yet.

After 300 million years, with the world’s continents now merged into one supercontinent and the sun noticeably brighter than before, AI you and your AI friend continue to play at breakneck speed. After 600 million years, the brightening sun has destroyed the Earth’s carbon cycle. Trees and forests can no longer grow, and the oxygen level begins to fall. The sun’s deadly ultraviolet radiation begins to break through Earth’s atmosphere, and by 800 million years, all complex life has been destroyed, except for the two AIs, who continue to play the Game of Trees.

After about 1.5 billion years, with Earth gripped by a runaway greenhouse effect, the Milky Way and Andromeda galaxies collide. The two AIs are too engrossed in their game to notice as the solar system is kicked unceremoniously out of the galaxy as a result of the collision. Billions of years pass as the sun runs out of fuel, turning into a red giant that comes dangerously close to swallowing Earth. Its outer layers drift away and the sun ends its life as a feeble white dwarf, barely bigger than Earth is now. The AIs are now struggling for a reliable source of energy but they continue to play. After a quadrillion years, the sun stops shining altogether. The AIs, starved of energy, have been replaced by an even more advanced technology, drawing energy from the bath of photons left over from the big bang, in the cosmic microwave background radiation. This technology continues to play the Game of Trees. The game is far from over, still some way short of its limit, at TREE(3) moves.

Between around 10^40 years and the googolannum (a googol years), the game continues against the backdrop of a spectacular era of black hole dominance, in which all matter has been guzzled by an army of black holes that march relentlessly across the universe. Beyond the googolannum, those black holes have decayed via a process known as Hawking radiation, leaving behind a cold and empty universe, warmed ever so slightly by a gentle bath of radiated photons. And yet, despite all that has passed, the Game of Trees continues.

Can it reach the limit of TREE(3) moves?

It cannot.

After 10 to the 10 to the 122 years, long before the Game of Trees is complete, the universe undergoes a Poincaré recurrence. It resets itself. This is because our universe is thought to be a finite system that can only exist in a finite number of quantum states. Poincaré recurrence, named after the celebrated French mathematician Henri Poincaré, is a property of any finite system, whether it’s the universe or a pack of playing cards. It says that as you move through the system at random, you will return, inevitably, to where you began. With a pack of cards, you shuffle and shuffle, and then after a long wait you eventually shuffle the pack so that all the cards are lined up just as they were when you first opened them. With our universe, it shuffles and shuffles between its various quantum states, and after around 10 to the 10 to the 122 years, it finds itself back in its primordial state.

The Game of Trees could never finish but it did demonstrate our ability to comprehend the incomprehensible, to go to places with mathematics that the physical world could never achieve. The truth is TREE(3) wasn’t too big for Erdős or Kruskal or any of the other mathematicians who contemplated it, but it was too big for the universe.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Antonio Padilla*