Rubik’s Cube Solution Unlocked By Memorising 3915 Final Move Sequences

For the first time, a speedcuber has demonstrated a solution to the Rubik’s cube that combines the two final steps of the puzzle’s solution into one.

A Rubik’s cube solver has become the first person to show proof of successfully combining the final two steps of solving the mechanical puzzle into one. The feat required memorising thousands of possible sequences for the combined final step.

Most skilled speedcubers – people who compete to solve Rubik’s cubes as quickly and efficiently as possible – solve the final layer of the cube in two separate steps, which involve 57 possible sequences for the penultimate step and 21 possible sequences for the final one.

Combining those two separate actions into a single move requires a person to memorise 3915 possible sequences. These sequences were previously known to be possible, but nobody is reported to have successfully achieved this so-called “Full 1 Look Last Layer” (Full 1LLL) move until a speedcuber going by the online username “edmarter” shared a YouTube video demonstrating that accomplishment.

Edmarter says he decided to take up the challenge after seeing notable speedcubers try and fail. Over the course of about a year, he spent 10 hours each weekend and any free time during the week practising and memorising the necessary sequences, he told New Scientist. That often involved memorising 144 movement sequences in a single day.

All that effort paid off on 4 August 2022 when edmarter uploaded a video demonstrating the Full 1LLL over the course of 100 separate puzzle solves. He also posted his accomplishment to Reddit’s r/Cubers community.

His average solve time for each Rubik’s cube over the course of that video demonstration run was 14.79 seconds. He says he had an average solve time as low as 12.50 seconds during two practice runs before recording the video.

The Rubik’s cube community has reacted with overwhelming enthusiasm and awe. The top-voted comment on his Reddit post detailing the achievement simply reads: “This is absolutely insane.”

But he is not resting on his laurels. Next up, he plans to try practising some other methods for finishing the Rubik’s cube that have only previously been mastered by a handful of people.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jeremy Hsu*


How Many Knots Exist? A New Computing Trick Is Untangling The Answer

Finding how many knots there are for a given number of string crossings was thought to be an impossibly complex task. Now, algorithm engineering is cracking it – and showing us how to solve other fiendishly intricate maths problems.

IT USED to be one of the most frustrating parts of any journey on public transport. You squeeze your way past the other bodies, sit down and fish your earphones out of your pocket. You didn’t bother to wind up the wires into a neat loop the last time you used them, and so – sigh – you now need to spend the next 5 minutes untangling this knot. Thank goodness for the invention of wireless earbuds.

Knots aren’t just an everyday annoyance, though. They are also a source of endless inspiration for researchers. Take mathematician Benjamin Burton, who is fascinated by one simple question: how many knots are there? “There is something tantalising about problems that you can describe to a 10-year-old, but that mathematicians have not yet solved,” he says.

Taking a census of knots is one of those problems that ought to be impossible to solve because of its complexity. There are so many ways the strings can be crossed and looped that even the fastest computer could never catalogue them all. Yet Burton has been giving it a shot, and along the way he is showing that, with a few clever computational tricks, many maths problems that seem insurmountable might not be.

Knots and science have been, ahem, entangled for quite a while. In the dying decades of the 19th century, scientists were grappling with how to understand atoms. One hypothesis saw them as little vortices of fluid that became stable when knotted. Lord Kelvin, who went on to become the president of the UK’s Royal Society, was the first to suggest that each chemical element corresponded to a different type of knot.

The idea was abandoned after the discovery of the electron, but at the time it seemed vital to understand knots. Physicist Peter Guthrie Tait was the first to make a stab at creating a comprehensive list of them. Cards on the table: there are an infinite number of possible knots, because you can keep on adding extra knotty flourishes forever. The question mathematicians are interested in is more subtle. A defining feature of a knot is its crossing number, the number of times the strings cross. The question is, for a given number of crossings, how many different knots are possible? In 1885, Tait, working with mathematician Thomas Kirkman, considered all the knots with up to and including 10 crossings. Drawing them all by hand, he tabulated 364 configurations.

Try to go further and things quickly get a lot more difficult. As you allow more crossings, the number of possible knots rapidly increases. The last major extension to the knot tables was published in 1998. Mathematicians Jim Hoste, Morwen Thistlethwaite and Jeff Weeks recorded all the knots up to and including 16 crossings – all 1.7 million of them.

Going beyond this has, until recently, been unfeasible. To see why, we need to know a bit more about how mathematicians think about knots (see “When is a knot not a knot?“). Unlike real-world knots that are often pieces of string with loose ends, mathematical knots are closed. Imagine tying a knot in a piece of spaghetti, then melding the free ends.

Stretch, twist, bend

Mathematicians treat knots according to the rules of topology, a branch of geometry. These say that a knot remains fundamentally the same if you stretch, twist or bend the strands – but making cuts or new joins is a no-no. This leads to the concept of a prime knot, a knot that can’t be mathematically broken down into two or more simpler knots.

The job of tabulating knots, then, boils down to comparing elaborate tangles to see if they are really the same prime knot. Don’t underestimate how tricky this can be. For 75 years, two knots with 10 crossings were listed separately in tables. But in 1973, Kenneth Perko, a New York lawyer who had studied maths, realised that if you take the mirror image of one, you can manipulate it to become equivalent to the other. The knots are known as the “Perko pair”.

When dealing with millions of knots, the job of identifying doppelgangers becomes so time-consuming that, in theory, even the fastest supercomputers shouldn’t be able to manage it in a reasonable amount of time. Still, in 2019, Burton, who is based at the University of Queensland in Australia, decided the time was ripe to take a punt.

He knew there is a difference between a knot-sorting algorithm in the abstract and how that algorithm is implemented in computer code. The art of bridging this gap is known as algorithm engineering and Burton thought that with the right tricks, the problem wouldn’t be as hard as it seemed. “Part of what motivated me was creating a showpiece to see if the tools were good enough,” he says.

He began by setting a computer the task of dreaming up all the possible ways that strings can be knotted with up to and including 19 crossings. This is a comparatively simple task and the computer spat out 21 billion knot candidates after just a few days.

The job was then to check each knot was prime and distinct. This involves storing a full topological description of the knots in a computer’s working memory or RAM. A data set of 21 billion knots would require more than 1 terabyte of RAM to store, and computers with that amount are rare. To get around this, Burton made use of a software package he has developed called Regina. It can convert knots into “knot signatures”, strings of letters that capture each tangle’s defining topological properties. The signatures can be stored far more economically than a knot description. Plus, knots that are tangled differently but are really equivalent have the same signature, making it easy to weed out duplicates. Burton managed to whittle down the candidate knots to about 7 billion in a day.
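The deduplication logic itself is simple, even if computing a real knot signature is not. Here is a minimal Python sketch of the idea, treating signatures as opaque strings (an illustration only, not Regina’s actual code):

```python
def deduplicate(signatures):
    """Keep one representative per knot signature.

    Equivalent tangles share a signature, so one streaming pass
    with a set of already-seen signatures weeds out duplicates.
    """
    seen = set()
    unique = []
    for sig in signatures:
        if sig not in seen:
            seen.add(sig)
            unique.append(sig)
    return unique

# Toy usage: the first and third diagrams are really the same knot.
print(deduplicate(["dadbcccaba", "pqrstuvwxy", "dadbcccaba"]))
```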

This method wasn’t powerful enough to identify every last duplicate, however. Burton’s next tactic involved calculating what is known as a knot invariant for each candidate knot. Invariants are mathematical objects that capture the essence of a knot – if two knots have different invariants, they are different knots. Invariants are more powerful than signatures, but they are harder to compute: it would have taken Burton’s computer more than a year to calculate these for all the remaining knots. By sorting them into groups and running them on parallel supercomputers, he got through them in days. The pool was down to 370 million.
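The invariant step follows the same pattern, with a twist: candidates can be binned by invariant value, and only knots sharing a bin need any further attention. Another sketch, with a stand-in invariant function (the real invariants are far harder to compute):

```python
from collections import defaultdict

def group_by_invariant(knots, invariant):
    """Bin knots by the value of an invariant.

    Knots in different bins are provably distinct, so slower, more
    careful comparisons are only needed within a bin. The bins are
    also independent of one another, which is what made parallel
    supercomputing a natural fit.
    """
    bins = defaultdict(list)
    for knot in knots:
        bins[invariant(knot)].append(knot)
    return dict(bins)

# Toy usage: string length stands in for a genuine knot invariant.
candidates = ["dadbccc", "pqrs", "zzzzzzz"]
print(group_by_invariant(candidates, invariant=len))
```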

Burton also had to grapple with non-hyperbolic knots, an especially challenging category. But after several more rounds of sorting, the supposedly impossible became possible. Burton’s census extended the knot tables to cover everything up to and including 19 crossings. The final tally: 352,152,252 knots.

Ian Agol at the University of California, Berkeley, says he is impressed with the calculations. “It will likely be useful to mathematicians searching for knots with specific properties, or who have a knot that they would like to identify,” he says.

He and other mathematicians think that Burton’s work is also an impressive example of algorithm engineering. “This is a growing trend in pure mathematics, where computational methods are being used in more creative ways,” says Hans Boden at McMaster University in Ontario, Canada.

Lorries and cameras

There are plenty of practical problems where algorithm engineering is used to find efficient solutions to hard problems. Some of the most important are in logistics, where optimised routing can save time and huge sums of money. Others include working out how to cover a space with CCTV cameras. In many cases, a perfect solution is impossible to obtain in a reasonable time frame.

This is also the case when it comes to mapping complex evolutionary histories, particularly for plants and bacteria. Algorithms are used to link DNA data in phylogenetic trees, graphs that group closely related species more closely than distantly related ones. However, sometimes genes don’t pass down from parent to offspring, but rather through what is known as horizontal gene transfer from one species to another. Algorithms designed to map these transfers can be very slow.

One algorithm engineering hack to get around this involves parametrising data inputs to make them smaller. For instance, if you fix the number of potential mutations you are considering, the problem can be solved efficiently. Recently, Edwin Jacox, now at the University of California, Santa Cruz, and his colleagues applied this method to cyanobacteria phylogenetic trees. Cyanobacteria played a major role in Earth’s changing biosphere more than 2 billion years ago. The researchers developed a parametrised algorithm that rebuilds the cyanobacteria phylogenetic trees in just 15 seconds.

Whether it is reconstructing evolutionary trees or listing knots, it is clear that the toughest computing problems can be made tractable. “With algorithm engineering and heuristics, things that are slow in practice turn out to be remarkably, surprisingly fast,” says Burton. It means that even the trickiest problems don’t have to leave us in an inescapable tangle.

When is a knot not a knot?

Here is a problem that needs unpicking: if you pull at a messy tangle of wires, how do you know if it will just get more tangled or come apart into a simple loop? This is a problem mathematicians call “unknot recognition” and it is one we might soon solve. Unknot recognition fascinates mathematicians and computer scientists alike because it is part of the famous “P vs NP” question.

To see how it works, take the classic travelling salesperson problem, where we must work out the shortest route for a salesperson to visit a number of cities. If an algorithm designed to solve this problem doesn’t get exponentially harder as more cities are included, it is said to be “P”. This means it is solvable in reasonable – or polynomial, in maths speak – amounts of time. Once you have an answer, you need to check it is correct. If it is easy to check your answer, the problem is classed as “NP”. The big question for mathematicians is whether all NP problems are also P. This is one of the Millennium Prize Problems – answer it conclusively and you will win $1 million.
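The asymmetry between finding and checking is easy to see in code. Verifying a proposed salesperson’s tour against a target length takes a single pass over the route, however long the tour took to find (a minimal sketch with made-up cities):

```python
import math

def tour_length(cities, order):
    """Total length of a closed tour visiting cities in the given order."""
    return sum(
        math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def verify(cities, order, bound):
    """NP-style check: is the proposed tour no longer than the bound?

    Runs in time linear in the number of cities, even though finding
    the best tour may take exponential time.
    """
    return tour_length(cities, order) <= bound

cities = [(0, 0), (0, 3), (4, 3), (4, 0)]
print(verify(cities, [0, 1, 2, 3], bound=14.5))  # True: perimeter is 14
```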

Unknot recognition dwells in the twilight zone of P vs NP. We already know that this problem is definitely “NP”, or easy to check. If your algorithm can produce an unknotted loop, you can immediately see it has worked. Mathematicians also think it is likely to be P, and we are tantalisingly close to proving it.

In 2014, Benjamin Burton at the University of Queensland in Australia developed an unknotting algorithm that, in practice, solves in polynomial time. His algorithm has held up “for every knot we’ve thrown at it so far”, he says. More recently, Marc Lackenby at the University of Oxford developed an unknot recognition algorithm that is “quasi-P” (which, in maths parlance, means “almost there”). It is unlikely to be converted into executable computer code because it is so complicated, but Lackenby is confident that a simplified version “is going to be genuinely practical”.

Showing that unknot recognition is both P and NP won’t solve the wider Millennium Prize Problem, though it could give us useful pointers. Still, it is an important milestone and mathematicians will be celebrating once we get there.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Larissa Fedunik*


How Spiral Search Patterns And Lateral Thinking Cracked Our Puzzle

Rob Eastaway and Brian Hobbs take over our maths column to reveal who solved their puzzle and won a copy of their New Scientist puzzle book, Headscratchers.

When solving problems in the real world, it is rare that the solution is purely mathematical, but maths is often a key ingredient. The puzzle we set a few weeks ago (New Scientist, 30 September, p 45) embraced this by encouraging readers to come up with ingenious solutions that didn’t have to be exclusively maths-based.

Here is a reminder of the problem: Prince Golightly found himself tied to a chair near the centre of a square room, in the dark, with chained monsters in the four corners and an escape door in the middle of one wall. With him, he had a broom, a dictionary, some duct tape, a kitchen clock and a bucket of water with a russet fish.

John Offord was one of several readers to spot an ambiguity in our wording. Four monsters in each corner? Did this mean 16 monsters? John suggested the dictionary might help the captors brush up on their grammar.

The russet fish was deliberately inserted as a red herring (geddit?), but we loved that some readers found ways to incorporate it, either as a way of distracting the monsters or as a source of valuable protein for a hungry prince. Dave Wilson contrived a delightful monster detector, while Glenn Reid composed a limerick with the solution of turning off the computer game and going to bed.

And so to more practical solutions. Arlo Harding and Ed Schulz both suggested ways of creating a torch by igniting assorted materials with an electric spark from the light cable. But Ben Haller and Chris Armstrong had the cleverest mathematical approach. After locating the light fitting in the room’s centre with the broom, they used duct tape and rope to circle the centre, increasing the radius until they touched the wall at what must be its centre, and then continued circling to each wall till they found the escape door. Meanwhile, the duo of Denise and Emory (aged 11) used Pythagoras’s theorem to confirm the monsters in the corners would be safely beyond reach. They, plus Ben and Chris, win a copy of our New Scientist puzzle book Headscratchers.

It is unlikely you will ever have to escape monsters in this way, but spiral search patterns when visibility is limited are employed in various real-world scenarios: rescuers probing for survivors in avalanches, divers performing underwater searches and detectives examining crime scenes, for example. Some telescopes have automated spiral search algorithms that help locate celestial objects. These patterns allow for thorough searches while ensuring you don’t stray too far from your starting point.
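A square-room version of that strategy is simple to simulate. The sketch below (our illustration, not part of any reader’s entry) walks an expanding Archimedean spiral outwards from the centre and stops on reaching a wall:

```python
import math

def spiral_search(room_half_width, step=0.1, growth=0.05):
    """Yield probe points along an expanding spiral until a wall is hit.

    r = growth * theta is an Archimedean spiral: each full turn moves
    a constant distance outwards, so the sweep is thorough and never
    strays far from ground already covered.
    """
    theta = 0.0
    while True:
        r = growth * theta
        x, y = r * math.cos(theta), r * math.sin(theta)
        if max(abs(x), abs(y)) >= room_half_width:  # reached a wall
            return
        yield x, y
        theta += step

path = list(spiral_search(room_half_width=5.0))
print(f"{len(path)} probe points before reaching a wall")
```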

Of course, like all real-world problems, mathematical nous isn’t enough. As our readers have displayed, lateral thinking and the ability to improvise are human skills that help us find the creative solutions an algorithm would miss.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Rob Eastaway and Brian Hobbs*


Algebraic Elements Are Like Limit Points!

When you hear the word closure, what do you think of? I think of wholeness – you know, tying loose ends, wrapping things up, filling in the missing parts. This same idea is behind the mathematician’s notion of closure, as in the phrase “taking the closure” of a set. Intuitively this just means adding in any missing pieces so that the result is complete, whole. For instance, the circle on the left is open because it’s missing its boundary. But when we take its closure and include the boundary, we say the circle is closed.

As another example, consider the set of all real numbers strictly between 0 and 1, i.e. the open interval (0,1). Notice that we can get arbitrarily close to 0, but we can’t quite reach it. In some sense, we feel that 0 might as well be included in the set, right? I mean, come on, 0.0000000000000000000000000000000000000001 is basically 0, right? So by not considering 0 as an element of our set, we feel like something’s missing. The same goes for 1.

We say an element is a limit point of a given set if that element is “close” to the set, and we say the set’s closure is the set together with its limit points. (So 0 and 1 are both limit points of (0,1), and its closure is [0,1].) It turns out the word closure is also used in algebra, specifically in the algebraic closure of a field, but there it has a completely different definition, one having to do with roots of polynomials, called algebraic elements. Now why would mathematicians use the same word to describe two seemingly different things? The purpose of today’s post is to make the observation that they’re not so different after all! This may be somewhat obvious, but it wasn’t until after a recent conversation with a friend that I saw the connection:

 

algebraic elements of a field

are like

limit points of a sequence!

(Note: I’m not claiming any theorems here, this is just a student’s simple observation.)
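To make the analogy concrete with one standard example (my own, not part of the observation above), take the square root of 2, which is “reachable” from the rationals in both senses:

```latex
% Analytically, \sqrt{2} is a limit point of a set of rationals:
\[
  1,\ 1.4,\ 1.41,\ 1.414,\ \dots \longrightarrow \sqrt{2},
  \qquad \text{each term lying in } \mathbb{Q}.
\]
% Algebraically, \sqrt{2} is an algebraic element over \mathbb{Q}:
\[
  \sqrt{2} \text{ is a root of } x^2 - 2 \in \mathbb{Q}[x].
\]
% Taking the algebraic closure of Q "adds in" \sqrt{2}, just as
% taking the topological closure of (0,1) adds in 0 and 1.
```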

 

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*

 


Getting Projections Right: Predicting Future Climate

Region by region projections of how climate is likely to change over the coming decades help to make the prospect of global warming more tangible and relevant.

Picturing the climate we are likely to have with unabated increases in greenhouse gas concentrations in, say, Melbourne, Sydney, or the Murray Darling, lets us weigh up the costs and benefits of actions to reduce greenhouse gas emissions.

Regional projections also let us plan how to adapt to any unavoidable changes in our climate. Planning changes to farming practices, water supply or natural ecosystem management, for example, requires some idea of what our future regional climate is likely to be.

Here in Australia we have had a long history of national climate change projections. Since 1990, CSIRO has released five updates of projected changes in temperature, rainfall, extreme events and many other key aspects of our climate system.

CSIRO’s last release was done with the Bureau of Meteorology in 2007. It provided the most detailed product available up to that time.

This release included the innovation (a world first amongst national projections at the time) of providing probabilities for the projected changes.

Why modelling?

The complexity of the climate system means that we cannot simply extrapolate past trends to forecast future conditions. Instead, we use climate models developed and utilised extensively over recent decades.

These are mathematical representations of the climate system, based on the laws of physics.
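To give a flavour of what the very simplest such representation looks like, here is a toy zero-dimensional energy-balance model (a teaching sketch only: real climate models resolve the atmosphere, oceans and ice in three dimensions, and the greenhouse factor below is a tuned stand-in for detailed radiative physics):

```python
# Toy zero-dimensional energy-balance model: absorbed sunlight
# balances outgoing thermal radiation (Stefan-Boltzmann law).
SOLAR = 1361.0    # solar constant, W/m^2
ALBEDO = 0.3      # fraction of sunlight reflected back to space
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temperature(greenhouse_factor=0.61):
    """Surface temperature (K) at which absorbed and emitted energy balance.

    greenhouse_factor scales the planet's effective emissivity:
    1 means no greenhouse effect; smaller values trap more heat.
    """
    absorbed = SOLAR * (1 - ALBEDO) / 4  # averaged over the whole sphere
    return (absorbed / (SIGMA * greenhouse_factor)) ** 0.25

print(f"{equilibrium_temperature():.0f} K")     # ~288 K, close to observed
print(f"{equilibrium_temperature(1.0):.0f} K")  # ~255 K, airless baseline
```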

Results from all of the climate modelling centres around the world are considered in preparing Australian projections. We place the greatest weight on the models that are best at representing our historical climate.

Global climate modelling has continued to develop over recent years. Most of the modelling centres are now running improved versions of their models compared to what was available in 2007.

As part of an international coordinated effort, a new database of the latest climate model output is being assembled for researchers to use ahead of the next report of the Intergovernmental Panel on Climate Change (IPCC). It is many times richer than any previously available.

Analysing this massive resource will be a focus of research for a large number of scientists at CSIRO, the Bureau of Meteorology and the universities over the next few years.

Putting the models to good use

While the science has been developing, so have the demands of users of this projection information. Policymakers at all levels of government, natural resource planners, industry, non-government organisations and individuals all are placing demands on climate projection science. These are growing in volume and complexity.

For example, researchers want regionally specific scenarios for changes in the frequency of hot days, extreme rainfall, fire, drought, cyclones, hail, evaporation, sunshine, coral bleaching temperatures, ocean acidification and sea level rise.

This type of information is particularly useful for risk assessments that can inform policy development and implementation.

For example, assessing future climate risks to infrastructure can place quite different demands on climate projection science compared to, say, assessing risks to agricultural enterprises.

Given these developments, the time is coming for the Australian climate research community to update and expand their projections. Planning has begun for a release in 2014. This will be just after the completion of the next IPCC assessment.

At that time, Australians will have the latest climate projections for the 21st century for a range of factors, including sea levels, seasonal-average temperatures and rainfall, as well as extreme weather events.

Resources permitting, these new projections will also include online services which will enable users to generate climate scenarios to suit the specific needs of many risk assessments.

Finding out more about summer rainfall

As climate scientists start to analyse these new model data, a major focus of attention will be simulated changes to summer rainfall over Australia.

Models have consistently indicated a drying trend for the winter rainfall regions in southern Australia and this is a result which also aligns with other evidence such as observed trends.

On the other hand, models give inconsistent projections for summer rainfall change, ranging from large increase to large decrease. Researchers will be hoping to reduce this key uncertainty as they begin to analyse the results.

However, when it comes to projecting our future climate, there will always be some uncertainty to deal with.

Dealing with uncertainty

Climate projection scientists have to clearly convey the uncertainties while not letting these overwhelm the robust findings about regional climate change that the science provides.

Climate projection uncertainties can be presented in many different ways, such as through ranges of plausible change, as probabilistic estimates, or as alternative scenarios.

We shouldn’t necessarily be most interested in the most likely future. In some cases, it may be more prudent to plan for less likely, but higher risk, future climates.

It can be difficult to make a complex message as relevant as possible to a wide range of decision-makers. CSIRO climate scientists are tackling this by working with social scientists to help develop new and more effective communication methods. These should be ready in time for the next projections release.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Penny Whetton*


Theorem of Everything: The Secret That Links Numbers and Shapes

For millennia mathematicians have struggled to unify arithmetic and geometry. Now one young genius could have brought them in sight of the ultimate prize.

IF JOEY was Chloe’s age when he was twice as old as Zoe was, how many times older will Zoe be when Chloe is twice as old as Joey is now?

Or try this one for size. Two farmers inherit a square field containing a crop planted in a circle. Without knowing the exact size of the field or crop, or the crop’s position within the field, how can they draw a single line to divide both the crop and field equally?

You’ve either fallen into a cold sweat or you’re sharpening your pencil (if you can’t wait for the answer, you can check the bottom of this page). Either way, although both problems count as “maths” – or “math” if you insist – they are clearly very different. One is arithmetic, which deals with the properties of whole numbers: 1, 2, 3 and so on as far as you can count. It cares about how many separate things there are, but not what they look like or how they behave. The other is geometry, a discipline built on ideas of continuity: of lines, shapes and other objects that can be measured, and the spatial relationships between them.

Mathematicians have long sought to build bridges between these two ancient subjects, and construct something like a “grand unified theory” of their discipline. Just recently, one brilliant young researcher might have brought them decisively closer. His radical new geometrical insights might not only unite mathematics, but also help solve one of the deepest number problems of them all: the riddle of the primes. With the biggest prizes in mathematics, the Fields medals, to be awarded this August, he is beginning to look like a shoo-in.

The ancient Greek philosopher and mathematician Aristotle once wrote, “We cannot… prove geometrical truths by arithmetic.” He left little doubt he believed geometry couldn’t help with numbers, either. It was hardly a controversial thought for the time. The geometrical proofs of Aristotle’s near-contemporary Euclid, often called the father of geometry, relied not on numbers, but logical axioms extended into proofs by drawing lines and shapes. Numbers existed on an entirely different, more abstract plane, inaccessible to geometers’ tools.

And so it largely remained until, in the 1600s, the Frenchman René Descartes used the techniques of algebra – of equation-solving and the manipulation of abstract symbols – to put Euclid’s geometry on a completely new footing. By introducing the notion that geometrical points, lines and shapes could all be described by numerical coordinates on an underlying grid, he allowed geometers to make use of arithmetic’s toolkit, and solve problems numerically.

This was a moonshot that let us, eventually, do things like send rockets into space or pinpoint positions to needle-sharp accuracy on Earth. But to a pure mathematician it is only a halfway house. A circle, for instance, can be perfectly encapsulated by an algebraic equation. But a circle drawn on graph paper, produced by plotting out the equation’s solutions, would only ever capture a fragment of that truth. Change the system of numbers you use, for example – as a pure mathematician might do – and the equation remains valid, while the drawing may no longer be helpful.

Wind forward to 1940 and another Frenchman was deeply exercised by the divide between geometry and numbers. André Weil was being held as a conscientious objector in a prison just outside Rouen, having refused to enlist in the months preceding the German occupation of France – a lucky break, as it turned out. In a letter to his wife, he wrote: “If it’s only in prison that I work so well, will I have to arrange to spend two or three months locked up every year?”

Weil hoped to find a Rosetta stone between algebra and geometry, a reference work that would allow truths in one field to be translated into the other. While behind bars, he found a fragment.

It had to do with the Riemann hypothesis, a notorious problem concerning how those most fascinating numbers, the primes, are distributed (see below). There had already been hints that the hypothesis might have geometrical parallels. Back in the 1930s, a variant had been proved for objects known as elliptic curves. Instead of trying to work out how prime numbers are distributed, says mathematician Ana Caraiani at Imperial College London, “you can relate it to asking how many points a curve has”.

Weil proved that this Riemann-hypothesis equivalent applied for a range of more complicated curves too. The wall that had stood between the two disciplines since Ancient Greek times finally seemed to be crumbling. “Weil’s proof marks the beginning of the science with the most un-Aristotelian name of arithmetic geometry,” says Michael Harris of Columbia University in New York.

The Riemann Hypothesis: The million-dollar question

The prime numbers are the atoms of the number system, integers indivisible into smaller whole numbers other than one. There are an infinite number of them and there is no discernible pattern to their appearance along the number line. But their frequency can be measured – and the Riemann hypothesis, formulated by Bernhard Riemann in 1859, predicts that this frequency follows a simple rule set out by a mathematical expression now known as the Riemann zeta function.

Since then, the validity of Riemann’s hypothesis has been verified numerically for the first 10 trillion non-trivial zeros of the zeta function, but an absolute proof has yet to emerge. As a mark of the problem’s importance, it was included in the list of seven Millennium Problems set by the Clay Mathematics Institute in New Hampshire in 2000. Any mathematician who can tame it stands to win $1 million.
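The claim that the primes’ frequency can be measured is easy to check numerically. The prime number theorem – the headline result tied to the zeta function – says the count of primes up to x approaches x/ln x, and a simple sieve shows the agreement improving (our illustration, with standard textbook values):

```python
import math

def prime_count(x):
    """Count the primes up to x with a basic Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"   # 0 and 1 are not prime
    for i in range(2, math.isqrt(x) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, x + 1, i)))
    return sum(sieve)

# The counts hug x/ln(x) ever more closely as x grows.
for x in (10**3, 10**5, 10**7):
    print(x, prime_count(x), round(x / math.log(x)))
```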

In the post-war years, in the more comfortable setting of the University of Chicago, Weil tried to apply his insight to the broader riddle of the primes, without success. The torch was taken up by Alexander Grothendieck, a mathematician ranked as one of the greatest of the 20th century. In the 1960s, he redefined arithmetic geometry.

Among other innovations, Grothendieck gave the set of whole numbers what he called a “spectrum”, written Spec(Z) for short. The points of this undrawable geometrical entity were intimately connected to the prime numbers. If you could ever work out its overall shape, you might gain insights into the prime numbers’ distribution. You would have built a bridge between arithmetic and geometry that ran straight through the Riemann hypothesis.

The shape Grothendieck was seeking for Spec(Z) was entirely different from any geometrical form we might be familiar with: Euclid’s circles and triangles, or Descartes’s parabolas and ellipses drawn on graph paper. In a Euclidean or Cartesian plane, a point is just a dot on a flat surface, says Harris, “but a Grothendieck point is more like a way of thinking about the plane”. It encompasses all the potential uses to which a plane could be put, such as the possibility of drawing a triangle or an ellipse on its surface, or even wrapping it map-like around a sphere.

If that leaves you lost, you are in good company. Even Grothendieck didn’t manage to work out the geometry of Spec(Z), let alone solve the Riemann hypothesis. That’s where Peter Scholze enters the story.


Born in Dresden in what was then East Germany in 1987, Scholze is currently, at the age of 30, a professor at the University of Bonn. He laid the first bricks for his bridge linking arithmetic and geometry in his PhD dissertation, published in 2012 when he was 24. In it, he introduced an extension of Grothendieck-style geometry, which he termed perfectoid geometry. His construction is built on a system of numbers known as the p-adics that are intimately connected with the prime numbers (see “The p-adics: A different way of doing numbers”). The key point is that in Scholze’s perfectoid geometry, a prime number, represented by its associated p-adics, can be made to behave like a variable in an equation, allowing geometrical methods to be applied in an arithmetical setting.

It’s not easy to explain much more. Scholze’s innovation represents “one of the most difficult notions ever introduced in arithmetic geometry, which has a long tradition of difficult notions”, says Harris. Even the majority of working mathematicians find most of it unintelligible, he adds.

Be that as it may, in the past few years, Scholze and a few initiates have used the approach to solve or clarify many problems in arithmetic geometry, to great acclaim. “He’s really unique as a mathematician,” says Caraiani, who has been collaborating with him. “It’s very exciting to be a mathematician working in the same field.”

This August, the world’s mathematicians are set to gather in Rio de Janeiro, Brazil, for their latest international congress, a jamboree held every four years. A centrepiece of the event is the awarding of the Fields medals. Up to four of these awards are given each time to mathematicians under the age of 40, and this time round there is one name everyone expects to be on the list. “I suspect the only way he can escape getting a Fields medal this year is if the committee decides he’s young enough to wait another four years,” says Marcus du Sautoy at the University of Oxford.

 

Peter Scholze, 30, looks like a shoo-in for mathematics’s highest accolade this summer

With so many grand vistas opening up, the question of Spec(Z) and the Riemann hypothesis almost becomes a sideshow. But Scholze’s new methods have allowed him to study the geometry, in the sense Grothendieck pioneered, that you would see if you examined the curve Spec(Z) under a microscope around the point corresponding to a prime number p. That is still a long way from understanding the curve as a whole, or proving the Riemann hypothesis, but his work has given mathematicians hope that this distant goal might yet be reached. “Even this is a huge breakthrough,” says Caraiani.

Scholze’s perfectoid spaces have enabled bridges to be built in entirely different directions, too. A half-century ago, in 1967, the then 30-year-old Princeton mathematician Robert Langlands wrote a tentative letter to Weil outlining a grand new idea. “If you are willing to read it as pure speculation I would appreciate that,” he wrote. “If not – I am sure you have a waste basket handy.”

In his letter, Langlands suggested that two entirely distinct branches of mathematics, number theory and harmonic analysis, might be related. It contained the seeds of what became known as the Langlands program, a vastly influential series of conjectures some mathematicians have taken to calling a grand unified theory capable of linking the three core mathematical disciplines: arithmetic, geometry and analysis, a broad field that we encounter in school in the form of calculus. Hundreds of mathematicians around the world, including Scholze, are committed to its completion.

The full slate of Langlands conjectures is no more likely than the original Riemann hypothesis to be proved soon. But spectacular discoveries could lie in store: Fermat’s last theorem, which took 350 years to prove before the British mathematician Andrew Wiles finally did so in 1994, represents just one particular consequence of its conjectures. Recently, the French mathematician Laurent Fargues proposed a way to build on Scholze’s work to understand aspects of the Langlands program concerned with p-adics. It is rumoured that a partial solution could appear in time for the Rio meeting.

In March, Langlands won the other great mathematical award, the Abel prize, for his lifetime’s work. “It took a long time for the importance of Langlands’s ideas to be recognised,” says Caraiani, “and they were overdue for a major award.” Scholze seems unlikely to have to wait so long.

The p-adics: A different way of doing numbers

Key to the latest work in unifying arithmetic and geometry are p-adic numbers.

These are an alternative way of representing numbers in terms of any given prime number p. To make a p-adic number from any positive integer, for example, you write that number in base p, and reverse it. So to write 20 in 2-adic form, say, you take its binary, or base-2, representation – 10100 – and write it backwards, 00101. Similarly 20’s 3-adic equivalent is 202, and its 5-adic equivalent is 04.
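The digit-reversal recipe is mechanical enough to code directly (a toy for the finite strings shown here; genuine p-adic numbers can carry infinitely many digits):

```python
def reversed_base_p(n, p):
    """Write n in base p and reverse it, as described above.

    Emitting the least significant digit first produces the
    reversed string directly.
    """
    digits = ""
    while n:
        digits += str(n % p)
        n //= p
    return digits or "0"

print(reversed_base_p(20, 2))  # 00101, matching the text
print(reversed_base_p(20, 3))  # 202
print(reversed_base_p(20, 5))  # 04
```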

The rules for manipulating p-adics are a little different, too. Most notably, numbers become closer as their difference grows more divisible by whatever p is. In the 5-adic numbers, for example, the equivalents of 11 and 36 are very close because their difference is divisible by 5, whereas the equivalents of 10 and 11 are further apart.
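That closeness rule is the p-adic distance, and for integers it takes only a few lines to compute (the standard definition, spelled out here because the text only gestures at it):

```python
def p_adic_distance(a, b, p):
    """|a - b|_p = p**(-v), where p**v is the largest power of p
    dividing a - b: the more divisible the difference, the closer."""
    diff = abs(a - b)
    if diff == 0:
        return 0.0
    v = 0
    while diff % p == 0:
        diff //= p
        v += 1
    return p ** -v

print(p_adic_distance(11, 36, 5))  # 0.04: 36 - 11 = 25 = 5**2, very close
print(p_adic_distance(10, 11, 5))  # 1.0: difference 1, not divisible by 5
```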

For decades after their invention in the 1890s, the p-adics were just a pretty mathematical toy: fun to play with, but of no practical use. But in 1920, the German mathematician Helmut Hasse came across the concept in a pamphlet in a second-hand bookshop, and became fascinated. He realised that the p-adics provided a way of harnessing the unfactorisability of the primes – the fact they can’t be divided by other numbers – that turned into a shortcut to solving complicated proofs.

Since then, p-adics have played a pivotal part in the branch of maths called number theory. When Andrew Wiles proved Fermat’s infamous last theorem (that the equation x^n + y^n = z^n has no solutions when x, y and z are positive integers and n is an integer greater than 2) in the early 1990s, practically every step in the proof involved p-adic numbers.

  • Answers: Zoe will be three times as old as she is now. The farmers should draw a line across the field that connects the centre points of the field and the crop.

This article appeared in print under the headline “The shape of numbers”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Gilead Amit*


Infinity War: The Ongoing Battle Over The World’s Hardest Maths Proof

Is there an error in there somewhere?

It’s the stuff of Hollywood. Somebody somewhere is surely selling the movie rights to what’s become the biggest spat in maths: a misunderstood genius, a 500-page proof almost nobody can understand and a supporting cast squabbling over what it all means. At stake: nothing less than the future of pure mathematics.

In 2012, Shinichi Mochizuki at Kyoto University in Japan produced a proof of a long-standing problem called the ABC conjecture. Six years later the jury is still out on whether it’s correct. But in a new twist, Peter Scholze at the University of Bonn – who was awarded the Fields Medal, the highest honour in maths, in August – and Jakob Stix at Goethe University Frankfurt – who is an expert in the type of maths used by Mochizuki – claim to have found an error at the heart of Mochizuki’s proof.

Roll credits? Not so fast. The pair’s reputation means that their claim is a serious blow for Mochizuki. And a handful of other mathematicians claim to have lost the thread of the proof at the same point Scholze and Stix say there is an error. But there is still room for dispute.

a + b = c?

The ABC conjecture was first proposed in the 1980s and concerns a fundamental property of numbers, based around the simple equation a + b = c. For a long time, mathematicians believed that the conjecture was true but nobody had ever been able to prove it.
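For the curious: the conjecture relates c to the “radical” of a×b×c – the product of the distinct primes dividing the three numbers – for coprime triples with a + b = c. Roughly speaking, c should only rarely be much larger than that radical. The article doesn’t spell this out, so treat the sketch below as background rather than part of the news:

```python
from math import gcd

def radical(n):
    """Product of the distinct prime factors of n."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * (n if n > 1 else 1)

# "High quality" triples, where c exceeds rad(abc), are the rare
# events the conjecture constrains. 1 + 8 = 9 is the classic example:
a, b, c = 1, 8, 9
assert gcd(a, b) == 1 and a + b == c
print(radical(a * b * c), c)  # rad(72) = 6, yet c = 9 > 6
```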

To tackle the problem, Mochizuki had to invent a fiendish type of maths called Inter-universal Teichmüller (IUT) theory. In an effort to understand IUT better, Scholze and Stix spent a week with Mochizuki in Tokyo in March. By the end of the week, they claim to have found an error.

The alleged flaw comes in Corollary 3.12, which many see as the crux of the proof. This section involves measuring an equivalence between different mathematical objects. In effect, Scholze and Stix claim that Mochizuki changes the length of the measuring stick in the middle of the process.

No proof

“We came to the conclusion that there is no proof,” they write in their report, which was posted online on 20 September.

But Ivan Fesenko at the University of Nottingham, UK, who says he is one of only 15 people around the world who actually understand Mochizuki’s theory, thinks Scholze and Stix are jumping the gun. “They spent much less time than all of us who have been studying this for many years,” says Fesenko.

Mochizuki has tried to help others understand his work, taking part in seminars and answering questions. Mochizuki was even the one who posted Scholze and Stix’s critical report. “We have this paradoxical situation in which the victim has published the report of the villain,” says Fesenko with a laugh. “This is an unprecedented event in mathematics.”

So is the proof wrong or just badly explained? Fesenko thinks that the six-year dispute exposes something rotten at the heart of pure mathematics. These days mathematicians work in very narrow niches, he says. “People just do not understand what the mathematician in the next office to you is doing.”

This means that mathematicians will increasingly have to accept others’ proofs without actually understanding them – something Fesenko describes as a fundamental problem for the future development of mathematics.

This suggests the story of Mochizuki’s proof may forever lack a satisfactory ending – becoming a war between mathematicians that is doomed to spiral into infinity. “My honest answer is that we will never have consensus about it,” says Fesenko.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Douglas Heaven*

 


Hot And Bothered: The Uncertain Mathematics Of Global Warming

Uncertainty exists – but that’s no excuse for a lack of action.

These are painful times for those hoping to see an international consensus and substantive action on global warming.

In the US, Republican presidential front-runner Mitt Romney said in June 2011: “The world is getting warmer” and “humans have contributed” but in October 2011 he backtracked to: “My view is that we don’t know what’s causing climate change on this planet.”

His Republican challenger Rick Santorum added: “We have learned to be sceptical of ‘scientific’ claims, particularly those at war with our common sense” and Rick Perry, who suspended his campaign to become the Republican presidential candidate last month, stated flatly: “It’s all one contrived phony mess that is falling apart under its own weight.”

Meanwhile, the scientific consensus has moved in the opposite direction. In a study published in October 2011, 97% of climate scientists surveyed agreed global temperatures have risen over the past 100 years. Only 5% disagreed that human activity is a significant cause of global warming.

The study concluded in the following way: “We found disagreement over the future effects of climate change, but not over the existence of anthropogenic global warming.

“Indeed, it is possible that the growing public perception of scientific disagreement over the existence of anthropocentric warming, which was stimulated by press accounts of [the UK’s] ”Climategate“ is actually a misperception of the normal range of disagreements that may persist within a broad scientific consensus.”

More progress has been made in Europe, where the EU has established targets to reduce emissions by 20% (from 1990 levels) by 2020. The UK, which has been beset by similar denial movements, was nonetheless able to establish, as a legally binding target, an 80% reduction by 2050 and is a world leader on abatement.

In Australia, any prospect for consensus was lost when Tony Abbott used opposition to the Labor government’s proposed carbon market to replace Malcolm Turnbull as leader of the Federal Opposition in late 2009.

It used to be possible to hear right-wing politicians in Australia or the USA echo the Democratic congressman Henry Waxman who said last year:

“If my doctor told me I had cancer, I wouldn’t scour the country to find someone to tell me that I don’t need to worry about it.”

But such rationality has largely left the debate in both the US and Oz. In Australia, a reformulated carbon tax policy was enacted in November only after a highly partisan debate.

In Canada, the debate is a tad more balanced. The centre-right Liberal government in British Columbia passed the first carbon tax in North America in 2008, but the governing Federal Conservative party now offers a reliable “anti-Kyoto” partnership with Washington.

Overviews of the evidence for global warming, together with responses to common questions, are available from various sources, including:

  • Seven Answers to Climate Contrarian Nonsense, in Scientific American
  • Climate change: A Guide for the Perplexed, in New Scientist
  • Cooling the Warming Debate: Major New Analysis Confirms That Global Warming Is Real, in Science Daily
  • Remind me again: how does climate change work?, on The Conversation

It should be acknowledged in these analyses that all projections are based on mathematical models with a significant level of uncertainty regarding highly complex and only partially understood systems.

As 2011 Australian Nobel-Prize-winner Brian Schmidt explained while addressing a National Forum on Mathematical Education:

“Climate models have uncertainty and the earth has natural variation … which not only varies year to year, but correlates decade to decade and even century to century. It is really hard to design a figure that shows this in a fair way — our brain cannot deal with the correlations easily.

“But we do have mathematical ways of dealing with this problem. The Australian academy reports currently indicate that the models with the effects of CO₂ are with 90% statistical certainty better at explaining the data than those without.

“Most of us who work with uncertainty know that 90% statistical uncertainty cannot be easily shown within a figure — it is too hard to see …”

“ … Since predicting the exact effects of climate change is not yet possible, we have to live with uncertainty and take the consensus view that warming can cover a wide range of possibilities, and that the view might change as we learn more.”

But uncertainty is no excuse for inaction. The proposed counter-measures (e.g. infrastructure renewal and modernisation, large-scale solar and wind power, better soil remediation and water management, not to mention carbon taxation) are affordable and most can be justified on their own merits, while the worst-case scenario — do nothing while the oceans rise and the climate changes wildly — is unthinkable.

Some in the first world protest that any green energy efforts are dwarfed by expanding energy consumption in China and elsewhere. Sure, China’s future energy needs are prodigious, but China also now leads the world in green energy investment.

By blaming others and focusing the debate on the level of human responsibility for warming and on the accuracy of predictions, the deniers have managed to derail long-term action in favour of short-term economic policies.

Who in the scientific community is promoting the denial of global warming? As it turns out, the leading figures in this movement have ties to conservative research institutes funded mostly by large corporations, and have a history of opposing the scientific consensus on issues such as tobacco and acid rain.

What’s more, those who lead the global warming denial movement – along with creationists, intelligent design writers and the “mathematicians” who flood our email inboxes with claims that pi is rational or other similar nonsense – are operating well outside the established boundaries of peer-reviewed science.

Austrian-born American physicist Fred Singer, arguably the leading figure of the denial movement, has only six peer-reviewed publications in the climate science field, and none since 1997.

After all, when issues such as these are “debated” in any setting other than a peer-reviewed journal or conference, one must ask: “If the author really has a solid argument, why isn’t he or she back in the office furiously writing up this material for submission to a leading journal, thereby assuring worldwide fame and glory, not to mention influence?”

In most cases, those who attempt to grab public attention through other means are themselves aware they are short-circuiting the normal process, and that they do not yet have the sort of solid data and airtight arguments that could withstand the withering scrutiny of scientific peer review.

When they press their views in public to a populace that does not understand how the scientific enterprise operates, they are being disingenuous.

With regards to claims scientists are engaged in a “conspiracy” to hide the “truth” on an issue such as global warming or evolution, one should ask how a secret “conspiracy” could be maintained in a worldwide, multicultural community of hundreds of thousands of competitive researchers.

As Benjamin Franklin wrote in his Poor Richard’s Almanac: “Three can keep a secret, provided two of them are dead.” Or as one of your present authors quipped, tongue-in-cheek, in response to a state legislator who was skeptical of evolution: “You have no idea how humiliating this is to me — there is a secret conspiracy among leading scientists, but no-one deemed me important enough to be included!”

There’s another way to think about such claims: we have tens of thousands of senior scientists in their late fifties or early sixties who have seen their retirement savings decimated by the recent stock market plunge. These are scientists who now wonder if the day will ever come when they are financially well-off enough to do their research without the constant stress and distraction of applying for grants (the majority of which are never funded).

All one of these scientists has to do to garner both worldwide fame and considerable fortune (through book contracts, the lecture circuit and TV deals) is to call a news conference and expose “the truth”. So why isn’t this happening?

The system of peer-reviewed journals and conferences sponsored by major professional societies is the only proper forum for the presentation and debate of new ideas, in any field of science or mathematics.

It has been stunningly successful: errors have been uncovered, fraud has been rooted out and bogus scientific claims (such as the 1903 N-ray claim, the 1989 cold fusion claim, and the more-recent assertion of an autism-vaccination link) have been debunked.

This all occurs with a level of reliability and at a speed that is hard to imagine in other human endeavours. Those who attempt to short-circuit this system are doing potentially irreparable harm to the integrity of the system.

They may enrich themselves or their friends, but they are doing grievous damage to society at large.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jonathan Borwein (Jon) and David H. Bailey*

 


Good at Sudoku? Here’s Some You’ll Never Complete

There’s far more to the popular maths puzzle than putting numbers in a box.

Last month, a team led by Gary McGuire from University College Dublin in Ireland made an announcement: they had proven you can’t have a solvable Sudoku puzzle with fewer than 17 numbers already filled in.

Unlike most mathematical announcements, this was quickly picked up by the popular scientific media. Within a few days, the new finding had been announced in Nature and other outlets.

So where did this problem come from and why is its resolution interesting?

As you probably know, the aim of a Sudoku puzzle is to complete a partially-filled nine-by-nine grid of numbers. There are some guidelines: the numbers one to nine must appear exactly once each in every row, column and three-by-three sub-grid.
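Those rules translate directly into code. Here is a minimal checker for a completed grid (our sketch; deciding whether a partially filled puzzle has a unique solution is the much harder task discussed below):

```python
def valid_solution(grid):
    """Check a completed 9x9 grid against the Sudoku rules: every
    row, column and 3x3 sub-grid must be a permutation of 1..9."""
    target = set(range(1, 10))
    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [
        [grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)]
        for br in range(0, 9, 3)
        for bc in range(0, 9, 3)
    ]
    return all(set(unit) == target for unit in rows + cols + boxes)

# A classic cyclic construction yields a valid grid to test against.
grid = [[(3 * (r % 3) + r // 3 + c) % 9 + 1 for c in range(9)]
        for r in range(9)]
print(valid_solution(grid))  # True
```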

As with a crossword, a valid Sudoku puzzle must have a unique solution. There’s only one way to go from the initial configuration (with some numbers already filled in) to a completed grid.

Newspapers often grade their puzzles as easy, medium or hard, which will depend on how easy it is at every stage of solving the puzzle to fill in the “next” number. While a puzzle with a huge number of initial clues will usually be easy, it is not necessarily the case that a puzzle with few initial clues is difficult.

Reckon you can complete a 17-clue Sudoku puzzle? (answer below) Gordon Royle

When Sudoku-mania swept the globe in the mid-2000s, many mathematicians, programmers and computer scientists – amateur and professional – started to investigate Sudoku itself. They were less interested in solving individual puzzles, and more focused on asking and answering mathematical and/or computational questions about the entire universe of Sudoku puzzles and solutions.

As a mathematician specialising in the area of combinatorics (which can very loosely be defined as the mathematics of counting configurations and patterns), I was drawn to combinatorial questions about Sudoku.

I was particularly interested in the question of the smallest number of clues possible in a valid puzzle (that is, a puzzle with a unique solution).

In early 2005, I found a handful of 17-clue puzzles on a long-since forgotten Japanese-language website. By slightly altering these initial puzzles, I found a few more, then more, and gradually built up a “library” of 17-clue Sudoku puzzles which I made available online at the time.

Other people started to send me their 17-clue puzzles and I added any new ones to the list until, after a few years, I had collected more than 49,000 different 17-clue Sudoku puzzles.

By this time, new ones were few and far between, and I was convinced we had found almost all of the 17-clue puzzles. I was also convinced there was no 16-clue puzzle. I thought that demonstrating this would either require some new theoretical insight or clever programming combined with massive computational power, or both.

Either way, I thought proving the non-existence of a 16-clue puzzle was likely to be too difficult a challenge.

The key to McGuire’s approach was to tackle the problem indirectly. The total number of essentially different completed grids (that is, completely filled-in grids, once symmetries are taken into account) is astronomical – 5,472,730,538 – and trying to test each of these to see if any choice of 16 cells from the completed grid forms a valid puzzle is far too time-consuming.

Instead, McGuire and colleagues used a different, indirect approach.

An “unavoidable set” in a completed Sudoku grid is a set of cells whose entries can be rearranged to leave another valid completed Sudoku grid. For a puzzle to be uniquely completable, its clues must include at least one entry from every unavoidable set.

If a completed grid contains the ten-clue configuration in the left picture, then any valid Sudoku puzzle must contain at least one of those ten clues. If it did not, then in any completed puzzle, those ten positions could either contain the left-hand configuration or the right-hand configuration and so the solution would not be unique.


While finding all the unavoidable sets in a given grid is difficult, it’s only necessary to find enough unavoidable sets to show that no 16 clues can “hit” them all. In the process of resolving this question, McGuire’s team developed new techniques for solving the “hitting set” problem.

It’s a problem that has many other applications – any situation in which a small set of resources must be allocated while still ensuring that all needs are met by at least one of the selected resources (i.e. “hit”) can be modelled as a hitting set problem.
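Stripped of all the engineering, the core question – can some set of 16 clues hit every unavoidable set? – is a bounded search, and a naive version fits in a dozen lines (a sketch of the idea only, nothing like the optimised software McGuire’s team built):

```python
def has_hitting_set(sets, k, chosen=frozenset()):
    """Can at most k elements 'hit' (intersect) every set in `sets`?

    Branch on the first set not yet hit: any valid hitting set must
    include one of its elements, so the search tree has depth at
    most k, keeping the search bounded.
    """
    unhit = [s for s in sets if not (s & chosen)]
    if not unhit:
        return True   # every set already contains a chosen element
    if k == 0:
        return False  # budget exhausted with sets still unhit
    return any(has_hitting_set(sets, k - 1, chosen | {x}) for x in unhit[0])

# Toy unavoidable sets: two clues suffice here, one does not.
unavoidable = [{1, 2}, {2, 3}, {4, 5}]
print(has_hitting_set(unavoidable, 2))  # True, e.g. {2, 4}
print(has_hitting_set(unavoidable, 1))  # False
```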

Once the theory and software were in place, it was then a matter of running the programs for each of the 5.5 billion completed grids. As you can imagine, this required substantial computing power.

After 7 million core-CPU hours on a supercomputer (the equivalent of a single computer running for 7 million hours) and a year of actual elapsed time, the result was announced a few weeks ago, on New Year’s Day.

So is it correct?

The results of any huge computation should be evaluated with some caution, if not outright suspicion, especially when the answer is simply “no, doesn’t exist”, because there are many possible sources of error.

But in this case, I feel the result is far more likely to be correct than otherwise, and I expect it to be independently verified before too long. In addition, McGuire’s team built on many different ideas, discussions and computer programs that were thrashed out between interested contributors to various online forums devoted to the mathematics of Sudoku. In this respect, many of the basic components of their work have already been thoroughly tested.


And so back to the question: why is the resolution of this problem interesting? And is it important?

Certainly, knowing that the smallest Sudoku puzzles have 17 clues is not in itself important. But the immense popularity of Sudoku meant that this question was popularised in a way that many similar questions have never been, and so it took on a special role as a “challenge question” testing the limits of human knowledge.

The school students to whom I often give outreach talks have no real concept of the limitations of computers and mathematics. In my past talks, these students were almost always astonished to learn that the answer to such a simple question was just not known.

And now, in my future outreach talks, I will describe how online collaboration, theoretical development and significant computational power were combined to solve this problem, and how this process promises to play an increasing role in the future development of mathematics.

 

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Gordon Royle*

 


Calls For a Posthumous Pardon … But Who Was Alan Turing?

Momentum is gathering behind calls to pardon the father of computer science.

You may have read that the British Government is being petitioned to grant a posthumous pardon to one of the world’s greatest mathematicians and most successful codebreakers, Alan Turing. You may also have read that Turing was convicted of gross indecency in 1952 and died tragically two years later.

But who, exactly, was he?

Born in London in 1912, Turing helped lay the foundations of the “information age” we live in.

He did his first degree at King’s College, Cambridge, and then became a Fellow there. His first big contribution was his development of a mathematical model of computation in 1936. This became known as the Turing Machine.

It was not the first time a computer had been envisaged: that distinction belonged to Charles Babbage, a 19th century mathematician who designed a computer based on mechanical technology and built parts of it (some of which may be seen at the Science Museum in London or Powerhouse Museum in Sydney, for example).

But Babbage’s design was necessarily complicated, as he aimed for a working device using specific technology. Turing’s design was independent of any particular technology, and was not intended to be built.


It was very simple, and would be very inefficient and impractical as a device for doing real computations. But its simplicity meant it could be used to do mathematical reasoning about computation.

Turing used his abstract machines to investigate what kinds of things could be computed. He found some tasks which, although perfectly well defined and mathematically precise, are uncomputable. The first of these is known as the halting problem, which asks, for any given computation, whether it will ever stop. Turing showed that this was uncomputable: there is no systematic method that always gives the right answer.
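The nub of Turing’s argument can be sketched in a few lines of modern code. The following Python fragment (a sketch of the diagonal argument using hypothetical names, not anything Turing wrote) supposes a perfect halting tester exists and derives a contradiction:

```python
# Suppose, for contradiction, someone supplied a perfect tester:
#   halts(f, x) is True if the call f(x) eventually stops,
#   and False if it would run forever.

def halts(f, x):
    ...  # stand-in only: Turing proved no correct implementation exists

def trouble(f):
    # Do the opposite of whatever halts() predicts about f applied to itself.
    if halts(f, f):
        while True:   # predicted to halt, so loop forever
            pass
    else:
        return        # predicted to loop, so halt at once

# Now consider trouble(trouble). If halts(trouble, trouble) returns True,
# then trouble(trouble) loops forever; if it returns False, trouble(trouble)
# halts. Either way halts() answers wrongly, so no correct halts() can exist.
```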

So, if you have ever wanted a program that can run on your laptop and test all your other software to determine which of them might cause your laptop to “hang” or get stuck in a never-ending loop, the bad news is such a comprehensive testing program cannot be written.

Uncomputability is not confined to questions about the behaviour of computer programs. Since Turing’s work, many problems in mainstream mathematics have been found to be uncomputable. For example, the Russian mathematician and computer scientist, Yuri Matiyasevich, showed in 1970 that determining if a polynomial equation with several variables has a solution consisting only of whole numbers is also an uncomputable problem.

Turing machines have been used to define measures of the efficiency of computations. They underpin formal statements of the P vs NP problem, one of the Millennium Prize problems.

Another important feature of Turing’s model is its capacity to treat programs as data. This means the programs that tell computers what to do can themselves, after being represented in symbolic form, be given as input to other programs. Turing Machines that can take any program as input, and run that program on some input data, are called Universal Turing Machines.
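A toy Python simulator makes the “programs as data” idea concrete (this is an illustrative sketch, not a historical reconstruction): the transition table defining a machine is ordinary data handed to the simulator, which is exactly the move a Universal Turing Machine makes.

```python
def run(transitions, tape, state="start", blank="_"):
    """Simulate a Turing machine whose transition table is passed as data."""
    cells = dict(enumerate(tape))        # sparse tape: position -> symbol
    pos = 0
    while state != "halt":
        symbol = cells.get(pos, blank)
        state, write, move = transitions[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# The "program": (state, symbol) -> (next state, symbol to write, head move).
# This illustrative machine flips every bit on the tape, then halts.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run(flip_bits, "10110"))   # prints 01001
```

Feeding a different transition table to run() makes it compute a different function without changing the simulator itself – the essence of a stored-program machine.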

Universal Turing Machines are really conceptual precursors of today’s computers, which are stored-program computers in that they can treat programs as data in this sense. The oldest surviving intact stored-program computer in the world is CSIRAC, at Melbourne Museum.

 

CSIRAC was Australia’s first digital computer, and the fourth “stored program” computer in the world.

It seems a mathematical model of computation was an idea whose time had come. In 1936, the year of Turing’s result, another model of computation – the lambda calculus – was published by Alonzo Church of Princeton University. Although Turing and Church took quite different routes, they ended up at the same place, in that the two models give exactly the same notion of computability.

In other words, the classification of tasks into computable and uncomputable is independent of which of these two models is used.

Other models of computation have been proposed, but mostly they seem to lead to the same view of what is and is not computable. The Church-Turing Thesis states that this class of computable functions does indeed capture exactly those things which can be computed in principle (say by a human with unlimited time, paper and ink, who works methodically and makes no mistakes).

It implies Turing Machines give a faithful mathematical model of computation. This is not a formal mathematical result, but rather a working assumption which is now widely accepted.

Turing went to Princeton and completed his PhD under Church, returning to Britain in 1938.

Early in the Second World War, Turing joined the British codebreaking operation at Bletchley Park, north-west of London. He became one of its most valuable assets. He was known by the nickname “Prof” and was described by colleague Jack Good as “a deep rather than a fast thinker”.


At the time, Germany was using an encryption device known as Enigma for much of its communications. This was widely regarded as completely secure. The British had already obtained an Enigma machine from the Poles and, building on Polish cryptanalytic work, Turing and colleague Gordon Welchman worked out how the Enigma-encrypted messages collected by the British could be decrypted.

Turing designed a machine called the Bombe, named after a Polish ice cream, which worked by testing large numbers of combinations of Enigma machine configurations, in order to help decrypt secret messages. These messages yielded information of incalculable value to the British. Winston Churchill described the Bletchley Park codebreakers as “geese that laid the golden eggs but never cackled”.

In 1945, after the war, Turing joined the National Physical Laboratory (NPL), where he wrote a report on how to construct an electronic computer – this time a general-purpose one, unlike the machines dedicated to cryptanalysis he had helped design at Bletchley Park.

This report led to the construction of an early computer (Pilot ACE) at NPL in 1950. By then, Turing had already moved on to Manchester University, where he worked on the first general-purpose stored-program computer in the world, the Manchester “Baby”.


In their early days, computers were often called “electronic brains”. Turing began to consider whether a computer could be programmed to simulate human intelligence – a question that helped to initiate the field of artificial intelligence and remains a major research challenge today.

A fundamental issue in such research is: how do you know if you have succeeded? What test can you apply to a program to determine if it has intelligence? Turing proposed that a program be deemed intelligent if, in its interaction with a human, the human is unable to detect whether he or she is communicating with another human or a computer program. (The test requires a controlled setting, for example where all communication with the human tester is by typed text.)

His paper on this topic – Computing Machinery and Intelligence – was published in 1950. The artificial intelligence community holds regular competitions to see how good researchers’ programs are at the Turing test.

The honours Turing received during his lifetime included an OBE in 1945 and becoming a Fellow of the Royal Society in 1951.

His wartime contributions remained secret throughout his life and for many years afterwards.

In 1952 he was arrested for homosexuality, which was illegal in Britain at the time. Turing was found guilty and required to undergo “treatment” with drugs. This conviction also meant he lost his security clearance.

In 1954 he ingested some cyanide, probably via an apple, and died. An inquest classified his death as suicide, and this is generally accepted today. But some at the time, including his mother, contended his death was an accidental consequence of poor handling of chemicals during some experiments he was conducting at home in his spare time.


The irony of Turing losing his security clearance – after the advantage his work had given Britain in the war, in extraordinary secrecy – is clear.

The magnitude of what was done to him has become increasingly plain over time, helped by greater availability of information about the work at Bletchley Park and changing social attitudes to homosexuality.

Next year, 2012, will be the centenary of Turing’s birth, with events planned globally to celebrate the man and his contribution. As the centenary year approached, a movement developed to recognise Turing’s contribution and atone for what was done to him. In 2009, British Prime Minister Gordon Brown, responding to a petition, issued a formal apology on behalf of the British government for the way Turing was treated.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Graham Farr*