Want To Win at Gambling? Use Your Head

When you know the numbers, things get a whole lot easier. Roberto Bouza

Gambling in Australia – Some say “punting is a mug’s game”. But is this always true, or can an astute gambler make long-term profits?

Certainly not from casino games. Casinos make profits by paying less than they should on winning bets. A roulette wheel has 37 numbers, so a gambler who bets a dollar has a 1/37 chance of winning and should receive back $37 on a winning number.

But the casino pays only $36.

On average, a gambler loses $1 for every $37 they bet: a loss of 2.7%.

This is the cost of playing the game and it’s the profit the casino makes, often called the “house percentage”.
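You can check the arithmetic yourself. A minimal Python sketch, using the wheel size and payout quoted above, computes the expected return and the house percentage:

# House percentage for a single-number roulette bet.
NUMBERS_ON_WHEEL = 37   # European wheel: 0 to 36
PAYOUT_PER_DOLLAR = 36  # the casino returns $36 on a winning $1 bet

win_probability = 1 / NUMBERS_ON_WHEEL
expected_return = win_probability * PAYOUT_PER_DOLLAR   # about $0.973 per $1 bet
house_percentage = (1 - expected_return) * 100           # about 2.7%

print(f"Expected return per $1 bet: ${expected_return:.3f}")
print(f"House percentage: {house_percentage:.1f}%")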

Houses of all sizes

For casino games such as roulette, Keno and poker machines, the house percentage can be calculated mathematically and, in spite of the many betting systems proposed, it is a fixed number that cannot be changed. No strategy the punter adopts can make the game profitable.

While gamblers may experience short-term lucky streaks, in the long run they will lose this predetermined percentage of their wagers. But a sensible casino gambler should at least be familiar with the house percentages:

Betting the win line at craps at 1.4%, or red or black at roulette at 2.7%, might be a better option than Keno or Lotto with a house percentage of over 40%.

Let’s be clear here: for every $100 bet through Tattslotto or Powerball, the “house” only pays out $60, keeping $40 for itself.

But sports betting is different.

In a horse race, the chance of winning (and hence the price for a winning bet) is determined subjectively, either by the bookmaker or by the weight of money invested by the public.

If 20% of the amount a bookmaker takes on a race is for the favourite, the public is effectively estimating that particular horse’s chance of winning at one in five. But the bookmaker might set the horse’s winning price at $4.50 (for every $1 bet, the punter gets $4.50 back), giving the bookie a house percentage of 10%.

But a trainer, or jockey with inside knowledge (or statistician with a mathematical model based on past data), may estimate this same horse’s chances at one in three. If the savvy punter is correct, then for every $3 bet they average $4.50 return.

A logical punter looks for value – bets that pay more than a fair price as determined by their true probability of winning. There are several reasons why sports betting lends itself to punters seeking value bets.
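The same arithmetic drives the example above. A short Python sketch, using the quoted price and probabilities, shows the bookmaker's margin and the edge the savvy punter believes they have (a sketch only; real markets involve many runners and shifting prices):

# Bookmaker's percentage and punter's edge for the example above.
price = 4.50            # decimal odds: total return per $1 staked on a win

public_chance = 1 / 5   # chance implied by the public's money
true_chance = 1 / 3     # the savvy punter's own estimate

house_percentage = (1 - public_chance * price) * 100   # 10% in the bookie's favour
punter_edge = (true_chance * price - 1) * 100          # 50 cents profit per $1, on average

print(f"Bookmaker's margin on this horse: {house_percentage:.0f}%")
print(f"Punter's expected profit per $1 staked: {punter_edge:.0f} cents")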

A sporting chance

In general, more outcomes in a game allow for a higher house percentage. With two even outcomes (betting on a head or tail with a single coin toss, say), a fair price would be $2.

The operator might be able to pay out as little as $1.90, giving a house percentage of 5%, but anything less than this would probably see little interest from gamblers.

But a Keno game with 20 million outcomes might only pay $1 million for a winning $1 bet, rather than a fair $20,000,000. A payout of $1 million gives a staggering house percentage of 95%.

Traditionally, sports betting was restricted to horse, harness and dog racing – events with several outcomes that allowed house percentages of around 15%-20%.

With the extension into many other team and individual sports, betting on which of the two participants would win reduced a bookmaker’s take to as little as 3%-4%.

Competition reduces this further. Originally, only the state-run totalisator (an automated system which, like Tattslotto, determined the winning prices after the event, thus always ensuring the legislated house percentage) and a handful of on-course bookmakers were allowed to offer bets on horse racing, whereas countless internet operators now compete.

Betfair even allows punters to bet against each other, effectively creating millions of “bookmakers”.

Head or heart

Many sports punters bet with their hearts, not their heads. This reduces the prices of popular players or teams, thereby increasing the price of their opponents. The low margins and extensive competition even allow punters to sometimes find arbitrage opportunities (where betting on both sides with different bookmakers allows a profit whoever wins).

To overcome their heart, and lack of inside knowledge, many mathematicians create mathematical and statistical models based on past data and results to predict the chances of sports outcomes. They prove the veracity of their models by testing (either on past data or in real time) whether they would profit if the predictions were used for betting.

Academics call the ability to show a profit the “inefficiency of betting markets”, and there are many papers to suggest sports markets are inefficient. Of course the more successful have a vested interest in keeping their methods to themselves and may not publicise their results.

Astute punters can make sports betting profitable in the long term. But the profits made by the plethora of sports bookmakers indicate that most sports punters are not that astute.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Stephen Clarke*


Physicists are Turning to Lewis Carroll For Help With Their Maths

Lewis Carroll was the pen name for mathematician Charles Dodgson

Curiouser and curiouser! Particle physicists could have the author of Alice’s Adventures in Wonderland to thank for simplifying their calculations.

Lewis Carroll, the 19th century children’s author, was the pen name of mathematician Charles Lutwidge Dodgson. While his mathematical contributions mostly proved unremarkable, one particular innovation may have stood the test of time.

Marcel Golz at Humboldt University, Berlin, has built on Dodgson’s work to help simplify the complex equations that arise when physicists try to calculate what happens when particles interact. The hope is that it could allow for speedier and more accurate computations, helping experimentalists at places like the Large Hadron Collider in Geneva, Switzerland, to better design their experiments.

Working out the probabilities of different particle interactions is commonly done using Feynman diagrams, named after the Nobel prize-winning physicist Richard Feynman. These diagrams are a handy visual aid for encoding the complex processes at play, allowing them to be converted into mathematical notation.

One early way of representing these diagrams was known as the parametric representation, which has since lost favour among physicists owing to its apparent complexity. To mathematicians, however, patterns in the resulting equations suggest that it might be possible to dramatically simplify them in ways not possible for more popular representations. These simplifications could in turn enable new insights. “A lot of this part of physics is constrained by how much you can compute,” says Karen Yeats, a mathematician at the University of Waterloo, Canada.

Golz’s work makes use of the Dodgson identity, a mathematical equivalence noted by Dodgson in an 1866 paper, to perform this exact sort of simplification. While much of the connecting mathematics was done by Francis Brown, one of Golz’s tutors at Oxford University, the intellectual lineage can be traced all the way back to Lewis Carroll. “It’s kind of a nice curiosity,” says Golz. “A nice conversation starter.”
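The identity in question is usually stated as the Desnanot–Jacobi determinant identity, which underpins Dodgson’s “condensation” method for computing determinants. A minimal Python check (numpy assumed available; the matrix is a random example, not one from Golz's work) illustrates it:

import numpy as np

def minor(m, rows, cols):
    """Return m with the given rows and columns deleted."""
    keep_r = [i for i in range(m.shape[0]) if i not in rows]
    keep_c = [j for j in range(m.shape[1]) if j not in cols]
    return m[np.ix_(keep_r, keep_c)]

rng = np.random.default_rng(0)
M = rng.integers(-5, 6, size=(5, 5)).astype(float)
n = M.shape[0] - 1  # index of the last row/column

# Desnanot-Jacobi ("Dodgson") identity:
#   det(M) * det(M with first and last rows and columns removed)
#     = det(M_00) * det(M_nn) - det(M_0n) * det(M_n0),
# where M_ij is M with row i and column j removed.
lhs = np.linalg.det(M) * np.linalg.det(minor(M, {0, n}, {0, n}))
rhs = (np.linalg.det(minor(M, {0}, {0})) * np.linalg.det(minor(M, {n}, {n}))
       - np.linalg.det(minor(M, {0}, {n})) * np.linalg.det(minor(M, {n}, {0})))

print(lhs, rhs)  # the two sides agree, up to floating-point rounding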

In the past, parametric notation was only useful in calculating simplified forms of quantum theory. Thanks to work like Golz’s, these simplifications could be extended to particle behaviour of real interest to experimentalists. “I can say with confidence that these parametric techniques, applied to the right problems, are game-changing,” says Brown.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Gilead Amit*


Mathematicians Find 12,000 Solutions For Fiendish Three-Body Problem

Until recently, working out how three objects can stably orbit each other was nearly impossible, but now mathematicians have found a record number of solutions.

The motion of three objects is more complex than you might think

The question of how three objects can form a stable orbit around each other has troubled mathematicians for more than 300 years, but now researchers have found a record 12,000 orbital arrangements permitted by Isaac Newton’s laws of motion.

While mathematically describing the movement of two orbiting bodies and how each one’s gravity affects the other is relatively simple, the problem becomes vastly more complex once a third object is added. In 2017, researchers found 1223 new solutions to the three-body problem, doubling the number of possibilities then known. Now, Ivan Hristov at Sofia University in Bulgaria and his colleagues have unearthed more than 12,000 further orbits that work.

The team used a supercomputer to run an optimised version of the algorithm used in the 2017 work, discovering 12,392 new solutions. Hristov says that if he repeated the search with even more powerful hardware he could find “five times more”.

All the solutions found by the researchers start with all three bodies being stationary, before entering freefall as they are pulled towards each other by gravity. Their momentum then carries them past each other before they slow down, stop and are attracted together once more. The team found that, assuming there is no friction, this pattern would repeat infinitely.
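The free-fall setup is easy to experiment with numerically, though finding the periodic orbits themselves required a supercomputer and far higher precision. A rough Python sketch (scipy assumed available; the masses and starting triangle are arbitrary choices, not values from the study) integrates three bodies released from rest under Newtonian gravity:

import numpy as np
from scipy.integrate import solve_ivp

G = 1.0                                   # gravitational constant, arbitrary units
masses = np.array([1.0, 1.0, 1.0])
positions = np.array([[-1.0, 0.0],        # an arbitrary starting triangle...
                      [1.0, 0.0],
                      [0.2, 0.8]])
velocities = np.zeros_like(positions)     # ...with all three bodies at rest

def deriv(t, state):
    pos = state[:6].reshape(3, 2)
    vel = state[6:].reshape(3, 2)
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:                    # Newtonian attraction of body j on body i
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return np.concatenate([vel.ravel(), acc.ravel()])

state0 = np.concatenate([positions.ravel(), velocities.ravel()])
sol = solve_ivp(deriv, (0.0, 5.0), state0, rtol=1e-9, atol=1e-9)
print(sol.y[:6, -1].reshape(3, 2))        # positions of the three bodies at t = 5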

Solutions to the three-body problem are of interest to astronomers, as they can describe how any three celestial objects – be they stars, planets or moons – can maintain a stable orbit. But it remains to be seen how stable the new solutions are when the tiny influences of additional, distant bodies and other real-world noise are taken into account.

“Their physical and astronomical relevance will be better known after the study of stability – it’s very important,” says Hristov. “But, nevertheless – stable or unstable – they are of great theoretical interest. They have a very beautiful spatial and temporal structure.”

Juhan Frank at Louisiana State University says that finding so many solutions in a precise set of conditions will be of interest to mathematicians, but of limited application in the real world.

“Most, if not all, require such precise initial conditions that they are probably never realised in nature,” says Frank. “After a complex and yet predictable orbital interaction, such three-body systems tend to break into a binary and an escaping third body, usually the least massive of the three.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Matthew Sparkes*


Is the Universe a Game?

Generations of scientists have compared the universe to a giant, complex game, raising questions about who is doing the playing – and what it would mean to win.

If the universe is a game, then who’s playing it?

The following is an extract from our Lost in Space-Time newsletter. Each month, we hand over the keyboard to a physicist or mathematician to tell you about fascinating ideas from their corner of the universe. You can sign up for Lost in Space-Time for free here.

Is the universe a game? Famed physicist Richard Feynman certainly thought so: “‘The world’ is something like a great chess game being played by the gods, and we are observers of the game.” As we observe, it is our task as scientists to try to work out the rules of the game.

The 17th-century mathematician Gottfried Wilhelm Leibniz also looked on the universe as a game and even funded the foundation of an academy in Berlin dedicated to the study of games: “I strongly approve of the study of games of reason not for their own sake but because they help us to perfect the art of thinking.”

Our species loves playing games, not just as kids but into adulthood. It is believed to have been an important part of evolutionary development – so much so that the cultural theorist Johan Huizinga proposed we should be called Homo ludens, the playing species, rather than Homo sapiens. Some have suggested that once we realised that the universe is controlled by rules, we started developing games as a way to experiment with the consequences of these rules.

Take, for example, one of the very first board games that we created. The Royal Game of Ur dates back to around 2500 BC and was found in the Sumerian city of Ur, part of Mesopotamia. Tetrahedral-shaped dice are used to race five pieces belonging to each player down a shared sequence of 12 squares. One interpretation of the game is that the 12 squares represent the 12 constellations of the zodiac that form a fixed background to the night sky and the five pieces correspond to the five visible planets that the ancients observed moving through the night sky.

But does the universe itself qualify as a game? Defining what actually constitutes a game has been a subject of heated debate. Logician Ludwig Wittgenstein believed that words could not be pinned down by a dictionary definition and only gained their meaning through the way they were used, in a process he called the “language game”. An example of a word that he believed only got its meaning through use rather than definition was “game”. Every time you try to define the word “game”, you wind up including some things that aren’t games and excluding others you meant to include.

Other philosophers have been less defeatist and have tried to identify the qualities that define a game. Everyone, including Wittgenstein, agrees that one common facet of all games is that they are defined by rules. These rules control what you can or can’t do in the game. It is for this reason that as soon as we understood that the universe is controlled by rules that bound its evolution, the idea of the universe as a game took hold.

In his book Man, Play and Games, theorist Roger Caillois proposed five other key traits that define a game: uncertainty, unproductiveness, separateness, imagination and freedom. So how does the universe match up to these other characteristics?

The role of uncertainty is interesting. We enter a game because there is a chance either side will win – if we know in advance how the game will end, it loses all its power. That is why ensuring ongoing uncertainty for as long as possible is a key component in game design.

Polymath Pierre-Simon Laplace famously declared that Isaac Newton’s identification of the laws of motion had removed all uncertainty from the game of the universe: “We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past could be present before its eyes.”

Solved games suffer the same fate. Connect 4 is a solved game in the sense that we now know an algorithm that will always guarantee the first player a win. With perfect play, there is no uncertainty. That is why games of pure strategy sometimes suffer – if one player is much better than their opponent then there is little uncertainty in the outcome. Donald Trump against Garry Kasparov in a game of chess will not be an interesting game.

The revelations of the 20th century, however, have reintroduced the idea of uncertainty back into the rules of the universe. Quantum physics asserts that the outcome of an experiment is not predetermined by its current state. The pieces in the game might head in multiple different directions according to the collapse of the wave function. Despite what Albert Einstein believed, it appears that God is playing a game with dice.

Even if the game were deterministic, the mathematics of chaos theory also implies that players and observers will not be able to know the present state of the game in complete detail and small differences in the current state can result in very different outcomes.

That a game should be unproductive is an interesting quality. If we play a game for money or to teach us something, Caillois believed that the game had become work: a game is “an occasion of pure waste: waste of time, energy, ingenuity, skill”. Unfortunately, unless you believe in some higher power, all evidence points to the ultimate purposelessness of the universe. The universe is not there for a reason. It just is.

The other three qualities that Caillois outlines perhaps apply less to the universe but describe a game as something distinct from the universe, though running parallel to it. A game is separate – it operates outside normal time and space. A game has its own demarcated space in which it is played within a set time limit. It has its own beginning and its own end. A game is a timeout from our universe. It is an escape to a parallel universe.

The fact that a game should have an end is also interesting. There is the concept of an infinite game that philosopher James P. Carse introduced in his book Finite and Infinite Games. You don’t aim to win an infinite game. Winning terminates the game and therefore makes it finite. Instead, the player of the infinite game is tasked with perpetuating the game – making sure it never finishes. Carse concludes his book with the rather cryptic statement, “There is but one infinite game.” One realises that he is referring to the fact that we are all players in the infinite game that is playing out around us, the infinite game that is the universe. Although current physics does posit a final move: the heat death of the universe means that this universe might have an endgame that we can do nothing to avoid.

Caillois’s quality of imagination refers to the idea that games are make-believe. A game consists of creating a second reality that runs in parallel with real life. It is a fictional universe that the players voluntarily summon up independent of the stern reality of the physical universe we are part of.

Finally, Caillois believes that a game demands freedom. Anyone who is forced to play a game is working rather than playing. A game, therefore, connects with another important aspect of human consciousness: our free will.

This raises a question: if the universe is a game, who is it that is playing and what will it mean to win? Are we just pawns in this game rather than players? Some have speculated that our universe is actually a huge simulation. Someone has programmed the rules, input some starting data and has let the simulation run. This is why John Conway’s Game of Life feels closest to the sort of game that the universe might be. In Conway’s game, pixels on an infinite grid are born, live and die according to their environment and the rules of the game. Conway’s success was in creating a set of rules that gave rise to such interesting complexity.
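Conway's rules take only a few lines to write down. A minimal Python sketch (numpy assumed available) runs one generation at a time on a small wrap-around grid; Conway's original game is played on an infinite grid, so the wrap-around is a simplification:

import numpy as np

def life_step(grid):
    """One generation of Conway's Game of Life on a wrap-around grid."""
    # Count the eight neighbours of every cell.
    neighbours = sum(np.roll(np.roll(grid, dx, 0), dy, 1)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or if it is alive now and has exactly 2.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

# A "glider", one of the simplest patterns that moves across the grid.
grid = np.zeros((10, 10), dtype=np.uint8)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
print(grid)   # after four steps the glider has shifted one cell diagonally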

If the universe is a game, then it feels like we too lucked out to find ourselves part of a game that has the perfect balance of simplicity and complexity, chance and strategy, drama and jeopardy to make it interesting. Even when we discover the rules of the game, it promises to be a fascinating match right up to the moment it reaches its endgame.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Marcus Du Sautoy*


Graduate School: Where Grades Don’t Matter

Yesterday I received a disheartening 44/50 on a homework assignment. Okay okay, I know. 88% isn’t bad, but I had turned in my solutions with so much confidence that admittedly, my heart dropped a little (okay, a lot!) when I received the grade. But I quickly had to remind myself, Hey! Grades don’t matter.

The six points were deducted from two problems. (Okay, fine. It was three. But in the third I simply made a hare-brained mistake.) In the first, apparently my answer wasn’t explicit enough. How stingy! I thought. Doesn’t our professor know that this is a standard example from the book? I could solve it in my sleep! But after the prof went over his solution in class, I realized that in all my smugness I never actually understood the nuances of the problem. Oops. You bet I’ll be reviewing his solution again. Lesson learned.

In the second, I had written down my solution in the days before and had checked with a classmate and (yes) the internet to see if I was correct. Unfortunately, the odds were against me two-to-one as both sources agreed with each other but not with me. But I just couldn’t see how I could possibly be wrong! Confident that my errors were truths, I submitted my solution anyway, hoping there would be no consequences. But alas, points were taken off.

Honestly though, is a lower grade such a bad thing? I think not. In both cases, I learned exactly where my understanding of the material went awry. And that’s great! It means that my comprehension of the math is clearer now than it was before (and that the chances of passing my third qualifying exam have just increased. Woo!) And that’s precisely why I’m (still, heh…) in school.

So yes, contrary to what the comic above says, grades do exist in grad school, but – and this is what I think the comic is hinting at – they don’t matter. Your thesis committee members aren’t going to say, “Look, your defense was great, but we can’t grant you your PhD. Remember that one homework/midterm/final grade from three years ago?” (They may not use the word “great” either, but that’s another matter.) Of course, we students should still work hard and put in maximum effort! But the emphasis should not be on how well we perform, but rather how much we learn. Focus on the latter and the former will take care of itself. This is true in both graduate school and college, but the lack of emphasis on grades in grad school really brings it home. And personally, I’m very grateful for it because my brain is freed up to focus on other things like, I don’t know, learning math!

So to all my future imperfect homework scores out there: bring it on.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*


500-Year-Old Maths Problem Turns Out To Apply To Coffee And Clocks

A centuries-old maths problem asks what shape a circle traces out as it rolls along a line. The answer, dubbed a “cycloid”, turns out to have applications in a variety of scientific fields.

Light reflecting off the round rim creates a mathematically significant shape in this coffee cup

Sarah Hart

The artist Paul Klee famously described drawing as “taking a line for a walk” – but why stop there? Mathematicians have been wondering for five centuries what happens when you take circles and other curves for a walk. Let me tell you about this fascinating story…

A wheel rolling along a road will trace out a series of arches

Imagine a wheel rolling along a road – or, more mathematically, a circle rolling along a line. If you follow the path of a point on that circle, it traces out a series of arches. What exactly is their shape? The first person to give the question serious thought seems to have been Galileo Galilei, who gave the arch-like curve a name – the cycloid. He was fascinated by cycloids, and part of their intriguing mystery was that it seemed impossible to answer the most basic questions we ask about a curve – how long is it and what area does it contain? In this case, what’s the area between the straight line and the arch? Galileo even constructed a cycloid on a sheet of metal, so he could weigh it to get an estimate of the area, but he never managed to solve the problem mathematically.

Within a few years, it seemed like every mathematician in Europe was obsessed with the cycloid. Pierre de Fermat, René Descartes, Marin Mersenne, Isaac Newton and Gottfried Wilhelm Leibniz all studied it. It even brought Blaise Pascal back to mathematics, after he had sworn off it in favour of theology. One night, he had a terrible toothache and, to distract himself from the pain, decided to think about cycloids. It worked – the toothache miraculously disappeared, and naturally Pascal concluded that God must approve of him doing mathematics. He never gave it up again. The statue of Pascal in the Louvre Museum in Paris even shows him with a diagram of a cycloid. The curve became so well known, in fact, that it made its way into several classic works of literature – it gets name-checked in Gulliver’s Travels, Tristram Shandy and Moby-Dick.

The question of the cycloid’s area was first solved in the mid-17th century by Gilles de Roberval, and the answer turned out to be delightfully simple – exactly three times the area of the rolling circle. The first person to determine the length of the cycloid was Christopher Wren, who was an extremely good mathematician, though I hear he also dabbled in architecture. It’s another beautifully simple formula: the length is exactly four times the diameter of the generating circle. The beguiling cycloid was so appealing to mathematicians that it was nicknamed “the Helen of Geometry”.
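Both classical results are easy to confirm numerically today. A short Python sketch, using the standard parametrisation of one arch of a cycloid, x = r(t − sin t), y = r(1 − cos t), estimates the area under the arch and the arch's length:

import numpy as np

r = 1.0
t = np.linspace(0.0, 2.0 * np.pi, 200001)   # one full turn of the rolling circle
x = r * (t - np.sin(t))
y = r * (1.0 - np.cos(t))

area = np.trapz(y, x)                               # area between the arch and the line
length = np.sum(np.hypot(np.diff(x), np.diff(y)))   # arc length of the arch

print(area / (np.pi * r ** 2))   # ~3: three times the rolling circle's area
print(length / (2.0 * r))        # ~4: four times the circle's diameter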

But its beauty wasn’t the only reason for the name. It was responsible for many bitter arguments. When mathematician Evangelista Torricelli independently found the area under the cycloid, Roberval accused him of stealing his work. “Team Roberval” even claimed that Torricelli had died of shame after being unmasked as a plagiarist (though the typhoid he had at the time may have been a contributing factor). Descartes dismissed Fermat’s work on the cycloid as “ridiculous gibberish”. And in response to a challenge from Johann Bernoulli, Isaac Newton grumpily complained about being “teased by foreigners about mathematics”.

An amazing property of the cycloid was discovered by Christiaan Huygens, who designed the first pendulum clock. Pendulums are good for timekeeping because the period of their motion – the time taken for one full swing of the pendulum – is constant, no matter what the angle of release. But in fact, that’s only approximately true – the period does vary slightly. Huygens wondered if he could do better. The end of a pendulum string moves along the arc of a circle, but is there a curved path it could follow so that the bob would reach the bottom of the curve in the same time no matter where it was released? This became known as the “tautochrone problem”. And guess which curve is the solution? An added bonus is its link to the “brachistochrone problem” of finding the curve between any two points along which a particle moving under gravity will descend in the shortest time. There’s no reason at all to think that the same curve could answer both problems, but it does. The solution is the cycloid. It’s a delightful surprise to find it cropping up in situations seemingly so unrelated to where we first encountered it.
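The equal-time claim can be checked with a quick calculation too. A Python sketch (scipy assumed available) uses conservation of energy to compute the descent time to the lowest point of a cycloid from several release points; every answer should come out as π√(r/g):

import numpy as np
from scipy.integrate import quad

g, r = 9.81, 1.0   # gravity and the radius of the generating circle

def descent_time(theta0):
    """Time for a bead released from rest at angle theta0 to slide to the bottom (theta = pi)."""
    def integrand(theta):
        ds = 2.0 * r * np.sin(theta / 2.0)                            # arc-length element
        v = np.sqrt(2.0 * g * r * (np.cos(theta0) - np.cos(theta)))   # speed from energy conservation
        return ds / v   # blows up (integrably) at the release point; quad copes with it
    time, _ = quad(integrand, theta0, np.pi)
    return time

for theta0 in (0.1, 0.5, 1.5, 3.0):
    print(f"released at {theta0:.1f} rad -> {descent_time(theta0):.4f} s")
print(f"predicted tautochrone time: {np.pi * np.sqrt(r / g):.4f} s")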

When you roll a circle along a line, you get a cycloid. But what happens when you roll a line along a circle? This is an instance of a curve called an involute. To make one, you take a point at the end of a line segment and roll that line along the curve so it’s always just touching it (in other words, it’s a tangent). The involute is the curve traced out by that point. For the involute of a circle, imagine unspooling a thread from a cotton reel and following the end of the thread as it moves. The result is a spiralling curve emerging from the circle’s circumference.

When a line rolls along a circle, it produces a curve called an involute

Huygens was the first person to ask about involutes, as part of his attempts to make more accurate clocks. It’s all very well knowing the cycloid is the perfect tautochrone, but how do you get your string to follow a cycloidal path? You need to find a curve whose involute is a cycloid. The miraculous cycloid, it turns out, has the beautiful property that it is its own involute! But those lovely spiralling circle involutes turn out to be extremely useful too.

A circle with many involutes

My favourite application is one Huygens definitely couldn’t have predicted: in the design of a nuclear reactor that produces high-mass elements for scientific research. This is done by smashing neutrons at high speed into lighter elements, to create heavier ones. Within the cylindrical reactor cores, the uranium oxide fuel is sandwiched in thin layers between strips of aluminium, which are then curved to fit into the cylindrical shape. The heat produced by a quadrillion neutrons hurtling around every square centimetre is considerable, so coolant runs between these strips. It’s vital that they must be a constant distance apart all the way along their curved surfaces, to prevent hotspots. That’s where a useful property of circle involutes comes in. If you draw a set of circle involutes starting at equally spaced points on the circumference of a circle, then the distances between them remain constant along the whole of each curve. So, they are the perfect choice for the fuel strips in the reactor core. What’s more, the circle involute is the only curve for which this is true! I just love that a curve first studied in the context of pendulum clocks turns out to solve a key design question for nuclear reactors.

We’ve rolled circles along lines and lines along circles. Clearly the next step is to roll circles along circles. What happens? Here, we have some choices. What size is the rolling circle? And are we rolling along the inside or the outside of the stationary one? The curve made by a circle rolling along the inside of another circle is called a hypocycloid; rolling it along the outside gives you an epicycloid. If you’ve ever played with a Spirograph toy, you’ll almost have drawn hypocycloids. Because your pen is not quite at the rim of the rolling circle, technically you are creating what are called hypotrochoids.
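The standard parametrisations are simple to generate if you want to draw these curves yourself. A small Python sketch produces points on epicycloids and hypocycloids for a fixed circle of radius R and a rolling circle of radius r; with equal radii the epicycloid is the cardioid, and with the rolling radius halved it is the nephroid (the arrays can be passed to any plotting routine):

import numpy as np

def epicycloid(R, r, n=2000):
    """Point on a circle of radius r rolling around the outside of a fixed circle of radius R."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    x = (R + r) * np.cos(t) - r * np.cos((R + r) / r * t)
    y = (R + r) * np.sin(t) - r * np.sin((R + r) / r * t)
    return x, y

def hypocycloid(R, r, n=2000):
    """Point on a circle of radius r rolling around the inside of a fixed circle of radius R."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    x = (R - r) * np.cos(t) + r * np.cos((R - r) / r * t)
    y = (R - r) * np.sin(t) - r * np.sin((R - r) / r * t)
    return x, y

cardioid_x, cardioid_y = epicycloid(1.0, 1.0)    # rolling circle the same size as the fixed one
nephroid_x, nephroid_y = epicycloid(1.0, 0.5)    # rolling circle half the size
spiro_x, spiro_y = hypocycloid(1.0, 0.25)        # a four-cusped hypocycloid, Spirograph-style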

A cardioid (left) and nephroid (right)

Of the epicycloids, the most interesting is the cardioid: the heart-shaped curve resulting when the rolling circle has the same radius as the fixed one. Meanwhile, the kidney-shaped nephroid is produced by a rolling circle half the radius of the fixed one. Cardioids crop up in the most fascinating places. The central region of the Mandelbrot set, a famous fractal, is a cardioid. Sound engineers will be familiar with cardioid microphones, which pick up sound in a cardioid-shaped region. You might also find cardioid-like curves in the light patterns created in coffee cups in some kinds of lighting. If light rays from a fixed source are reflected off a curved mirror, the curve to which each of those reflected rays are tangent will be visible as a concentrated region of light, called a caustic. It turns out that a light source on the circumference of a perfectly circular mirror will result precisely in a cardioid!

Of course, in our coffee cup example, usually the light source isn’t exactly on the rim of the cup, but some way away. If it were very far away, we could assume that the light rays hitting the rim of the cup are parallel. In that situation, it can be shown that the caustic is actually not a cardioid but another epicycloid: the nephroid. Since a strong overhead light is somewhere between these two extremes, the curve we get is usually going to be somewhere between a cardioid and a nephroid. The mathematician Alfréd Rényi once defined a mathematician as “a device for turning coffee into theorems”. That process is nowhere more clearly seen than with our wonderful epicycloids. Check them out if you’re reading this with your morning cuppa!

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Sarah Hart*


Mathematicians Can’t Agree What ‘Equals’ Means, And That’s A Problem

What does “equals” mean? For mathematicians, this simple question has more than one answer, which is causing issues when it comes to using computers to check proofs. The solution might be to tear up the foundations of maths.

When you see “2 + 2 = 4”, what does “=” mean? It turns out that’s a complicated question, because mathematicians can’t agree on the definition of what makes two things equal.

While this argument has been quietly simmering for decades, a recent push to make mathematical proofs checkable by computer programs, called formalisation, has given the argument new significance.

“Mathematicians use equality to mean two different things, and I was fine with that,” says Kevin Buzzard at Imperial College London. “Then I started doing maths on a computer.” Working with computer proof assistants made him realise that mathematicians must now confront what was, until recently, a useful ambiguity, he says – and it could force them to completely redefine the foundations of their subject.

The first definition of equality will be a familiar one. Most mathematicians take it to mean that each side of an equation represents the same mathematical object, which can be proven through a series of logical transformations from one side to the other. While “=”, the equals sign, only emerged in the 16th century, this concept of equality dates back to antiquity.

It was the late 19th century when things began to change, with the development of set theory, which provides the logical foundations for most modern mathematics. Set theory deals with collections, or sets, of mathematical objects, and introduced another definition of equality: if two sets contain the same elements, then they are equal, similar to the original mathematical definition. For example, the sets {1, 2, 3} and {3, 2, 1} are equal, because the order of the elements in a set doesn’t matter.

But as set theory developed, mathematicians started saying that two sets were equal if there was an obvious way to map between them, even if they didn’t contain exactly the same elements, says Buzzard.

To understand why, take the sets {1, 2, 3} and {a, b, c}. Clearly, the elements of each set are different, so the sets aren’t equal. But there are also ways of mapping between the two sets, by identifying each letter with a number. Mathematicians call this an isomorphism. In this case, there are multiple isomorphisms because you have a choice of which number to assign to each letter, but in many cases, there is only one clear choice, called the canonical isomorphism.
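The distinction is easy to see in code. A small Python sketch, using the sets from the example, checks equality of elements and then exhibits one of the possible one-to-one matchings, which is all an isomorphism of bare sets amounts to:

A = {1, 2, 3}
B = {3, 2, 1}
C = {"a", "b", "c"}

print(A == B)   # True: the same elements, and order is irrelevant
print(A == C)   # False: the elements themselves differ

# A and C can still be matched up perfectly, element for element.
# Any such one-to-one pairing is an isomorphism of sets; this is one of the 3! = 6 choices.
iso = {1: "a", 2: "b", 3: "c"}
print(set(iso) == A and set(iso.values()) == C)   # True: iso is a bijection from A to C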

Because a canonical isomorphism of two sets is the only possible way to link them, many mathematicians now take this to mean they are equal, even though it isn’t technically the same concept of equality that most of us are used to. “These sets match up with each other in a completely natural way and mathematicians realised it would be really convenient if we just call those equal as well,” says Buzzard.

Having two definitions for equality is of no real concern to mathematicians when they write papers or give lectures, as the meaning is always clear from the context, but they present problems for computer programs that need strict, precise instructions, says Chris Birkbeck at the University of East Anglia, UK. “We’re finding that we were a little bit sloppy all along, and that maybe we should fix a few things.”

To address this, Buzzard has been investigating the way some mathematicians widely use canonical isomorphism as equality, and the problems this can cause with formal computer proof systems.

In particular, the work of Alexander Grothendieck, one of the leading mathematicians of the 20th century, is currently extremely difficult to formalise. “None of the systems that exist so far capture the way that mathematicians such as Grothendieck use the equal symbol,” says Buzzard.

The problem has its roots in the way mathematicians put together proofs. To begin proving anything, you must first make assumptions called axioms that are taken to be true without proof, providing a logical framework to build upon. Since the early 20th century, mathematicians have settled on a collection of axioms within set theory that provide a firm foundation. This means they don’t generally have to use axioms directly in their day-to-day business, because common tools can be assumed to work correctly – in the same way you probably don’t worry about the inner workings of your kitchen before cooking a recipe.

“As a mathematician, you somehow know well enough what you’re doing that you don’t worry too much about it,” says Birkbeck. That falls down, however, when computers get involved, carrying out maths in a way that is similar to building a kitchen from scratch for every meal. “Once you have a computer checking everything you say, you can’t really be vague at all, you really have to be very precise,” says Birkbeck.

To solve the problem, some mathematicians argue we should just redefine the foundations of mathematics to make canonical isomorphisms and equality one and the same. Then, we can make computer programs work around that. “Isomorphism is equality,” says Thorsten Altenkirch at the University of Nottingham, UK. “I mean, what else? If you cannot distinguish two isomorphic objects, what else would it be? What else would you call this relationship?”

Efforts are already under way to do this in a mathematical field called homotopy type theory, in which traditional equality and canonical isomorphism are defined identically. Rather than trying to contort existing proof assistants to fit canonical isomorphism, says Altenkirch, mathematicians should adopt type theory and use alternative proof assistants that work with it directly.

Buzzard isn’t a fan of this suggestion, having already spent considerable effort using current tools to formalise mathematical proofs that are needed to check more advanced work, such as a proof of Fermat’s last theorem. The axioms of mathematics should be left as they are, rather than adopting type theory, and existing systems should be tweaked instead, he says. “Probably the way to fix it is just to leave mathematicians as they are,” says Buzzard. “It’s very difficult to change mathematicians. You have to make the computer systems better.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Alex Wilkins*


Physicists Figured Out The Ideal Container Size For Pistachio Shells

A simple experiment and mathematical model suggest that when you snack on pistachios, you may need a surprisingly large bowl to accommodate the discarded shells.

Shelling your favourite snack nuts just got a lot easier: physicists have worked out the exact size of bowl to best fit discarded pistachio shells.

Ruben Zakine and Michael Benzaquen at École Polytechnique in Paris often find themselves discussing science in the cafeteria while eating pistachios. Naturally, they began wondering about the mathematics behind storing their snack refuse.

The researchers stuffed 613 pistachios into a cylindrical container to determine “packing density”, or the fraction of space taken up by whole nuts in their shells. Separately, they measured the packing density of the shells alone. In one experimental setup, the researchers poured the shells into a container and let them fall as they may, and in another they shook them into a denser, more efficient configuration.

Without shaking, the shells had about 73 per cent of the original packing density. Shaking decreased this number to 57 per cent. This suggests that, with any pistachio container, an additional half-sized container will hold shell refuse as long as you occasionally shake the container while eating.
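Taken at face value, the rule of thumb is just a ratio of volumes. A tiny Python sketch, under the assumption that the quoted percentages compare the volume occupied by the discarded shells with the volume the unshelled pistachios occupied, reproduces the half-sized-container advice:

# Assumed interpretation of the quoted figures: shell-pile volume as a
# fraction of the volume the original (unshelled) pistachios took up.
shells_poured = 0.73   # shells simply tipped into the container
shells_shaken = 0.57   # shells settled by occasional shaking

bowl_litres = 1.0      # a hypothetical one-litre pistachio bowl
print(f"Shell bowl needed without shaking: {bowl_litres * shells_poured:.2f} L")
print(f"Shell bowl needed with shaking:    {bowl_litres * shells_shaken:.2f} L")
# With shaking, a bowl a little over half the size of the original suffices.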

Zakine and Benzaquen backed up their findings by modelling pistachios as ellipsoids – three-dimensional shapes resembling squashed spheres – and their shells as hollow half-spheres, then calculating their packing densities from mathematical rules. These results confirmed the real-life experiments and suggested that the same ratios would work for other container shapes.

Despite these similarities, the researchers found about a 10 per cent discrepancy between the calculations and the real-life measurements. Zakine says that this is not surprising because pistachios are not perfect ellipsoids and have natural variations in shape. More broadly, it is tricky to calculate how best to pack objects into containers. So far, mathematics researchers have only had luck with doing calculations for spheres, like marbles, and uniform shapes like M&M’s, he says.

Going forward, the researchers want to run more complex calculations on a computer. But for now, they are looking forward to fielding mathematical questions whenever they serve pistachios at dinner parties.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Karmela Padavic-Callaghan*


A Math Blog? Say What?

Yes! I’m writing about math. No! Don’t close your browser window. Hear me out first…

I know very well that math has a bad rap. It’s often taught or thought of as a dry, intimidating, unapproachable, completely boring, who-in-their-right-mind-would-want-to-think-about-this-on-purpose kind of subject. I get it. Math was the last thing on earth I thought I’d study. Seriously.

But my understanding of math has since changed. I used to think it was a mess of equations and formulas, only enjoyed by a small number of masochists. But oh how I was wrong! Mathematics is not just numbers. It is not just strange symbols. And it is certainly not something reserved only for the few elite geniuses of the world.

Mathematics is a language –

a language of ideas, concepts, and notions.

It’s true! Math is a language just like English, French, or Mandarin. And just like some ideas are best communicated in a particular language, other ideas are best communicated “in math.” This is why I’ve started a blog – as an aid in my own pursuit of becoming more proficient at thinking/speaking/reading mathematics.

One of the main challenges I face in this pursuit is the ability to strip away the intimidation factor

– the cryptic symbols, the elaborate vocabulary, the fancy formalities –

and unveil the true meaning of the text at hand. For me, this unveiling comes by reading and rereading, by working through problem after problem, and by writing. Quite often while learning new (and recalling old) mathematics, I have to stop and ask, “What is the text really saying behind all that jargon?” And if I can proceed to write down the idea in English (i.e. in lingo that’s easy on the brain) then that bit of information becomes engrained in my mind. Or at least it gets stored away in my brain somewhere. And if (or when) I forget it, I find that looking at my own handwritten notes conjures up the memory and the blood, sweat, and tears that went into learning that bit of info, and it all comes right back.

So Math3ma is my online repository as I make my way through this journey. Here’s the plan for now: some of the blog posts will be divided into two sections, in keeping with the aforementioned thought process:

And some posts will fall into “The Back Pocket” where I’ll keep little tidbits of math for a rainy day (or, perhaps, an exam). As for the actual content, I’m focusing on material found in the initial years of a graduate math program because, well, passing the qualifying exams is next on my agenda. But I think I’ll include some undergrad material too. And as for future content, who knows? I’m excited to see what Math3ma can turn into.

Thanks for taking the time to peek into my journey as I work to see mathematics for what it really is – a very powerful, very beautiful language inherent in the world all around us!

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*

 


Mathematicians Find a Completely New Way to Write The Number 3

Third time’s a charm: just weeks after cracking an elusive problem involving the number 42, mathematicians have found a solution to an even harder problem for the number 3.

Andrew Booker at Bristol University, UK, and Andrew Sutherland at the Massachusetts Institute of Technology have found a big solution to a maths problem known as the sum of three cubes.

The problem asks whether any integer, or whole number, can be represented as the sum of three cubed numbers.

There were already two known solutions for the number 3, both of which involve small numbers: 1³ + 1³ + 1³ and 4³ + 4³ + (-5)³.
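Those two small solutions are easy to rediscover by brute force; the difficulty is that the third lies far beyond the reach of any naive search. A short Python sketch (the search bound of 10 is an arbitrary choice) scans for x³ + y³ + z³ = k over a small range:

from itertools import product

def small_three_cube_solutions(k, bound=10):
    """All solutions of x**3 + y**3 + z**3 == k with |x|, |y|, |z| <= bound, as sorted triples."""
    hits = set()
    for x, y, z in product(range(-bound, bound + 1), repeat=3):
        if x ** 3 + y ** 3 + z ** 3 == k:
            hits.add(tuple(sorted((x, y, z))))
    return sorted(hits)

print(small_three_cube_solutions(3))   # [(-5, 4, 4), (1, 1, 1)]
print(small_three_cube_solutions(4))   # []: 4 is one of the numbers with no such representation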

But mathematicians have been searching for a third for decades. The solution that Booker and Sutherland found is:

569936821221962380720³ + (-569936821113563493509)³ + (-472715493453327032)³ = 3

Earlier this month, the pair also found a solution to the same problem for 42, which was the last remaining unsolved number less than 100.

To find these solutions, Booker and Sutherland worked with software firm Charity Engine to run an algorithm across the idle computers of half a million volunteers.

For the number 3, the amount of processing time was equivalent to a single computer processor running continuously for 4 million hours, or more than 456 years.

When a number can be expressed as the sum of three cubes, there are infinitely many possible solutions, says Booker. “So there should be infinitely many solutions for three, and we’ve just found the third one,” he says.

There’s a reason the third solution for 3 was so hard to find. “If you look at just the solutions for any one number, they look random,” he says. “We think that if you could get your hands on loads and loads of solutions – of course, that’s not possible, just because the numbers get so huge so quickly – but if you could, there’s kind of a general trend to them: that the digit sizes are growing roughly linearly with the number of solutions you find.”

It turns out that this rate of growth is extremely small for the number 3 – only 114, now the smallest unsolved number, has a smaller rate of growth. In other words, numbers with a slow rate of growth have fewer solutions with a lower number of digits.

The duo also found a solution to the problem for 906. We know for sure that certain numbers, such as 4, 5 and 13, can’t be expressed as the sum of three cubes. There now remain nine unsolved numbers under 1000. Mathematicians think these can be written as the sum of three cubes, but we don’t yet know how.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Donna Lu*