Want To Win at Gambling? Use Your Head

When you know the numbers, things get a whole lot easier. (Image: Roberto Bouza)

Gambling in Australia – Some say “punting is a mug’s game”. But is this always true, or can an astute gambler make long-term profits?

Certainly not from casino games. Casinos make profits by paying less than they should on winning bets. A roulette wheel has 37 numbers, so a gambler who bets a dollar has a 1/37 chance of winning and should receive back $37 on a winning number.

But the casino pays only $36.

On average, a gambler loses $1 for every $37 they bet: a loss of 2.7%.

This is the cost of playing the game and the profit the casino makes, often called the “house percentage”.
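
The same arithmetic is easy to check in a couple of lines. The sketch below is purely illustrative (the function and its inputs are ours, not part of the article); it simply compares the fair return on a $1 single-number bet with the $36 the casino actually pays.

```python
# A rough sketch of the house-percentage calculation for European roulette:
# 37 equally likely numbers, and a winning $1 bet returns $36 in total.

def house_percentage(fair_return, actual_return):
    return (fair_return - actual_return) / fair_return * 100

print(round(house_percentage(37, 36), 1))   # 2.7 (per cent retained by the casino)
```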

Houses of all sizes

For casino games such as roulette, Keno and poker machines, the house percentage can be calculated mathematically and, in spite of many proposed betting systems, it is an immutable number. No betting strategy can make these games profitable for the punter.

While gamblers may experience short-term lucky streaks, in the long run they will lose this predetermined percentage of their wagers. But a sensible casino gambler should at least be familiar with the house percentages:

Betting the win line at craps at 1.4%, or red or black at roulette at 2.7%, might be a better option than Keno or Lotto with a house percentage of over 40%.

Let’s be clear here: for every $100 bet through Tattslotto or Powerball, the “house” only pays out $60, keeping $40 for itself.

But sports betting is different.

In a horse race, the chance of winning (and hence the price for a winning bet) is determined subjectively, either by the bookmaker or by the weight of money invested by the public.

If 20% of the amount a bookmaker takes on a race is for the favourite, the public is effectively estimating that particular horse’s chance of winning at one in five. But the bookmaker might set the horse’s winning price at $4.50 (for every $1 bet, the punter gets $4.50 back), giving the bookie a house percentage of 10%.

But a trainer, or jockey with inside knowledge (or statistician with a mathematical model based on past data), may estimate this same horse’s chances at one in three. If the savvy punter is correct, then for every $3 bet they average $4.50 return.

A logical punter looks for value – bets that pay more than a fair price as determined by their true probability of winning. There are several reasons why sports betting lends itself to punters seeking value bets.
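
That notion of value can be made concrete with a small sketch, using the numbers from the horse-racing example above (the code and function name are ours; only the $4.50 price, the one-in-five public estimate and the one-in-three insider estimate come from the article).

```python
# Expected return per $1 staked: quoted price multiplied by the estimated
# probability of winning. Anything above 1.0 is a "value" bet.

def expected_return(price, probability):
    return price * probability

print(expected_return(4.50, 1/5))   # 0.90 -- the public's estimate: a 10% house edge
print(expected_return(4.50, 1/3))   # 1.50 -- the insider's estimate: a value bet
```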

A sporting chance

In general, more outcomes in a game allow for a higher house percentage. With two even outcomes (betting on a head or tail with a single coin toss, say), a fair price would be $2.

The operator might be able to pay out as little as $1.90, giving a house percentage of 5%, but anything less than this would probably see little interest from gamblers.

But a Keno game with 20 million outcomes might only pay $1 million for a winning $1 bet, rather than a fair $20,000,000. A payout of $1 million gives a staggering house percentage of 95%.

Traditionally, sports betting was restricted to horse, harness and dog racing – events with several outcomes that allowed house percentages of around 15%-20%.

With the extension into many other team and individual sports, betting on which of the two participants would win reduced a bookmaker’s take to as little as 3%-4%.

Competition reduces this further. Originally, only the state-run totalisator (an automated system which, like Tattslotto, determines the winning prices after the event, thus always ensuring the legislated house percentage) and a handful of on-course bookmakers were allowed to offer bets on horse racing; now countless internet operators compete.

Betfair even allows punters to bet against each other, effectively creating millions of “bookmakers”.

Head or heart

Many sports punters bet with their hearts, not their heads. This reduces the prices of popular players or teams, thereby increasing the price of their opponents. The low margins and extensive competition even allow punters to sometimes find arbitrage opportunities (where betting on both sides with different bookmakers allows a profit whoever wins).

To overcome their heart, and lack of inside knowledge, many mathematicians create mathematical and statistical models based on past data and results to predict the chances of sports outcomes. They prove the veracity of their models by testing (either on past data or in real time) whether they would profit if the predictions were used for betting.

Academics call the ability to show a profit the “inefficiency of betting markets”, and there are many papers to suggest sports markets are inefficient. Of course the more successful have a vested interest in keeping their methods to themselves and may not publicise their results.

Astute punters can make sports betting profitable in the long term. But the profits made by the plethora of sports bookmakers indicate that most sports punters are not that astute.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Stephen Clarke*


Physicists are Turning to Lewis Carroll For Help With Their Maths

Lewis Carroll was the pen name for mathematician Charles Dodgson

Curiouser and curiouser! Particle physicists could have the author of Alice’s Adventures in Wonderland to thank for simplifying their calculations.

Lewis Carroll, the 19th century children’s author, was the pen name of mathematician Charles Lutwidge Dodgson. While his mathematical contributions mostly proved unremarkable, one particular innovation may have stood the test of time.

Marcel Golz at Humboldt University, Berlin, has built on Dodgson’s work to help simplify the complex equations that arise when physicists try to calculate what happens when particles interact. The hope is that it could allow for speedier and more accurate computations, allowing experimentalists at places like the Large Hadron Collider in Geneva, Switzerland, to better design their experiments.

Working out the probabilities of different particle interactions is commonly done using Feynman diagrams, named after the Nobel prize winning physicist Richard Feynman. These diagrams are a handy visual aid for encoding the complex processes at play, allowing them to be converted into mathematical notation.

One early way of representing these diagrams was known as the parametric representation, which has since lost favour among physicists owing to its apparent complexity. To mathematicians, however, patterns in the resulting equations suggest that it might be possible to dramatically simplify them in ways not possible for more popular representations. These simplifications could in turn enable new insights. “A lot of this part of physics is constrained by how much you can compute,” says Karen Yeats, a mathematician at the University of Waterloo, Canada.

Golz’s work makes use of the Dodgson identity, a mathematical equivalence noted by Dodgson in an 1866 paper, to perform this exact sort of simplification. While much of the connecting mathematics was done by Francis Brown, one of Golz’s tutors at Oxford University, the intellectual lineage can be traced all the way back to Lewis Carroll. “It’s kind of a nice curiosity,” says Golz. “A nice conversation starter.”
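
The identity in question is usually stated as the Desnanot-Jacobi identity, the engine of Dodgson’s 1866 “condensation” method for evaluating determinants. The snippet below is our own numerical illustration of it, not anything from Golz’s paper: for a square matrix M, det(M) times the determinant of its central minor equals the difference of the products of the determinants of its four corner minors.

```python
import numpy as np

# Verify the Desnanot-Jacobi (Dodgson) identity on a random 4x4 matrix:
# det(M) * det(M with first and last rows/columns removed)
#   = det(drop first row & col) * det(drop last row & col)
#   - det(drop first row, last col) * det(drop last row, first col)

def minor(m, rows, cols):
    """Copy of m with the given rows and columns deleted."""
    return np.delete(np.delete(m, rows, axis=0), cols, axis=1)

rng = np.random.default_rng(0)
M = rng.integers(-5, 6, size=(4, 4)).astype(float)
last = M.shape[0] - 1

lhs = np.linalg.det(M) * np.linalg.det(minor(M, [0, last], [0, last]))
rhs = (np.linalg.det(minor(M, [0], [0])) * np.linalg.det(minor(M, [last], [last]))
       - np.linalg.det(minor(M, [0], [last])) * np.linalg.det(minor(M, [last], [0])))

print(lhs, rhs)                  # the two sides agree up to floating-point error
assert np.isclose(lhs, rhs)
```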

In the past, parametric notation was only useful in calculating simplified forms of quantum theory. Thanks to work like Golz’s, these simplifications could be extended to particle behaviour of real interest to experimentalists. “I can say with confidence that these parametric techniques, applied to the right problems, are game-changing,” says Brown.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Gilead Amit*


Is the Universe a Game?

Generations of scientists have compared the universe to a giant, complex game, raising questions about who is doing the playing – and what it would mean to win.

If the universe is a game, then who’s playing it?

The following is an extract from our Lost in Space-Time newsletter. Each month, we hand over the keyboard to a physicist or mathematician to tell you about fascinating ideas from their corner of the universe.

Is the universe a game? Famed physicist Richard Feynman certainly thought so: “‘The world’ is something like a great chess game being played by the gods, and we are observers of the game.” As we observe, it is our task as scientists to try to work out the rules of the game.

The 17th-century mathematician Gottfried Wilhelm Leibniz also looked on the universe as a game and even funded the foundation of an academy in Berlin dedicated to the study of games: “I strongly approve of the study of games of reason not for their own sake but because they help us to perfect the art of thinking.”

Our species loves playing games, not just as kids but into adulthood. It is believed to have been an important part of evolutionary development – so much so that the cultural theorist Johan Huizinga proposed we should be called Homo ludens, the playing species, rather than Homo sapiens. Some have suggested that once we realised that the universe is controlled by rules, we started developing games as a way to experiment with the consequences of these rules.

Take, for example, one of the very first board games that we created. The Royal Game of Ur dates back to around 2500 BC and was found in the Sumerian city of Ur, part of Mesopotamia. Tetrahedral-shaped dice are used to race five pieces belonging to each player down a shared sequence of 12 squares. One interpretation of the game is that the 12 squares represent the 12 constellations of the zodiac that form a fixed background to the night sky and the five pieces correspond to the five visible planets that the ancients observed moving through the night sky.

But does the universe itself qualify as a game? Defining what actually constitutes a game has been a subject of heated debate. Logician Ludwig Wittgenstein believed that words could not be pinned down by a dictionary definition and only gained their meaning through the way they were used, in a process he called the “language game”. An example of a word that he believed only got its meaning through use rather than definition was “game”. Every time you try to define the word “game”, you wind up including some things that aren’t games and excluding others you meant to include.

Other philosophers have been less defeatist and have tried to identify the qualities that define a game. Everyone, including Wittgenstein, agrees that one common facet of all games is that they are defined by rules. These rules control what you can or can’t do in the game. It is for this reason that as soon as we understood that the universe is controlled by rules that bound its evolution, the idea of the universe as a game took hold.

In his book Man, Play and Games, theorist Roger Caillois proposed five other key traits that define a game: uncertainty, unproductiveness, separateness, imagination and freedom. So how does the universe match up to these other characteristics?

The role of uncertainty is interesting. We enter a game because there is a chance either side will win – if we know in advance how the game will end, it loses all its power. That is why ensuring ongoing uncertainty for as long as possible is a key component in game design.

Polymath Pierre-Simon Laplace famously declared that Isaac Newton’s identification of the laws of motion had removed all uncertainty from the game of the universe: “We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past could be present before its eyes.”

Solved games suffer the same fate. Connect 4 is a solved game in the sense that we now know an algorithm that will always guarantee the first player a win. With perfect play, there is no uncertainty. That is why games of pure strategy sometimes suffer – if one player is much better than their opponent then there is little uncertainty in the outcome. Donald Trump against Garry Kasparov in a game of chess will not be an interesting game.

The revelations of the 20th century, however, have reintroduced the idea of uncertainty back into the rules of the universe. Quantum physics asserts that the outcome of an experiment is not predetermined by its current state. The pieces in the game might head in multiple different directions according to the collapse of the wave function. Despite what Albert Einstein believed, it appears that God is playing a game with dice.

Even if the game were deterministic, the mathematics of chaos theory also implies that players and observers will not be able to know the present state of the game in complete detail and small differences in the current state can result in very different outcomes.

That a game should be unproductive is an interesting quality. If we play a game for money or to teach us something, Caillois believed that the game had become work: a game is “an occasion of pure waste: waste of time, energy, ingenuity, skill”. Unfortunately, unless you believe in some higher power, all evidence points to the ultimate purposelessness of the universe. The universe is not there for a reason. It just is.

The other three qualities that Caillois outlines perhaps apply less to the universe but describe a game as something distinct from the universe, though running parallel to it. A game is separate – it operates outside normal time and space. A game has its own demarcated space in which it is played within a set time limit. It has its own beginning and its own end. A game is a timeout from our universe. It is an escape to a parallel universe.

The fact that a game should have an end is also interesting. There is the concept of an infinite game that philosopher James P. Carse introduced in his book Finite and Infinite Games. You don’t aim to win an infinite game. Winning terminates the game and therefore makes it finite. Instead, the player of the infinite game is tasked with perpetuating the game – making sure it never finishes. Carse concludes his book with the rather cryptic statement, “There is but one infinite game.” One realises that he is referring to the fact that we are all players in the infinite game that is playing out around us, the infinite game that is the universe. Although current physics does posit a final move: the heat death of the universe means that this universe might have an endgame that we can do nothing to avoid.

Caillois’s quality of imagination refers to the idea that games are make-believe. A game consists of creating a second reality that runs in parallel with real life. It is a fictional universe that the players voluntarily summon up independent of the stern reality of the physical universe we are part of.

Finally, Caillois believes that a game demands freedom. Anyone who is forced to play a game is working rather than playing. A game, therefore, connects with another important aspect of human consciousness: our free will.

This raises a question: if the universe is a game, who is it that is playing and what will it mean to win? Are we just pawns in this game rather than players? Some have speculated that our universe is actually a huge simulation. Someone has programmed the rules, input some starting data and has let the simulation run. This is why John Conway’s Game of Life feels closest to the sort of game that the universe might be. In Conway’s game, pixels on an infinite grid are born, live and die according to their environment and the rules of the game. Conway’s success was in creating a set of rules that gave rise to such interesting complexity.
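
To give a flavour of how little machinery Conway’s rules need, here is a minimal sketch of one generation of the Game of Life (our own illustration, run on a small wrap-around grid rather than the infinite grid Conway described).

```python
import numpy as np

def step(grid):
    """One generation of Conway's Game of Life on a toroidal (wrap-around) grid."""
    neighbours = sum(np.roll(np.roll(grid, dr, 0), dc, 1)
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # A cell is alive next step if it has 3 live neighbours,
    # or if it is alive now and has exactly 2.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

grid = np.zeros((8, 8), dtype=int)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:   # a "glider"
    grid[r, c] = 1
for _ in range(4):                   # after 4 steps the glider has moved
    grid = step(grid)                # one cell diagonally across the grid
print(grid)
```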

If the universe is a game, then it feels like we too lucked out to find ourselves part of a game that has the perfect balance of simplicity and complexity, chance and strategy, drama and jeopardy to make it interesting. Even when we discover the rules of the game, it promises to be a fascinating match right up to the moment it reaches its endgame.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Marcus Du Sautoy*


Physicists Figured Out The Ideal Container Size For Pistachio Shells

A simple experiment and mathematical model suggest that when you snack on pistachios, you may need a surprisingly large bowl to accommodate the discarded shells.

Shelling your favourite snack nuts just got a lot easier: physicists have worked out the exact size of bowl to best fit discarded pistachio shells.

Ruben Zakine and Michael Benzaquen at École Polytechnique in Paris often find themselves discussing science in the cafeteria while eating pistachios. Naturally, they began wondering about the mathematics behind storing their snack refuse.

The researchers stuffed 613 pistachios into a cylindrical container to determine “packing density”, or the fraction of space taken up by whole nuts in their shells. Separately, they measured the packing density of the shells alone. In one experiment setup, the researchers poured the shells into a container and let them fall as they may, and in another they shook them into a denser, more efficient configuration.

Without shaking, the discarded shells took up about 73 per cent of the volume of the original whole pistachios. Shaking reduced this to 57 per cent. This suggests that, whatever the size of your pistachio container, an additional container about half that size will hold the shell refuse, as long as you occasionally shake it while eating.

Zakine and Benzaquen backed up their findings by modelling pistachios as ellipsoids – three-dimensional shapes resembling squashed spheres – and their shells as hollow half-spheres and calculated their packing densities based on mathematical rules. These results confirmed the real-life experiments and suggested that the same ratios would work for other container shapes.

Despite these similarities, the researchers found about a 10 per cent discrepancy between the calculations and the real-life measurements. Zakine says that this is not surprising because pistachios are not perfect ellipsoids and have natural variations in shape. More broadly, it is tricky to calculate how best to pack objects into containers. So far, mathematics researchers have only had luck with doing calculations for spheres, like marbles, and uniform shapes like M&M’s, he says.

Going forward, the researchers want to run more complex calculations on a computer. But for now, they are looking forward to fielding mathematical questions whenever they serve pistachios at dinner parties.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Karmela Padavic-Callaghan*


A Math Blog? Say What?

Yes! I’m writing about math. No! Don’t close your browser window. Hear me out first…

I know very well that math has a bad rap. It’s often taught or thought of as a dry, intimidating, unapproachable, completely boring, who-in-their-right-mind-would-want-to-think-about-this-on-purpose kind of subject. I get it. Math was the last thing on earth I thought I’d study. Seriously.

But my understanding of math has since changed. I used to think it was a mess of equations and formulas, only enjoyed by a small number of masochists. But oh how I was wrong! Mathematics is not just numbers. It is not just strange symbols. And it is certainly not something reserved only for the few elite geniuses of the world.

Mathematics is a language –

a language of ideas, concepts, and notions.

It’s true! Math is a language just like English, French, or Mandarin. And just like some ideas are best communicated in a particular language, other ideas are best communicated “in math.” This is why I’ve started a blog – as an aid in my own pursuit of becoming more proficient at thinking/speaking/reading mathematics.

One of the main challenges I face in this pursuit is the ability to strip away the intimidation factor

– the cryptic symbols, the elaborate vocabulary, the fancy formalities –

and unveil the true meaning of the text at hand. For me, this unveiling comes by reading and rereading, by working through problem after problem, and by writing. Quite often while learning new (and recalling old) mathematics, I have to stop and ask, “What is the text really saying behind all that jargon?” And if I can proceed to write down the idea in English (i.e. in lingo that’s easy on the brain) then that bit of information becomes engrained in my mind. Or at least it gets stored away in my brain somewhere. And if (or when) I forget it, I find that looking at my own handwritten notes conjures up the memory and the blood, sweat, and tears that went into learning that bit of info, and it all comes right back.

So Math3ma is my online repository as I make my way through this journey. Here’s the plan for now: some of the blog posts will be divided into two sections, in keeping with the aforementioned thought process:

And some posts will fall into “The Back Pocket” where I’ll keep little tidbits of math for a rainy day (or, perhaps, an exam). As for the actual content, I’m focusing on material found in the initial years of a graduate math program because, well, passing the qualifying exams is next on my agenda. But I think I’ll include some undergrad material too. And as for future content, who knows? I’m excited to see what Math3ma can turn into.

Thanks for taking the time to peek into my journey as I work to see mathematics for what it really is: a very powerful, very beautiful language inherent in the world all around us!

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*



This Turing Machine Should Run Forever Unless Maths is Wrong

Alan Turing: still casting a long shadow over mathematics

One hundred and fifty years of mathematics will be proved wrong if a new computer program stops running. Thankfully, it’s unlikely to happen, but the code behind it is testing the limits of the mathematical realm.

The program is a simulated Turing machine, a mathematical model of computation created by codebreaker Alan Turing. In 1936, he showed that the actions of any computer algorithm can be mimicked by a simple machine that reads and writes 0s and 1s on an infinitely long tape by working through a set of states, or instructions. The more complex the algorithm, the more states the machine requires.
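
To make the model concrete, here is a toy simulator in the same spirit (entirely our own sketch, not Turing’s formulation or the machines discussed below; the three-instruction example program is hypothetical and simply writes three 1s and halts).

```python
def run(program, max_steps=1_000):
    """program maps (state, symbol read) -> (symbol to write, move L/R, next state)."""
    tape, head, state = {}, 0, "start"   # the dict stands in for an unbounded tape of 0s
    for _ in range(max_steps):
        if state == "halt":
            return tape
        write, move, state = program[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("did not halt within max_steps")

program = {
    ("start", 0): (1, "R", "a"),
    ("a", 0):     (1, "R", "b"),
    ("b", 0):     (1, "R", "halt"),
}
print(sorted(run(program).items()))      # [(0, 1), (1, 1), (2, 1)]
```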

Now Scott Aaronson and Adam Yedidia of the Massachusetts Institute of Technology have created three Turing machines with behaviour that is entwined with deep questions of mathematics, including the proof of the 150-year-old Riemann hypothesis – thought to govern the patterns of prime numbers.

Turing’s machines have long been used to probe such questions. Their origins lie in a series of philosophical revelations that rocked the mathematical world in the 1930s. First, Kurt Gödel proved that some mathematical statements can never be proved true or false – they are undecidable. He essentially created a mathematical version of the sentence “This sentence is false”: a logical brain-twister that contradicts itself.

No proof of everything

Gödel’s assertion has a get-out clause. If you change the base assumptions on which proofs are built – the axioms – you can render a problem decidable. But that will still leave other problems that are undecidable. That means there are no axioms that let you prove everything.

Following Gödel’s arguments, Turing proved that there must be some Turing machines whose behaviour cannot be predicted under the standard axioms – known as Zermelo-Fraenkel set theory with the axiom of choice or, more snappily, ZFC – underpinning most of mathematics. But we didn’t know how complex they would have to be.

Now, Yedidia and Aaronson have created a Turing machine with 7918 states that has this property. And they’ve named it “Z”.

“We tried to make it concrete, and say how many states does it take before you get into this abyss of unprovability?” says Aaronson.

The pair simulated Z on a computer, but it is small enough that it could theoretically be built as a physical device, says Terence Tao of the University of California, Los Angeles. “If one were then to turn such a physical machine on, what we believe would happen would be that it would run indefinitely,” he says, assuming you ignore physical wear and tear or energy requirements.

No end in sight

Z is designed to loop through its 7918 instructions forever, but if it did eventually stop, it would prove ZFC inconsistent. Mathematicians wouldn’t be too panicked, though – they could simply shift to a slightly stronger set of axioms. Such axioms already exist, and could be used to prove the behaviour of Z, but there is little to be gained in doing so because there will always be a Turing machine to exceed any axiom.

“One can think of any given axiom system as being like a computer with a certain limited amount of memory or processing power,” says Tao. “One could switch to a computer with even more storage, but no matter how large an amount of storage space the computer has, there will still exist some tasks that are beyond its ability.”

But Aaronson and Yedidia have created two other machines that might give mathematicians more pause. These will stop only if two famous mathematical problems, long believed to be true, are actually false. These are Goldbach’s conjecture, which states that every even whole number greater than 2 is the sum of two prime numbers, and the Riemann hypothesis, which says that all prime numbers follow a certain pattern. The latter forms the basis for parts of modern number theory, and disproving it would be a major, if unlikely, upset.
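
Conceptually, the Goldbach machine encodes logic like the sketch below, written here as ordinary code rather than as a 4888-state Turing machine, and capped at a small limit so that it actually finishes (the real machine searches without bound and halts only on a counterexample).

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_counterexample(limit=10_000):
    """Return the first even number > 2 that is NOT a sum of two primes, else None."""
    for n in range(4, limit + 1, 2):
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
            return n    # finding this would disprove the conjecture (and halt the machine)
    return None

print(goldbach_counterexample())   # None: no counterexample below the limit
```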

Practical benefits

Practically, the pair have no intention of running their Turing machines indefinitely in an attempt to prove these problems wrong. It’s not a particularly efficient way to attack that problem, says Lance Fortnow of the Georgia Institute of Technology in Atlanta.

Expressing mathematical problems as Turing machines has a different practical benefit: it helps to work out how complex they are. The Goldbach machine has 4888 states, the Riemann one has 5372, while Z has 7918, suggesting the ZFC problem is the most complex of the three. “That would match most people’s intuitions about these sorts of things,” Aaronson says.

Yedidia has placed his code online, and mathematicians may now compete to reduce the size of these Turing machines, pushing them to the limit. Already a commenter on Aaronson’s blog claims to have created a 31-state Goldbach machine, although the pair have yet to verify this.

Fortnow says the actual sizes of the Turing machines are irrelevant. “The paper tells us that we can have very compressed descriptions that already go beyond the power of ZFC, but even if they were more compressed, it wouldn’t give us much pause about the foundations of math,” he says.

But Aaronson says shrinking Z further could say something interesting about the limits of the foundations of maths – something Gödel and Turing are likely to have wanted to know. “They might have said ‘that’s nice, but can you get 800 states? What about 80 states?’” Aaronson says. “I would like to know if there is a 10-state machine whose behaviour is independent of ZFC.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jacob Aron*


The Million-Dollar Puzzle That Could Change The World

Cracking this mathematical mystery will make someone rich, but its value to all of us could be incalculable. Jacob Aron explains why it matters if P = NP.

“Dear Fellow Researchers, I am pleased to announce a proof that P is not equal to NP, which is attached in 10pt and 12pt fonts.” So began an email sent in August 2010 to a group of leading computer scientists by Vinay Deolalikar, a mathematician at Hewlett-Packard Labs in Palo Alto, California.

It was an incendiary claim. Deolalikar was saying he had cracked the biggest problem in computer science, a question concerning the complexity and fundamental limits of computation. Answer the question “Is P equal to NP?” with a yes, and you could be looking at a world transformed, where planes and trains run on time, accurate electronic translations are routine, the molecular mysteries of life are revealed at the click of a mouse – and secure online shopping becomes fundamentally impossible. Answer it with a no, as Deolalikar claimed to have done, and it would suggest that some problems are too irreducibly complex for computers to solve exactly.

One way or another, then, we would like an answer. But it has been frustratingly slow in coming. “It’s turned out to be an incredibly hard problem,” says Stephen Cook of the University of Toronto, Canada, the computer scientist who first formulated it in May 1971. “I never thought it was going to be easy, but now 40 years later it looks as difficult as ever.”

The importance of P = NP? was emphasised in 2000, when the privately funded Clay Mathematics Institute in Cambridge, Massachusetts, named it as one of seven “Millennium Prize” problems, with a $1 million bounty for whoever solves it. Since then Cook and other researchers in the area have regularly received emails with purported solutions. Gerhard Woeginger of the Eindhoven University of Technology in the Netherlands even maintains an online list of them. “We’ve seen hundreds of attempts, and we can more or less classify them according to the mistakes the proof makes,” he says.

Deolalikar’s attempt seemed different. For a start, it came from an established researcher rather than one of the legion of amateurs hoping for a pop at glory and a million dollars. Also, Cook initially gave it a cautious thumbs up, writing that it seemed to be a “relatively serious claim”. That led it to spread across the web and garnered it widespread attention in the press, New Scientist included (14 August 2010, p 7).

In the end, though, it was a false dawn. “It probably didn’t deserve all the publicity,” says Neil Immerman, a computer scientist at the University of Massachusetts, Amherst. He was one of an army of researchers who, working in an informal online collaboration, soon exposed fundamental flaws in the proof. Deolalikar has responded to some of them and submitted an expanded version to an academic journal. Till he hears back, he refuses to comment further.

The consensus now is that his proof – like all attempts before it – is unfixable. And so the mystique of P = NP? remains unbroken 40 years on. But what is the problem about? Why is it so important, and what happens if it is answered one way or the other?

“The consensus now is that last year’s proof is false – which just adds to the mystique of the problem”

What is P?

Computing power, we are accustomed to think, is the gift that keeps on giving. Every year or two that goes by sees roughly a doubling in our number-crunching capabilities – a march so relentless that it has acquired the label “Moore’s law”, after the Intel researcher, Gordon Moore, who first noted the trend back in the 1960s.

Talk of the ultimate limits of computation tends to be about how many transistors, the building blocks of microprocessors, we can cram onto a conventional silicon chip – or whatever technology or material might replace it. P = NP? raises the spectre that there is a more fundamental limitation, one that lies not in hardware but in the mechanics of computation itself.

P = NP should not be mistaken for a simple algebraic equation – if it were, we could just have N = 1 and claim the Clay institute’s million bucks. P and NP are examples of “complexity classes”, categories into which problems can be slotted depending on how hard it is to unravel them using a computer.

Solving any problem computationally depends on finding an algorithm, the step-by-step set of mathematical instructions that leads us to the answer. But how much number-crunching does an algorithm need? That depends on the problem.

The P class of problems is essentially the easy ones: an algorithm exists to solve them in a “reasonable” amount of time. Imagine looking to see if a particular number appears in a list. The simplest solution is a “linear search” algorithm: you check each number in turn until you find the right one. If the list has n numbers – the “size” of the problem – this algorithm takes at most n steps to search it, so its complexity is proportional to n. That counts as reasonable. So too do things that take a little more computational muscle – for instance, the manual multiplication of two n-digit numbers, which takes about n^2 steps. A pocket calculator will still master that with ease, at least for relatively small values of n. Any problem of size n whose solution requires n to the power of something (n^x) steps is relatively quick to crack. It is said to be solvable in “polynomial time”, and is denoted P.
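
As a quick illustration of that “reasonable” growth (our own sketch, not from the article), a linear search really does take at most n steps.

```python
def linear_search(items, target):
    """Return (found, steps taken) -- at most len(items) steps."""
    steps = 0
    for value in items:
        steps += 1
        if value == target:
            return True, steps
    return False, steps

print(linear_search(list(range(1000)), 999))   # (True, 1000): roughly n steps for n items
```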

What is NP?

Not all problems are as benign. In some cases, as the size of the problem grows, computing time increases not polynomially, as n^x, but exponentially, as x^n – a much steeper increase. Imagine, for example, an algorithm to list out all possible ways to arrange the numbers from 1 to n. It is not difficult to envisage what the solutions are, but even so the time required to list them rapidly runs away from us as n increases. Even proving a problem belongs to this non-polynomial class can be difficult, because you have to show that absolutely no polynomial-time algorithm exists to solve it.
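
A small illustration of that runaway growth (our own sketch): even just counting the arrangements of 1 to n quickly becomes hopeless.

```python
from itertools import permutations

# Count every arrangement of 1..n: the totals grow as n! (factorial),
# which outpaces any polynomial in n.
for n in range(1, 9):
    print(n, sum(1 for _ in permutations(range(1, n + 1))))
# 1, 2, 6, 24, 120, 720, 5040, 40320 ... and 20! already exceeds 2 * 10^18
```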

That does not mean we have to give up on hard problems. With some problems that are difficult to solve in a reasonable time, inspired guesswork might still lead you to an answer whose correctness is easy to verify. Think of a sudoku puzzle. Working out a solution can be fiendishly difficult, even for a computer, but presented with a completed puzzle, it is easy to check that it fulfils the criteria for being a valid answer. Problems whose solutions are hard to come by but can be verified in polynomial time make up the complexity class called NP, which stands for non-deterministic polynomial time.
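
The “easy to verify” half of the sudoku example is simple to make concrete. The checker below is our own illustrative sketch: it confirms a completed 9x9 grid in a handful of passes over the data, which is comfortably polynomial in the size of the puzzle.

```python
def is_valid_sudoku(grid):
    """grid: a 9x9 list of lists containing the digits 1-9."""
    digits = set(range(1, 10))
    rows_ok = all(set(row) == digits for row in grid)
    cols_ok = all({grid[r][c] for r in range(9)} == digits for c in range(9))
    boxes_ok = all(
        {grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)} == digits
        for br in (0, 3, 6) for bc in (0, 3, 6)
    )
    return rows_ok and cols_ok and boxes_ok

# A known-valid grid built from a simple formula, just to exercise the checker.
demo = [[(r * 3 + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
print(is_valid_sudoku(demo))   # True
```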

Constructing a valid sudoku grid – in essence asking the question “Can this number fill this space?” over and over again for each space and each number from 1 to 9, until all spaces are compatibly filled – is an example of a classic NP problem, the Boolean satisfiability problem. Processes such as using formal logic to check software for errors and deciphering the action of gene regulatory networks boil down to similar basic satisfiability problems.

And here is the nub of the P = NP? problem. All problems in the set P are also in the set NP: if you can easily find a solution, you can easily verify it. But is the reverse true? If you can easily verify the solution to a problem, can you also easily solve it – is every problem in the set NP also in P? If the answer to that question is yes, the two sets are identical: P is equal to NP, and the world of computation is irrevocably changed – sudoku becomes a breeze for a machine to solve, for starters. But before we consider that possibility, what happens if P is found to be not equal to NP?

What if P ≠ NP?

In 2002, William Gasarch, a computer scientist at the University of Maryland in College Park, asked 100 of his peers what they thought the answer to the P = NP? question would be. “You hear people say offhand what they think of P versus NP. I wanted to get it down on record,” he says. P ≠ NP was the overwhelming winner, with 61 votes. Only nine people supported P = NP, some, they said, just to be contrary. The rest either had no opinion or deemed the problem impossible to solve.

If the majority turns out to be correct, and P is not equal to NP – as Deolalikar suggested – it indicates that some problems are by their nature so involved that we will never be able to crunch through them. If so, the proof is unlikely to make a noticeable difference to you or me. In the absence of a definitive answer to P = NP?, most computer scientists already assume that some hard problems cannot be solved exactly. They concentrate on designing algorithms to find approximate solutions that will suffice for most practical purposes. “We’ll be doing exactly the same as we’re currently doing,” says Woeginger.

Proving P ≠ NP would still have practical consequences. For a start, says Immerman, it would shed light on the performance of the latest computing hardware, which splits computations across multiple processors operating in parallel. With twice as many processors, things should run twice as fast – but for certain types of problem they do not. That implies some kind of limitation to computation, the origin of which is unclear. “Some things seem like they’re inherently sequential,” says Immerman. “Once we know that P and NP are not equal, we’ll know why.”

It could also have an impact on the world of cryptography. Most modern encryption relies on the assumption that breaking a number down to its prime-number factors is hard. This certainly looks like a classic NP problem: finding the prime factors of 304,679 is hard, but it’s easy enough to verify that they are 547 and 557 by multiplying them together. Real encryption uses numbers with hundreds of digits, but a polynomial-time algorithm for solving NP problems would crack even the toughest codes.
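
The asymmetry shows up even at this toy scale (an illustrative sketch, with naive trial division standing in for real factoring algorithms).

```python
def trial_division(n):
    """Find the smallest factor pair of n by brute force."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1   # n is prime

print(547 * 557 == 304_679)      # verification: one multiplication
print(trial_division(304_679))   # search: (547, 557), after hundreds of trial divisions
```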

So would P ≠ NP mean cryptography is secure? Not quite. In 1995, Russell Impagliazzo of the University of California, San Diego, sketched out five possible outcomes of the P = NP? debate. Four of them were shades-of-grey variations on a world where P ≠ NP. For example, we might find some problems where even though the increase in complexity as the problem grew is technically exponential, there are still relatively efficient ways of finding solutions. If the prime-number-factoring problem belongs to this group, cryptography’s security could be vulnerable, depending where reality lies on Impagliazzo’s scale. “Proving P is not equal to NP isn’t the end of our field, it’s the beginning,” says Impagliazzo.

Ultimately, though, it is the fifth of Impagliazzo’s worlds that excites researchers the most, however unlikely they deem it. It is “Algorithmica” – the computing nirvana where P is indeed equal to NP.

What if P = NP?

If P = NP, the revolution is upon us. “It would be a crazy world,” says Immerman. “It would totally change our lives,” says Woeginger.

That is because of the existence, proved by Cook in his seminal 1971 paper, of a subset of NP problems known as NP-complete. They are the executive key to the NP washroom: find an algorithm to solve an NP-complete problem, and it can be used to solve any NP problem in polynomial time.

Lots of real-world problems are known to be NP-complete. Satisfiability problems are one example; versions of the knapsack problem, which deals with optimal resource allocation, and the notorious travelling salesman problem are others. The latter aims to find the shortest-distance route for visiting a series of points and returning to the starting point, an issue of critical interest in logistics and elsewhere.

If we could find a polynomial-time algorithm for any NP-complete problem, it would prove that P=NP, since all NP problems would then be easily solvable. The existence of such a universal computable solution would allow the perfect scheduling of transport, the most efficient distribution of goods, and manufacturing with minimal waste – a leap and a bound beyond the “seems-to-work” solutions we now employ.

It could also lead to algorithms that perform near-perfect speech recognition and language translation, and that let computers process visual information as well as any human can. “Basically, you would be able to compute anything you wanted,” says Lance Fortnow, a computer scientist at Northwestern University in Evanston, Illinois. A further curious side effect would be that of rendering mathematicians redundant. “Mathematics would be largely mechanisable,” says Cook. Because finding a mathematical proof is difficult, but verifying one is relatively easy, in some way maths itself is an NP problem. If P=NP, we could leave computers to churn out new proofs.

“Proving P = NP would have the curious side effect of rendering mathematicians redundant”

What if there’s no algorithm?

There is an odd wrinkle in visions of a P = NP world: that we might prove the statement to be true, but never be able to take advantage of that proof. Mathematicians sometimes find “non-constructive proofs” in which they show that a mathematical object exists without actually finding it. So what if they could show that an unknown P algorithm exists to solve a problem thought to be NP? “That would technically settle the problem, but not really,” says Cook.

There would be a similar agonising limbo if a proof that P=NP is achieved with a universal algorithm that scales in complexity as n to the power of a very large number. Being polynomial, this would qualify for the Clay institute’s $1 million prize, but in terms of computability it would not amount to a hill of beans.

Will we have an answer soon?

When can we expect the suspense to be over, one way or another? Probably not so soon. “Scientometrist” Samuel Arbesman of the Harvard Medical School in Boston, Massachusetts, predicts that a solution of “P = NP?” is not likely to arrive before 2024 (New Scientist, 25 December 2010, p 24). In Gasarch’s 2002 poll of his peers, only 45 per cent believed it would be resolved by 2050. “I think people are now more pessimistic,” he says, “because after 10 years there hasn’t been that much progress.” He adds that he believes a proof could be up to 500 years away.

Others find that excessively gloomy, but all agree there is a mammoth task ahead. “The current methods we have don’t seem to be giving us progress,” says Fortnow. “All the simple ideas don’t work, and we don’t know where to look for new tools,” says Woeginger. Part of the problem is that the risk of failure is too great. “If you have already built up a reputation, you don’t want to publish something that makes other people laugh,” says Woeginger.

Some are undeterred. One is Ketan Mulmuley at the University of Chicago. He declined to comment on his work, which is currently under review for publication, but fellow researchers say his ideas look promising. In essence it involves translating the P = NP? problem into more tractable problems in algebraic geometry, the branch of mathematics that relates shapes and equations. But it seems even Mulmuley is not necessarily anticipating a quick success. “He expects it to take well over 100 years,” says Fortnow.

Ultimately, though, Mulmuley’s tactic of connecting P = NP? to another, not obviously related, mathematical area seems the most promising line of attack. It has been used before: in 1995, the mathematician Andrew Wiles used work linking algebraic geometry and number theory to solve another high-profile problem, Fermat’s last theorem. “There were a lot of building blocks,” says Fortnow. “Then it took one brilliant mind to make that last big leap.”

Woeginger agrees: “It will be solved by a mathematician who applies methods from a totally unrelated area, that uses something nobody thinks is connected to the P versus NP question.” Perhaps the person who will settle P = NP? is already working away in their own specialised field, just waiting for the moment that connects the dots and solves the world’s hardest problem.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jacob Aron*


AI is Helping Tackle One of The Biggest Unsolved Problems In Maths

AI can search through numbers and equations to find patterns

Artificial intelligence’s ability to sift through large amounts of data is helping us tackle one of the most difficult unsolved problems in mathematics.

Yang-Hui He at City, University of London, and his colleagues are using the help of machine learning to better understand the Birch and Swinnerton-Dyer conjecture. This is one of the seven fiendishly difficult Millennium Prize Problems, each of which has a $1 million reward on offer for the first correct solution.

The conjecture describes solutions to equations known as elliptic curves, equations in the form of y^2 = x^3 + ax + b, where x and y are variables and a and b are fixed constants.
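
To make the objects concrete, here is a small sketch of a brute-force hunt for integer points (our own illustration: the curve y^2 = x^3 - 2x + 5 is chosen arbitrarily and is not one of the curves from the database discussed below).

```python
def integer_points(a, b, bound=50):
    """Integer solutions of y^2 = x^3 + a*x + b with |x| <= bound."""
    points = []
    for x in range(-bound, bound + 1):
        rhs = x**3 + a * x + b
        if rhs < 0:
            continue
        y = round(rhs ** 0.5)
        if y * y == rhs:
            points.extend([(x, y), (x, -y)] if y else [(x, 0)])
    return points

print(integer_points(-2, 5))   # includes (-2, ±1), (1, ±2) and (2, ±3)
```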

Elliptic curves were essential in cracking the long-standing Fermat’s last theorem, which was solved by mathematician Andrew Wiles in the 1990s, and are also used in cryptography.

To study the behaviour of elliptic curves, mathematicians also use an equation called the L-series. The conjecture, first stated by mathematicians Bryan Birch and Peter Swinnerton-Dyer in the 1960s, says that if an elliptic curve has an infinite number of solutions, its L-series should equal 0 at certain points.

“It turns out to be a very, very difficult problem to find a set of integer solutions on such equations,” says He, meaning solutions only involving whole numbers. “This is part of the biggest problem in number theory: how do you find integer solutions to polynomials?”

Finding integer solutions or showing that they cannot exist has been crucial. “For example, Fermat’s last theorem is reduced completely to the statement of whether you can find certain properties of elliptic curves,” says He.

He and his colleagues used an AI to analyse close to 2.5 million elliptic curves that had been compiled in a database by John Cremona at the University of Warwick, UK. The rationale was to search the equations to see if any statistical patterns emerged.

Plugging different values into the elliptic curve equation and plotting the results on a graph, the team found that the distribution takes the shape of a cross, which mathematicians hadn’t previously observed. “The distribution of elliptic curves seems to be symmetric from left to right, and up and down,” says He.

“If you spot any interesting patterns, then you can raise a conjecture which may later lead to an important result,” says He.  “We now have a new, really powerful thing, which is machine learning and AI, to do this.”

To see whether a theoretical explanation exists for the cross-shaped distribution, He consulted number theorists. “Apparently, nobody knows,” says He.

“Machine learning hasn’t yet been applied very much to problems in pure maths,” says Andrew Booker at the University of Bristol, UK. “Elliptic curves are a natural place to start.”

“Birch and Swinnerton-Dyer made their conjecture based on patterns that they observed in numerical data in the 1960s, and I could imagine applications of machine learning that tried to detect those patterns efficiently,” says Booker, but the approach so far is too simple to turn up any deep patterns, he says.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Donna Lu*


Decimals to Fractions

Some students get their first look at manipulating equations when they learn how to convert a repeating decimal to its corresponding fraction (rational numbers yield terminating decimals, or non-terminating repeating decimals). Playing with fractions and decimals is always good in itself, but there are also two nice side-benefits to this simple calculation: first, you can see the surprising result that  0.999… = 1, and second, it helps answer the question ‘which fractions lead  to terminating decimals, and which lead to repeating decimals?’ Better still, it leads to other questions.

  1. The calculation – turning decimals into fractions

A repeating decimal like 0.633633… can be written as a fraction by writing out the equation x =  0.633633…, and then manipulating it so that the infinitely long repeated parts get eliminated. The manipulation is done by multiplying the equation by the right power of 10 to shift the digits, and then subtracting off the original equation.

Perhaps you have a more complicated repeating decimal that you want to fractionalize. The procedure remains essentially the same: obtain two equations that have the same “decimal part” that can be eliminated by subtracting. You might have to shift twice in order to accomplish this. Here is an example:
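
Take, for instance, x = 0.123333… (the number is chosen purely for illustration). Multiplying by 100 gives 100x = 12.3333…, and multiplying by 1000 gives 1000x = 123.3333…. The two equations now share the same decimal part, so subtracting the first from the second eliminates it: 900x = 111, and hence x = 111/900 = 37/300.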

  2. Showing that 0.999… = 1

This school mathematics result has been made into a popular topic of discussion on the Internet. Try doing a search on “0.999 = 1”, and see what you find.

  3. Which fractions terminate, and which keep going?

A fraction can sometimes give us a terminating decimal, and sometimes a repeating decimal. It is reasonable to wonder what kind of fractions lead to each kind of decimal.

Thinking about the procedure above might give you an idea for how to think about this question.

If we have a situation where a fraction x = p/q (written in lowest terms) is a terminating decimal, it means that there must be some power of 10, say 10^n, such that multiplying x by this power completely eliminates the decimal (i.e. multiplying by 10^n gives us an integer). This can only happen if the denominator q divides 10^n, which requires q to be of the form (2^m)(5^k), where the exponents m and k are non-negative integers (possibly zero), and n = max(m, k).

So, a fraction p/q terminates if and only if the denominator is of the form (2^m)(5^k). Consequently, fractions such as 1/2, 3/5, 9/25, 7/8, 3/200, are all terminating, and, for example, any fraction of the form 1/p where p is a prime that is not 2 or 5, will be non-terminating and repeating.
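
That criterion is easy to turn into a quick check (an illustrative sketch, not part of the original post): reduce the fraction to lowest terms, strip every factor of 2 and 5 from the denominator, and see whether anything is left.

```python
from fractions import Fraction

def terminates(p, q):
    """True if p/q has a terminating decimal expansion."""
    d = Fraction(p, q).denominator   # lowest-terms denominator
    for f in (2, 5):
        while d % f == 0:
            d //= f
    return d == 1

print([terminates(1, 2), terminates(3, 200), terminates(1, 3), terminates(1, 7)])
# [True, True, False, False]
```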

It is reasonable to ask questions about how long the decimals or their repeating parts can be. For a non-terminating repeating decimal, questions about the length of the repeating part are harder to answer.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*


Danger of Death: Are We Programmed to Miscalculate Risk?

Our best efforts to gauge threats may be counter-productive.

Assessing risk is something everyone must do every day. Yet few are very good at it, and there are significant consequences of the public’s collective inability to accurately assess risk.

As a first and very important example, most people presume, as an indisputable fact, that the past century has been the most violent in all history — two devastating world wars, the Holocaust, the Rwanda massacre, the September 11 attacks and more — and that we live in a highly dangerous time today.

And yet, as Canadian psychologist (now at Harvard) Steven Pinker has exhaustively documented in his new book The Better Angels of Our Nature: Why Violence Has Declined, the opposite is closer to the truth, particularly when normalised by population.

As Pinker himself puts it:

“Believe it or not — and I know most people do not — violence has been in decline over long stretches of time, and we may be living in the most peaceful time in our species’ existence. The decline of violence, to be sure, has not been steady; it has not brought violence down to zero (to put it mildly); and it is not guaranteed to continue.

“But I hope to convince you that it’s a persistent historical development, visible on scales from millennia to years, from the waging of wars and perpetration of genocides to the spanking of children and the treatment of animals.”

How could the public perception be so wrong? The news media is partly to blame — good news doesn’t sell much advertising space. But the problem might go even deeper: we may be psychologically disposed to miscalculate risk, perhaps as an evolutionary response to danger.

One well-known problem is the “conjunction fallacy” — the common predilection to assign greater probability to a more specialised risk.
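
The reason it is a fallacy is that, however the events are defined, the joint event “A and B” can never be more probable than A on its own. A toy simulation makes the point (the probabilities below are arbitrary illustration values, not data).

```python
import random

random.seed(1)
trials = 100_000
count_a = count_both = 0
for _ in range(trials):
    a = random.random() < 0.30   # some risk A
    b = random.random() < 0.40   # an added, more specific detail B
    count_a += a
    count_both += a and b

print(count_a / trials, count_both / trials)   # roughly 0.30 versus 0.12
```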

One indication of our inability to objectively assess risk is the fanatical and often counter-productive measures taken by parents nowadays to protect children. Some 42 years ago, 67% of American children walked or biked to school, but today only 10% do, in part stemming from a handful of highly publicised abduction incidents.

Yet the number of cases of real child abduction by strangers (as opposed to, say, a divorced parent) has dwindled from 200-300 per year in the 1990s to only about 100 per year in the US today.

Even if one assumes all of these children are harmed (which is not true), this is still only about 1/20 the risk of drowning and 1/40 of the risk of a fatal car accident.

Such considerations may not diminish the tragedy of an individual loss, but they do raise questions of priority in prevention. Governments worldwide often agonise over marginal levels of additives in certain products (Alar in apples in the 1980s and asbestos insulation in well-protected ceilings), while refusing to spend money or legislate for clear social good (smoking in the developing world, gun control, infectious disease control, needle exchange programs and working conditions in coal mines).

One completely absurd example is the recent surge of opposition in the U.S. (supposedly on health concerns) to “smart meters,” which once an hour send usage statistics to the local electric or natural gas utility.

The microwave exposure for these meters, even if you are standing just two feet from a smart meter when it broadcasts its data, is 550 times less than standing in front of an active microwave oven, up to 4,600 times less than holding a walkie-talkie at your ear, and up to 1,100 times less than holding an active cell phone at your ear.

It is even less than sitting in a WiFi cyber cafe using a laptop computer.

A much more serious example is the ongoing hysteria, especially in the UK and the US, over childhood vaccinations. Back in 1998, a study was published in the British medical journal Lancet claiming that vaccination shots with a certain mercury compound may be linked to autism, but other studies showed no such link.

In the meantime, many jumped on the anti-vaccination bandwagon, and several childhood diseases began to reappear, including measles in England and Wales, and whooping cough in California. We should note the rate of autism is probably increasing.

Finally, in January 2011, Lancet formally acknowledged that the original study was not only bad science (which had been recognised for years), but further an “elaborate fraud”.

Yet nearly one year later, opposition to vaccination remains strong, and irresponsible politicians such as would-be-US-President Michele Bachmann cynically (or ignorantly?) milk it.

A related example is the worldwide reaction to the Fukushima reactor accident. This was truly a horrible incident, and we do not wish to detract from the death and environmental devastation that occurred. But we question decisions such as that quickly made by Germany to discontinue and dismantle its nuclear program.

Was this decision made after a sober calculation of relative risk, or simply from populist political pressure? We note this decision inevitably will mean more consumption of fossil fuels, as well as the importation of electricity from France, which is 80% nuclear.

Is this a step forward, or a step backward? We also note that concern about global warming is, if anything, more acute than ever in light of accelerating carbon consumption.

This kind of over-reaction — to which many of us are prey — is exacerbated by cynical and exploitive individuals, such as Bill and Michelle Deagle and Jeff Rense, who profit from such fears by peddling bogus medical products, speaking at conspiracy conventions for hefty fees, and charging for elite information.

This is just one instance of a large, growing and dangerous co-evolution of creationist, climate-denial and other anti-science movements.

How do we protect against such misinformation and misperceptions? The complete answers are complex but several things are clear.

First of all, science education must be augmented to address the assessment of risk — this should be a standard part of high school mathematics, as should more attention to the information needed to make informed assessments.

Second, the press needs to be significantly more vigilant in critically commenting on dubious claims of public risk by citing literature, consulting real experts, and so on. Ideally, we should anticipate scientifically trained and certified science journalists.

Third, mathematicians and scientists themselves need to recognise their responsibility to help the public understand risk. Failure to do so, well, poses a serious risk to society.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jonathan Borwein (Jon)*