Timetables – Hard to Read, Even Harder to Build

When you look at a train or university timetable you don’t think about how the timetable was made – you’re thinking about your trip or where your next class is.

In that moment, you couldn’t care less about all the juggling and compromising that needs to happen to get a timetable working.

Not surprisingly, though, this is a difficult process dating back many years and one that perplexes mathematicians even today.

To understand the difficulty associated with creating timetables – those timetables we all rely so heavily on – we need to understand the nature of difficulty itself.

In particular, we need to understand how it applies in a mathematical context.

Seriously, how hard can it be?

Not surprisingly, some tasks in mathematics are harder than others – but even this perception is complicated. Many important computational tasks are hard, or at least appear to be hard.

Knowing precisely what “hard” and “easy” mean has itself turned out to be extremely challenging. To know a computational task really is hard, we have to be sure there isn’t some nice efficient method waiting to be discovered that would render the task straightforward.

Our old friend timetabling is a classic example of a problem for which the precise difficulty is unknown.

I don’t get it, and my train’s leaving …

Consider a university timetable: a computer can easily check that a given timetable has no clashes, but a clash-free timetable cannot, in general, be guaranteed to exist. Because of this, finding a timetable with the minimum number of clashes is a genuine mathematical challenge.

Commercial timetabling applications can’t in general do any better than attempt to approximate the minimum number of clashes.
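Since the exact minimum is out of reach in general, such applications typically rely on heuristics. As a rough illustration (not any particular commercial method – the function name and data layout here are invented for this example), a minimal greedy sketch in Python places each subject in the slot that causes the fewest clashes so far:

```python
def greedy_timetable(subjects, conflicts, n_slots):
    """Greedy heuristic: place each subject in the slot that currently
    causes the fewest clashes. Fast, but with no guarantee of finding
    the true minimum number of clashes."""
    slot_of = {}        # subject -> assigned slot
    total_clashes = 0
    for s in subjects:
        # clashes each slot would cause, given subjects already placed
        cost = [sum(1 for t in conflicts.get(s, ()) if slot_of.get(t) == slot)
                for slot in range(n_slots)]
        best = min(range(n_slots), key=cost.__getitem__)
        slot_of[s] = best
        total_clashes += cost[best]
    return slot_of, total_clashes
```

With three mutually conflicting subjects and only two slots, for instance, the heuristic correctly settles for a single clash – but on larger instances nothing guarantees its answer is optimal.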

As yet, no-one has proved that timetabling cannot be solved efficiently. In fact, the ability to solve timetabling efficiently hinges on one of the foremost unsolved problems in mathematics: the so-called “P vs NP” problem.

A Millennium Problem

Roughly speaking, the “P vs NP” problem asks whether:

1) finding correct solutions to a problem (e.g. constructing a clash-free timetable) is a genuinely harder task than …

2) … simply checking a given correct solution (e.g. checking to see if a completed timetable is clash-free).

Intuitively, you would believe finding a solution should be harder than checking a solution.

Think of your favourite Sudoku puzzle: checking that your solution is correct is easy – each square, column and row can only contain each number from one to nine once. But finding the solution – completing the puzzle – is hard graft.
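This asymmetry is easy to demonstrate in code. A minimal Python sketch of the “checking is easy” half – verifying a completed grid, not solving one (the function name is invented for this example):

```python
def is_valid_sudoku(grid):
    """Check a completed 9x9 Sudoku grid: every row, column and
    3x3 box must contain the digits 1 to 9 exactly once."""
    digits = set(range(1, 10))
    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [
        [grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)]
        for br in (0, 3, 6) for bc in (0, 3, 6)
    ]
    return all(set(unit) == digits for unit in rows + cols + boxes)
```

The check runs in a fixed, tiny number of steps; no comparably quick recipe is known for filling in a hard puzzle from scratch.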

But it’s not quite as simple as that.

In 2000, the Clay Mathematics Institute chose “P vs NP” as one of seven Millennium Prize problems, with a $1 million prize for the person who solves it.

Constraint satisfaction

Algebraic and logical methods have proved particularly useful in understanding what determines the difficulty of a problem.

Among the computational problems in the NP class – i.e. problems whose solutions can be “easily” checked by a computer, but for which no “easy” solving method is known – Constraint Satisfaction Problems (or CSPs, as they are affectionately known) have received particular attention.

Any problem that involves giving values to a large collection of objects that obey some constraints is a CSP – and timetabling is one such problem.

In a school or university setting, we assign exam slots to particular subjects, obeying the constraint that subjects with common enrolment are scheduled at different times.

Map colouring is a CSP closely related to timetabling. Picture a map of Australia in which each state and territory is given one of three colours – red, green or blue – with the constraint that neighbouring states and territories must have different colours.

Map colouring is typical of “hard” problems in the class NP: given a successfully three-coloured map it is easy to verify that it obeys the neighbouring region constraint (so checking is easy). But in general there is no known efficient algorithm for deciding if a map can be successfully three-coloured.
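The “checking is easy” half can be made concrete. A minimal Python sketch (the border list covers the mainland states and territories; Tasmania has no land borders, so any colour works for it):

```python
# Neighbouring pairs of Australian mainland states and territories
borders = [
    ("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "QLD"),
    ("SA", "QLD"), ("SA", "NSW"), ("SA", "VIC"),
    ("QLD", "NSW"), ("NSW", "VIC"),
]

def is_valid_colouring(colouring, borders):
    """Easy to check: no two neighbouring regions share a colour."""
    return all(colouring[a] != colouring[b] for a, b in borders)

colouring = {"WA": "red", "NT": "green", "SA": "blue",
             "QLD": "red", "NSW": "green", "VIC": "red", "TAS": "blue"}
print(is_valid_colouring(colouring, borders))  # True
```

Verifying takes one quick pass over the border list; deciding whether *some* valid three-colouring exists for an arbitrary map is, as far as anyone knows, vastly harder.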

With every CSP one may associate an algebraic structure, but be warned: even by the imaginative standards of modern algebra, these structures are unusual beasts.

Numbers of the beast

In the familiar high school “algebra of numbers”, operations such as addition and multiplication are used to combine numbers to produce new ones. So, combining 1 and 2 using + gives 3.

For a CSP such as timetabling though, the role of numbers is taken by the available timetable slots, and the operations used to combine these are even more bizarre.

(How bizarre are these operations? Well, they are “technical generalisations of symmetries”, known as “polymorphisms”. You did ask!)

Despite their unusual character, these weird algebraic oddities are known to precisely determine the difficulty of a CSP.

A problem such as timetabling turns out to have very few polymorphisms: a classic hallmark of a difficult problem.

Many mathematicians and theoretical computer scientists around the world are working hard to prove it is precisely this absence of interesting properties that causes computational difficulty.

Will anyone ever solve this mighty head-scratcher? The chance of winning a Millennium Prize – not to mention $1 million – is definitely a motivating factor.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Peter Rowlett*


If I Had a Blank Cheque I’d … Turn IBM’s Watson Into a Maths Genius

Money should be no object when it comes to the numbers game.

Mathematics has many grand challenge problems, but none that can potentially be settled by pouring in more money – unlike the case of the Large Hadron Collider, the Square Kilometre Array or other such projects.

Maths is a different beast. But, of course, you’re offering me unlimited, free dosh, so I should really think of something.

Grand Challenges in Mathematics

In his famous 1900 speech The Problems of Mathematics David Hilbert listed 23 problems that set the stage for 20th century mathematics.

It was a speech full of optimism for mathematics in the coming century and Hilbert felt open (or unsolved) problems were a sign of vitality:

“The great importance of definite problems for the progress of mathematical science in general … is undeniable … [for] as long as a branch of knowledge supplies a surplus of such problems, it maintains its vitality … every mathematician certainly shares … the conviction that every mathematical problem is necessarily capable of strict resolution … we hear within ourselves the constant cry: There is the problem, seek the solution. You can find it through pure thought …”

Hilbert’s problems included the continuum hypothesis, the “well-ordering” of the reals, Goldbach’s conjecture, the transcendence of powers of algebraic numbers, the Riemann hypothesis, the extension of Dirichlet’s principle and many more.

Many were solved in subsequent decades, and each time it was a major event for mathematics.

The Riemann hypothesis (which deals with the distribution of prime numbers) remains on a list of seven “third millennium” problems.

For the solution of each millennium problem, the Clay Mathematics Institute offers – in the spirit of the times – a one million dollar prize.

This prize has already been awarded to – and refused by – Grigori Perelman for resolving the Poincaré conjecture. The solution also merited Science’s Breakthrough of the Year, the first time mathematics had been so honoured.

Certainly, given unlimited moolah, learned groups could be gathered to attack each problem and assisted in various material ways. But targeted research in mathematics has even less history of success than in the other sciences … which is saying something.

Doron Zeilberger famously said that the Riemann hypothesis is the only piece of mathematics whose proof (i.e. certainty of knowledge) merits $10 billion being spent.

As John McCarthy wrote in Science in 1997:

“In 1965 the Russian mathematician Alexander Konrod said ‘Chess is the Drosophila [a type of fruit fly] of artificial intelligence.’

“But computer chess has developed as genetics might have if the geneticists had concentrated their efforts, starting in 1910, on breeding racing Drosophila. We would have some science, but mainly we would have very fast fruit flies.”

Unfortunately, the so-called “curse of exponentiality” – whereby, as a problem grows in size, the effort needed to solve it increases exponentially – pervades all computing, and especially mathematics.

As a result, many problems – such as Ramsey’s Theorem – will likely be impossible to solve by computer brute force, regardless of advances in technology.

Money for nothing

But, of course, I must get to the point. You’re offering me a blank cheque, so what would I do? A holiday in Greece for two? No, not this time. Here’s my manifesto:

Google has transformed mathematical life (as it has with all aspects of life) but is not very good at answering mathematical questions – even if one knows precisely the question to ask and it involves no symbols.

In February, IBM’s Watson computer walloped the best human Jeopardy players in one of the most impressive displays of natural language competence by a machine.

I would pour money into developing an enhanced Watson for mathematics and would acquire the whole corpus of maths for its database.

Maths ages very well and I am certain we would discover a treasure trove. Since it’s hard to tell where maths ends and physics, computer science and other subjects begin, I would be catholic in my acquisitions.

Since I am as rich as Croesus and can buy my way out of trouble, I will not suffer the same court challenges Google Books has faced.

I should also pay to develop a comprehensive computation and publishing system with features that allow one to manipulate mathematics while reading it and which ensures published mathematics is rich and multi-textured, allowing for reading at a variety of levels.

Since I am still in a spending mood, I would endow a mathematical research institute, equipped with great collaboration tools, for roughly every ten million people on Earth.

Such institutes have greatly enhanced research in the countries that can afford and choose to fund them.

Content with my work, I would then rest.


*Credit for article given to Jonathan Borwein (Jon)*


Want To Win at Gambling? Use Your Head

When you know the numbers, things get a whole lot easier.

Some say “punting is a mug’s game”. But is this always true, or can an astute gambler make long-term profits?

Certainly not from casino games. Casinos make profits by paying less than they should on winning bets. A roulette wheel has 37 numbers, so a gambler who bets a dollar has a 1/37 chance of winning and should receive back $37 on a winning number.

But the casino pays only $36.

On average, a gambler loses $1 for every $37 they bet: a loss of 2.7%.

This is the cost of playing the game, and it’s the profit the casino makes – often called the “house percentage”.
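The arithmetic above can be packaged into a one-line calculation. A minimal Python sketch (the function name is invented for this example):

```python
def house_percentage(true_odds, payout):
    """Expected loss per dollar bet, as a percentage: the gap between
    the fair payout (implied by the true odds) and what the house
    actually pays on a winning bet."""
    win_prob = 1 / true_odds          # chance of the bet winning
    expected_return = win_prob * payout
    return (1 - expected_return) * 100

# Single-number roulette bet: 1-in-37 chance, paid at $36
print(round(house_percentage(37, 36), 1))  # 2.7
```

The same function reproduces the figures quoted later for Keno-style games, where the payout falls far shorter of fair odds.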

Houses of all sizes

For casino games such as roulette, Keno and poker machines, the house percentage can be calculated mathematically, and in spite of many proposed betting systems, is an immutable and unchangeable number. No strategy can be used by the punter to make the game profitable.

While gamblers may experience short-term lucky streaks, in the long run they will lose this predetermined percentage of their wagers. But a sensible casino gambler should at least be familiar with the house percentages:

Betting the win line at craps at 1.4%, or red or black at roulette at 2.7%, might be a better option than Keno or Lotto with a house percentage of over 40%.

Let’s be clear here: for every $100 bet through Tattslotto or Powerball, the “house” only pays out $60, keeping $40 for itself.

But sports betting is different.

In a horse race, the chance of winning (and hence the price for a winning bet) is determined subjectively, either by the bookmaker or by the weight of money invested by the public.

If 20% of the amount a bookmaker takes on a race is for the favourite, the public is effectively estimating that particular horse’s chance of winning at one in five. But the bookmaker might set the horse’s winning price at $4.50 (for every $1 bet, the punter gets $4.50 back), giving the bookie a house percentage of 10%.

But a trainer, or jockey with inside knowledge (or statistician with a mathematical model based on past data), may estimate this same horse’s chances at one in three. If the savvy punter is correct, then for every $3 bet they average $4.50 return.
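That calculation can be sketched directly. A minimal Python example using the figures above (the function name is invented here):

```python
def expected_return(price, est_prob, stake=1.0):
    """Average amount returned per bet, if your probability
    estimate of the horse's chance is correct."""
    return est_prob * price * stake

# Bookmaker prices the horse at $4.50 (implied chance 1/4.5, about 22%),
# but your model puts the true chance at 1 in 3.
print(round(expected_return(4.50, 1/3), 2))  # 1.5 -> $1.50 back per $1 staked
```

Any result above $1 per $1 staked is a value bet: over many such wagers, the savvy punter expects to come out ahead.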

A logical punter looks for value – bets that pay more than a fair price as determined by their true probability of winning. There are several reasons why sports betting lends itself to punters seeking value bets.

A sporting chance

In general, more outcomes in a game allow for a higher house percentage. With two even outcomes (betting on a head or tail with a single coin toss, say), a fair price would be $2.

The operator might be able to pay out as little as $1.90, giving a house percentage of 5%, but anything less than this would probably see little interest from gamblers.

But a Keno game with 20 million outcomes might only pay $1 million for a winning $1 bet, rather than a fair $20,000,000. A payout of $1 million gives a staggering house percentage of 95%.

Traditionally, sports betting was restricted to horse, harness and dog racing – events with several outcomes that allowed house percentages of around 15%-20%.

With the extension into many other team and individual sports, betting on which of the two participants would win reduced a bookmaker’s take to as little as 3%-4%.

Competition reduces this further. Only the state-run totalisator (an automated system which, like Tattslotto, determines the winning prices after the event, thus always ensuring the legislated house percentage) and a handful of on-course bookmakers were originally allowed to offer bets on horse racing, whereas countless internet operators now compete.

Betfair even allows punters to bet against each other, effectively creating millions of “bookmakers”.

Head or heart

Many sports punters bet with their hearts, not their heads. This reduces the prices of popular players or teams, thereby increasing the price of their opponents. The low margins and extensive competition even allow punters to sometimes find arbitrage opportunities (where betting on both sides with different bookmakers allows a profit whoever wins).
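For a two-outcome event, an arbitrage exists exactly when the implied probabilities – the reciprocals of the two prices – sum to less than one, since the punter can then stake proportionally on both sides and collect more than the total outlay whoever wins. A minimal Python sketch, with invented prices:

```python
def arbitrage_margin(price_a, price_b):
    """Sum of implied probabilities for a two-outcome event.
    Below 1, betting both sides (with stakes in proportion to the
    implied probabilities) guarantees a profit whichever side wins."""
    return 1 / price_a + 1 / price_b

# Bookmaker 1 offers $2.10 on team A; bookmaker 2 offers $2.10 on team B.
margin = arbitrage_margin(2.10, 2.10)
print(margin < 1)  # True: a guaranteed profit exists, whoever wins
```

With both prices at $2.10 the margin is about 0.95, so roughly a 5% risk-free return is on offer; a single bookmaker’s prices on its own book will always sum to more than 1.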

To overcome their heart, and lack of inside knowledge, many mathematicians create mathematical and statistical models based on past data and results to predict the chances of sports outcomes. They prove the veracity of their models by testing (either on past data or in real time) whether they would profit if the predictions were used for betting.

Academics call the ability to show a profit the “inefficiency of betting markets”, and there are many papers to suggest sports markets are inefficient. Of course the more successful have a vested interest in keeping their methods to themselves and may not publicise their results.

Astute punters can make sports betting profitable in the long term. But the profits made by the plethora of sports bookmakers indicate that most sports punters are not that astute.


*Credit for article given to Stephen Clarke*