Millennium Prize: the Riemann Hypothesis

What will be the next number in this sequence?

“At school I was never really good at maths” is an all too common reaction when mathematicians name their profession.

In view of most people’s perceived lack of mathematical talent, it may come as something of a surprise that a recent study carried out at Johns Hopkins University has shown that six-month-old babies already have a clear sense of numbers. They can count, or at least approximate, the number of happy faces shown on a computer screen.

By the time they start school, at around the age of five, most children are true masters of counting, and many will proudly announce when for the first time they have counted up to 100 or 1000. Children also intuitively understand the regular nature of counting; by adding sufficiently many ones to a starting value of one they know they will eventually reach their own age, that of their parents, grandparents, 2011, and so on.

Counting is child’s play. Photography By Shaeree

From counting to more general addition of whole numbers is only a small step—again within children’s almost-immediate grasp. After all, counting is the art of adding one, and once that is mastered it takes relatively little effort to work out that 3 + 4 = 7. Indeed, the first few times children attempt addition they usually receive help from their fingers or toes, effectively reducing the problem to that of counting:

3 + 4 = (1 + 1 + 1) + (1 + 1 + 1 + 1) = 7.

For most children, the sense of joy and achievement quickly ends when multiplication enters the picture. In theory it too can be understood through counting: 3 x 6 is three lots of six apples, which can be counted on fingers and toes to give 18 apples.

In practice, however, we master it through long hours spent rote-learning multiplication tables—perhaps not among our favourite primary school memories.

But at this point, we ask the reader to consider the possibility—in fact, the certainty—that multiplication is far from boring and uninspiring: it is intrinsically linked with some of mathematics’ deepest, most enduring and beautiful mysteries. And while a great many people may claim to be “not very good at maths”, they are, in fact, equipped to understand some very difficult mathematical questions.

Primes

Let’s move towards these questions by going back to addition and those dreaded multiplication tables. Just like the earlier example of 7, we know that every whole number can be constructed by adding together sufficiently many ones. Multiplication, on the other hand, is not so well-behaved.

The number 12, for example, can be broken up into smaller pieces, or factors, while the number 11 cannot. More precisely, 12 can be written as the product of two whole numbers in multiple ways: 1 x 12, 2 x 6 and 3 x 4, but 11 can only ever be written as the product 1 x 11. Numbers such as 12 are called composite, while those that refuse to be factored are known as prime numbers or simply primes. For reasons that will soon become clear, 1 is not considered a prime, so that the first five prime numbers are 2, 3, 5, 7 and 11.

Just as the number 1 is the atomic unit of whole-number addition, prime numbers are the atoms of multiplication. According to the Fundamental Theorem of Arithmetic, any whole number greater than 1 can be written as a product of primes in exactly one way. For example: 4 = 2 x 2, 12 = 2 x 2 x 3, 2011 = 2011 and

13079109366950 = 2 x 5 x 5 x 11 x 11 x 11 x 37 x 223 x 23819,

where we always write the factors from smallest to largest. If, rather foolishly, we were to add 1 to the list of prime numbers, this would cause the downfall of the Fundamental Theorem of Arithmetic:

4 = 2 x 2 = 1 x 2 x 2 = 1 x 1 x 2 x 2 = …

In the above examples we have already seen several prime numbers, and a natural question is to ask for the total number of primes. From what we have learnt about addition with its single atom of 1, it is not unreasonable to expect there are only finitely many prime numbers, so that, just maybe, the 2649th prime number, 23819, could be the largest. Euclid of Alexandria, who lived around 300BC and who also gave us Euclidean Geometry, in fact showed that there are infinitely many primes.

Euclid’s reasoning can be captured in just a single sentence: if the list of primes were finite, then by multiplying them together and adding 1 we would get a new number which is not divisible by any prime on our list—a contradiction.

A few years after Euclid, his compatriot Eratosthenes of Cyrene found a clever way, now known as the Sieve of Eratosthenes, to obtain all primes less than a given number.

For instance, to find all primes less than 100, Eratosthenes would write down a list of all numbers from 2 to 99, cross out all multiples of 2 (but not 2 itself), then all multiples of 3 (but not 3 itself), then all multiples of 5, and so on. After only four steps(!) this would reveal to him the 25 primes

2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89 and 97.
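That procedure translates directly into a few lines of code. Here is a minimal sketch in Python (an illustration, not part of the original article) that reproduces the list above:

```python
def primes_below(n):
    """Sieve of Eratosthenes: return all primes less than n."""
    is_prime = [True] * n
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Cross out every multiple of p (but not p itself).
            for multiple in range(p * p, n, p):
                is_prime[multiple] = False
    return [i for i in range(n) if is_prime[i]]

print(primes_below(100))        # the 25 primes listed above
print(len(primes_below(100)))   # 25
```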

While this might seem very quick, much more sophisticated methods, combined with very powerful computers, are needed to find really large prime numbers. The current world record, established in 2008, is the truly monstrous 2^43,112,609 – 1, a prime number of approximately 13 million digits.

The quest to tame the primes did not end with the ancient Greeks, and many great mathematicians, such as Pierre de Fermat, Leonhard Euler and Carl Friedrich Gauss studied prime numbers extensively. Despite their best efforts, and those of many mathematicians up to the present day, there are many more questions than answers concerning the primes.

One famous example of an unsolved problem is Goldbach’s Conjecture. In 1742, Christian Goldbach remarked in a letter to Euler that it appeared that every even number greater than 2 could be written as the sum of two primes.

For example, 2012 = 991 + 1021. While computers have confirmed the conjecture holds well beyond the first quintillion (10¹⁸) numbers, there is little hope of a proof of Goldbach’s Conjecture in the foreseeable future.
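Checking the conjecture for small numbers is easy, even though proving it is not. The sketch below (illustrative only, and nowhere near the quintillion-scale computer searches just mentioned) looks for a counterexample among the even numbers up to 10,000:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_counterexample(limit):
    """Return the first even number in [4, limit] that is not a sum of two primes."""
    primes = [p for p in range(2, limit) if is_prime(p)]
    prime_set = set(primes)
    for even in range(4, limit + 1, 2):
        if not any((even - p) in prime_set for p in primes if p <= even - 2):
            return even
    return None

print(goldbach_counterexample(10_000))  # None: every even number up to 10,000 checks out
print(991 + 1021)                       # 2012, as in the example above
```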

Another intractable problem is that of breaking very large numbers into their prime factors. If a number is known to be the product of two primes, each about 200 digits long, current supercomputers would take more than the lifetime of the universe to actually find these two prime factors. This time round our inability to do better is in fact a blessing: most secure encryption methods rely heavily on our failure to carry out prime factorisation quickly. The moment someone discovers a fast algorithm to factor large numbers, the world’s financial system will collapse, making the GFC look like child’s play.

To the dismay of many security agencies, mathematicians have also failed to show that fast algorithms are impossible—the possibility of an imminent collapse of world order cannot be entirely ruled out!

Margins of error

For mathematicians, the main prime number challenge is to understand their distribution. Quoting Don Zagier, nobody can predict where the next prime will sprout; they grow like weeds among the whole numbers, seemingly obeying no other law than that of chance. At the same time the prime numbers exhibit stunning regularity: there are laws governing their behaviour, obeyed with almost military precision.

The Prime Number Theorem describes the average distribution of the primes; it was first conjectured by both Gauss and Adrien-Marie Legendre, and then rigorously established independently by Jacques Hadamard and Charles Jean de la Vallée Poussin, a hundred years later in 1896.

The Prime Number Theorem states that the number of primes less than an arbitrarily chosen number n is approximately n divided by ln(n), where ln(n) is the natural logarithm of n. The relative error in this approximation becomes arbitrarily small as n becomes larger and larger.

For example, there are 25 primes less than 100, and 100/ln(100) = 21.7…, which is around 13% short. When n is a million we are up to 78498 primes and, since 10⁶/ln(10⁶) = 72382.4…, we are only 8% short.
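Both of those checks take only a few lines of code to reproduce; this small sketch counts primes with a sieve and compares the count with n/ln(n):

```python
import math

def prime_count(n):
    """Number of primes less than n, by the Sieve of Eratosthenes."""
    sieve = [True] * n
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return sum(sieve)

for n in (100, 10**6):
    actual = prime_count(n)
    estimate = n / math.log(n)
    shortfall = (actual - estimate) / actual
    print(f"n = {n}: {actual} primes, estimate {estimate:.1f}, {shortfall:.0%} short")
```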

The Riemann Hypothesis

The Prime Number Theorem does an incredible job describing the distribution of primes, but mathematicians would love to have a better understanding of the relative errors. This leads us to arguably the most famous open problem in mathematics: the Riemann Hypothesis.

Posed by Bernhard Riemann in 1859 in his paper “Ueber die Anzahl der Primzahlen unter einer gegebenen Grösse” (On the number of primes less than a given magnitude), the Riemann Hypothesis tells us how to tighten the Prime Number Theorem, giving us a control of the errors, like the 13% or 8% computed above.

The Riemann Hypothesis does not just “do better” than the Prime Number Theorem—it is generally believed to be “as good as it gets”. That is, we, or far-superior extraterrestrial civilisations, will never be able to predict the distribution of the primes any better than the Riemann Hypothesis does. One can compare it to, say, the ultimate 100 metres world record—a record that, once set, is impossible to ever break.

Finding a proof of the Riemann Hypothesis, and thus becoming record holder for all eternity, is the holy grail of pure mathematics. While the motivation for the Riemann Hypothesis is to understand the behaviour of the primes, the atoms of multiplication, its actual formulation requires higher-level mathematics and is beyond the scope of this article.

In 1900, David Hilbert, the most influential mathematician of his time, posed a now famous list of 23 problems that he hoped would shape the future of mathematics in the 20th century. Very few of Hilbert’s problems other than the Riemann Hypothesis remain open.

Inspired by Hilbert, in 2000 the Clay Mathematics Institute announced a list of seven of the most important open problems in mathematics. For the successful solver of any one of these there awaits not only lasting fame, but also one million US dollars in prize money. Needless to say, the Riemann Hypothesis is one of the “Millennium Prize Problems”.

Hilbert himself remarked: “If I were awoken after having slept for a thousand years, my first question would be: has the Riemann Hypothesis been proven?” Judging by the current rate of progress, Hilbert may well have to sleep a little while longer.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Ole Warnaar*

 


How Far Away is Everybody? Climbing The Cosmic Distance Ladder

We know the universe is vast, but how do we measure the distances between things? Dave Scrimshaw.

Let’s talk numbers for a moment.

The moon is approximately 384,000 kilometres away, and the sun is approximately 150 million kilometres away. The mean distance between Earth and the sun is known as the “astronomical unit” (AU). Neptune, the most distant planet, then, is 30 AU from the sun.

The nearest stars to Earth are roughly 9,000 times more distant still, at about 4.3 light-years away (one light-year being the distance that light travels in 365.25 days – just under 10 trillion kilometres).

The Milky Way galaxy consists of some 300 billion stars in a spiral-shaped disk roughly 100,000 light-years across.

The Andromeda Galaxy, which can be seen with many home telescopes, is 2.54 million light years away. There are hundreds of billions of galaxies in the observable universe.

At present, the most distant observed galaxy is some 13.2 billion light-years away, formed not long after the Big Bang, 13.75 billion years ago (plus or minus 0.11 billion years).

The scope of the universe was illustrated by the astrophysicist Geraint Lewis in a recent Conversation article.

He noted that, if the entire Milky Way galaxy was represented by a small coin one centimetre across, the Andromeda Galaxy would be another small coin 25 centimetres away.

Going by this scale, the observable universe would extend for 5 kilometres in every direction, encompassing some 300 billion galaxies.

But how can scientists possibly calculate these enormous distances with any confidence?

Parallax

One technique is known as parallax. If you cover one eye and note the position of a nearby object, compared with more distant objects, the nearby object “moves” when you view it with the other eye. This is parallax (see below).


The same principle is used in astronomy. As Earth travels around the sun, relatively close stars are observed to move slightly, with respect to other fixed stars that are more distant.

Distance measurements can be made in this way for stars up to about 1,000 light-years away.
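The arithmetic behind this is pleasantly simple: a star whose apparent position shifts by p arcseconds as Earth moves by one astronomical unit lies 1/p parsecs away, and a parsec is about 3.26 light-years. A small sketch, using an illustrative parallax of about 0.77 arcseconds for the nearest star:

```python
PARSEC_IN_LIGHT_YEARS = 3.26

def parallax_distance_light_years(parallax_arcseconds):
    # Distance in parsecs is the reciprocal of the parallax angle in arcseconds.
    return (1.0 / parallax_arcseconds) * PARSEC_IN_LIGHT_YEARS

print(round(parallax_distance_light_years(0.77), 1), "light-years")  # about 4.2
```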

Standard candles

For more distant objects such as galaxies, astronomers rely on “standard candles” – bright objects that are known to have a fixed absolute luminosity (brightness).

Since light flux falls off as the square of the distance, by measuring the actual brightness observed on Earth astronomers can calculate the distance.
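That inverse-square relationship is the whole calculation. The sketch below uses made-up numbers purely to illustrate the scaling (it is not a real measurement): a candle that appears 100 times fainter must be 10 times further away.

```python
import math

def distance_from_flux(luminosity, flux):
    """Distance to a standard candle of known luminosity, from the flux we measure."""
    return math.sqrt(luminosity / (4 * math.pi * flux))

L = 1.0e28                             # an assumed absolute luminosity, in watts
near = distance_from_flux(L, 1.0e-10)  # brighter, so nearer
far = distance_from_flux(L, 1.0e-12)   # 100 times fainter
print(round(far / near, 1))            # 10.0
```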

One type of standard candle, which has been used since the 1920s, is Cepheid variable stars.

Distances determined using this scheme are believed accurate to within about 7% for more nearby galaxies, and 15-20% for the most distant galaxies.

Type Ia supernovas

In recent years scientists have used Type Ia supernovae. These occur in a binary star system when a white dwarf star starts to attract matter from a larger companion star, such as a red giant.

As the white dwarf gains more and more matter, it eventually undergoes a runaway nuclear explosion that may briefly outshine an entire galaxy.

Because this process can occur only within a very narrow range of total mass, the absolute luminosity of Type Ia supernovae is very predictable. The uncertainty in these measurements is typically 5%.

In August, worldwide attention was focused on a Type Ia supernova that exploded in the Pinwheel Galaxy (known as M101), a beautiful spiral galaxy located just above the handle of the Big Dipper in the Northern Hemisphere. This is the closest supernova to the earth since the 1987 supernova, which was visible in the Southern Hemisphere.

These and other techniques for astronomical measurements, collectively known as the “cosmic distance ladder”, are described in an excellent Wikipedia article. Such multiple schemes lend an additional measure of reliability to these measurements.

In short, distances to astronomical objects have been measured with a high degree of reliability, using calculations that mostly employ only high-school mathematics.

Thus the overall conclusion of a universe consisting of billions of galaxies, most of them many millions or even billions of light-years away, is now considered beyond reasonable doubt.

Right tools for the job

The kind of distances we’re dealing with above do cause consternation for some since, as we peer millions of light-years into space, we are also peering millions of years into the past.

Some creationists, for instance, have theorised that, in about 4,000 BCE, a Creator placed quadrillions of photons in space en route to Earth, with patterns suggestive of supernova explosions and other events millions of years ago.

Needless to say, most observers reject this notion. Kenneth Miller of Brown University commented, “Their [Creationists’] version of God is one who has filled the universe with so much bogus evidence that the tools of science can give us nothing more than a phony version of reality.”

There are plenty of things in the universe to marvel at, and plenty of tools to help us understand them. That should be enough to keep us engaged for now.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jonathan Borwein (Jon)*


The Stunningly Simple Rule That Will Always Get You Out of a Maze

You thought the maze looked fun, but now you can’t find your way out. Luckily, mathematics is here to help you escape, says Katie Steckles.

Getting lost in a maze is no fun, and on that rare occasion when you find yourself stuck in one without a map or a bird’s-eye view, it can be difficult to choose which way to go. Mathematics gives us a few tools we can use – in particular, topology, which concerns shapes and how they connect.

The most devious mazes are designed to be as confusing as possible, with dead ends and identical-looking junctions. But there is a stunningly simple rule that will always get you out of a maze, no matter how complicated: always turn right.

Any standard maze can be solved with this method (or its equivalent, the “always-turn-left” method). To do it, place one hand on the wall of the maze as you go in and keep it there. Each time you come to a junction, keep following the wall – if there is an opening on the side you are touching, take it; otherwise go straight. If you hit a dead end, turn around and carry on.
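Here is a rough sketch of the right-hand version of that rule on a small grid maze (the maze layout and function names are invented for illustration):

```python
# '#' is a wall, ' ' is a corridor; the walker enters at 'S', keeps its right
# hand on the wall, and stops when it reaches the exit 'E'.
MAZE = [
    "#########",
    "S   #   #",
    "### # # #",
    "#   # # #",
    "# ### # #",
    "#     # E",
    "#########",
]

def wall_follow(maze):
    grid = [list(row) for row in maze]
    rows, cols = len(grid), len(grid[0])
    moves = {"N": (-1, 0), "E": (0, 1), "S": (1, 0), "W": (0, -1)}
    order = "NESW"                                  # clockwise
    right = {d: order[(order.index(d) + 1) % 4] for d in order}
    left = {d: order[(order.index(d) - 1) % 4] for d in order}

    def open_at(r, c):
        return 0 <= r < rows and 0 <= c < cols and grid[r][c] != "#"

    # Start at the entrance, facing into the maze.
    r = next(i for i, row in enumerate(grid) if "S" in row)
    c = grid[r].index("S")
    facing = "E"
    path = [(r, c)]

    while grid[r][c] != "E":
        # Right-hand rule: prefer turning right, then going straight,
        # then turning left, then turning around (a dead end).
        for turn in (right[facing], facing, left[facing], right[right[facing]]):
            dr, dc = moves[turn]
            if open_at(r + dr, c + dc):
                facing = turn
                r, c = r + dr, c + dc
                path.append((r, c))
                break
    return path

print(len(wall_follow(MAZE)), "cells visited on the way out")  # not the shortest route
```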

The reason this works is that the walls of any solvable maze will always have at least two distinct connected pieces: one to the left of the optimal solution path, and one to the right. The section of wall next to the entrance is part of the same connected chunk of maze as the wall by the exit, and if you keep your hand on it, you will eventually walk along the whole length of the edge of this object – no matter how many twists and turns this involves – and reach the part at the exit.

While it is guaranteed to work, this certainly won’t be the most efficient path – you might find you traverse as much as half of the maze in the process, or even more depending on the layout. But at least it is easy to remember the rule.

Some mazes have more than two pieces. In these, disconnected sections of wall inside the maze create loops. In this case, if you start following the wall somewhere in the middle of the maze, there is a chance it could be part of an isolated section, which would leave you walking around a loop forever. But if you start from a wall that is connected to the outside, wall-following will still get you out.

It is reassuring to know that even if you are lost in a maze, you can always get out by following some variation on this rule: if you notice you have reached part of the maze you have been to before, you can detect loops, and switch to the opposite wall.

This is especially useful for mazes where the goal is to get to the centre: if the centre isn’t connected to the outside, wall-following won’t work, and you will need to switch walls to get onto the centre component. But as long as there are a finite number of pieces to the maze, and you keep trying different ones, you will eventually find a piece that is connected to your goal. You might, however, miss the bus home.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Katie Steckles*


Are Pi’s Days Numbered?

Pi defines the relationship between a circle’s radius and its area.

Some people have argued that Pi’s days are numbered and that other tools, such as tau, could do its job more efficiently. As someone who has studied Pi throughout his entire working life, my response to such challenges is unwavering: Pi is the gift that keeps on giving.

People call me Doctor Pi. I have played with Pi since I was a child and have studied it seriously for 30 years. Each year I discover new, unexpected and amusing things about Pi, its history and its computation. I never tire of it.

Erm, what is Pi?

Pi, written with the Greek letter π, has the value 3.14159…, and is the most important number in mathematics. The area of a circle of radius r is πr², while its perimeter has length 2πr.

Some Pi facts? OK

  • Without Pi there is no theory of motion, no understanding of geometry or space/time.
  • Pi occurs in important fields of applied mathematics.
  • Pi is used throughout engineering, science and medicine and is studied for its own sake in number theory.
  • It fascinates specialists and hobbyists alike.

The history of Pi is a history of mathematics

The most famous names in mathematics – Leibniz, Euler, Gauss, Riemann – all play their part in Pi’s illustrious history. In approximately 250BCE Archimedes of Syracuse rigorously showed that the area of a circle is Pi times the square of its radius.

Isaac Newton computed Pi to at least 15 digits in 1666, and a raft of new formulas for calculating Pi discovered in the intervening years has vastly expanded our understanding of this irrational, irreplaceable number.
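To give a flavour of how such formulas turn into digits, here is a minimal sketch (in Python, not part of the original article) using John Machin’s 1706 formula pi = 16·arctan(1/5) − 4·arctan(1/239), chosen for its simplicity rather than any connection to Newton’s own method:

```python
from decimal import Decimal, getcontext

def arctan_inverse(x, digits):
    """arctan(1/x) from its power series, to roughly the requested precision."""
    getcontext().prec = digits + 10
    x = Decimal(x)
    tiny = Decimal(10) ** -(digits + 5)
    total = Decimal(0)
    power = Decimal(1) / x          # current value of 1/x^n
    n, sign = 1, 1
    while power > tiny:
        total += sign * power / n
        power /= x * x
        n += 2
        sign = -sign
    return total

def machin_pi(digits=50):
    getcontext().prec = digits + 10
    pi = 16 * arctan_inverse(5, digits) - 4 * arctan_inverse(239, digits)
    getcontext().prec = digits
    return +pi                       # unary plus rounds to the final precision

print(machin_pi(50))  # 3.1415926535897932384626433832795028841971693993751
```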

In my capacity as Doctor Pi – an affectionate name given to me by my students and colleagues – I have met Nobel Prize winners, pop stars and a variety of colourful characters, many of whom go potty for this number.

So why the broad attraction? What is the secret of Pi’s enduring appeal? It appears in The Simpsons (doh!), in Star Trek (beam me up!), and in British singer-songwriter Kate Bush’s lovely 2005 song Pi:

“Sweet and gentle and sensitive man With an obsessive nature and deep fascination for numbers And a complete infatuation with the calculation of Pi.”

In the song’s refrain, Bush recites the first 160 digits of Pi (but messes up after 50!). Pi shows up in the movie The Matrix, episodes of Law and Order, and Yann Martel’s Man Booker Prize-winning 2001 novel Life of Pi. No other piece of mathematics can command such attention.

Memorising Pi

The current Guinness World Record for reciting digits of Pi by rote is well in excess of 60,000 digits.

This is particularly impressive when you consider that Pi, having been proven irrational in the 18th century, has no known repetition or pattern within its infinite decimal representation.

A former colleague of mine, Simon Plouffe, was a Guinness World Record-holder a generation ago, after reciting Pi to approximately 4,700 digits.

Not surprisingly, there is a trend towards building mnemonics whereby the number of letters in a given word represents a digit in the series. For example “How I need a drink, alcoholic of course” represents 3.1415926. This mnemonic formed the basis of a Final Jeopardy! question in 2005.
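The word-length rule is easy to check mechanically; a tiny illustrative sketch:

```python
def mnemonic_to_digits(sentence):
    """Each word contributes one digit: the number of letters it contains."""
    words = sentence.replace(",", "").split()
    return "".join(str(len(word)) for word in words)

print(mnemonic_to_digits("How I need a drink, alcoholic of course"))  # 31415926
```

Longer mnemonics need extra conventions, such as letting a ten-letter word stand for the digit 0, which this sketch ignores.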

Some mnemonics are as long as 4,000 digits, but my current favourite is a 33-digit self-referential mnemonic published in New Scientist on Pi Day (March 14) last year.

Is Pi really infinite?

In a word: yes. Its decimal expansion goes on forever and never settles into a repeating pattern. So far, it has been calculated to five trillion (5,000,000,000,000) digits. This record was set in August 2010 on Shigeru Kondo’s US$18,000 homemade computer using software written by American university student Alex Yee.

Each such computation is a tour-de-force of computing science.

Estimates suggest that within the next ten to 15 years a quadrillion (1,000,000,000,000,000) digits of Pi will probably be computed. As recently as 1961, Daniel Shanks, who himself calculated Pi to over 100,000 digits, declared that computing one billion digits would be “forever impossible”. As it transpired, this feat was achieved in 1989 by Yasumasa Kanada of Japan.

It’s a kind of magic

Although it is very likely we will learn nothing new mathematically about Pi from computations to come, we just may discover something truly startling. Pi has seen off attacks in the past. It will see off attacks in the future. Pi, like its inherent magic, is infinite.

The battle continues.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jonathan Borwein (Jon)*


The Monty Hall Problem Shows How Tricky Judging The Odds Can Be

Calculating probabilities can be complicated, as this classic “what’s behind the doors” problem shows, says Peter Rowlett.

Calculating probabilities can be tricky, with subtle changes in context giving quite different results. I was reminded of this recently after setting BrainTwister #10 for New Scientist readers, which was about the odds of seating two pairs of people adjacently in a row of 22 chairs.

Several readers wrote to say my solution was wrong. I had figured out all the possible seating arrangements and counted the ones that had the two groups adjacent. The readers, meanwhile, seated one pair first and then counted the ways of seating the second pair adjacently. Neither approach was wrong, depending on how you read the question.

This subtlety with probability is illustrated nicely by the Monty Hall problem, which is based on the long-running US game show Let’s Make a Deal. A contestant tries to guess which of three doors conceals a big prize. They guess at random, with ⅓ probability of finding the prize. In the puzzle, host Monty Hall doesn’t open the chosen door. Instead, he opens one of the other doors to reveal a “zonk”, an item of little value. He then offers the contestant the opportunity to switch to the remaining door or stick with their first choice.

Hall said in 1991 that the game is designed so contestants make the mistaken assumption that, since there are now two choices, their ⅓ probability has increased to ½. This, combined with a psychological preference to avoid giving up a prize already won, means people tend to stick with their original choice.

Marilyn vos Savant published the problem in her column in Parade magazine in 1990 along with the answer that you are much more likely to win if you switch. She received thousands of letters, many from mathematicians and scientists, telling her she was wrong.

Imagine the host opened one of the unchosen doors at random: one-third of the time, they would reveal the prize. But in the remaining cases, the prize would be behind the chosen door half the time, for a probability of ½.

But that isn’t really the problem being solved. The missing piece of information is that the host knows where the prize is, and of course the show must go on. There is a ⅓ probability that the prize is behind the chosen door, and therefore a ⅔ probability that it is behind one of the other two. Being shown a zonk behind one of the other two hasn’t changed this set-up – the door chosen still has a probability of ⅓, so the other door carries a ⅔ probability. You should switch.
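If the argument still feels slippery, a quick simulation makes it concrete. This is only an illustrative sketch (the function name and trial count are arbitrary): the simulated host always opens a losing, unchosen door, exactly as in the puzzle.

```python
import random

def win_rate(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        choice = random.randrange(3)
        # The host, who knows where the prize is, opens a door that is
        # neither the contestant's choice nor the prize.
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print("stick: ", win_rate(switch=False))   # close to 1/3
print("switch:", win_rate(switch=True))    # close to 2/3
```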

Probability problems depend on the precise question more than people realise. This is why it might seem surprising when you run into a friend, because you aren’t considering the number of people you walked past and how many friends you might see. And for scientists, it is why they have to be very careful about what their evidence is really telling them.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Peter Rowlett*


Explainer: Evolutionary Algorithms

My intention with this article is to give an intuitive and non-technical introduction to the field of evolutionary algorithms, particularly with regards to optimisation.

If I get you interested, I think you’re ready to go down the rabbit hole and simulate evolution on your own computer. If not … well, I’m sure we can still be friends.

Survival of the fittest

According to Charles Darwin, the great evolutionary biologist, the human race owes its existence to the phenomenon of survival of the fittest. And being the fittest doesn’t necessarily mean the biggest physical presence.

Once in high school, my lunchbox was targeted by swooping eagles, and I was reduced to a hapless onlooker. The eagle, though smaller in form, was fitter than me because it could take my lunch and fly away – it knew I couldn’t chase it.

As harsh as it sounds, look around you and you will see many examples of the rule of the jungle – the fitter survive while the rest gradually vanish.

The research area, now broadly referred to as Evolutionary Algorithms, simulates this behaviour on a computer to find the fittest solutions to a number of different classes of problems in science, engineering and economics.

The area in which these algorithms are perhaps most widely used is known as “optimisation”.

Optimisation is everywhere

Your high school maths teacher probably told you the shortest way to go from point A to point B was along the straight line joining A and B. Your mum told you that you should always get the right amount of sleep.

And, if you have lived on your own for any length of time, you’ll be familiar with the ever-increasing cost of living versus the constant income – you always strive to minimise the expenditures, while ensuring you are not malnourished.

Whenever you undertake an activity that seeks to minimise or maximise a well-defined quantity such as distance or the vague notion of the right amount of sleep, you are optimising.

Look around you right now and you’ll see optimisation in play – your Coke can is shaped like that for a reason, a water droplet is spherical for a reason, you wash all your dishes together in the dishwasher for a reason.

Each of these strives to save on something: volume of material of the Coke can, and energy and water, respectively, in the above cases.

So we can safely say optimisation is the act of minimising or maximising a quantity. But that definition misses an important detail: there is always a notion of subject to, or satisfying some conditions.

You must get the right amount of sleep, but you also must do your studies and go for your music lessons. Such conditions, which you also have to adhere to, are known as “constraints”. Optimisation with constraints is then collectively termed “constrained optimisation”.

After constraints comes the notion of “multi-objective optimisation”. You’ll usually have more than one thing to worry about (you must keep your supervisor happy with your work and keep yourself happy and also ensure that you are working on your other projects). In many cases these multiple objectives can be in conflict.

Evolutionary algorithms and optimisation

Imagine your local walking group has arranged a weekend trip for its members and one of the activities is a hill climbing exercise. The problem assigned to your group leader is to identify who among you will reach the top of the hill in the shortest time.

There are two approaches he or she could take to complete this task: ask only one of you to climb the hill at a time and measure the time needed, or ask all of you to race up at once and see who reaches the top first.

That second method is known as the “population approach” to solving optimisation problems – and that’s how evolutionary algorithms work. The “population” of solutions is evolved over a number of iterations, with only the fittest solutions making it through to the next one.
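As a toy illustration of this population approach (it has nothing to do with the scramjet work described later, and the one-variable fitness function is invented), a minimal sketch needs only a selection step and a mutation step:

```python
import random

def fitness(x):
    return -(x - 3.7) ** 2          # the "hill top" sits at x = 3.7

def evolve(generations=60, population_size=30, mutation=0.5):
    population = [random.uniform(-10, 10) for _ in range(population_size)]
    for _ in range(generations):
        # Selection: only the fittest half survives to the next iteration.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        # Variation: survivors produce mutated offspring to refill the population.
        children = [x + random.gauss(0, mutation) for x in survivors]
        population = survivors + children
    return max(population, key=fitness)

print(round(evolve(), 2))            # close to 3.7
```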

This is analogous to the champion girl from your school making it to the next round, which is contested among champions from other schools in your state, then your country, and finally winning against champions from all other countries.

Or, in our above scenario, finding who in the walking group reaches the hill top fastest, who would then be denoted as the fittest.

In engineering, optimisation needs are faced at almost every step, so it’s not surprising evolutionary algorithms have been successful in that domain.

Design optimisation of scramjets

At the Multi-disciplinary Design Optimisation Group at the University of New South Wales, my colleagues and I are involved in the design optimisation of scramjets, as part of the SCRAMSPACE program. In this, we’re working with colleagues from the University of Queensland.

Our evolutionary algorithms-based optimisation procedures have been successfully used to obtain the optimal configuration of various components of a scramjet.

Some of these have quite technical names that would in themselves require quite a bit of explanation, but they give a feel for the kind of work we do and its applications for scramjets.

There are, at the risk of sounding over-zealous, no limits to the application of evolutionary algorithms.

Has this whetted your appetite? Have you learnt something new today?

If so, I’m glad. May the force be with you!

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Amit Saha*


Millennium Prize: The Yang-Mills Existence and Mass Gap problem

There’s a contradiction between classical and quantum theories.

One of the outstanding discoveries made in the early part of the last century was that of the quantum behaviour of the physical world. At very short distances, such as the size of an atom and smaller, the world behaves very differently to the “classical” world we are used to.

Typical of the quantum world is so-called wave-particle duality: particles such as electrons behave sometimes as if they are point particles with a definite position, and sometimes as if they are spread out like waves.

This strange behaviour is not just of theoretical interest, since it underpins much of our modern technology. It is fundamental to the behaviour of semiconductors in all our electronic devices, the behaviour of nano-materials, and the current rise of quantum computing.

Quantum theory is fundamental. It must govern not just the very small but also the classical realm. That means physicists and mathematicians have had to develop methods not just for understanding new quantum phenomena, but also for replacing classical theories by their quantum analogues.

This is the process of [quantization](http://en.wikipedia.org/wiki/Quantization_(physics)). When we have a finite number of degrees of freedom, such as for a finite collection of particles, although the quantum behaviour is often counter-intuitive, we have a well-developed mathematical machinery to handle this quantization called quantum mechanics.

This is well understood physically and mathematically. But when we move to study the electric and magnetic fields where we have an infinite number of degrees of freedom, the situation is much more complicated. With the development of so-called quantum field theory, a quantum theory for fields, physics has made progress that mathematically we do not completely understand.

What’s the problem?

Many field theories fall into a class called gauge field theories, where a particular collection of symmetries, called the gauge group, acts on the fields and particles. In the case that these symmetries all commute, so-called abelian gauge theories, we have a reasonable understanding of the quantization.

This includes the case of the electromagnetic field, quantum electrodynamics, for which the theory makes impressively accurate predictions.

The first example of a non-abelian theory that arose historically is the theory of the electro-weak interaction, which requires a mechanism to make the predicted particles massive as we observe them in nature. This involves the so-called Higgs boson, which is currently being searched for with the Large Hadron Collider (LHC) at CERN.

The notable feature of this theory for our present discussion is that the Higgs mechanism is classical and carries over to the quantum theory under the quantization process.

The case of interest in the Millennium Problem “Yang-Mills theory and Mass-Gap” is Yang-Mills gauge theory, a non-abelian theory which we expect to describe quarks and the strong force that binds the nucleus and powers the sun. Here we encounter a contradiction between the classical and quantum theories.

The classical theory predicts massless particles and long-range forces. The quantum theory has to match the real world with short-range forces and massive particles. Physicists expect various mathematical properties such as the “mass gap” and “asymptotic freedom” to explain the non-existence of massless particles in observations of the strong interactions.

As these properties are not visible in the classical theory and arise only in the quantum theory, understanding them means we need a rigorous approach to “quantum Yang-Mills theory”. Currently we do not have the mathematics to do this, although various approximations and simplifications can be done which suggest the quantum theory has the required properties.

The Millennium Problem seeks to establish by rigorous mathematics the existence of the “mass gap” – that is, the non-existence of massless particles in Yang-Mills theory. The solution of the problem would involve an approach to quantum field theory in four dimensions that is sophisticated enough to explain at least this feature of quantum non-abelian Yang-Mills gauge theory.

Doing the maths

Clearly this is of interest to physicists, but why is it of importance to mathematicians? It has become apparent in the last few decades that the tools that physicists have developed for doing quantum field theory, in particular path integrals, make precise predictions about geometry and topology, particularly in low dimensions.

But we don’t know mathematically what a path integral is, except in very simple cases. It is as if we are in a pre-Newtonian world – certain calculations can be done with certain tricks but Newton hasn’t developed calculus for us yet.

Analogously, there are calculations in geometry and topology that can be done non-rigorously using methods developed by physicists in quantum field theory which give the right answers. This suggests that there is a set of powerful techniques waiting to be discovered.

A solution to this Millennium Problem would shed light on what these new techniques are.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Michael Murray*

 


Another Triangular Number Formula

The double recurrence relation that defines the higher triangular numbers is a simple one – it is no surprise that they turn up so often.

The geometric interpretation is stacking: for a given dimension d, you get the next d-dimensional triangular number by stacking a (d−1)-dimensional triangular number (the gnomon) onto the current one. The zero-dimensional triangular numbers are just the sequence 1, 1, 1, 1, …, presumably counting stacks of nothing. The one-dimensional triangular numbers are the naturals 1, 2, 3, 4, …, made by stacking the ones of the zero-dimensional case. The two-dimensional triangular numbers stack the naturals: 1, 3, 6, 10, …, and the three-dimensional triangular numbers make pyramids of the triangulars: 1, 4, 10, 20, ….

If you write out a difference table for the higher triangular numbers, you end up with Pascal’s triangle. This suggests a nice formula for the triangulars in terms of binomial coefficients:

From this, you can obtain another recursive formula that you can use when working with higher triangular numbers (this is the “another” formula for this post):

If you vary the defining recurrence relation so that the initial “zero dimensional” value is a number other than 1, you get the other polygonal numbers (square, pentagonal, hexagonal, square-based pyramidal, etc.). In particular, if you let the zero-dimensional value be k-2, you obtain the k-polygonal numbers (k-2 corresponding to the number of triangles in your k-sided polygon).

It turns out there is a nice formula for these in terms of binomial coefficients as well:
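My reading of those binomial-coefficient formulas is t_d(n) = C(n+d-1, d) for the d-dimensional triangular numbers, and C(n+d-2, d-1) + (k-2)·C(n+d-2, d) for the higher k-polygonal numbers. The short sketch below checks both against the defining stacking recurrence; the function names and boundary conditions are my own choices.

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def by_recurrence(n, d, k=3):
    """Higher k-polygonal numbers from the stacking recurrence."""
    if n == 1:
        return 1              # every sequence starts at 1
    if d == 0:
        return k - 2          # the "zero-dimensional" value
    return by_recurrence(n - 1, d, k) + by_recurrence(n, d - 1, k)

def triangular(n, d):
    return comb(n + d - 1, d)

def polygonal(n, d, k):
    return comb(n + d - 2, d - 1) + (k - 2) * comb(n + d - 2, d)

print([triangular(n, 3) for n in range(1, 6)])      # 1, 4, 10, 20, 35 (the pyramids above)
print([polygonal(n, 2, 5) for n in range(1, 6)])    # 1, 5, 12, 22, 35 (pentagonal numbers)
print(all(by_recurrence(n, d, 5) == polygonal(n, d, 5)
          for n in range(1, 8) for d in range(1, 5)))   # True
```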

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*


Science, Maths and The Future of Australia

Australia faces many big challenges – in the economy, health, energy, water, climate change, infrastructure, sustainable agriculture and the preservation of our precious biodiversity.

To meet these, we need creative scientists and engineers drawn from many disciplines, and a technologically-skilled workforce.

The many world-changing advances and achievements of Australian research and development (R&D) are encouraging. Indeed, the Australian Academy of Science, of which I’m president, believes our country’s scientific potential has never been greater.

But our ability to improve this performance in the future, or even maintain it, is not assured.

Four things threaten our ongoing R&D performance and, as a consequence, our economic security and prosperity, and I’ll address each of these in turn.

1) The level of investment in R&D

Over the past decade, successive Australian governments have recognised the need to properly invest in research and innovation.

Total investment by the current government has increased by almost 43%, and is projected to amount to $9.4 billion over the current financial year. This is very commendable.

It’s heartening to see Australia’s business sector is also increasing its investment – although admittedly this boost is coming off a low base compared to many other OECD nations. (Australia ranks 14th for business expenditure on R&D as a percentage of GDP).

But to remain competitive internationally we need even greater investment.

Australia spends around 2.2% of its GDP (around AU$900 per person per year) on research and development.

Iceland, the next best-ranked country, devotes 2.6% of GDP. Top of the list is Israel, with 4.6%, followed by Finland and Sweden, each of which spends 3.6%.

We have around 92,000 full-time equivalent researchers which, again, is only middle order. According to the OECD, in 2008 the proportion of R&D personnel in our total labour force puts Australia 16th, well short of Canada, which ranks ninth.

China has more than 1.6 million people working on research and development, a number that’s increasing rapidly. (China is ranked 33rd, with 2.5 R&D personnel per thousand in the workforce, from a total population of 1.3 billion)

Worryingly, Australia sits well within the bottom half of OECD countries (ranked 20th of 30) when it comes to the number of university graduates emerging with a science or engineering degree per capita.

These are sobering statistics.

The Australian Academy of Science therefore calls on the government to create a Sovereign Fund for Science, to secure the future prosperity of the nation.

The goal should be to increase Australia’s research and development expenditure to at least 3% of GDP by 2020.

2) International collaboration

By its very nature, science is a collaborative enterprise. It transcends generations, individual scientific disciplines and, increasingly, national boundaries. To paraphrase Sir Isaac Newton, we see further by standing on the shoulders of giants.

Australia produces only 2% of the world’s knowledge. To gain access to the other 98%, we must ensure our scientists are well-connected internationally.

Getting involved with major international projects at inception allows Australia to stay abreast of new scientific developments, to have a say in their direction, to take the knowledge further, and to apply it.

International collaborations also attract scientists from overseas to spend time in Australia, bringing us new skills and knowledge. Importantly, many return and become part of our scientific workforce.

Work arising from such collaborations often attracts great attention and gets cited more frequently. Take the recently announced kangaroo genome sequence, which garnered international media attention.

This work was done by a consortium of more than 100 researchers from Australia, the US, the UK, Germany and Japan, headed by my friend and Academy colleague Professor Marilyn Renfree. The “kangaroo” was in fact the Tammar wallaby.

Its genome is yielding many unexpected insights that may have significance for humans as well as for wallabies – for example the genes that make antibiotics in the mother’s milk to protect the tiny newborns from harmful bacteria.

There are many such examples.

We hope to bring international astronomers to Australia by winning the bid to build a giant collection of radio telescopes in the Western Australian desert. Known as the Square Kilometre Array, or SKA, this international project – which could go to either South Africa or Australia – will give astronomers huge insights into the formation and evolution of the first stars and galaxies after the Big Bang.

Barriers that have impeded the use of Australian research grants for international collaborations are being dismantled.

Today many grants and fellowships provided by the Australian Research Council, National Health and Medical Research Council and CSIRO support projects that include international partners.

Many of these linkages were initially catalysed by the federal government’s International Science Linkages (or ISL) program.

With funding of about $10 million per year, the ISL program has supported bilateral and multilateral relations with many other countries.

Regrettably, the ten-year program ended in June this year as funding was not renewed in the 2011-2012 Budget.

Put simply, it would be a grave blow if our ability to compete on the international stage were to be diminished.

I strongly urge the Federal Government to fund in its next Budget a new program to provide strategic support for Australia’s International Science Linkages.

3) Science capability in the workforce

We are a lucky nation: we have access to immense mineral wealth. But resources are finite. Even the minerals sector acknowledges that we cannot ride the current boom indefinitely.

Further, the Minerals Council of Australia warns skills shortages and structural weaknesses in the Australian economy have been masked by the boom.

And so, when the end of the mining boom comes, where will Australia be?

There is broad consensus among minds more economically astute than mine that our future prosperity will depend upon:

  • a skilled workforce
  • innovation
  • entrepreneurship
  • high productivity
  • the creation of the kind of knowledge-intensive goods and services that can only result from robust research and development.

Certain skills are already in short supply in Australia.

In fact, the No More Excuses report issued by the Industry Skills Council earlier this year points to an alarming deficit in even basic skills.

According to that report, “millions of Australians have insufficient language, literacy and numeracy skills to benefit fully from training or to participate effectively at work”.

A recent project looking at the maths skills of bricklaying apprentices at a regional TAFE showed:

  • 75% could not do basic arithmetic.
  • 80% could not calculate the area of a rectangle, or the pay owed for working four-and-a-half hours.

Such figures are particularly worrying at a time when the demand for higher-level skills is increasing.

It’s essential we act now to ease the bottleneck and put in place measures that will create the technologically-competent workforce we need for the future.

We can, and should, be “the clever country”. But this will only happen if we place appropriate emphasis on properly educating our young people.

4) Science and maths education

Without a robust and inspiring science and maths education system, it’s impossible to create an internationally-competitive workforce.

Myriad jobs – apart from the obvious research, engineering and technology careers – require a basic understanding of science and maths.

And, as a parent, a mentor of young scientists and a passionate advocate for quality education, I know that all children are natural born scientists.

“Why?”, “How?”, and “What happens if …?” are questions asked frequently by young children, whose natural spirit of inquiry is crucial to understanding the big exciting world around them.

We need to harness this natural curiosity and nurture it with inspiring education.

Australian public expenditure on education as a percentage of GDP is just 4.2% – significantly below the OECD average of 5.4%.

A decade ago, a review of Australian science education revealed that many students were disappointed with their high school science.

Today, this disenchantment continues, as evidenced by the declining number of students choosing to study science in senior secondary school. Consider the following:

  • In 1991, more than a third of Year 12 students chose to study biology. That now sits at less than a quarter.
  • 23% of Year 12 students studied chemistry ten years ago, compared with 18% now.
  • In the same period, physics has fallen from 21% to 14%.

While Australian students have been losing interest in science, their international peers have been taking it up with great enthusiasm.

The OECD Program for International Student Assessment (PISA) examines the scientific literacy of teenagers in 57 different countries.

In 2000, the only nations that performed better than Australia were Korea and Japan. In 2009 – the most recent figures available – Australia ranked behind Shanghai, Finland, Hong Kong, Singapore, Japan and Korea.

What happened? The Assessment indicated that the performance of other countries has improved while Australia’s has remained stationary.

Maths

Australia’s early secondary mathematical literacy scores have significantly declined over the last decade. Our Year 4 and Year 8 students ranked 14th internationally in the most recent Trends in International Mathematics and Science Study, conducted in 2007.

The decline in Australia’s mathematical literacy is of grave concern because mathematics is an enabling science, without which it’s not possible to make use of other sciences – either in the lab or in the workforce.

A recent survey conducted by Science and Technology Australia and the Academy of Science showed Australians clearly value science – 80% of respondents acknowledged science education is absolutely essential or very important to the national economy.

But it also revealed some alarming holes in the basic science understanding of the average Australian.

  • Three in ten believe humans were around at the time of dinosaurs.
  • More than a fifth of our university graduates think that it takes just one day for the Earth to travel around the sun.
  • Almost a third of Australians do not think evolution is currently occurring.
  • About a quarter say human activity is not influencing the evolution of other species: a worrying statistic given the impact that human activity is having on the environment.

In other words, many of us do not understand even the most basic science.

How can we halt this slide in science and maths in our schools and attain an internationally enviable position?

Thankfully, our government is already investing significantly in school infrastructure and in rolling out a national high-speed internet network.

Last December, education ministers approved the content for new national curricula in English, history, maths and science. In coming months, they’ll be asked to sign off on the standards for these curricula. This is an important initiative and the Academy of Science applauds it.

But we also need investment in teachers, and in inspiring curriculum programs.

This is a responsibility for both the Commonwealth and the States, who must work together rather than reverting to the blame game.

Inspired (and inspiring) teachers will be the most important agents for improving educational outcomes.

We must place a much higher societal value on teachers and do everything we can to recruit some of our brightest and best into teaching.

We must support these educators with the best tools and resources available and provide them with stimulating opportunities for ongoing training.

I agree with Prime Minister Julia Gillard that science is one of the fundamental platforms upon which our conception of a modern advanced society is based.

I agree with the prime minister that we live in a crucial time for science in Australia and around the world.

In fact, I could not agree more.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Suzanne Coryter*


A Family of Sequences and Number Triangles

The triangular numbers (and higher triangular numbers) can be generated using this recurrence relationship:

These will form Pascal’s triangle (if we shift the variables n->n+d-1 and d->r, we get the familiar C(n,r) indexing for the Pascal Triangle). The d=2 case gives the usual “flat” 2d triangular numbers, and other d values provide triangular numbers of different dimensions.

 

It turns out that recurrence relation can be generalized to generate a family of sequences and triangles. Consider this more general relation:

Doing some initial exploring reveals four interesting cases:

The triangular numbers 

With all these additional parameters set to 1, we get our original relation, the familiar triangular numbers, and Pascal’s triangle.

The k-polygonal numbers 

If we set the “zero dimension” to k-2, we end up with the k-polygonal numbers. The triangular numbers arise in the special case where k=3. Except in the k=3 case, the triangles that are generated are not symmetrical.

 

Below is the triangle generated by setting k=5.
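A few lines of code reproduce its first rows; the Pascal-like arrangement and the boundary values used here are my own assumptions based on the description above.

```python
def entry(n, d, k):
    """Value at position (n, d) from the recurrence, with zero-dimensional value k - 2."""
    if n == 1:
        return 1
    if d == 0:
        return k - 2
    return entry(n - 1, d, k) + entry(n, d - 1, k)

k = 5
for row in range(6):
    # Arrange entries the same way as Pascal's triangle: row = n + d - 1.
    print([entry(row - d + 1, d, k) for d in range(row + 1)])
```

The entries in the d = 2 position of each row (1, 5, 12, 22, …) are the familiar pentagonal numbers.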

The symmetrically shifted k-polygonal numbers

As far as I know, there is not a standard name for these. Each k value will generate a triangle that is symmetrical about its center and whose edge values are equal to k-2. For a given k value, if you enter the sequences generated by particular values of d into the Online Encyclopedia of Integer Sequences, you’ll find that some are well known. The codes in the diagrams correspond to the sequence IDs from the Encyclopedia.

 

Here is the triangle generated by k=4:

And here is the triangle generated for k=5:

The Eulerian numbers (Euler’s number triangle)

This is a particularly nice way to generate the Eulerian numbers, which have a nice connection to the triangular numbers. There is a little inconsistency in the way the Eulerian numbers are indexed, however. For this formula to work, it should be altered slightly so that d>0. The resulting formula looks like this:

And the triangle looks like this:

It is surprising that so many interesting and well known sequences and triangles can be generated from such a simple formula, and that they can be interpreted as being part of a single family.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*