Digital Alchemy: Sir Isaac Newton’s Papers Now Online

Mirrors of a magical scientist: Andromeda photographed through a Newtonian telescope.

The notebooks of Sir Isaac Newton, who was famously reported to have suffered a (scientifically) earth-shaking blow to the head from an apple, are being scanned and published online by the University of Cambridge.

Newton, a Biblical numerologist when he wasn’t developing calculus or building the first reflecting telescope, founded classical mechanics with Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), first published in 1687. In the book that made his name, Newton set out his three laws of motion and his theory of universal gravitation (prompted, or so goes the apocryphal tale, by pondering what force sent the fruit plummeting straight down onto his head).

Newton studied and later held the Lucasian Chair of Mathematics at Cambridge, which was given numerous manuscripts of his in 1872 and has since bought more. The online publication has started with Newton’s mathematical works of the 1660s and more papers will become available over coming months.

Striking a light for science.

A philosopher of science at Flinders University, George Couvalis, said that Newton’s gravitational experiments – which largely corrected ancient observations of gravity – were sparked by his interest in magic and magnetism. “The idea that things might naturally attract one another is an idea that he got from magical ideas. He adapted it across to mathematical theory because it was a mystical theory,” Dr Couvalis said.

It was important to remember that scientists of Newton’s era did not have what we would consider a modern sceptical outlook and – with the exception of the “exceptional” Galileo Galilei – instead held a fusion of views that we would consider deeply irrational, Dr Couvalis said.

“It was certainly far more common in the 17th and 18th centuries for scientists to be interested in magical beliefs and alchemical beliefs and religious beliefs. Johannes Kepler, for example, had all kinds of strange views about the music of the spheres, Copernicus had strange views about the sacredness of the sun, and Newton famously had views about the mysterious numerical meanings of Biblical passages and about alchemical material,” Dr Couvalis said.

Scientists of the period saw their work as touching on many illogical and occult fields of interest. Robert Boyle, a founder of modern chemistry, had “an interest in doing experimental research on magical mirrors, which to us would sound bizarre but at the time it was thought to be a possibility,” said Dr Couvalis, who added that Boyle pulled back from some experiments for religious reasons. “He thought it might get him in touch with demons.”

Demonology may have fallen out of favour amongst scientists, but “the view that we’re getting everything right would be a serious mistake,” Dr Couvalis said. “To some degree science is always in the sway of the time it’s in; this is now the standard view of philosophers and historians.”

“Newton’s mechanics is in certain respects pretty much right, but in other respects it was shown by Einstein and others to be wildly wrong. By about 1900 we had people saying to their graduate students ‘You should give up physics because it’s all been done,’ but Einstein managed to show that it was wildly wrong in certain respects,” Dr Couvalis said.

The ideal of the scientific method is never met, and our beliefs and discoveries will likely one day be seen as flawed but perhaps useful stepping stones in the continuum of science, Dr Couvalis said. “People make mistakes, people have a lot of trouble leaving assumptions behind, and our tests are never rigorous enough to be absolutely certain that we’re getting things right. Future experimental studies and the sheer empirical facts will show us to be wrong in many ways that we can’t anticipate.”

“We work with what we have because we just don’t know anything better at the moment. It might turn out that Einstein’s special and general theories of relativity are wrong in some deep-seated way. It might turn out that some of our theories of the universe are wrong. It’s starting to look in biology as if neo-Darwinism isn’t completely right, so where will that go – I don’t know. Research will determine the direction. That doesn’t mean that we’re going to go back to being creationists – that view has been thoroughly debunked. Imre Lakatos wrote in the 1970s there are no good scientific theories, there’s only the best rotten theory we have.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Matthew Thompson*

 


Millennium Prize: P vs NP

Deciding whether a statement is true is a computational head-scratcher.

In the 1930s, Alan Turing showed there are basic tasks that are impossible to achieve by algorithmic means. In modern lingo, what he showed was that there can be no general computer program that answers yes or no to the question of whether another computer program will eventually stop when it is run.

The amazing unsolvability of this Halting Problem contains a further perplexing subtlety. While we have no way of finding in advance if a program will halt, there is an obvious way, in principle, to demonstrate that it halts if it is a halting program: run it, wait, and witness it halting!

In other words, Turing showed that, at the broadest level, deciding whether a statement is true is computationally harder than demonstrating that it’s true when it is.
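
This asymmetry can be sketched in a few lines of Python. The step-limited runner below is an illustration added here, not part of Turing’s argument: it can positively witness that a program halts, but an exhausted budget tells you nothing, because the program might halt later or might run forever.

```python
def halts_within(program_step, state, budget):
    """Run `program_step` for up to `budget` steps.
    Returns True if the program reached a halting state,
    else None (no verdict: maybe it halts later, maybe never)."""
    for _ in range(budget):
        state = program_step(state)
        if state is None:  # convention: None means the program halted
            return True
    return None

# Example program: the Collatz iteration, which (empirically) reaches 1.
def collatz_step(n):
    if n == 1:
        return None                         # halted
    return n // 2 if n % 2 == 0 else 3 * n + 1

print(halts_within(collatz_step, 27, 1000))  # True: we witnessed halting
print(halts_within(collatz_step, 27, 10))    # None: no verdict either way
```

Note the one-sided nature of the answer: a `True` is conclusive, but a `None` never is, which is exactly the gap between demonstrating and deciding.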

A question of efficiency

Turing’s work was a pivotal moment in the history of computing. Some 80 years later, computing devices have pervaded almost every facet of society. Turing’s original “what is computable?” question has been mostly replaced by the more pertinent, “what is efficiently computable?”

But while Turing’s Halting Problem can be proved impossible in a few magical lines, the boundary between “efficient” and “inefficient” seems far more elusive. P versus NP is the most famous of a huge swathe of unresolved questions to have emerged from this modern take on Turing’s question.

So what is this NP thing?

Roughly speaking, P (standing for “polynomial time”) corresponds to the collection of computational problems that have an efficient solution. It’s only an abstract formulation of “efficient”, but it works fairly well in practice.

The class NP corresponds to the problems for which, when the answer is “yes”, there is an efficient demonstration that the answer is yes (the “N” stands for “nondeterministic”, but the description taken here is more intuitive). P versus NP simply asks if these two classes of computational problems are the same.

It’s just the “deciding versus demonstrating” issue in Turing’s original Halting Problem, but with the added condition of efficiency.

A puzzler

P certainly doesn’t look to be the same as NP. Puzzles are good examples of the general intuition here. Crossword puzzles are popular because it’s a challenge to find the solution, and humans like a challenge. But no-one spends their lunchtime checking already completed crosswords: checking someone else’s solution offers nowhere near the same challenge.

Even clearer is Sudoku: again it is a genuine challenge to solve, but checking an existing solution for correctness is so routine it is devoid of entertainment value.

The P=NP possibility is like discovering that the “finding” part of these puzzles is no harder than the “checking” part. That seems hard to believe, but the truth is we do not know for sure.

This same intuition pervades an enormous array of important computational tasks for which we don’t currently have efficient algorithms. One particularly tantalising feature is that, more often than not, these problems can be shown to be maximally hard among NP problems.

These so-called “NP-complete” problems are test cases for P versus NP: if any one of them has an efficient algorithmic solution then they all do (and efficient finding is no harder than efficient checking).

But if even one of them can be shown to have no efficient solution, then P does not equal NP (and efficient finding really is, in general, harder than efficient checking).

Here are some classic examples of NP-complete problems.

  • Partition (the dilemma of the alien pick-pockets). On an alien planet, two pick-pockets steal a wallet. To share the proceeds, they must evenly divide the money: can they do it? Standard Earth currencies evolved to have coin values designed to make this task easy, but in general this task is NP-complete. It’s in NP because, if there is an equal division of the coins, this can be easily demonstrated by simply showing the division. (Finding it is the hard part!)
  • Timetabling. Finding whether a clash-free timetable exists is NP-complete. The problem is in NP because we can efficiently check that a proposed timetable is clash-free.
  • Travelling Salesman. A travelling salesman must visit each of some number of cities. To save costs, the salesman wants to find the shortest route that passes through all of the cities. For some given target distance “n”, is there a route of length at most “n”?
  • Short proofs. Is there a short proof for your favourite mathematical statement (a Millennium Prize problem perhaps)? With a suitable formulation of “short”, this is NP-complete. It is in NP because checking formal proofs can be done efficiently: the hard part is finding them (at least, we think that’s the hard part!).

In every case, we know of no efficient exact algorithm, and the nonexistence of such an algorithm is equivalent to proving P not equal to NP.

So are we close to a solution? It seems the best we know is that we don’t know much! Arguably, the most substantial advances in the P versus NP saga are curiously negative: they mostly show we cannot possibly hope to resolve P as different to NP by familiar techniques.

We know Turing’s approach cannot work. In 2007, Alexander Razborov and Steven Rudich were awarded the Gödel Prize (often touted as the Nobel Prize of Computer Science) for their work showing that, under widely believed cryptographic assumptions, no “natural proof” can prove P unequal to NP.

Of course, we’ll keep looking!


*Credit for article given to Marcel Jackson*

 


Millennium Prize: The Navier–Stokes Existence And Uniqueness Problem

How fluids move has fascinated researchers since the birth of science.

Among the seven problems in mathematics put forward by the Clay Mathematics Institute in 2000 is one that relates in a fundamental way to our understanding of the physical world we live in.

It’s the Navier-Stokes existence and uniqueness problem, based on equations written down in the 19th century.

The solution of this prize problem would have a profound impact on our understanding of the behaviour of fluids which, of course, are ubiquitous in nature. Air and water are the most recognisable fluids; how they move and behave has fascinated scientists and mathematicians since the birth of science.

But what are the so-called Navier-Stokes equations? What do they describe?

The equations

In order to understand the Navier-Stokes equations and their derivation we need considerable mathematical training and also a sound understanding of basic physics.

Without that, we must draw upon some very simple basics and talk in terms of broad generalities – but that should be sufficient to give the reader a sense of how we arrive at these fundamental equations, and the importance of the questions.

From this point, I’ll refer to the Navier-Stokes equations as “the equations”.

The equations governing the motion of a fluid are most simply described as a statement of Newton’s Second Law of Motion as it applies to the movement of a mass of fluid (whether that be air, water or a more exotic fluid). Newton’s second law states that:

Mass x Acceleration = Force acting on a body

For a fluid the “mass” is the mass of the fluid body; the “acceleration” is the acceleration of a particular fluid particle; the “forces acting on the body” are the total forces acting on our fluid.

Without going into full details, it’s possible to state here that Newton’s Second Law produces a system of differential equations relating rates of change of fluid velocity to the forces acting on the fluid. We require one other physical constraint to be applied on our fluid, which can be most simply stated as:

Mass is conserved! – i.e. fluid neither appears nor disappears from our system.
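
For readers who would like to see them written down: in the standard special case of an incompressible Newtonian fluid (a common simplification, added here for illustration), the two statements above become

```latex
% Newton's second law per unit volume (momentum equation):
\rho \left( \frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\mathbf{u} \right)
  = -\nabla p + \mu \nabla^{2}\mathbf{u} + \mathbf{f}

% Conservation of mass (continuity equation):
\nabla \cdot \mathbf{u} = 0
```

where u is the fluid velocity, p the pressure, ρ the density, μ the viscosity and f any body forces (such as gravity). The left-hand side is “mass × acceleration” per unit volume; the right-hand side is the total force acting on the fluid.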

The solution

Having a sense of what the Navier-Stokes equations are allows us to discuss why the Millennium Prize solution is so important. The prize problem can be broken into two parts. The first focuses on the existence of solutions to the equations. The second focuses on whether these solutions are bounded (remain finite).

It’s not possible to give a precise mathematical description of these two components so I’ll try to place the two parts of the problem in a physical context.

1) For a mathematical model, however complicated, to represent the physical world we are trying to understand, the model must first have solutions.

At first glance, this seems a slightly strange statement – why study equations if we are not sure they have solutions? In practice we know many solutions that provide excellent agreement with many physically relevant and important fluid flows.

But these solutions are approximations to the solutions of the full Navier-Stokes equations (the approximation comes about because there is, usually, no simple mathematical formulae available – we must resort to solving the equations on a computer using numerical approximations).

Although we are very confident that our (approximate) solutions are correct, a formal mathematical proof of the existence of solutions is lacking. That provides the first part of the Millennium Prize challenge.

2) The second part asks whether the solutions of the Navier-Stokes equations can become singular (or grow without limit).

Again, a lot of mathematics is required to explain this. But we can examine why this is an important question.

There is an old saying that “nature abhors a vacuum”. This has a modern parallel in the assertion by physicist Stephen Hawking, while referring to black holes, that “nature abhors a naked singularity”. Singularity, in this case, refers to the point at which the gravitational forces – pulling objects towards a black hole – appear (according to our current theories) to become infinite.

In the context of the Navier-Stokes equations, and our belief that they describe the movement of fluids under a wide range of conditions, a singularity would indicate we might have missed some important, as yet unknown, physics. Why? Because mathematics doesn’t deal in infinities.

The history of fluid mechanics is peppered with solutions of simplified versions of the Navier-Stokes equations that yield singular solutions. In such cases, the singular solutions have often hinted at some new physics previously not considered in the simplified models.

Identifying this new physics has allowed researchers to further refine their mathematical models and so improve the agreement between model and reality.

If, as many believe, the Navier-Stokes equations do possess singular solutions then perhaps the next Millennium Prize will go to the person who discovers just what new physics is required to remove the singularity.

Then nature can, as all fluid mechanists already do, come to delight in the equations handed down to us by Claude-Louis Navier and George Gabriel Stokes.


*Credit for article given to Jim Denier*


Cut-and-Glue Polyhedral Models

Building polyhedral models is a nice way to explore a lot of significant mathematics. The above models were made by printing patterns onto card-stock, cutting them out, and gluing them together. For these models, only triangular faces were used; these can give you a wide variety of cumulated (or augmented) polyhedra. Each triangular face is bordered by tabs that you glue together. You can fold and glue the tabs so that they end up inside the models, but it is easier to leave them out, and they look nice this way (I think).
1. Decide on a model that you would like to make, and figure out how many faces you will need.
2. Copy and paste the images below into a document or presentation slide (PowerPoint works well) for printing. Choose the right ones for your model, and fit as many as you can on a single sheet.
3. Print out onto card-stock. Most desk ink-jet printers can take card-stock instead of printer-paper.
4. Cut out the units, fold the tabs, and assemble and glue.
Throughout this process, it helps if you have pictures of the polyhedra that you want to construct. Poly is a nice software package for browsing through families of polyhedra.
I’ve found that it works well to bend the tabs using a ruler, that glue-sticks provide the best gluing, and that it helps to hold the model together with binder-clips while assembling.
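
For step 1, a quick way to count faces: cumulating (augmenting) a polyhedron replaces each n-sided face with n triangles, so the number of triangle units to print is the sum of the side counts of all the faces. The little Python helper below is just an illustration of that arithmetic, not part of the original instructions:

```python
def triangles_needed(face_side_counts):
    """Cumulating replaces each n-sided face with n triangles,
    so the total is the sum of side counts over all faces."""
    return sum(face_side_counts)

cube = [4] * 6              # six square faces
octahedron = [3] * 8        # eight triangular faces
dodecahedron = [5] * 12     # twelve pentagonal faces

print(triangles_needed(cube))          # 24 triangles for a cumulated cube
print(triangles_needed(octahedron))    # 24 for a cumulated octahedron
print(triangles_needed(dodecahedron))  # 60 for a cumulated dodecahedron
```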


*Credit for article given to dan.mackinnon*


Are Pigeons as Smart as Primates? You can Count on It

The humble pigeon mightn’t look smart, but it’s no bird-brain.

We humans have long been interested in defining the abilities that set us apart from other species. Along with capabilities such as language, the ability to recognise and manipulate numbers (“numerical competence”) has long been seen as a hallmark of human cognition.

In reality, a number of animal species are numerically competent and according to new research from psychologists at the University of Otago in New Zealand, the humble pigeon could be one such species.

Damian Scarf, Harlene Hayne and Michael Colombo found that pigeons possess far greater numerical abilities than was previously thought, actually putting them on par with primates.

More on pigeons in a moment, but first: why would non-human animals even need to be numerically competent? Would they encounter numerical problems in day-to-day life?

In fact, there are many reports indicating that number is an important factor in the way many species behave.

Brown cowbirds are nest parasites – they lay their eggs in the nests of “host” species; species that are then landed with the job of raising a young cowbird.

 

Cowbirds are sensitive to the number of eggs in the host nest, preferring to lay in nests with three host eggs rather than one. This presumably ensures the host parent is close to the end of laying a complete clutch and will begin incubating shortly after the parasite egg has been added.

Crows identify individuals by the number of caw sounds in their vocalisations, while lionesses appear to evaluate the risk of approaching intruder lions based on how many individuals they hear roaring.

But numerical competence is about more than an ability to count. In fact, it’s three distinct abilities:

  • the “cardinal” aspect: the ability to evaluate quantity (eg. counting the number of eggs already in a nest)
  • the “ordinal” aspect: the ability to put an arbitrary collection of items in their correct order or rank (eg. ordering a list of animals based on the number of legs they have, or ordering the letters of the alphabet)
  • the “symbolic” aspect: the ability to symbolically represent a given numerical quantity (eg. the number “3” or the word “three” are symbols that represent the quantity 3).

We know that humans are capable of all three aspects of numerical competence, but what about other animals?

For a start, we already know that the cowbird, lion and crow possess the cardinal aspect of numerical competency – they are all able to count. Pigeons possess the cardinal aspect too (as was reported as early as 1941) as do several other vertebrate and invertebrate species.

And in 1998, Elizabeth Brannon and Herbert Terrace showed that rhesus monkeys have the ability to order arrays of objects according to the number of items contained within these arrays. After learning to order sets of one, two and three items, the monkeys were able to order any three sets containing from one to nine items.

This discovery represented a clear progression in complexity, since ranking according to numerical quantity is an abstract ability – the ordinal aspect.

The new research by Scarf, Hayne and Colombo – which was published in Science in late December – has pushed, even further, our understanding of numerical abilities in the animal kingdom.

So what did they do?

Well, first they trained pigeons to peck three “stimulus arrays” – collections of objects on a touch screen. These arrays contained one, two or three objects and to receive a reward, the pigeon had to peck the arrays in order – the array with one object first, the array with two objects second, the array with three objects third.

Once this basic requirement was learned, the pigeons were presented with different object sets – one set containing arrays with one to three objects, and sets containing up to nine objects.

Having been presented with these novel object sets, the pigeons were once again required to peck the sets in ascending order. Pigeons solved the task successfully, even though they had never been trained with arrays containing more than three items.

A pigeon taking part in the University of Otago experiment.

In fact, they performed on par with rhesus monkeys, demonstrating that both pigeons and monkeys are able to identify and order the numbers from one to nine. This is significant because it shows these complex numerical abilities are not confined to the primates (and that pigeons are smarter than many people think!).

So if non-human animals possess the cardinal and ordinal aspects of numerical competency, that means it’s the symbolic representation of numbers that makes humans unique, right?

As it turns out, no.

It’s been shown that red wood ants (Formica polyctena) can not only count up to several tens (20, 30 etc.), but can also communicate this numerical information to their brethren.

It would seem, therefore, that not even the symbolic representation of numerical information is specific to humans.

Of course, we still have much more to discover and understand within this fascinating field of research. In the meantime, you might want to think twice before dismissing pigeons as “stupid birds”.


*Credit for article given to David Guez and Andrea S. Griffin*


Google Has Created a Maths AI That Has Already Proved 1200 Theorems

Mathematicians don’t need to worry about AI taking over their jobs just yet

You don’t need a human brain to do maths — even artificial intelligence can write airtight proofs of mathematical theorems.

An AI created by a team at Google has proven more than 1200 mathematical theorems. Mathematicians already knew proofs for these particular theorems, but eventually the AI could start working on more difficult problems.

One of the core pillars of maths is the concept of proof: an argument, built from known statements, assumptions and rules, that establishes that a certain mathematical statement, such as a theorem, is true.

To train their AI, the Google team started with a database of more than 10,000 human-written mathematical proofs, along with the reasoning behind each step known as a tactic. Tactics could include using a known property about numbers, such as the fact that multiplying x by y is the same as multiplying y by x, or applying the chain rule.

Then, they tested the AI on 3225 theorems it hadn’t seen before, and it successfully proved 1253 of them. It failed on the rest partly because it had only 41 tactics at its disposal.

To prove each theorem, the AI split it into smaller and smaller components using the list of tactics. Eventually each of the smaller components could be proven with a single tactic, thus proving the larger theorem.
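
The splitting process the article describes resembles classical backward proof search. The sketch below is a toy illustration under our own assumptions: the goal encoding, tactic names and the little “evenness” theory are invented for the example, and Google’s system works over a real proof assistant rather than anything this simple.

```python
def prove(goal, tactics, max_depth=8):
    """Backward search: try each tactic on the goal. A tactic either
    fails (None), closes the goal outright ([]), or returns a list of
    smaller subgoals, each of which must then be proved in turn."""
    if max_depth == 0:
        return False
    for tactic in tactics:
        subgoals = tactic(goal)
        if subgoals is None:
            continue  # this tactic does not apply to this goal
        if all(prove(g, tactics, max_depth - 1) for g in subgoals):
            return True
    return False

# A toy theory: proving that a number is even.
def even_zero(goal):
    return [] if goal == ("even", 0) else None        # axiom: 0 is even

def even_step(goal):
    tag, n = goal                                     # even(n) <- even(n-2)
    return [("even", n - 2)] if tag == "even" and n >= 2 else None

print(prove(("even", 6), [even_zero, even_step]))  # True
print(prove(("even", 5), [even_zero, even_step]))  # False
```

The depth limit plays the role of giving up on a proof attempt; a learned system differs mainly in using a trained model to rank which tactic to try first.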

“Most of the proofs we used are relatively short, so they don’t require a lot of long complicated reasoning, but this is a start,” says Christian Szegedy at Google. “Where we want to get to is a system that can prove all the theorems that humans can prove, and maybe even more.”

Tackling harder problems

While this particular algorithm is focused on linear algebra and complex calculus, changing its training set could allow it to do any sort of mathematics, says Szegedy. For now, the AI’s main application is filling in the details of long and arduous proofs with extreme precision.

Mathematicians often make intellectual jumps in their proofs without spelling out the exact tactics used to get from one step to the next, and provers like this could walk through the intermediate work automatically, without requiring a human mathematician to fill in each exact tactic used.

“You get the maximum of precision and correctness all really spelled out, but you don’t have to do the work of filling in the details,” says Jeremy Avigad at Carnegie Mellon University in Pennsylvania. “Maybe offloading some things that we used to do by hand frees us up for looking for new concepts and asking new questions.”

AIs like this could one day even solve maths problems we don’t know how to solve or that are too long and complicated. But that will take a much larger training set, more tactics, and a simpler way to plug the theorems into the computer. “That’s far away, but I think it could happen in our lifetime,” says Szegedy.

“Pretty much anything that you can state and try to prove mathematically, you can put into this system,” says Avigad. “You can distill just about all of mathematics down to very basic rules and assumptions, and these systems implement those rules and assumptions.”

All of this happens in a matter of seconds per proof and the only source of error is the translation of the theorem into formal language the computer can understand. Szegedy says that the team is now working on the problem of automatic translation so that it’s easier for mathematicians to interact with the system.


*Credit for article given to Leah Crane*

 


Mathematics: Why We Need More Qualified Teachers

There is a crisis in the education system, and it’s affecting the life chances of many young Australians. The number of secondary teaching graduates with adequate qualifications to teach mathematics is well below what it should be, and children’s education is suffering.

A report completed for the Australian Council of Deans of Science in 2006 documented the problem, but the situation has deteriorated since. The percentage of Year 12 students completing the more advanced mathematics courses continues to decline. This affects mathematics enrolments in the universities and a number no longer offer a major in mathematics, worsening an already inadequate supply of qualified teachers.

Changing qualifications

To exacerbate an already serious problem, the Australian Institute for Teaching and School Leadership (AITSL) currently proposes that graduate entry secondary programs must comprise at least two years of full-time equivalent professional studies in education.

There will be no DipEd pathway, which allows graduates to enter the profession within a year. Forcing them to spend more time in education will lead to increased debt. You couldn’t blame people for changing their mind about becoming a teacher.

I believe the changes in qualifications will lead to a disaster, denying even more young people access to a quality mathematics education that gives them real opportunities in the modern world.

An unequal opportunity

This is a social justice issue because access to a decent mathematics education in Australia is now largely determined by where you live and parental income.

In the past there have been concerns regarding the participation of girls in mathematics and the effect on their careers and life chances.

Australia now seems incapable of responding to a situation where only the privileged have access to well-qualified teachers of mathematics.

The Northern Territory is a prime example. The contraction of mathematics at Charles Darwin University means the NT is now totally dependent on the rest of Australia for its secondary mathematics teachers. And how can talented mathematics students in the NT be encouraged to pursue mathematical careers when it means moving away?

Elsewhere most of regional Australia is largely dependent on mathematics teachers who complete their mathematics in the capital or large regional cities.

Examine the policy

In what is supposed to be a research-driven policy environment, has anyone considered the consequences of the AITSL proposal? And whether this will actually give teachers the skills they need for the positions they subsequently occupy?

In my own case I came to Melbourne with a BSc (Hons) from the University of Adelaide. In the early 1970s I completed a DipEd at La Trobe. The only real cost was some childcare. If I remember correctly the government was so keen to get professional women into the workforce they even helped with the cost of books. Would I have committed to a two-year course? I’m not sure but I had no HECS debt and ongoing employment was just about guaranteed.

My first school had a very high percentage of students from a non-English speaking background. Many of the Year 7s had very poor achievement in mathematics and I turned my attention to finding out what could be done to help them reach a more appropriate standard.

In the course of this I met Associate Professor John Munro who stressed the importance of language in the learning of mathematics. To be a better mathematics teacher, I completed another degree in teaching English as a second language.

Later I coordinated a DipEd program. Many of our better students were of a mature age and struggling with money, family, jobs and a host of other things. They managed for a year. Requiring them to complete two would have seen many of them not enrol in the first place or drop out when it became too much.

Learn on the job

A two-year teaching qualification does not necessarily equip you for the teaching situation you find yourself in. If AITSL wants all teachers to have a second year, let that be achieved in work-related learning over, for example, 5-7 years.

Australia can’t afford to lose a single prospective teacher who is an articulate, well-qualified graduate in mathematics. If the one-year DipEd goes, many will be lost; they have too many options. New graduates will think about other courses, while career-changing, mature-age graduates will decide it is all too hard.


*Credit for article given to Jan Thomas*


A Rethink Of Cause And Effect Could Help When Things Get Complicated

Some scientists insist that the cause of all things exists at the most fundamental level, even in systems as complex as brains and people. What if it isn’t so?

There are seconds left on the clock, and the score is 0-0. Suddenly, a midfielder seizes possession and makes a perfect defence-splitting pass, before the striker slots the ball into the bottom corner to win the game. The moment will be scrutinised ad nauseam in the post-match analysis. But can anyone really say why the winners won?

One thing is for sure, precious few would attribute the victory to quantum mechanics. But isn’t that, in the end, all there is? A physicist might claim that to explain what happens to a football when it is kicked, the interactions of quantum particles are all you need. But they would admit that, as with many things we seek to understand, there is too much going on at the particle-level description to extract real understanding.

Identifying what causes what in complex systems is the aim of much of science. Although we have made amazing progress by breaking things down into ever smaller components, this “reductionist” approach has limits. From the role of genetics in disease to how brains produce consciousness, we often struggle to explain large-scale phenomena from microscale behaviour.

Now, some researchers are suggesting we should zoom out and look at the bigger picture. Having created a new way to measure causation, they claim that in many cases the causes of things are found at the more coarse-grained levels of a system. If they are right, this new approach could reveal fresh insights about biological systems and new ways to intervene – to prevent disease, say. It could even shed light on the contentious issue of free will, namely whether it exists.

The problem with the reductionist approach is apparent in many fields of science, but let’s take applied genetics. Time and again, gene variants associated with a particular disease or trait are hunted down, only to find that knocking that gene out of action makes no apparent difference. The common explanation is that the causal pathway from gene to trait is tangled, meandering among a whole web of many gene interactions.

The alternative explanation is that the real cause of the disease emerges only at a higher level. This idea is called causal emergence. It defies the intuition behind reductionism, and the assumption that a cause can’t simply appear at one scale unless it is inherent in microcauses at finer scales.

Higher level

The reductionist approach of unpicking complex problems into their constituent parts has often been fantastically useful. We can understand a lot in biology from what enzymes and genes do, and the properties of materials can often be rationalised from how their constituent atoms and molecules behave. Such successes have left some researchers suspicious of causal emergence.

“Most people agree that there is causation at the macro level,” says neuroscientist Larissa Albantakis at the University of Wisconsin-Madison. “But they also insist that all the macroscale causation is fully reducible to the microscale causation.”

Neuroscientists Erik Hoel at Tufts University in Massachusetts and Renzo Comolatti at the University of Milan in Italy are seeking to work out if causal emergence really exists and if so, how we can identify it and use it. “We want to take causation from being a philosophical question to being an applied scientific one,” says Hoel.

The issue is particularly pertinent to neuroscientists. “The first thing you want to know is, what scales should I probe to get relevant information to understand behaviour?” says Hoel. “There’s not really a good scientific way of answering that.”

Mental phenomena are evidently produced by complex networks of neurons, but for some brain researchers, the answer is still to start at small scales: to try to understand brain function on the basis of how the neurons interact. The European Union-funded Human Brain Project set out to map every one of the brain’s 86 billion neurons, in order to simulate a brain on a computer. But will that be helpful?

Some think not: all the details will just obscure the big picture, they say. After all, you wouldn’t learn much about how an internal combustion engine works by making an atomic-scale computer simulation of one. But if you stick with a coarse-grained description, with pistons and crankshafts and so on, is that just a convenient way of parcelling up all the atomic-scale information into a package that is easier to understand?

The default assumption is that all the causal action still happens at the microscopic level, says Hoel, but we simply “lack the computing power to model all the microphysical details, and that’s why we fixate on particular scales”. “Causal emergence,” he says, “is an alternative to this null hypothesis.” It says that, for some complex systems, looking at a coarse-grained picture isn’t just tantamount to data compression that dispenses with some detail. Instead, it proposes that there can be more causal clout at these higher levels than there is below. Hoel reckons he can prove it.

To do so, he first had to establish a method for identifying the cause of an effect. It isn’t enough to find a correlation between one state of affairs and another: correlation isn’t causation, as the saying goes. Just because the number of people eating ice creams correlates with the number who get sunburn, it doesn’t mean that one causes the other. Various measures of causation have been proposed to try to get to the root of such correlations and see if they can be considered causative.

In 2013, Hoel, working with Albantakis and fellow neuroscientist Giulio Tononi, also at the University of Wisconsin-Madison, introduced a new way to do this, using a measure called “effective information”. This is based on how tightly a scenario constrains the past causes that could have produced it (the cause coefficient) and the constraints on possible future effects (the effect coefficient). For example, how many other configurations of football players would have allowed that midfielder to release the striker into space, and how many other outcomes could have come from the position of the players as it was just before the goal was scored? If the system is really noisy and random, both coefficients are zero; if it works like deterministic clockwork, they are both 1.

Effective information thus serves as a proxy measure of causal power. By measuring and comparing it at different scales in simple model systems, including a neural-like system, Hoel and his colleagues demonstrated that there could be more causation coming from the macro than from the micro levels: in other words, causal emergence.
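As a toy illustration of this result (a minimal sketch, not the authors' actual models or code), effective information can be computed for a small Markov system as the mutual information between a uniform intervention over states and the resulting next state. In the invented example below, a noisy four-state micro system coarse-grains into a perfectly deterministic two-state macro system with higher effective information – causal emergence in miniature.

```python
import numpy as np

def effective_information(tpm):
    """Effective information of a transition probability matrix (TPM):
    the mutual information I(X; Y) between state X and next state Y
    when X is set to the maximum-entropy (uniform) distribution."""
    tpm = np.asarray(tpm, dtype=float)
    n = tpm.shape[0]
    p_effect = tpm.mean(axis=0)  # effect distribution under uniform intervention
    ei = 0.0
    for x in range(n):
        for y in range(n):
            if tpm[x, y] > 0:
                ei += tpm[x, y] * np.log2(tpm[x, y] / p_effect[y]) / n
    return ei

# Micro scale: states 0, 1 and 2 shuffle randomly among themselves,
# while state 3 maps to itself deterministically.
micro = [[1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [0,   0,   0,   1]]

# Macro scale: group {0, 1, 2} into macro state A and {3} into B.
# The coarse-grained dynamics are deterministic: A -> A, B -> B.
macro = [[1, 0],
         [0, 1]]

print(f"micro EI = {effective_information(micro):.3f} bits")  # ~0.811 bits
print(f"macro EI = {effective_information(macro):.3f} bits")  # 1.000 bit
```

Here the coarse-grained description carries more effective information than the fine-grained one, which is the signature the researchers identify as causal emergence.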

Quantifying causation

It is possible that this result might have been a quirk of the models they used, or of their definition of effective information as a measure of causation. But Hoel and Comolatti have now investigated more than a dozen different measures of causation, proposed by researchers in fields including philosophy, statistics, genetics and psychology to understand the roots of complex behaviour. In all cases, they saw some form of causal emergence. It would be an almighty coincidence, Hoel says, if all these different schemes just happened to show such behaviour by accident.

The analysis helped the duo to establish what counts as a cause. We might be more inclined to regard something as a genuine cause if its existence is sufficient to bring about the result in question. Does eating ice cream on its own guarantee a high chance of sunburn, for example? Obviously not. We can also assess causation on the basis of necessity: does increased sunburn only happen if more ice creams are consumed, and not otherwise? Again, evidently not: if the ice-cream seller takes a day off on a sunny day, sunburn can still happen. Causation can thus be quantified in terms of the probability that a state of affairs always and only leads to an effect.
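Those two criteria can be turned into numbers. The sketch below (with made-up counts, purely for illustration) estimates sufficiency as P(effect | cause) and necessity as P(no effect | no cause) from a simple 2×2 table of observations – the basic ingredients that different measures of causation combine in different ways.

```python
def causal_primitives(effect_given_cause, no_effect_given_cause,
                      effect_given_no_cause, no_effect_given_no_cause):
    """Estimate sufficiency and necessity from 2x2 contingency counts."""
    sufficiency = effect_given_cause / (effect_given_cause + no_effect_given_cause)
    necessity = no_effect_given_no_cause / (
        effect_given_no_cause + no_effect_given_no_cause)
    return sufficiency, necessity

# Ice cream vs sunburn, with invented counts: on 100 ice-cream days,
# 30 people got sunburnt; on 100 ice-cream-free days, 25 still did.
suff, nec = causal_primitives(30, 70, 25, 75)
print(f"sufficiency = {suff:.2f}")  # 0.30: ice cream rarely "guarantees" sunburn
print(f"necessity   = {nec:.2f}")   # 0.75: sunburn often happens without ice cream
```

Both numbers falling well short of 1 flags ice cream as a weak candidate cause of sunburn, exactly as intuition suggests.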

Their work has its critics. Judea Pearl, a computer scientist at the University of California, Los Angeles, says that attempts to “measure causation in the language of probabilities” are outdated. His own models simply state causal structures between adjacent components and then use these to enumerate causal influences between more distantly connected components. But Hoel says that in his latest work with Comolatti, the measures of causation they consider include such “structural causal models” too.

Their conclusion that causal emergence really exists also finds support in recent work by physicists Marius Krumm and Markus Müller at the University of Vienna in Austria. They have argued that the behaviour of some complex systems can’t be predicted by anything other than a complete replay of what all of the components do at all levels; the microscale has no special status as the fundamental origin of what happens at the larger scales. The larger scales, they say, might then constitute the real “computational source” – what you might regard as the cause – of the overall behaviour.

In the case of neuroscience, Müller says, thoughts and memories and feelings are just as much “real” causal entities as are neurons and synapses – and perhaps more important ones, because they integrate more of what goes into producing actual behaviour. “It’s not the microphysics that should be considered the cause for an action, but its high-level structure.” says Müller. “In this sense we agree with the idea of causal emergence.”

Causal emergence seems to also feature in the molecular workings of cells and whole organisms, and Hoel and Comolatti have an idea why. Think about a pair of heart muscle cells. They may differ in some details of which genes are active and which proteins they are producing more of at any instant, yet both remain secure in their identity as heart muscle cells – and it would be a problem if they didn’t. This insensitivity to the fine details makes large-scale outcomes less fragile, says Hoel. They aren’t contingent on the random “noise” that is ubiquitous in these complex systems, where, for example, protein concentrations may fluctuate wildly.

As organisms got more complex, Darwinian natural selection would therefore have favoured more causal emergence – and this is exactly what Hoel and his Tufts colleague Michael Levin have found by analysing the protein interaction networks across the tree of life. Hoel and Comolatti think that by exploiting causal emergence, biological systems gain resilience not only against noise, but also against attacks. “If a biologist could figure out what to do with a [genetic or protein] wiring diagram, so could a virus,” says Hoel. Causal emergence makes the causes of behaviour cryptic, hiding them from pathogens that can only latch onto molecules.

Whatever the reasons behind it, recognising causal emergence in some biological systems could offer researchers more sophisticated ways to predict and control those systems. And that could in turn lead to new and more effective medical interventions. For example, while genetic screening studies have identified many associations between variations of different genes and specific diseases, such correlations have rarely translated into cures, suggesting these correlations may not be signposts to real causal factors. Instead of assuming that specific genes need to be targeted, a treatment might need to intervene at a higher level of organisation. As a case in point, one new strategy for tackling cancer doesn’t worry about which genetic mutation might have made a cell turn cancerous, but instead aims to reprogramme it at the level of the whole cell into a non-malignant state.

You decide?

Suppressing the influence of noise in biological systems may not be the only benefit causal emergence confers, says Kevin Mitchell, a neuroscientist at Trinity College Dublin, Ireland. “It’s also about creating new types of information,” he says. Attributing causation is, Mitchell says, also a matter of deciding which differences in outcome are meaningful and which aren’t. For example, asking what made you decide to read New Scientist is a different causal question to asking what made you decide to read a magazine.

Which brings us to free will. Are we really free to make decisions like that anyway, or are they preordained? One common argument against the existence of free will is that atoms interact according to rigid physical laws, so the overall behaviour they give rise to can be nothing but the deterministic outcome of all their interactions. Yes, quantum mechanics creates some randomness in those interactions, but if it is random, it can’t be involved in free will. With causal emergence, however, the true causes of behaviour stem from higher degrees of organisation, such as how neurons are wired, our brain states, past history and so on. That means we can meaningfully say that we – our brains, our minds – are the real cause of our behaviour.

That is certainly how neuroscientist Anil Seth at the University of Sussex, UK, sees things. “What one calls ‘real’ is of course always going to be contentious, but there is no objection in my mind to treating emergent levels of description as being real,” he says. We do this informally anyway: we speak of our thoughts, desires and goals. “The trick is to come up with sensible ways to identify and measure emergence,” says Seth. Like Hoel and Comolatti, he is pursuing ways of doing that.

Hoel says that the work demonstrating the existence of causal emergence “completely obviates” the idea that “all the causal responsibility drains down to the lower scale”. It shows that “physics is not the only science: there are real entities that do causal work at higher levels”, he says – including you.

Case closed? Not quite. While Mitchell agrees that causal emergence allows us to escape being ruled by the laws of quantum mechanics, he adds that what most people mean by free will requires an additional element: the capacity for conscious reflection and deliberate choice. It may be that we experience this sense of free will in proportion to the degree to which our higher-level brain states are genuine emergent causes of behaviour. Our perception of executing voluntary actions, says Seth, “may in turn relate to volition involving a certain amount of downward causality”.

In other words, you really are more than the sum of your atoms. If you think you made a choice to read this article, you probably did.


*Credit for article given to Philip Ball*


Superman Returns – But Who’s Looking After His Water?

Is it a plane? No, it’s Smoothed Particle Hydrodynamics.

Watching films such as Superman Returns or The Day After Tomorrow, you would have seen dramatic sequences of surging water and crumbling buildings.

Mathematics was probably the last thing on your mind while watching; but without it, scenes of this nature would be virtually impossible.

Take the 2006 film Superman Returns. In one scene, a giant spherical object smashes into a water tank, releasing a huge amount of water.

Traditionally, the only way to create this kind of sequence was to film small physical models – which produce unrealistic results. The alternative is to create a computer simulation.

Swapping droplets for particles

These days, one of the most popular methods for simulating water is to replace fluid with millions of individual particles within a computer simulation.

And the way these particles move is determined by an algorithm that my colleagues and I invented to simulate the formation of stars in our galaxy’s giant molecular clouds.

The method is known as Smoothed Particle Hydrodynamics (SPH) and the use of SPH in Superman Returns is the work of an American visual effects company called Tweak.

Superman Returns certainly isn’t the only film to feature SPH fluid simulations: think of Gollum falling into the lava of Mount Doom in The Lord of the Rings: The Return of the King; or the huge alligator splashing through a swamp in Primeval.

These particular scenes are the work of people at a Spanish visual effects company called Next Limit, who received an Oscar for their troubles.

How does SPH work?

Rather than trying to model a body of water as a whole, SPH replaces the fluid with a set of particles. A mathematical technique then uses the position and masses of these particles to determine the density of the fluid being modelled.

Using the density and pressure of the fluid, SPH makes it possible to map the force acting on each particle within the fluid. This technique provides results quite similar to the actual fluid being modelled. And the more particles used in the simulation, the more accurate the model becomes.
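The density step can be sketched in a few lines. The code below is a minimal one-dimensional illustration (not production visual-effects code): it estimates the density at each particle by summing the neighbours’ masses weighted by the cubic spline smoothing kernel that is standard in SPH.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Cubic spline smoothing kernel W(r, h), normalised for 1D."""
    q = r / h
    sigma = 2.0 / (3.0 * h)  # 1D normalisation constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0  # the kernel has compact support: W = 0 beyond r = 2h

def sph_density(positions, masses, h):
    """SPH density estimate: rho_i = sum_j m_j * W(|r_i - r_j|, h)."""
    n = len(positions)
    rho = np.zeros(n)
    for i in range(n):
        for j in range(n):
            rho[i] += masses[j] * cubic_spline_kernel(
                abs(positions[i] - positions[j]), h)
    return rho

# A line of equal-mass particles at spacing 0.1; with m = 0.1 the true
# density is 1.0, and the estimate approaches it away from the ends.
x = np.arange(0.0, 2.0, 0.1)
m = np.full(x.size, 0.1)
rho = sph_density(x, m, h=0.12)
print(rho[x.size // 2])  # close to 1.0 in the middle of the line
```

The pressure force on each particle is then built from these densities via an equation of state and the gradient of the same kernel, and adding more particles (or shrinking the spacing) sharpens the estimate.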

(Video: an SPH simulation using 128,000 particles to model a fluid.)

Beyond the basics

In Superman Returns, gravity also affects how the body of water behaves (the water spills out of the water tank) and SPH can easily be adapted to accommodate this.

In addition, fluids often need to flow around solid bodies such as rocks and buildings that might be carried, bobbing along, by the flow. The SPH method can be easily extended to handle this combination of solid bodies and fluids by adding sets of particles to the equation, to represent the solid bodies.

These adjustments and extensions to SPH can be made to produce very realistic-looking results.

In industry, SPH is used to describe the motion of offshore rigs in a storm, fluid flow in pumps, and injection moulding of liquid metals. In zoology, it’s being used to investigate the dynamics of fish.

SPH and the stars

As hinted at above, it’s not just water and its inhabitants that can be modelled using this technique.

SPH simulations of star formation by Matthew Bate, from the University of Exeter, and Daniel Price, of Monash University, have been able to predict the masses of the stars, and the number of stable two- and three-star systems that form from a typical molecular cloud.

In the case of stable two-star systems (known as binaries) SPH can predict the shape of the orbits in good agreement with astronomical observations.

To get this level of accuracy, millions of particles are used in the SPH calculation, and the motion of these particles is calculated on a number of computer systems that work together in parallel.

SPH is also the method of choice for following the evolution of the universe after the Big Bang. This evolution involves dark matter and gas, and the simulations have one set of SPH particles for the dark matter and one set for the gas.

An advanced SPH code – known as Gadget – used for this purpose was developed by Volker Springel. The code enables astrophysicists to predict the way galaxies form and their distribution in the universe, including the effects of General Relativity.

But for non-astrophysicists, admittedly, the movies may be more of a draw.

So next time you’re watching a film and you see large swathes of water in unusual places or doing incredibly destructive things, think about maths for a moment: without it, such breathtaking scenes would be virtually impossible.

 


*Credit for article given to Joe Monaghan*


Millennium Prize: The Hodge Conjecture

If one were to divide mathematics crudely into two parts, they would be: tools for measuring and tools for recognition.

To use an analogy, tools for measuring are the technologies for collecting data about an object, the process of “taking a blurry photograph”. Tools for recognition deal with the following: if you are given a pile of data or a blurry photograph, how can the object that it came from be recognised from the data?

The Hodge Conjecture – a major unsolved problem in algebraic geometry – deals with recognition.

William Vallance Douglas Hodge was a professor at Cambridge who, in the 1940s, worked on developing a refined version of cohomology – tools for measuring flow and flux across boundaries of surfaces (for example, fluid flow across membranes).

The classical versions of cohomology are used for the understanding of the flow and dispersion of electricity and magnetism (for example, Maxwell’s equations, which describe how electric charges and currents act as origins for electric and magnetic fields). These were refined by Hodge in what is now called the “Hodge decomposition of cohomology”.

Hodge recognised that the actual measurements of flow across regions always contribute to a particular part of the Hodge decomposition, known as the (p,p) part. He conjectured that any time the data displays a contribution to the (p,p) part of the Hodge decomposition, the measurements could have come from a realistic scenario of a system of flux and change across a region.
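In standard notation (the precise statements behind the article’s analogy, for a smooth projective variety X), the Hodge decomposition and the conjecture read:

```latex
% Hodge decomposition of complex cohomology:
H^k(X, \mathbb{C}) \;=\; \bigoplus_{p+q=k} H^{p,q}(X)

% The rational classes of type (p,p) -- the "Hodge classes":
\mathrm{Hdg}^p(X) \;=\; H^{2p}(X, \mathbb{Q}) \cap H^{p,p}(X)

% Hodge conjecture: every class in Hdg^p(X) is a rational linear
% combination of cohomology classes of algebraic subvarieties of X.
```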

Or, to put this as an analogy, one could say Hodge found a criterion to test for fraudulent data.

If Hodge’s test comes back positive, you can be sure the data is fraudulent. The question in the Hodge conjecture is whether there is any fraudulent data which Hodge’s test will not detect. So far, Hodge’s test seems to work.

But we haven’t understood well enough why it works, and so the possibility is open that there could be a way to circumvent Hodge’s security scheme.

Hodge made his conjecture in 1950, and many of the leaders in the development of geometry have worked on this basic recognition problem. The problem itself has stimulated many other refined techniques for measuring flow, flux and dispersion.

Tate’s 1963 conjecture is another similar recognition question coming out of another measurement technique, the l-adic cohomology developed by Alexander Grothendieck.

The strongest evidence in favour of the Hodge conjecture is a 1995 result of Cattani, Deligne & Kaplan which studies how the Hodge decomposition behaves as a region mutates.

Classical cohomology measurements are not affected by small mutations, but the Hodge decomposition does register mutations. The study of the Hodge decomposition across mutations provides great insight into the patterns in data that must occur in true measurements.

In the 1960s, Grothendieck initiated a powerful theory generalising the usual concept of “region” to include “virtual regions” (the theory of motives), on which one could measure “virtual temperatures” and “virtual magnetic fields”.

In a vague sense, the theory of motives is trying to attack the problem by trying to think like a hacker. The “Standard Conjectures” of Grothendieck are far-reaching generalisations of the Hodge conjecture, which try to explain which virtual regions are indistinguishable from realistic scenarios.

The question in the Hodge conjecture has stimulated the development of revolutionary tools and techniques for measurement and analysis of data across regions. These tools have been, and continue to be, fundamental for modern development.

Imagine trying to build a mobile phone without an understanding of how to measure, analyse and control electricity and magnetism. Alternatively, imagine trying to sustain an environment without a way to measure, analyse and detect the spread of toxins across regions and waterways.

Of course, the tantalising intrigue around recognition and detection problems makes them thrilling. Great minds are drawn in and produce great advances in an effort to understand what makes it all work.

One might, very reasonably, claim that the longer the Hodge conjecture remains an unsolved problem the more good it will do for humanity, driving more and more refined techniques for measurement and analysis and stimulating the development of better and better methods for recognition of objects from the data.

The Clay Mathematics Institute was wise to pinpoint the Hodge conjecture as a problem with the capacity to stimulate extensive development of new methods and technologies, and to include it as one of the Millennium Prize problems.


*Credit for article given to Arun Ram*