‘Butterfly chaos effect’ discovered in swarms and herds of animals

Researchers at the Universidad Carlos III de Madrid (UC3M) and the Universidad Complutense de Madrid (UCM) have discovered a phase shift between chaotic states that can appear in herds of animals and, in particular, in swarms of insects. This advance may help to better understand their behaviour or be applied to the study of the movement of cells or tumors.

A phase shift occurs when the conditions of a system change drastically, for example, when water changes from a liquid to a solid state when it freezes. In this research, recently published in the journal Physical Review E, this group of mathematicians has found such a phenomenon in swarms. Related research is also available on the arXiv preprint server.

“The insects in the swarm stay in a limited volume, even if they’re in a park or an open space. To explain this, we assume that there is a harmonic potential, a kind of recuperative force that confines them (like that of a spring that tries to return to its resting position when we stretch or contract it),” explains one of the study’s authors, Luis L. Bonilla, director of UC3M’s Gregorio Millán Barbany Institute.

This confinement of the insects responds to a constant of proportionality between force and displacement. Researchers have found that for low confinement values, the movement of the insects in the swarm is chaotic (their movements change a lot if the initial conditions are changed). In this context, the phase shift occurs when the swarm splits into several swarms that are, however, closely related to each other, because there are insects moving from one to another.
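The effect of such a harmonic confinement can be illustrated with a toy simulation. This is only a minimal sketch, not the authors' model: self-propelled particles feel a spring-like pull of strength k towards the centre plus random buffeting, and stay in a limited volume as a result.

```python
import numpy as np

def simulate_swarm(n=50, k=0.5, noise=0.1, dt=0.01, steps=2000, seed=0):
    """Toy swarm: each particle feels a harmonic pull -k*x towards the
    origin, a drag force, and random buffeting. Illustrates confinement
    only; the parameters and dynamics here are invented for the sketch."""
    rng = np.random.default_rng(seed)
    pos = rng.normal(size=(n, 2))
    vel = rng.normal(size=(n, 2))
    for _ in range(steps):
        force = -k * pos - 0.5 * vel  # spring pull plus drag
        vel += dt * force + noise * np.sqrt(dt) * rng.normal(size=(n, 2))
        pos += dt * vel
    return pos

final = simulate_swarm()
print("typical distance from centre:", np.linalg.norm(final, axis=1).mean())
```

However far the particles start out, the restoring force keeps the long-run spread of the swarm bounded, which is the role the harmonic potential plays in the researchers' model.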

At the critical line between phases of this shift, the distance between two insects in the swarm that are influenced by each other is proportional to the size of the swarm, even if the number of insects in the swarm grows indefinitely. This is called “scale-free chaos” and hasn’t been discovered until now, according to the researchers.

“As the number of insects increases, the critical line moves towards zero confinement. What happens is that the maximum distance between two insects that still feel each other’s influence is proportional to the size of the swarm. It doesn’t matter how many insects we put in it. And that represents an absolute novelty that we have discovered,” explains Bonilla.

Specifically, what these mathematicians predict through numerical simulations is that certain swarms of insects (specifically a class of small flies) have scale-free chaotic behaviour, which translates into certain power laws with exponents similar to those measured in nature. They have also found a simplified mean-field theory that corroborates the scale-free chaos phase shift. “It would be good to look for and find the phase shift between chaotic phases that we predict, either in observations in nature or in controlled laboratory studies,” says another of the authors of the research, UCM mathematician Rafael González Albaladejo, who is also linked to UC3M’s Gregorio Millán Barbany Institute.

The formation of herds is one of the manifestations of so-called “active matter,” made up of something like self-propelled individuals that form a whole, the researchers explain. It can be a swarm of insects, a flock of sheep, a flock of birds, a school of fish, but also bacteria in motion, melanocytes (the cells that distribute pigments in the skin) or artificial systems such as periodically shaken irregular grains or seeds. “Herd formation mechanisms play a role in some of these systems, so the results we have obtained can be linked to biology, to the study of cells, and beyond that, to the study of tumors and other diseases,” adds Albaladejo.

How do so many animals move in unison? These researchers explain that each individual only senses its neighbours and moves accordingly, even though it has no perspective on the movement of the whole herd. And depending on whether they use sight, hearing or the vibrations of the fluid in which they are immersed, the concept of neighbour can change quite a bit.

Sheep moving together see and sense those around them, while birds in a flock see their nearest neighbours, even if they are quite far apart. “Moving accordingly may mean that they move in the same direction as their neighbours (the norm) or they may adopt different strategies depending on the situation. For example, if a crowd is trying to get out of a crowded pen with more than one gate, there are times when not following neighbours is advantageous,” they explain.

It has taken the mathematicians about two years to carry out this research work. Initially, they set out to explain experiments by studying the conventional phase shift between a crowd of insects that fill a space with constant density and become ordered when passing a critical value of the control parameter (e.g., by decreasing the noise). But then they decided to add a harmonic potential to confine the swarm and explore what happens when the attractive force between individuals decreases.

“We discovered many periodic, quasi-periodic and finally chaotic states for a fixed number of insects that we increased. The surprising thing is the transition between chaotic states that we didn’t know or assume existed, and we were able to find the correct arguments and tests to support their existence,” says another of the study’s authors, Ana Carpio, from UCM’s Department of Mathematical Analysis and Applied Mathematics, who points out that there is still a lot to be done based on this work.

“From experimentally seeking confirmation of our predictions and better adapting the model to experimental observations, to carrying out theoretical and mathematical research that goes beyond our numerical simulations,” she concludes.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Carlos III University of Madrid


Calls For a Posthumous Pardon … But Who was Alan Turing?

Momentum is gathering behind calls to pardon the father of computer science. BinaryApe

You may have read the British Government is being petitioned to grant a posthumous pardon to one of the world’s greatest mathematicians and most successful codebreakers, Alan Turing. You may also have read that Turing was convicted of gross indecency in 1952 and died tragically two years later.

But who, exactly, was he?

Born in London in 1912, Turing helped lay the foundations of the “information age” we live in.

He did his first degree at King’s College, Cambridge, and then became a Fellow there. His first big contribution was his development of a mathematical model of computation in 1936. This became known as the Turing Machine.

It was not the first time a computer had been envisaged: that distinction belonged to Charles Babbage, a 19th century mathematician who designed a computer based on mechanical technology and built parts of it (some of which may be seen at the Science Museum in London or Powerhouse Museum in Sydney, for example).

But Babbage’s design was necessarily complicated, as he aimed for a working device using specific technology. Turing’s design was independent of any particular technology, and was not intended to be built.

The now iconic shot of Alan Turing.

It was very simple, and would be very inefficient and impractical as a device for doing real computations. But its simplicity meant it could be used to do mathematical reasoning about computation.
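A Turing machine is simple enough that a simulator fits in a few lines. The sketch below is illustrative only: the table-driven machine it runs (one that flips the bits of its input) is an invented example, not one of Turing's own.

```python
def run_turing_machine(program, tape, state="start", blank="_", max_steps=10_000):
    """Run a one-tape Turing machine. `program` maps (state, symbol) to
    (symbol_to_write, move 'L'/'R', next_state); the machine stops in 'halt'."""
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, blank)
        write, move, state = program[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# An invented example machine: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip, "1011"))  # prints 0100
```

Everything a real computer does can, in principle, be reduced to such read-write-move steps, which is what makes the model so useful for reasoning about computation.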

Turing used his abstract machines to investigate what kinds of things could be computed. He found some tasks which, although perfectly well defined and mathematically precise, are uncomputable. The first of these is known as the halting problem, which asks, for any given computation, whether it will ever stop. Turing showed that this was uncomputable: there is no systematic method that always gives the right answer.

So, if you have ever wanted a program that can run on your laptop and test all your other software to determine which of them might cause your laptop to “hang” or get stuck in a never-ending loop, the bad news is such a comprehensive testing program cannot be written.
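Turing's diagonal argument behind this result can be sketched in a few lines of Python. The `halts` function below stands for any claimed halting-decider; it is hypothetical by construction, since the whole point is that no correct one can exist.

```python
def make_troublemaker(halts):
    """Given any claimed halting-decider, build a program it must get wrong."""
    def troublemaker():
        if halts(troublemaker):  # predicted to halt?
            while True:          # ...then loop forever
                pass
        # predicted to loop forever? ...then return immediately
    return troublemaker

# Whatever the decider predicts about its own troublemaker, the
# troublemaker does the opposite, so no decider is right every time.
claims_it_loops = lambda f: False   # a (wrong) decider that always says "loops"
t = make_troublemaker(claims_it_loops)
t()  # returns at once, contradicting the decider's "never halts" prediction
print("troublemaker halted; the decider was wrong about it")
```

A decider that instead answered "halts" would fare no better: its troublemaker would loop forever, again falsifying the prediction.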

Uncomputability is not confined to questions about the behaviour of computer programs. Since Turing’s work, many problems in mainstream mathematics have been found to be uncomputable. For example, the Russian mathematician and computer scientist, Yuri Matiyasevich, showed in 1970 that determining if a polynomial equation with several variables has a solution consisting only of whole numbers is also an uncomputable problem.

Turing machines have been used to define measures of the efficiency of computations. They underpin formal statements of the P vs NP problem, one of the Millennium Prize problems.

Another important feature of Turing’s model is its capacity to treat programs as data. This means the programs that tell computers what to do can themselves, after being represented in symbolic form, be given as input to other programs. Turing Machines that can take any program as input, and run that program on some input data, are called Universal Turing Machines.

These are really conceptual precursors of today’s computers, which are stored-program computers, in that they can treat programs as data in this sense. The oldest surviving intact computer in the world, in this most complete sense of the term, is CSIRAC at Melbourne Museum.

 

CSIRAC was Australia’s first digital computer, and the fourth “stored program” computer in the world. Melbourne Museum

It seems a mathematical model of computation was an idea whose time had come. In 1936, the year of Turing’s result, another model of computation was published by Alonzo Church of Princeton University. Although Turing and Church took quite different routes, they ended up at the same place, in that the two models give exactly the same notion of computability.

In other words, the classification of tasks into computable and uncomputable is independent of which of these two models is used.

Other models of computation have been proposed, but mostly they seem to lead to the same view of what is and is not computable. The Church-Turing Thesis states that this class of computable functions does indeed capture exactly those things which can be computed in principle (say by a human with unlimited time, paper and ink, who works methodically and makes no mistakes).

It implies Turing Machines give a faithful mathematical model of computation. This is not a formal mathematical result, but rather a working assumption which is now widely accepted.

Turing went to Princeton and completed his PhD under Church, returning to Britain in 1938.

Early in the Second World War, Turing joined the British codebreaking operation at Bletchley Park, north-west of London. He became one of its most valuable assets. He was known by the nickname “Prof” and was described by colleague Jack Good as “a deep rather than a fast thinker”.

One of the famous Enigma machines decrypted at Bletchley Park. Keir David

At the time, Germany was using an encryption device known as Enigma for much of its communications. This was widely regarded as completely secure. The British had already obtained an Enigma machine from the Poles and, building on their work, Turing and colleague Gordon Welchman worked out how the Enigma-encrypted messages collected by the British could be decrypted.

Turing designed a machine called the Bombe, named after a Polish ice cream, which worked by testing large numbers of combinations of Enigma machine configurations, in order to help decrypt secret messages. These messages yielded information of incalculable value to the British. Winston Churchill described the Bletchley Park codebreakers as “geese that laid the golden eggs but never cackled”.

In 1945, after the war, Turing joined the National Physical Laboratory (NPL), where he wrote a report on how to construct an electronic computer, this time a general-purpose one unlike the machines dedicated to cryptanalysis which he helped to design at Bletchley Park.

This report led to the construction of an early computer (Pilot ACE) at NPL in 1950. By then, Turing had already moved on to Manchester University, where he worked on the first general-purpose stored-program computer in the world, the Manchester “Baby”.

The remade Bombe machine at Bletchley Park, England, features miles of circuitry. Keir David

In their early days, computers were often called “electronic brains”. Turing began to consider whether a computer could be programmed to simulate human intelligence, which remains a major research challenge today and helped to initiate the field of artificial intelligence.

A fundamental issue in such research is: how do you know if you have succeeded? What test can you apply to a program to determine if it has intelligence? Turing proposed that a program be deemed intelligent if, in its interaction with a human, the human is unable to detect whether he or she is communicating with another human or a computer program. (The test requires a controlled setting, for example where all communication with the human tester is by typed text.)

His paper on this topic – Computing Machinery and Intelligence – was published in 1950. The artificial intelligence community holds regular competitions to see how good researchers’ programs are at the Turing test.

The honours Turing received during his lifetime included an OBE in 1945 and becoming a Fellow of the Royal Society in 1951.

His wartime contributions remained secret throughout his life and for many years afterwards.

In 1952 he was arrested for homosexuality, which was illegal in Britain at the time. Turing was found guilty and required to undergo “treatment” with drugs. This conviction also meant he lost his security clearance.

In 1954 he ingested some cyanide, probably via an apple, and died. An inquest classified his death as suicide, and this is generally accepted today. But some at the time, including his mother, contended his death was an accidental consequence of poor handling of chemicals during some experiments he was conducting at home in his spare time.


The irony of Turing losing his security clearance – after the advantage his work had given Britain in the war, in extraordinary secrecy – is clear.

The magnitude of what was done to him has become increasingly plain over time, helped by greater availability of information about the work at Bletchley Park and changing social attitudes to homosexuality.

Next year, 2012, will be the centenary of Turing’s birth – with events planned globally to celebrate the man and his contribution. As this year approached, a movement developed to recognise Turing’s contribution and atone for what was done to him. In 2009, British Prime Minister, Gordon Brown, responding to a petition, issued a formal apology on behalf of the British government for the way Turing was treated.


*Credit for article given to Graham Farr*

 


Millennium Prize: the Riemann Hypothesis

What will be the next number in this sequence?

“At school I was never really good at maths” is an all too common reaction when mathematicians name their profession.

In view of most people’s perceived lack of mathematical talent, it may come as somewhat of a surprise that a recent study carried out at Johns Hopkins University has shown that six-month-old babies already have a clear sense of numbers. They can count, or at least approximate, the number of happy faces shown on a computer screen.

By the time they start school, at around the age of five, most children are true masters of counting, and many will proudly announce when for the first time they have counted up to 100 or 1000. Children also intuitively understand the regular nature of counting; by adding sufficiently many ones to a starting value of one they know they will eventually reach their own age, that of their parents, grandparents, 2011, and so on.

Counting is child’s play. Photography By Shaeree

From counting to more general addition of whole numbers is only a small step—again within children’s almost-immediate grasp. After all, counting is the art of adding one, and once that is mastered it takes relatively little effort to work out that 3 + 4 = 7. Indeed, the first few times children attempt addition they usually receive help from their fingers or toes, effectively reducing the problem to that of counting:

3 + 4 = (1 + 1 + 1) + (1 + 1 + 1 + 1) = 7.

For most children, the sense of joy and achievement quickly ends when multiplication enters the picture. In theory it too can be understood through counting: 3 x 6 is three lots of six apples, which can be counted on fingers and toes to give 18 apples.

In practice, however, we master it through long hours spent rote-learning multiplication tables—perhaps not among our favourite primary school memories.

But at this point, we ask the reader to consider the possibility—in fact, the certainty—that multiplication is far from boring and uninspiring, but that it is intrinsically linked with some of mathematics’ deepest, most enduring and beautiful mysteries. And while a great many people may claim to be “not very good at maths” they are, in fact, equipped to understand some very difficult mathematical questions.

Primes

Let’s move towards these questions by going back to addition and those dreaded multiplication tables. Just like the earlier example of 7, we know that every whole number can be constructed by adding together sufficiently many ones. Multiplication, on the other hand, is not so well-behaved.

The number 12, for example, can be broken up into smaller pieces, or factors, while the number 11 cannot. More precisely, 12 can be written as the product of two whole numbers in multiple ways: 1 x 12, 2 x 6 and 3 x 4, but 11 can only ever be written as the product 1 x 11. Numbers such as 12 are called composite, while those that refuse to be factored are known as prime numbers or simply primes. For reasons that will soon become clear, 1 is not considered a prime, so that the first five prime numbers are 2, 3, 5, 7 and 11.

Just as the number 1 is the atomic unit of whole-number addition, prime numbers are the atoms of multiplication. According to the Fundamental Theorem of Arithmetic, any whole number greater than 1 can be written as a product of primes in exactly one way. For example: 4 = 2 x 2, 12 = 2 x 2 x 3, 2011 = 2011 and

13079109366950 = 2 x 5 x 5 x 11 x 11 x 11 x 37 x 223 x 23819,

where we always write the factors from smallest to largest. If, rather foolishly, we were to add 1 to the list of prime numbers, this would cause the downfall of the Fundamental Theorem of Arithmetic:

4 = 2 x 2 = 1 x 2 x 2 = 1 x 1 x 2 x 2 = …
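Trial division, sketched below, is enough to recover such a factorisation for modestly sized numbers, the large example above included.

```python
def prime_factors(n):
    """Return the prime factorisation of n, smallest factor first."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(12))              # [2, 2, 3]
print(prime_factors(13079109366950))  # [2, 5, 5, 11, 11, 11, 37, 223, 23819]
```

Trial division becomes hopeless for very large numbers, which, as discussed below, is precisely what modern encryption relies on.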

In the above examples we have already seen several prime numbers, and a natural question is to ask for the total number of primes. From what we have learnt about addition with its single atom of 1, it is not unreasonable to expect there are only finitely many prime numbers, so that, just maybe, the 2649th prime number, 23819, could be the largest. Euclid of Alexandria, who lived around 300BC and who also gave us Euclidean Geometry, in fact showed that there are infinitely many primes.

Euclid’s reasoning can be captured in just a single sentence: if the list of primes were finite, then by multiplying them together and adding 1 we would get a new number which is not divisible by any prime on our list—a contradiction.
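Euclid's construction is easy to check numerically. For instance, multiplying the primes from 2 through 13 and adding 1 gives 30031 = 59 x 509; it is not prime itself, but none of its prime factors appear on the original list.

```python
from math import prod

primes = [2, 3, 5, 7, 11, 13]   # pretend, for contradiction, this list is complete
n = prod(primes) + 1            # 30030 + 1 = 30031
divisors_on_list = [p for p in primes if n % p == 0]
print(n, divisors_on_list)      # 30031 [] -- no prime on the list divides it
```

Since every number greater than 1 has at least one prime factor, n must have a prime factor missing from the supposedly complete list, which is the contradiction Euclid needed.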

A few years after Euclid, his compatriot Eratosthenes of Cyrene found a clever way, now known as the Sieve of Eratosthenes, to obtain all primes less than a given number.

For instance, to find all primes less than 100, Eratosthenes would write down a list of all numbers from 2 to 99, cross out all multiples of 2 (but not 2 itself), then all multiples of 3 (but not 3 itself), then all multiples of 5, and so on. After only four steps(!) this would reveal to him the 25 primes

2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89 and 97.
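Eratosthenes' procedure translates directly into code; a minimal sketch:

```python
def sieve(limit):
    """Sieve of Eratosthenes: return all primes below `limit`."""
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit, p):  # cross out multiples of p
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]

primes = sieve(100)
print(len(primes), primes[-1])  # 25 primes, the largest being 97
```

Only multiples of 2, 3, 5 and 7 need crossing out, since any composite below 100 must have a factor no larger than its square root; that is why four steps suffice.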

While this might seem very quick, much more sophisticated methods, combined with very powerful computers, are needed to find really large prime numbers. The current world record, established in 2008, is the truly monstrous 2^43112609 – 1, a prime number of approximately 13 million digits.

The quest to tame the primes did not end with the ancient Greeks, and many great mathematicians, such as Pierre de Fermat, Leonhard Euler and Carl Friedrich Gauss studied prime numbers extensively. Despite their best efforts, and those of many mathematicians up to the present day, there are many more questions than answers concerning the primes.

One famous example of an unsolved problem is Goldbach’s Conjecture. In 1742, Christian Goldbach remarked in a letter to Euler that it appeared that every even number greater than 2 could be written as the sum of two primes.

For example, 2012 = 991 + 1021. While computers have confirmed the conjecture holds well beyond the first quintillion (10^18) numbers, there is little hope of a proof of Goldbach’s Conjecture in the foreseeable future.
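Checking the conjecture for any particular even number is straightforward; the sketch below finds one decomposition (there are usually many, and 991 + 1021 above is just one of those for 2012).

```python
def is_prime(n):
    """Primality by trial division; fine for small numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_pair(even_n):
    """First pair of primes (p, q), p <= q, with p + q = even_n, or None."""
    for p in range(2, even_n // 2 + 1):
        if is_prime(p) and is_prime(even_n - p):
            return p, even_n - p
    return None

print(goldbach_pair(2012))  # (13, 1999)
```

The hard part, of course, is not checking individual cases but proving that such a pair exists for every even number greater than 2.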

Another intractable problem is that of breaking very large numbers into their prime factors. If a number is known to be the product of two primes, each about 200 digits long, current supercomputers would take more than the lifetime of the universe to actually find these two prime factors. This time round our inability to do better is in fact a blessing: most secure encryption methods rely heavily on our failure to carry out prime factorisation quickly. The moment someone discovers a fast algorithm to factor large numbers, the world’s financial system will collapse, making the GFC look like child’s play.

To the dismay of many security agencies, mathematicians have also failed to show that fast algorithms are impossible—the possibility of an imminent collapse of world order cannot be entirely ruled out!

Margins of error

For mathematicians, the main prime number challenge is to understand their distribution. Quoting Don Zagier, nobody can predict where the next prime will sprout; they grow like weeds among the whole numbers, seemingly obeying no other law than that of chance. At the same time the prime numbers exhibit stunning regularity: there are laws governing their behaviour, obeyed with almost military precision.

The Prime Number Theorem describes the average distribution of the primes; it was first conjectured by both Gauss and Adrien-Marie Legendre, and then rigorously established independently by Jacques Hadamard and Charles Jean de la Vallée Poussin, a hundred years later in 1896.

The Prime Number Theorem states that the number of primes less than an arbitrarily chosen number n is approximately n divided by ln(n), where ln(n) is the natural logarithm of n. The relative error in this approximation becomes arbitrarily small as n becomes larger and larger.

For example, there are 25 primes less than 100, and 100/ln(100) = 21.7…, which is around 13% short. When n is a million we are up to 78498 primes, and since 10^6/ln(10^6) = 72382.4…, we are only 8% short.
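The two error figures quoted above can be reproduced in a few lines, with the exact prime counts supplied by a basic sieve:

```python
from math import log

def count_primes(limit):
    """Count the primes below `limit` using a simple sieve."""
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit, p):
                is_prime[m] = False
    return sum(is_prime)

for n in (100, 10**6):
    actual = count_primes(n)
    estimate = n / log(n)
    print(n, actual, round(estimate, 1), f"{1 - estimate / actual:.0%} short")
```

Pushing n higher shows the relative error continuing to shrink, exactly as the Prime Number Theorem promises.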

The Riemann Hypothesis

The Prime Number Theorem does an incredible job describing the distribution of primes, but mathematicians would love to have a better understanding of the relative errors. This leads us to arguably the most famous open problem in mathematics: the Riemann Hypothesis.

Posed by Bernhard Riemann in 1859 in his paper “Ueber die Anzahl der Primzahlen unter einer gegebenen Grösse” (On the number of primes less than a given magnitude), the Riemann Hypothesis tells us how to tighten the Prime Number Theorem, giving us a control of the errors, like the 13% or 8% computed above.

The Riemann Hypothesis does not just “do better” than the Prime Number Theorem—it is generally believed to be “as good as it gets”. That is, we, or far-superior extraterrestrial civilisations, will never be able to predict the distribution of the primes any better than the Riemann Hypothesis does. One can compare it to, say, the ultimate 100 metres world record—a record that, once set, is impossible to ever break.

Finding a proof of the Riemann Hypothesis, and thus becoming record holder for all eternity, is the holy grail of pure mathematics. While the motivation for the Riemann Hypothesis is to understand the behaviour of the primes, the atoms of multiplication, its actual formulation requires higher-level mathematics and is beyond the scope of this article.

In 1900, David Hilbert, the most influential mathematician of his time, posed a now famous list of 23 problems that he hoped would shape the future of mathematics in the 20th century. Very few of Hilbert’s problems other than the Riemann Hypothesis remain open.

Inspired by Hilbert, in 2000 the Clay Mathematics Institute announced a list of seven of the most important open problems in mathematics. For the successful solver of any one of these there awaits not only lasting fame, but also one million US dollars in prize money. Needless to say, the Riemann Hypothesis is one of the “Millennium Prize Problems”.

Hilbert himself remarked: “If I were awoken after having slept for a thousand years, my first question would be: has the Riemann Hypothesis been proven?” Judging by the current rate of progress, Hilbert may well have to sleep a little while longer.


*Credit for article given to Ole Warnaar*

 


A surprising result for a group’s optimal path to cooperation

What is the best way for a group of individuals to cooperate? This is a longstanding question with roots in game theory, a branch of science which uses mathematical models of how individuals should best strategize for the optimal result.

A simple example is the prisoner’s dilemma: Two people are arrested for an alleged bank robbery. The police take them downtown and place them in individual, isolated interrogation rooms.

The police admit they don’t have enough evidence to convict them both, and give each the same option: if he confesses and his partner does not, they will release the confessor and convict the other of the serious charge of bank robbery. But if one does not confess and the other does, the first will get a lengthy prison sentence and the other will be released. If both confess, they will both be put away for many years. If neither confesses, they will be arraigned on a lesser charge of gun possession.

What should each do to minimize their time in jail? Does an individual stay silent, trusting his partner to do the same and accept a shorter prison sentence? Or does he confess, hoping the other stays silent? But what if the other confesses too? It is an unenviable position.
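The dilemma can be made precise with a payoff table. The sentence lengths below are invented for illustration, but any numbers with the same ordering lead to the same conclusion: confessing minimises a prisoner's own sentence whatever the partner does, yet mutual silence would leave both better off.

```python
# Years in jail as (mine, partner's), indexed by (my_move, partner_move).
# Hypothetical numbers; only their relative ordering matters.
years = {
    ("silent",  "silent"):  (1, 1),    # both arraigned on the lesser charge
    ("silent",  "confess"): (10, 0),   # I am convicted, my partner walks
    ("confess", "silent"):  (0, 10),   # I walk, my partner is convicted
    ("confess", "confess"): (8, 8),    # both put away for many years
}

def best_reply(partner_move):
    """My move that minimises my own sentence against a fixed partner move."""
    return min(("silent", "confess"),
               key=lambda my: years[(my, partner_move)][0])

print(best_reply("silent"), best_reply("confess"))  # confess confess
```

Since confessing is each prisoner's best reply to either choice, self-interested play lands both at (8, 8) years, even though mutual silence would cost only (1, 1).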

There is no correct solution to the prisoner’s dilemma. Other similar problems are the game of chicken, in which each driver either races towards the other, risking a head-on crash, or swerves away at the last minute, risking humiliation (being called “chicken” for a lack of courage). Many other simple games exist.

Now imagine a group—they may be people, or they may be cellular organisms of some sort. What kind of cooperation gives the optimal result, when each individual is connected to some others and pays a cost (money, energy, time) to create a result that benefits all? It’s a given that individuals are selfish and act in their own best interests, but we also know that cooperation can result in a better outcome for all. Will any take the risk, or look out only for themselves?

A long-standing result is that, in a homogeneous network where all individuals have the same number of neighbours, cooperation is favoured if the ratio between the benefit provided by a cooperator and their associated cost paid exceeds the average number of neighbours.

But people are not homogeneous, they’re heterogeneous, and they don’t usually have the same number of links to neighbours as does everyone else or change their strategy at the same rates.

It is also known that allowing each individual to update their strategy at exactly the same time, such as immediately mimicking their neighbour, significantly alters the evolution of cooperation. Previous investigations have reported that pervasive heterogeneous individual connections hinder cooperation when it’s assumed that individuals update their strategies at identical rates.

Now a group of researchers located in China, Canada and the US have found a surprising result: when individuals’ strategy update rates vary inversely with their number of connections, heterogeneous connections outperform homogeneous ones in promoting cooperation. The study is published in the journal Nature Communications.

“How to analyse the quantitative impact of the prevalent heterogeneous network structures on the emergence of group optimal strategies is a long-standing open question that has attracted much attention,” said Aming Li, a co-author and Assistant Professor in Dynamics and Control at Peking University.

Li’s team solved the problem by analytical calculations backed up by computer simulations, to find the fundamental rule for maintaining collective cooperation: “The nodes with substantial connections within the complex system should update their strategies infrequently,” he says. That is, individual strategy update rates should vary inversely with the number of connections they have in the network. In this way, a network with heterogeneous connections between individuals outperforms a network with homogeneous connections in promoting cooperation.
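The rule itself is easy to state in code. The sketch below is not the authors' OptUpRat algorithm, only the inverse-degree rule it is built around, applied to a small hypothetical star-shaped network.

```python
# Hypothetical network: node -> list of neighbours (a star with one hub).
network = {
    "hub": ["a", "b", "c", "d"],   # the well-connected node
    "a": ["hub"], "b": ["hub"], "c": ["hub"], "d": ["hub"],
}

# Inverse-degree rule: strategy update rate proportional to 1/degree,
# so the node with the most connections revises its strategy least often.
rates = {node: 1 / len(neighbours) for node, neighbours in network.items()}
print(rates["hub"], rates["a"])  # 0.25 1.0 -- the hub updates 4x less often
```

In this toy network the hub updates a quarter as often as each leaf, which is the qualitative behaviour the study identifies as favourable for cooperation on heterogeneous networks.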

The team has also developed an algorithm, which they call OptUpRat, that efficiently finds the strategy update rates bringing about the group’s optimal strategies. This algorithm helps maximise collective utility in groups and, Li says, “is also essential in developing robotic collaborative systems.” The finding will be useful to researchers in such multidisciplinary fields as cybernetics, artificial intelligence, systems science, game theory and network science.

“We believe that utilizing AI-related techniques to optimize individual decisions and drive collective intelligence will be the next research hotspot.”


Credit of the article given to David Appell, Phys.org

 

 


Putting the magic into maths

Queen Mary, University of London has developed a new educational resource for teachers to help students use amazing magic tricks to learn about maths.

The web resource (www.mathematicalmagic.com), which includes the ‘Manual for Mathematical Magic’ and a series of interactive videos, was created by Queen Mary’s Professor Peter McOwan with the help of the College’s resident stand-up comedian Matt Parker and semi-professional magician and maths teacher Jason Davison.

Professor McOwan said: “It was great fun to be able to work with Matt and Jason on these new videos, showing how maths and magic can fuse together education and entertainment.

“While we explain most of the tricks, we have deliberately included a few where we leave the viewer to figure it out. It’s all just maths, but we wanted to leave some magical mystery in there too!”

Mr Davison said: “Using the fun of magic makes this a really great way to learn some of the fundamentals of maths, the links between maths and magic are strong and a brilliant way to bring excitement into the classroom.”

The educational website builds on a bank of teaching resources led by Professor McOwan, including Illusioneering (www.Illusioneering.org), a website which gives students and teachers the platform to explore science and engineering through a range of magic tricks; and cs4fn (www.cs4fn.org), a web and magazine initiative putting the fun into computer science.

The production of the videos for mathematicalmagic.com was possible due to funding from the UK National Higher Education STEM programme. The Programme supports Higher Education Institutions in the exploration of new approaches to recruiting students and delivering programmes of study within the Science, Technology, Engineering and Mathematics (STEM) disciplines.

Institute of Mathematics and its Applications project manager in HE STEM, Makhan Singh, said: “Once again we see the power of making education fun! Peter McOwan brings alive the mystery of magic whilst showcasing the power of mathematics – sheer brilliance! It’s entertaining, amusing, educational and most definitely relevant in today’s classrooms; well done!”


Credit of the article given to Queen Mary, University of London


Mathematical proof reveals new insights into typhoon dynamics

In a remarkable breakthrough in the field of mathematical science, Professor Kyudong Choi from the Department of Mathematical Sciences at UNIST has provided a rigorous mathematical proof that certain spherical vortices are stable. This discovery holds significant implications for predicting weather anomalies and advancing weather prediction technologies. The research is published in the journal Communications on Pure and Applied Mathematics.

A vortex is a rotating region of fluid, such as air or water, characterized by intense rotation. Common examples include typhoons and tornadoes frequently observed in news reports. Professor Choi’s mathematical proof establishes the stability of specific types of vortex structures that can be encountered in real-world fluid flows.

The study builds upon the foundational Euler equations, formulated by Leonhard Euler in 1757 to describe the flow of an ideal, inviscid fluid. In 1894, British mathematician M. J. M. Hill mathematically demonstrated that a ball-shaped vortex could maintain its shape indefinitely while moving along its axis.

Professor Choi’s research confirms that Hill’s spherical vortex maximizes kinetic energy under certain conditions through the application of variational methods. By incorporating functional analysis and partial differential equation theory from mathematical analysis, this study extends previous investigations on two-dimensional fluid flows to encompass three-dimensional fluid dynamics with axial symmetry conditions.

One notable feature identified by Hill is the presence of strong upward airflow at the front of the spherical vortex—an attribute often observed in phenomena like typhoons and tornadoes. Professor Choi’s findings serve as a starting point for further studies involving measurements related to residual time associated with these ascending air currents.

“Research on vortex stability has gained international attention,” stated Professor Choi. “And it holds long-term potential for advancements in today’s weather forecasting technology.”


Credit of the article given to JooHyeon Heo, Ulsan National Institute of Science and Technology


How Far Away is Everybody? Climbing The Cosmic Distance Ladder

We know the universe is vast, but how do we measure the distances between things?

Let’s talk numbers for a moment.

The moon is approximately 384,000 kilometres away, and the sun is approximately 150 million kilometres away. The mean distance between Earth and the sun is known as the “astronomical unit” (AU). Neptune, the most distant planet, is then 30 AU from the sun.

The nearest stars to Earth are roughly 9,000 times more distant than Neptune, at about 4.3 light-years away (one light-year being the distance that light travels in 365.25 days – just under 10 trillion kilometres).
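The parenthetical arithmetic is easy to verify; in this short sketch the speed-of-light figure is the standard value, not taken from the article:

```python
# One light-year: the distance light covers in a Julian year of 365.25 days.
C_KM_S = 299_792.458                     # speed of light in km/s (defined value)
SECONDS_PER_DAY = 86_400

ly_km = C_KM_S * SECONDS_PER_DAY * 365.25
# ly_km ≈ 9.46e12 km — "just under 10 trillion kilometres", as stated above
```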

The Milky Way galaxy consists of some 300 billion stars in a spiral-shaped disk roughly 100,000 light-years across.

The Andromeda Galaxy, which can be seen with many home telescopes, is 2.54 million light years away. There are hundreds of billions of galaxies in the observable universe.

At present, the most distant observed galaxy is some 13.2 billion light-years away, formed not long after the Big Bang, 13.75 billion years ago (plus or minus 0.11 billion years).

The scope of the universe was illustrated by the astrophysicist Geraint Lewis in a recent Conversation article.

He noted that, if the entire Milky Way galaxy were represented by a small coin one centimetre across, the Andromeda Galaxy would be another small coin 25 centimetres away.

Going by this scale, the observable universe would extend for 5 kilometres in every direction, encompassing some 300 billion galaxies.
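The coin analogy is simple enough to check by hand. In the sketch below, the comoving radius of roughly 46 billion light-years for the observable universe is a standard figure supplied here, not one stated in the article:

```python
# Geraint Lewis's analogy: the 100,000-light-year Milky Way shrunk to a
# one-centimetre coin, i.e. 100,000 light-years of real distance per cm.
SCALE_LY_PER_CM = 100_000
CM_PER_KM = 100_000

andromeda_cm = 2_540_000 / SCALE_LY_PER_CM
# andromeda_cm ≈ 25.4 — the "small coin 25 centimetres away"

# Assuming a comoving radius of ~46 billion light-years (standard estimate):
universe_km = 46_000_000_000 / SCALE_LY_PER_CM / CM_PER_KM
# universe_km ≈ 4.6 — roughly the "5 kilometres in every direction"
```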

But how can scientists possibly calculate these enormous distances with any confidence?

Parallax

One technique is known as parallax. If you cover one eye and note the position of a nearby object, compared with more distant objects, the nearby object “moves” when you view it with the other eye. This is parallax.


The same principle is used in astronomy. As Earth travels around the sun, relatively close stars are observed to move slightly, with respect to other fixed stars that are more distant.

Distance measurements can be made in this way for stars up to about 1,000 light-years away.
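The geometry behind this can be sketched in a few lines. The function name and the Proxima Centauri parallax figure of about 0.77 arcseconds are illustrative standard values supplied here, not taken from the article:

```python
import math

AU_KM = 1.496e8          # one astronomical unit in kilometres
LY_KM = 9.461e12         # one light-year in kilometres

def parallax_distance_ly(parallax_arcsec):
    """Distance to a star from its annual parallax angle.
    The baseline is 1 AU (Earth-sun distance); small-angle geometry
    gives d = baseline / tan(p)."""
    p_rad = math.radians(parallax_arcsec / 3600.0)
    return (AU_KM / math.tan(p_rad)) / LY_KM

# Proxima Centauri's measured parallax is about 0.768 arcseconds:
d = parallax_distance_ly(0.768)   # ≈ 4.25 light-years
```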

Standard candles

For more distant objects such as galaxies, astronomers rely on “standard candles” – bright objects that are known to have a fixed absolute luminosity (brightness).

Since light flux falls off as the square of the distance, astronomers can calculate the distance by measuring the apparent brightness observed on Earth.
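A minimal sketch of that inverse-square calculation, using the sun as a sanity check; the luminosity and solar-constant figures are standard values supplied here, not from the article:

```python
import math

def standard_candle_distance(luminosity_watts, flux_w_per_m2):
    """Inverse-square law: flux F = L / (4*pi*d^2), so d = sqrt(L / (4*pi*F))."""
    return math.sqrt(luminosity_watts / (4 * math.pi * flux_w_per_m2))

# Sanity check: a candle with the sun's luminosity (~3.8e26 W) observed at
# the flux the sun delivers to Earth (~1361 W/m^2) must be about 1 AU away.
d_m = standard_candle_distance(3.8e26, 1361.0)
# d_m ≈ 1.5e11 metres, i.e. roughly one astronomical unit
```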

One type of standard candle, which has been used since the 1920s, is Cepheid variable stars.

Distances determined using this scheme are believed accurate to within about 7% for nearby galaxies, and 15-20% for the most distant galaxies.

Type Ia supernovas

In recent years scientists have used Type Ia supernovas. These occur in a binary star system when a white dwarf star starts to attract matter from a larger companion star, typically a red giant.

As the white dwarf gains more and more matter, it eventually undergoes a runaway nuclear explosion that may briefly outshine an entire galaxy.

Because this process can occur only within a very narrow range of total mass, the absolute luminosity of Type Ia supernovas is very predictable. The uncertainty in these measurements is typically 5%.

In August, worldwide attention was focused on a Type Ia supernova that exploded in the Pinwheel Galaxy (known as M101), a beautiful spiral galaxy located just above the handle of the Big Dipper in the Northern Hemisphere. This is the closest supernova to Earth since Supernova 1987A, which was visible in the Southern Hemisphere.

These and other techniques for astronomical measurements, collectively known as the “cosmic distance ladder”, are described in an excellent Wikipedia article. Such multiple schemes lend an additional measure of reliability to these measurements.

In short, distances to astronomical objects have been measured with a high degree of reliability, using calculations that mostly employ only high-school mathematics.

Thus the overall conclusion of a universe consisting of billions of galaxies, most of them many millions or even billions of light-years away, is now considered beyond reasonable doubt.

Right tools for the job

The kind of distances we’re dealing with above do cause consternation for some since, as we peer millions of light-years into space, we are also peering millions of years into the past.

Some creationists, for instance, have theorised that, in about 4,000 BCE, a Creator placed quadrillions of photons in space en route to Earth, with patterns suggestive of supernova explosions and other events millions of years ago.

Needless to say, most observers reject this notion. Kenneth Miller of Brown University commented, “Their [Creationists’] version of God is one who has filled the universe with so much bogus evidence that the tools of science can give us nothing more than a phony version of reality.”

There are plenty of things in the universe to marvel at, and plenty of tools to help us understand them. That should be enough to keep us engaged for now.


*Credit for article given to Jonathan Borwein (Jon)*


Flight of the bumblebee decoded by mathematicians


Bumblebees use complex flying patterns to avoid predators according to new research from Queen Mary, University of London.

Writing in the journal Physical Review Letters, Dr Rainer Klages from Queen Mary’s School of Mathematical Sciences, Professor Lars Chittka from the School of Biological and Chemical Sciences, and their teams, describe how they carried out a statistical analysis of the velocities of foraging bumblebees. They found that bumblebees respond to the presence of predators in a much more intricate way than was previously thought.

Bumblebees visit flowers to collect nectar, often visiting multiple flowers in a single patch. There is an ongoing debate as to whether they employ an ‘optimal foraging strategy’, and what such a theory may look like.

Dr Klages explains: “In mathematical theory we treat a bumblebee as a randomly moving object hitting randomly distributed targets. However, bumblebees in the wild are under the constant risk of predators, such as spiders, so the question we wanted to answer is how such a threat might modify their foraging behaviour.”

The team ran experiments that tracked real bumblebees visiting replenishing nectar sources under threat from artificial spiders, whose attacks were simulated by a trapping mechanism that grabbed a bumblebee for two seconds.

They found that, in the absence of the spiders, the bumblebees foraged more systematically and travelled directly from flower to flower. When predators were present, however, the bumblebees turned around more often, indicating a more careful approach to avoid the spiders.

PhD student Friedrich Lenz, who did the key analysis, explains: “We learned that the bumblebees display the same statistics of velocities irrespective of whether predators are present or not. Surprisingly, however, the way the velocities change with time during a flight is characteristically different under predation threat.”
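The distinction Lenz describes – identical velocity statistics, but a different time structure – is exactly what a velocity autocorrelation function captures. The generic sketch below is illustrative only, not the authors’ actual analysis code:

```python
import random

def velocity_autocorrelation(v, max_lag):
    """Normalised autocorrelation C(lag) of a velocity time series.
    Two series can share the same distribution of values yet differ
    in how the values change over time."""
    n = len(v)
    mean = sum(v) / n
    dev = [x - mean for x in v]
    c0 = sum(d * d for d in dev) / n           # variance, so C(0) = 1
    out = []
    for lag in range(max_lag):
        m = n - lag
        out.append(sum(dev[i] * dev[i + lag] for i in range(m)) / m / c0)
    return out

# Two series with identical values (hence identical velocity statistics)
# but different ordering — the autocorrelation tells them apart.
smooth = [i % 10 for i in range(100)]          # slowly varying ramps
random.seed(0)
shuffled = smooth[:]
random.shuffle(shuffled)                       # same values, no time structure

c_smooth = velocity_autocorrelation(smooth, 3)
c_shuffled = velocity_autocorrelation(shuffled, 3)
# c_smooth[1] is clearly positive (persistence); c_shuffled[1] hovers near zero
```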

The team’s analysis indicates that, when foraging in the wild, factors such as bumblebee sensory perception, memory, and even the individuality of different bumblebees should be taken into account in addition to the presence of predators. All of this may cause deviations from predictions of more simplistic foraging theories.


Credit of the article given to Queen Mary, University of London


The Stunningly Simple Rule That Will Always Get You Out of a Maze

You thought the maze looked fun, but now you can’t find your way out. Luckily, mathematics is here to help you escape, says Katie Steckles.

Getting lost in a maze is no fun, and on that rare occasion when you find yourself stuck in one without a map or a bird’s-eye view, it can be difficult to choose which way to go. Mathematics gives us a few tools we can use – in particular, topology, which concerns shapes and how they connect.

The most devious mazes are designed to be as confusing as possible, with dead ends and identical-looking junctions. But there is a stunningly simple rule that will always get you out of a maze, no matter how complicated: always turn right.

Any standard maze can be solved with this method (or its equivalent, the “always-turn-left” method). To do it, place one hand on the wall of the maze as you go in and keep it there. Each time you come to a junction, keep following the wall – if there is an opening on the side you are touching, take it; otherwise go straight. If you hit a dead end, turn around and carry on.

The reason this works is that the walls of any solvable maze will always have at least two distinct connected pieces: one to the left of the optimal solution path, and one to the right. The section of wall next to the entrance is part of the same connected chunk of maze as the wall by the exit, and if you keep your hand on it, you will eventually walk along the whole length of the edge of this object – no matter how many twists and turns this involves – and reach the part at the exit.
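To make the procedure concrete, here is a minimal Python sketch of the right-hand rule on a small grid maze; the maze layout and function name are illustrative, not from the article:

```python
# Right-hand wall-following in a grid maze: '#' is wall, ' ' is open.
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # up, right, down, left

def solve_by_right_hand(maze, start, exit_, heading=2):
    """Follow the right-hand wall from `start` until `exit_` is reached.
    `heading` indexes DIRS (2 = facing down, i.e. just entered from the top).
    Assumes the maze is solvable; returns the list of visited cells."""
    r, c = start
    path = [start]
    while (r, c) != exit_:
        # Prefer a right turn, then straight, then left, then turn around.
        for turn in (1, 0, 3, 2):
            d = (heading + turn) % 4
            dr, dc = DIRS[d]
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(maze) and 0 <= nc < len(maze[0]) \
                    and maze[nr][nc] != '#':
                heading, r, c = d, nr, nc
                path.append((r, c))
                break
    return path

maze = [
    "## ###",
    "#    #",
    "# ## #",
    "#    #",
    "#### #",
]
# Entrance at the top (0, 2), exit at the bottom (4, 4).
path = solve_by_right_hand(maze, (0, 2), (4, 4))
# path ends at the exit (4, 4)
```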

While it is guaranteed to work, this certainly won’t be the most efficient path – you might find you traverse as much as half of the maze in the process, or even more depending on the layout. But at least it is easy to remember the rule.

Some mazes have more than two pieces. In these, disconnected sections of wall inside the maze create loops. In this case, if you start following the wall somewhere in the middle of the maze, there is a chance it could be part of an isolated section, which would leave you walking around a loop forever. But if you start from a wall that is connected to the outside, wall-following will still get you out.

It is reassuring to know that even if you are lost in a maze, you can always get out by following some variation on this rule: if you notice you have reached part of the maze you have been to before, you can detect loops, and switch to the opposite wall.

This is especially useful for mazes where the goal is to get to the centre: if the centre isn’t connected to the outside, wall-following won’t work, and you will need to switch walls to get onto the centre component. But as long as there are a finite number of pieces to the maze, and you keep trying different ones, you will eventually find a piece that is connected to your goal. You might, however, miss the bus home.


*Credit for article given to Katie Steckles*


The numbers game

Dipak Dey, Board of Trustees Distinguished Professor of Statistics and associate dean in the College of Liberal Arts and Sciences.

Dipak Dey, Board of Trustees Distinguished Professor of Statistics and associate dean in the College of Liberal Arts and Sciences, has been called an ambassador for the field. A prolific researcher, he is best known for his contributions in the areas of statistical decision theory and Bayesian statistics.

A Fellow of both the American Statistical Association and the Institute of Mathematical Statistics, he is also recognized for developing and maintaining collaborative research programs with other departments and organizations. He recently sat down to answer questions about his field.

We’ve all heard the saying, “Statistics don’t lie.” Yet it’s not uncommon to see statistics misused to prove a point. As a statistics expert, does this frustrate you? And how do you respond?

Statistics do not lie, but sometimes researchers do, by misusing or misunderstanding them. Sometimes people try to misuse statistics to push a particular agenda. As a discipline, statistics is a sound science and is prominently used in all disciplines. Unfortunately, statisticians and scientists can’t control how others with a specific agenda may or may not misuse its sound principles and paradigms for their personal benefit.

Data can often be manipulated or ignored to come to a specific conclusion, but that in and of itself is not a reflection on the theory and modeling used in the field. The situation is similar to misusing any kind of legitimate system for one’s personal agenda. People are warned against doing such things, but every year we hear about some powerful interests misusing statistical data to prove a specific point. This is unfortunate and indeed frustrating. In such cases, are the principles of statistics at fault? We need to think before we answer.

Personally, I feel happy and proud to know what statistics is about, and how it helps society and life as a whole. After all, knowledge is golden. So I keep learning through the knowledge that can be acquired through the regular use of statistics.

Why should we all know more about statistics?

We all need to know more statistics because it is the science of using data, in all fields and disciplines, to draw true conclusions about the world. The knowledge gained from statistics is used regularly in technology, business, economics, medicine, and social science, and provides fact-based knowledge that helps people in their daily lives.

What are statisticians doing to expand the public’s understanding of statistics and how they are used?

Besides teaching statistics at the educational level, statisticians have now joined various government agencies, nonprofit organizations, corporations, and other sectors in everything from technology to fashion. As I mentioned before, the discipline of statistics is used in virtually all fields. Whether it is for the analysis of polling data for politics or the analysis of air and water quality for the environment or the analysis of cancer data for smokers, statistics plays a key role in gaining fact-based knowledge. Specifically when it comes to the example of our government, where the decisions being made impact all citizens of the country, statisticians are playing determinative roles in the Food and Drug Administration, the National Institutes of Health, the Census Bureau, the Bureau of Labor Statistics, the United States Department of Agriculture, the National Center for Atmospheric Research, the National Institute for Environmental Research, the National Institute for Educational and Health Statistics, etc.

What role do statistics play in the public debate about an issue, such as what governments should do to deal with climate change?

Statistics play a major role in the public debate about various issues, often controversial issues. Fact-based data gathered through surveys and opinion polls often determine how much support the government has toward a specific point of view or agenda. Statistics can be used to model and track climate change through scientific data. Similarly, statistics can be used to determine how people feel about certain scientific conclusions. Statistics can be used to both refute and support specific claims. Many debates are resolved by using appropriately designed models to demonstrate a point. Many agencies, e.g. Gallup and Westat, are taking polls on major issues from the public. The Roper Center at UConn is a major archive that maintains a huge database of public opinion about science, economics, and government matters. The government constantly turns to statistics to gauge the way to make policy.

What the government should or wants to do in regards to climate change is based both on public opinion statistics as well as various fact-based expert opinions from scientists. Climatologists, for example, often extensively use statistics in risk analysis and extreme event modeling to factually measure climate change. They draw conclusions based on the detailed statistical analysis.

Why should students who are considering a major pick statistics?

The two primary reasons would be a love for science and, arguably more important, the need for a fruitful career. The job market in statistics is flourishing at a rapid pace. One North American job website recently published its 2011 job ratings, in which it ranked “statistician” as the fourth best job of 2011. Statistics as a field is extremely popular in all sectors, and its popularity and the need for statisticians will only grow. A statistician’s talent is needed virtually everywhere, and most students should have no problem finding a job post-college. A statistics major has the choice to join various sectors, as I mentioned before, ranging from sports to the government. With a BS or BA in statistics, students can also choose to pursue higher education in specialty fields such as biostatistics, bioinformatics, computational statistics, actuarial science, financial statistics, etc.

What are some of the career paths for statisticians now that didn’t exist a few years ago? And what types of jobs do your graduates get?

There are many career paths for today’s statisticians. Many of them evolved due to the cutting-edge development of computers that didn’t exist in the past. These include but are certainly not limited to opportunities in pharmaceutical companies, market research firms, biotech companies, insurance industries, and the government. The job prospects are endless and yet to be fully determined.


Credit of the article given to Cindy Weiss, University of Connecticut