Punctuation in literature of major languages is intriguingly mathematical

A moment’s hesitation… Yes, a full stop here—but shouldn’t there be a comma there? Or would a hyphen be better? Punctuation can be a nuisance; it is often simply neglected. Wrong! The most recent statistical analyses paint a different picture: punctuation seems to “grow out” of the foundations shared by all the (examined) languages, and its features are far from trivial.

To many, punctuation appears as a necessary evil, to be happily ignored whenever possible. Recent analyses of literature written in the world’s current major languages require us to alter this opinion. In fact, the same statistical features of punctuation usage patterns have been observed in several hundred works written in seven, mainly Western, languages.

Punctuation, all ten representatives of which can be found in the introduction to this text, turns out to be a universal and indispensable complement to the mathematical perfection of every language studied. Such a remarkable conclusion about the role of mere commas, exclamation marks or full stops comes from an article by scientists from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow, published in the journal Chaos, Solitons & Fractals.

“The present analyses are an extension of our earlier results on the multifractal features of sentence length variation in works of world literature. After all, what is sentence length? It is nothing more than the distance to the next specific punctuation mark—the full stop. So now we have taken all punctuation marks under a statistical magnifying glass, and we have also looked at what happens to punctuation during translation,” says Prof. Stanislaw Drozdz (IFJ PAN, Cracow University of Technology).

Two sets of texts were studied. The main analyses concerning punctuation within each language were carried out on 240 highly popular literary works written in seven major Western languages: English (44), German (34), French (32), Italian (32), Spanish (32), Polish (34) and Russian (32). This particular selection of languages was based on two criteria: the researchers required that no fewer than 50 million people speak the language in question, and that works written in it should have been awarded no fewer than five Nobel Prizes for Literature.

In addition, for the statistical validity of the research results, each book had to contain at least 1,500 word sequences separated by punctuation marks. A separate collection was prepared to observe the stability of punctuation in translation. It contained 14 works, each of which was available in each of the languages studied (two of the 98 language versions, however, were omitted due to their unavailability).

Between them, the two collections included works by such writers as Conrad, Dickens, Doyle, Hemingway, Kipling, Orwell, Salinger, Woolf, Grass, Kafka, Mann, Nietzsche, Goethe, La Fayette, Dumas, Hugo, Proust, Verne, Eco, Cervantes, Sienkiewicz and Reymont.

The attention of the Cracow researchers was primarily drawn to the statistical distribution of the distance between consecutive punctuation marks. It soon became evident that in all the languages studied, it was best described by one of the precisely defined variants of the Weibull distribution.

A curve of this type has a characteristic shape: it rises rapidly at first and then, after reaching its maximum, falls away more slowly, approaching zero with ever-decreasing dynamics. The Weibull distribution is usually used to describe survival phenomena (e.g. population as a function of age), but also various physical processes, such as increasing fatigue of materials.
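The general recipe is easy to sketch. The snippet below is only an illustration of the approach, not the authors' code: the punctuation set, the input file name and the use of scipy's weibull_min fit are all assumptions.

```python
# Illustrative sketch: count the words between consecutive punctuation marks
# in a text and fit a two-parameter Weibull distribution to those counts.
import re
from scipy import stats

def inter_punctuation_lengths(text, marks=".,;:!?()-\u2026\u2014"):
    pieces = re.split("[" + re.escape(marks) + "]", text)
    return [len(p.split()) for p in pieces if p.strip()]

text = open("novel.txt", encoding="utf-8").read()   # hypothetical input file
lengths = inter_punctuation_lengths(text)

shape, _, scale = stats.weibull_min.fit(lengths, floc=0)   # location fixed at zero
print(f"Weibull shape = {shape:.2f}, scale = {scale:.2f}")
```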

“The concordance of the distribution of word sequence lengths between punctuation marks with the functional form of the Weibull distribution was better the more types of punctuation marks we included in the analyses; for all marks the concordance turned out to be almost complete. At the same time, some differences in the distributions are apparent between the different languages, but these merely amount to the selection of slightly different values for the distribution parameters, specific to the language in question. Punctuation thus seems to be an integral part of all the languages studied,” notes Prof. Drozdz.

After a moment he adds with some amusement: “…and since the Weibull distribution is concerned with phenomena such as survival, it can be said with not too much tongue-in-cheek that punctuation has in its nature a literally embedded struggle for survival.”

The next stage of the analyses consisted of determining the hazard function. In the case of punctuation, it describes how the conditional probability of success—i.e., the probability of the next punctuation mark—changes if no such mark has yet appeared in the analysed sequence.
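For a Weibull distribution this hazard function has a simple closed form, h(t) = (k/λ)(t/λ)^(k-1), so it can be computed directly; the shape and scale values below are illustrative, not the ones measured in the study.

```python
# Hazard function h(t) = f(t) / (1 - F(t)): the chance that the next punctuation
# mark falls at word t, given that none has appeared yet.
import numpy as np
from scipy import stats

k, lam = 1.3, 8.0                        # assumed shape and scale, for illustration
t = np.arange(1, 31)
dist = stats.weibull_min(k, scale=lam)
hazard = dist.pdf(t) / dist.sf(t)        # sf is the survival function 1 - F
print(np.round(hazard, 3))
```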

The results here are clear: the language characterized by the lowest propensity to use punctuation is English, with Spanish not far behind; Slavic languages proved to be the most punctuation-dependent. The hazard function curves for punctuation marks in six of the seven languages studied followed a similar pattern; they differed mainly in their vertical shift.

German proved to be the exception. Its hazard function is the only one that intersects most of the curves constructed for the other languages. German punctuation thus seems to combine the punctuation features of many languages, making it a kind of Esperanto punctuation.

The above observation dovetails with the next analysis, which examined whether the punctuation features of original literary works survive in their translations. As expected, the language that carries punctuation over most faithfully from the original to the translation turned out to be German.

In spoken communication, pauses can be justified by human physiology, such as the need to catch one’s breath or to take a moment to structure what is to be said next in one’s mind. And in written communication?

“Creating a sentence by adding one word after another while ensuring that the message is clear and unambiguous is a bit like tightening the string of a bow: it is easy at first, but becomes more demanding with each passing moment. If there are no ordering elements in the text (and this is the role of punctuation), the difficulty of interpretation increases as the string of words lengthens. A bow that is too tight can break, and a sentence that is too long can become unintelligible. Therefore, the author is faced with the necessity of ‘freeing the arrow’, i.e. closing a passage of text with some sort of punctuation mark. This observation applies to all the languages analysed, so we are dealing with what could be called a linguistic law,” states Dr. Tomasz Stanisz (IFJ PAN), first author of the article in question.

Finally, it is worth noting that the invention of punctuation is relatively recent—punctuation marks did not occur at all in old texts. The emergence of optimal punctuation patterns in modern written languages can therefore be interpreted as the result of their evolutionary advancement. However, the excessive need for punctuation is not necessarily a sign of such sophistication.

English and Spanish, contemporarily the most universal languages, appear, in the light of the above studies, to be less strict about the frequency of punctuation use. It is likely that these languages are so formalized in terms of sentence construction that there is less room for ambiguity that would need to be resolved with punctuation marks.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to The Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences


Theoretical biologists test two modes of social reasoning and find surprising truths in simplicity

Imagine a small village where every action someone takes, good or bad, is quietly followed by ever-attentive, nosy neighbours. An individual’s reputation is built through these actions and observations, which determines how others will treat them. They help a neighbour and are likely to receive help from others in return; they turn their back on a neighbour and find themselves isolated. But what happens when people make mistakes, when good deeds go unnoticed, or errors lead to unjust blame?

Here, the study of behaviour intersects with Bayesian and abductive reasoning, says Erol Akçay, a theoretical biologist at the University of Pennsylvania’s School of Arts & Sciences.

Bayesian reasoning refers to a method for assessing probability, in which individuals use prior knowledge paired with new evidence to update their beliefs or estimates about a certain condition, in this case the reputation of other villagers. Abductive reasoning, by contrast, involves a simple “what you see is what you get” approach to rationalizing and making a decision, Akçay says.

In two papers, one published in PLoS Computational Biology and the other in the Journal of Theoretical Biology, researchers from the Department of Biology explored how these reasoning strategies can be effectively modeled and applied to enhance biologists’ understanding of social dynamics.

Making the educated guess

The PLoS Computational Biology paper investigates how Bayesian statistical methods can be used to weigh the likelihood of errors and align the judgments of actors within a social network with a more nuanced understanding of reputation. “It’s something we may commonly do when we’re trying to offer up an explanation for some phenomena with no obvious, straightforward, or intuitive solution,” Akçay says.

Bryce Morsky, a co-author on both papers and now an assistant professor at Florida State University, began the work during his postdoctoral research in Akçay’s lab. He says he initially believed that accounting for errors in judgment could substantially enhance the reward-and-punishment system that underpins cooperation, and he expected that incorporating a better understanding of these errors into the model would promote more effective cooperation.

“Essentially, the hypothesis was that reducing errors would lead to a more accurate assessment of reputations, which would in turn foster cooperation,” he says.

The team developed a mathematical model to simulate Bayesian reasoning. It involved a game-theoretical model where individuals interact within a framework of donation-based encounters. Other individuals in the simulation assess the reputations of actors based on their actions, influenced by several predefined social norms.

In the context of the village, this means judging each villager by their actions—whether helping another (good) or failing to do so (bad)—but also taking into account their historical reputation and the possibility that you didn’t assess the situation correctly.

“So, for example, if you observe someone behaving badly, but you thought they were good before, you keep an open mind that you perhaps didn’t see correctly. This allows for a nuanced calculation of reputation updates,” Morsky says. He and colleagues use this model to see how errors and reasoning would affect the villagers’ perception and social dynamics.
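That “open mind” update can be written as a one-step Bayesian calculation. The sketch below is a toy version, not the paper's model; the error rate and behaviour probabilities are invented for illustration.

```python
# Toy Bayesian reputation update: an observer who thinks a villager is probably
# good apparently sees them refuse to help, but knows observations fail with
# probability eps. All numbers are illustrative.
def update_reputation(prior_good, eps=0.1, p_bad_act_if_good=0.05, p_bad_act_if_bad=0.9):
    # Probability of *appearing* to act badly, folding in observation error.
    seen_bad_if_good = (1 - eps) * p_bad_act_if_good + eps * (1 - p_bad_act_if_good)
    seen_bad_if_bad = (1 - eps) * p_bad_act_if_bad + eps * (1 - p_bad_act_if_bad)
    return prior_good * seen_bad_if_good / (
        prior_good * seen_bad_if_good + (1 - prior_good) * seen_bad_if_bad)

print(update_reputation(prior_good=0.8))   # belief drops to about 0.41, not to zero
```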

The five key social norms the study explores are: Scoring, Shunning, Simple Standing, Staying, and Stern Judging; each affects the reputation and subsequent behaviour of individuals differently, altering the evolutionary outcomes of cooperative strategies.

“In some scenarios, particularly under Scoring, Bayesian reasoning improved cooperation,” Morsky says. “But under other norms, like Stern Judging, it generally resulted in less cooperation due to stricter judgment criteria.”

Morsky explains that under Scoring a simple rule is applied: It is good to cooperate (give) and bad to defect (not give), regardless of the recipient’s reputation. Under Stern Judging, by contrast, not only are the actions of individuals considered, but their decisions are also critically evaluated based on the reputation of the recipient.

In the context of the nosy-neighbours scenario, if a villager decides to help another, this action is noted positively under Scoring, regardless of who receives the help or their standing in the village. Conversely, under Stern Judging, if a villager chooses to help someone with a bad reputation, it is noted negatively, the researchers say.
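Read this way, the two norms reduce to tiny judgment rules. The functions below are a simplified reading of that description, not the paper's formal definitions.

```python
# Simplified judgment rules for two of the five norms.
def scoring(helped: bool, recipient_good: bool) -> bool:
    # Scoring: giving is good, not giving is bad, whoever the recipient is.
    return helped

def stern_judging(helped: bool, recipient_good: bool) -> bool:
    # Stern Judging: helping the good or refusing the bad is good;
    # helping the bad or refusing the good is bad.
    return helped == recipient_good

print(scoring(helped=True, recipient_good=False))        # True: judged good
print(stern_judging(helped=True, recipient_good=False))  # False: judged bad
```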

He adds that lack of cooperation was particularly evident in norms where Bayesian reasoning led to less tolerance for errors, which could exacerbate disagreements about reputations instead of resolving them. This, coupled with the knowledge that humans do not weigh all the relevant information prior to deciding who to work with, prompted Akçay and Morsky to investigate other modes of reasoning.

More than just a hunch

While working in Akçay’s lab, Morsky recruited Neel Pandula, then a sophomore in high school. “We met through the Penn Laboratory Experience in the Natural Sciences program,” Morsky says. “In light of the Bayesian reasoning model, Neel proposed abductive reasoning as another approach to modeling reasoning, and so we got to writing that paper for the Journal of Theoretical Biology, which he became first author of.”

Pandula, now a first-year student in the College of Arts and Sciences, explains that he and Morsky used Dempster-Shafer Theory—a probabilistic framework to infer best explanations—to form the basis of their approach.

“What’s key here is that Dempster-Shafer Theory allows for a bit of flexibility in handling uncertainty and allows for integrating new evidence into existing belief systems without fully committing to a single hypothesis unless the evidence is strong,” Pandula says.

For instance, the researchers explain, in a village, seeing a good person help another good person aligns with social norms and is readily accepted by observers. However, if a villager known as bad is seen helping a good person, it contradicts these norms, leading observers to question the reputations involved or the accuracy of their observation. Then they use the rules of abductive reasoning, specifically the Dempster-Shafer theory, considering error rates and typical behaviours to determine the most likely truth behind the unexpected action.
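Dempster-Shafer theory combines such pieces of evidence with Dempster's rule of combination. The sketch below applies the rule to a two-hypothesis frame, with an "either" mass standing for uncertainty; the numbers and the mapping to reputations are a simplification for illustration, not the paper's exact model.

```python
# Dempster's rule of combination over the frame {good, bad}.
def combine(m1, m2):
    labels = ("good", "bad", "either")
    combined = {h: 0.0 for h in labels}
    conflict = 0.0
    for a in labels:
        for b in labels:
            mass = m1[a] * m2[b]
            if a == "either":
                combined[b] += mass      # intersection is just b
            elif b == "either":
                combined[a] += mass      # intersection is just a
            elif a == b:
                combined[a] += mass
            else:
                conflict += mass         # contradictory evidence
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

prior = {"good": 0.6, "bad": 0.1, "either": 0.3}       # existing belief
sighting = {"good": 0.1, "bad": 0.7, "either": 0.2}    # noisy view of a "bad" act
print(combine(prior, sighting))
```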

The team anticipated that abductive reasoning would handle errors in reputation assessments more effectively, especially in public settings in which individuals may be pressured one way or another, resulting in discrepancies and errors. Under Scoring and the other norms, they found that abductive reasoning could better foster cooperation than Bayesian reasoning in public settings.

Akçay says that it came as a bit of a surprise to see that, in navigating social networks, such a simple, cognitively “cheap, lazy” reasoning mechanism proves this effective at dealing with the challenges associated with indirect reciprocity.

Morsky notes that in both models the researchers chose not to factor in any cost of a cognitive burden. “You’d hope that performing a demanding task like remembering which individuals did what and using that to inform you on what they’re likely to do next would yield some positive, prosocial outcome. Yet even if you make this effort costless, under Bayesian reasoning, it generally undermines cooperation.”

As a follow-up, the researchers are interested in exploring how low-cost reasoning methods, like abductive reasoning, can be evolutionarily favoured in larger, more complex social circles. They are also interested in applying these reasoning methods to other social systems.

 

 

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Nathi Magubane, University of Pennsylvania

 


Mathematicians Have Found a New Way to Multiply Two Numbers Together

It’s a bit more complicated than this

Forget your times tables – mathematicians have found a new, faster way to multiply two numbers together. The method, which works only for whole numbers, is a landmark result in computer science. “This is big news,” says Joshua Cooper at the University of South Carolina.

To understand the new technique, which was devised by David Harvey at the University of New South Wales, Australia, and Joris van der Hoeven at the Ecole Polytechnique near Paris, France, it helps to think back to the longhand multiplication you learned at school.

We write down two numbers, one on top of the other, and then painstakingly multiply each digit of one by each digit of the other, before adding all the results together. “This is an ancient algorithm,” says Cooper.

If your two numbers each have n digits, this way of multiplying will require roughly n^2 individual calculations. “The question is, can you do better?” says Cooper.

Lots of logs

Starting in the 1960s, mathematicians began to prove that they could. First Anatoly Karatsuba found an algorithm that could turn out an answer in no more than n^1.58 steps, and in 1971, Arnold Schönhage and Volker Strassen found a way to peg the number of steps to the complicated expression n*(log(n))*log(log(n)) – here “log” is short for logarithm.
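Karatsuba's trick is simple enough to sketch: split each number in half and trade four sub-multiplications for three. The snippet below is a bare-bones illustration, not how production libraries (or the new algorithm) actually work.

```python
# Bare-bones Karatsuba multiplication: three recursive multiplications instead
# of four is what brings the cost down to roughly n^1.58 digit operations.
def karatsuba(x, y):
    if x < 10 or y < 10:                 # single digits: multiply directly
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    xh, xl = divmod(x, 10 ** m)          # split each number at digit m
    yh, yl = divmod(y, 10 ** m)
    low = karatsuba(xl, yl)
    high = karatsuba(xh, yh)
    mid = karatsuba(xl + xh, yl + yh) - low - high
    return high * 10 ** (2 * m) + mid * 10 ** m + low

print(karatsuba(1234, 5678), 1234 * 5678)   # both print 7006652
```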

These advances had a major impact on computing. Whereas a computer using the longhand multiplication method would take about six months to multiply two billion-digit numbers together, says Harvey, the Schönhage-Strassen algorithm can do it in 26 seconds.

The landmark 1971 paper also suggested a possible improvement, a tantalising prediction that multiplication might one day be possible in no more than n*log(n) steps. Now Harvey and van der Hoeven appear to have proved this is the case. “It finally appears to be possible,” says Cooper. “It passes the smell test.”

“If the result is correct, it’s a major achievement in computational complexity theory,” says Fredrik Johansson at INRIA, the French research institute for digital sciences, in Bordeaux. “The new ideas in this work are likely to inspire further research and could lead to practical improvements down the road.”

Cooper also praises the originality of the research, although stresses the complexity of the mathematics involved. “You think, jeez, I’m just multiplying two integers, how complicated can it get?” says Cooper. “But boy, it gets complicated.”

So, will this make calculating your tax returns any easier? “For human beings working with pencil and paper, absolutely not,” says Harvey. Indeed, their version of the proof only works for numbers with more than 10 to the power of 200 trillion trillion trillion digits. “The word ‘astronomical’ falls comically short in trying to describe this number,” says Harvey.

While future improvements to the algorithm may extend the proof to more humdrum numbers only a few trillion digits long, Cooper thinks its real value lies elsewhere. From a theoretical perspective, he says, this work allows programmers to provide a definitive guarantee of how long a certain algorithm will take. “We are optimistic that our new paper will allow us to achieve further practical speed-ups,” says van der Hoeven.

Harvey thinks this may well be the end of the story, with no future algorithm capable of beating n*log(n). “I would be extremely surprised if this turned out to be wrong,” he says, “but stranger things have happened.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Gilead Amit*


‘Butterfly chaos effect’ discovered in swarms and herds of animals

Researchers at the Universidad Carlos III de Madrid (UC3M) and the Universidad Complutense de Madrid (UCM) have discovered a phase shift between chaotic states that can appear in herds of animals and, in particular, in swarms of insects. This advance may help to better understand their behaviour or be applied to the study of the movement of cells or tumors.

A phase shift occurs when the conditions of a system change drastically, for example, when water changes from a liquid to a solid state when it freezes. In this research, recently published in the journal Physical Review E, this group of mathematicians has found such a phenomenon in swarms. Related research is also available on the arXiv preprint server.

“The insects in the swarm stay in a limited volume, even if they’re in a park or an open space. To explain this, we assume that there is a harmonic potential, a kind of recuperative force that confines them (like that of a spring that tries to return to its resting position when we stretch or contract it),” explains one of the study’s authors, Luis L. Bonilla, director of UC3M’s Gregorio Millán Barbany Institute.

This confinement of the insects responds to a constant of proportionality between force and displacement. Researchers have found that for low confinement values, the movement of the insects in the swarm is chaotic (their movements change a lot if the initial conditions are changed). In this context, the phase shift occurs when the swarm splits into several swarms that are, however, closely related to each other, because there are insects moving from one to another.
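A heavily simplified picture of such a model is sketched below: self-propelled particles that crudely align with the group while a harmonic force pulls them back towards the centre. Every choice here (the alignment rule, the parameter values) is an assumption made for illustration, not the authors' equations.

```python
# Toy swarm: self-propulsion, crude alignment, and harmonic confinement -k * x.
import numpy as np

rng = np.random.default_rng(0)
N, k, speed, noise, dt = 200, 0.05, 1.0, 0.3, 0.1     # illustrative parameters
pos = rng.normal(size=(N, 2))
angles = rng.uniform(0, 2 * np.pi, N)

for _ in range(2000):
    vel = speed * np.c_[np.cos(angles), np.sin(angles)]
    pos += (vel - k * pos) * dt                        # propulsion + confinement
    mean_angle = np.arctan2(np.sin(angles).mean(), np.cos(angles).mean())
    angles = mean_angle + noise * rng.normal(size=N)   # noisy alignment with the group

print("mean distance from centre:", round(float(np.linalg.norm(pos, axis=1).mean()), 2))
```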

At the critical line between phases of this shift, the distance between two insects in the swarm that are influenced by each other is proportional to the size of the swarm, even if the number of insects in the swarm grows indefinitely. This is called “scale-free chaos” and had not been observed until now, according to the researchers.

“As the number of insects increases, the critical line moves towards zero confinement. What happens is that the maximum distance between two insects that still feel each other’s influence is proportional to the size of the swarm. It doesn’t matter how many insects we put in it. And that represents an absolute novelty that we have discovered,” explains Bonilla.

Specifically, what these mathematicians predict through numerical simulations is that certain swarms of insects (specifically a class of small flies) have scale-free chaotic behaviour, which translates into certain power laws with exponents similar to those measured in nature. They have also found a simplified mean-field theory that corroborates the scale-free chaos phase shift. “It would be good to look for and find the phase shift between chaotic phases that we predict, either in observations in nature or in controlled laboratory studies,” says another of the authors of the research, UCM mathematician Rafael González Albaladejo, who is also linked to UC3M’s Gregorio Millán Barbany Institute.

The formation of herds is one of the manifestations of so-called “active matter,” made up of something like self-propelled individuals that form a whole, the researchers explain. It can be a swarm of insects, a flock of sheep, a flock of birds, a school of fish, but also bacteria in motion, melanocytes (the cells that distribute pigments in the skin) or artificial systems such as periodically shaken irregular grains or seeds. “Herd formation mechanisms play a role in some of these systems, so the results we have obtained can be linked to biology, to the study of cells, and beyond that, to the study of tumors and other diseases,” adds Albaladejo.

How do so many animals move in unison? These researchers explain that each individual only senses its neighbours and moves accordingly, even though it has no perspective on the movement of the whole herd. And depending on whether they use sight, hearing or the vibrations of the fluid in which they are immersed, the concept of neighbour can change quite a bit.

Sheep moving together see and sense those around them, while birds in a flock see their nearest neighbours, even if they are quite far apart. “Moving accordingly may mean that they move in the same direction as their neighbours (the norm) or they may adopt different strategies depending on the situation. For example, if a crowd is trying to get out of a crowded pen with more than one gate, there are times when not following neighbours is advantageous,” they explain.

It has taken the mathematicians about two years to carry out this research work. Initially, they set out to explain experiments by studying the conventional phase shift between a crowd of insects that fill a space with constant density and become ordered when passing a critical value of the control parameter (e.g., by decreasing the noise). But then they decided to add a harmonic potential to confine the swarm and explore what happens when the attractive force between individuals decreases.

“We discovered many periodic, quasi-periodic and finally chaotic states for a fixed number of insects that we increased. The surprising thing is the transition between chaotic states that we didn’t know or assume existed, and we were able to find the correct arguments and tests to support their existence,” says another of the study’s authors, Ana Carpio, from UCM’s Department of Mathematical Analysis and Applied Mathematics, who points out that there is still a lot to be done based on this work.

“From experimentally seeking confirmation of our predictions and better adapting the model to experimental observations, to carrying out theoretical and mathematical research that goes beyond our numerical simulations,” she concludes.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Carlos III University of Madrid


Calls For a Posthumous Pardon … But Who was Alan Turing?

Momentum is gathering behind calls to pardon the father of computer science. BinaryApe

You may have read the British Government is being petitioned to grant a posthumous pardon to one of the world’s greatest mathematicians and most successful codebreakers, Alan Turing. You may also have read that Turing was convicted of gross indecency in 1952 and died tragically two years later.

But who, exactly, was he?

Born in London in 1912, Turing helped lay the foundations of the “information age” we live in.

He did his first degree at King’s College, Cambridge, and then became a Fellow there. His first big contribution was his development of a mathematical model of computation in 1936. This became known as the Turing Machine.

It was not the first time a computer had been envisaged: that distinction belonged to Charles Babbage, a 19th century mathematician who designed a computer based on mechanical technology and built parts of it (some of which may be seen at the Science Museum in London or Powerhouse Museum in Sydney, for example).

But Babbage’s design was necessarily complicated, as he aimed for a working device using specific technology. Turing’s design was independent of any particular technology, and was not intended to be built.

The now iconic shot of Alan Turing.

It was very simple, and would be very inefficient and impractical as a device for doing real computations. But its simplicity meant it could be used to do mathematical reasoning about computation.

Turing used his abstract machines to investigate what kinds of things could be computed. He found some tasks which, although perfectly well defined and mathematically precise, are uncomputable. The first of these is known as the halting problem, which asks, for any given computation, whether it will ever stop. Turing showed that this was uncomputable: there is no systematic method that always gives the right answer.

So, if you have ever wanted a program that can run on your laptop and test all your other software to determine which of them might cause your laptop to “hang” or get stuck in a never-ending loop, the bad news is such a comprehensive testing program cannot be written.
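The contradiction behind that result can be sketched in a few lines of code; the halts function below is hypothetical by construction, since Turing's point is precisely that it cannot exist.

```python
# Sketch of the diagonal argument; the hypothetical `halts` cannot actually exist.
def halts(program, data):
    """Hypothetical oracle: return True if program(data) eventually stops."""
    ...

def paradox(program):
    if halts(program, program):   # if the oracle says "it halts"...
        while True:               # ...then loop forever
            pass
    return                        # otherwise halt immediately

# Does paradox(paradox) halt? If halts says yes, it loops forever; if halts
# says no, it stops at once. Either answer is wrong, so `halts` cannot be written.
```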

Uncomputability is not confined to questions about the behaviour of computer programs. Since Turing’s work, many problems in mainstream mathematics have been found to be uncomputable. For example, the Russian mathematician and computer scientist, Yuri Matiyasevich, showed in 1970 that determining if a polynomial equation with several variables has a solution consisting only of whole numbers is also an uncomputable problem.

Turing machines have been used to define measures of the efficiency of computations. They underpin formal statements of the P vs NP problem, one of the Millennium Prize problems.

Another important feature of Turing’s model is its capacity to treat programs as data. This means the programs that tell computers what to do can themselves, after being represented in symbolic form, be given as input to other programs. Turing Machines that can take any program as input, and run that program on some input data, are called Universal Turing Machines.

These are really conceptual precursors of today’s computers, which are stored-program computers, in that they can treat programs as data in this sense. The oldest surviving intact computer in the world, in this most complete sense of the term, is CSIRAC at Melbourne Museum.

 

CSIRAC was Australia’s first digital computer, and the fourth “stored program” computer in the world. Melbourne Museum

It seems a mathematical model of computation was an idea whose time had come. In 1936, the year of Turing’s result, another model of computation was published by Alonzo Church of Princeton University. Although Turing and Church took quite different routes, they ended up at the same place, in that the two models give exactly the same notion of computability.

In other words, the classification of tasks into computable and uncomputable is independent of which of these two models is used.

Other models of computation have been proposed, but mostly they seem to lead to the same view of what is and is not computable. The Church-Turing Thesis states that this class of computable functions does indeed capture exactly those things which can be computed in principle (say by a human with unlimited time, paper and ink, who works methodically and makes no mistakes).

It implies Turing Machines give a faithful mathematical model of computation. This is not a formal mathematical result, but rather a working assumption which is now widely accepted.

Turing went to Princeton and completed his PhD under Church, returning to Britain in 1938.

Early in the Second World War, Turing joined the British codebreaking operation at Bletchley Park, north-west of London. He became one of its most valuable assets. He was known by the nickname “Prof” and was described by colleague Jack Good as “a deep rather than a fast thinker”.

One of the famous Enigma machines decrypted at Bletchley Park. Keir David

At the time, Germany was using an encryption device known as Enigma for much of its communications. This was widely regarded as completely secure. The British had already obtained an Enigma machine, from the Poles, and building on their work, Turing and colleague Gordon Welchman worked out how the Enigma-encrypted messages collected by the British could be decrypted.

Turing designed a machine called the Bombe, named after a Polish ice cream, which worked by testing large numbers of combinations of Enigma machine configurations, in order to help decrypt secret messages. These messages yielded information of incalculable value to the British. Winston Churchill described the Bletchley Park codebreakers as “geese that laid the golden eggs but never cackled”.

In 1945, after the war, Turing joined the National Physical Laboratory (NPL), where he wrote a report on how to construct an electronic computer, this time a general-purpose one unlike the machines dedicated to cryptanalysis which he helped to design at Bletchley Park.

This report led to the construction of an early computer (Pilot ACE) at NPL in 1950. By then, Turing had already moved on to Manchester University, where he worked on the first general-purpose stored-program computer in the world, the Manchester “Baby”.

The remade Bombe machine at Bletchley Park, England, features miles of circuitry. Keir David

In their early days, computers were often called “electronic brains”. Turing began to consider whether a computer could be programmed to simulate human intelligence, which remains a major research challenge today and helped to initiate the field of artificial intelligence.

A fundamental issue in such research is: how do you know if you have succeeded? What test can you apply to a program to determine if it has intelligence? Turing proposed that a program be deemed intelligent if, in its interaction with a human, the human is unable to detect whether he or she is communicating with another human or a computer program. (The test requires a controlled setting, for example where all communication with the human tester is by typed text.)

His paper on this topic – Computing Machinery and Intelligence – was published in 1950. The artificial intelligence community holds regular competitions to see how good researchers’ programs are at the Turing test.

The honours Turing received during his lifetime included an OBE in 1945 and becoming a Fellow of the Royal Society in 1951.

His wartime contributions remained secret throughout his life and for many years afterwards.

In 1952 he was arrested for homosexuality, which was illegal in Britain at the time. Turing was found guilty and required to undergo “treatment” with drugs. This conviction also meant he lost his security clearance.

In 1954 he ingested some cyanide, probably via an apple, and died. An inquest classified his death as suicide, and this is generally accepted today. But some at the time, including his mother, contended his death was an accidental consequence of poor handling of chemicals during some experiments he was conducting at home in his spare time.


The irony of Turing losing his security clearance – after the advantage his work had given Britain in the war, in extraordinary secrecy – is clear.

The magnitude of what was done to him has become increasingly plain over time, helped by greater availability of information about the work at Bletchley Park and changing social attitudes to homosexuality.

Next year, 2012, will be the centenary of Turing’s birth – with events planned globally to celebrate the man and his contribution. As this year approached, a movement developed to recognise Turing’s contribution and atone for what was done to him. In 2009, British Prime Minister, Gordon Brown, responding to a petition, issued a formal apology on behalf of the British government for the way Turing was treated.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Graham Farr*

 


Millennium Prize: the Riemann Hypothesis

What will be the next number in this sequence?

“At school I was never really good at maths” is an all too common reaction when mathematicians name their profession.

In view of most people’s perceived lack of mathematical talent, it may come as somewhat of a surprise that a recent study carried out at Johns Hopkins University has shown that six-month-old babies already have a clear sense of numbers. They can count, or at least approximate, the number of happy faces shown on a computer screen.

By the time they start school, at around the age of five, most children are true masters of counting, and many will proudly announce when for the first time they have counted up to 100 or 1000. Children also intuitively understand the regular nature of counting; by adding sufficiently many ones to a starting value of one they know they will eventually reach their own age, that of their parents, grandparents, 2011, and so on.

Counting is child’s play. Photography By Shaeree

From counting to more general addition of whole numbers is only a small step—again within children’s almost-immediate grasp. After all, counting is the art of adding one, and once that is mastered it takes relatively little effort to work out that 3 + 4 = 7. Indeed, the first few times children attempt addition they usually receive help from their fingers or toes, effectively reducing the problem to that of counting:

3 + 4 = (1 + 1 + 1) + (1 + 1 + 1 + 1) = 7.

For most children, the sense of joy and achievement quickly ends when multiplication enters the picture. In theory it too can be understood through counting: 3 x 6 is three lots of six apples, which can be counted on fingers and toes to give 18 apples.

In practice, however, we master it through long hours spent rote-learning multiplication tables—perhaps not among our favourite primary school memories.

But at this point, we ask the reader to consider the possibility—in fact, the certainty—that multiplication is far from boring and uninspiring, but that it is intrinsically linked with some of mathematics’ deepest, most enduring and beautiful mysteries. And while a great many people may claim to be “not very good at maths” they are, in fact, equipped to understand some very difficult mathematical questions.

Primes

Let’s move towards these questions by going back to addition and those dreaded multiplication tables. Just like the earlier example of 7, we know that every whole number can be constructed by adding together sufficiently many ones. Multiplication, on the other hand, is not so well-behaved.

The number 12, for example, can be broken up into smaller pieces, or factors, while the number 11 cannot. More precisely, 12 can be written as the product of two whole numbers in multiple ways: 1 x 12, 2 x 6 and 3 x 4, but 11 can only ever be written as the product 1 x 11. Numbers such as 12 are called composite, while those that refuse to be factored are known as prime numbers or simply primes. For reasons that will soon become clear, 1 is not considered a prime, so that the first five prime numbers are 2, 3, 5, 7 and 11.

Just as the number 1 is the atomic unit of whole-number addition, prime numbers are the atoms of multiplication. According to the Fundamental Theorem of Arithmetic, any whole number greater than 1 can be written as a product of primes in exactly one way. For example: 4 = 2 x 2, 12 = 2 x 2 x 3, 2011 = 2011 and

13079109366950 = 2 x 5 x 5 x 11 x 11 x 11 x 37 x 223 x 23819,

where we always write the factors from smallest to largest. If, rather foolishly, we were to add 1 to the list of prime numbers, this would cause the downfall of the Fundamental Theorem of Arithmetic:

4 = 2 x 2 = 1 x 2 x 2 = 1 x 1 x 2 x 2 = …
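Factorisations like the one above are easy to check with a few lines of trial division (fine for numbers of this size, hopeless for the enormous primes discussed further on).

```python
# Trial division: peel off prime factors from smallest to largest.
def prime_factors(n):
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)                  # whatever remains is itself prime
    return factors

print(prime_factors(12))                   # [2, 2, 3]
print(prime_factors(13079109366950))       # [2, 5, 5, 11, 11, 11, 37, 223, 23819]
```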

In the above examples we have already seen several prime numbers, and a natural question is to ask for the total number of primes. From what we have learnt about addition with its single atom of 1, it is not unreasonable to expect there are only finitely many prime numbers, so that, just maybe, the 2649th prime number, 23819, could be the largest. Euclid of Alexandria, who lived around 300BC and who also gave us Euclidean Geometry, in fact showed that there are infinitely many primes.

Euclid’s reasoning can be captured in just a single sentence: if the list of primes were finite, then by multiplying them together and adding 1 we would get a new number which is not divisible by any prime on our list—a contradiction.

A few years after Euclid, his compatriot Eratosthenes of Cyrene found a clever way, now known as the Sieve of Eratosthenes, to obtain all primes less than a given number.

For instance, to find all primes less than 100, Eratosthenes would write down a list of all numbers from 2 to 99, cross out all multiples of 2 (but not 2 itself), then all multiples of 3 (but not 3 itself), then all multiples of 5, and so on. After only four steps(!) this would reveal to him the 25 primes

2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89 and 97.
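In code, Eratosthenes' procedure takes only a few lines; a straightforward version:

```python
# Sieve of Eratosthenes: cross out the multiples of each prime, keep the rest.
def sieve(limit):
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit, p):
                is_prime[multiple] = False
    return [n for n in range(limit) if is_prime[n]]

print(sieve(100))        # the 25 primes listed above
print(len(sieve(100)))   # 25
```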

While this might seem very quick, much more sophisticated methods, combined with very powerful computers, are needed to find really large prime numbers. The current world record, established in 2008, is the truly monstrous 2^43112609 – 1, a prime number of approximately 13 million digits.

The quest to tame the primes did not end with the ancient Greeks, and many great mathematicians, such as Pierre de Fermat, Leonhard Euler and Carl Friedrich Gauss studied prime numbers extensively. Despite their best efforts, and those of many mathematicians up to the present day, there are many more questions than answers concerning the primes.

One famous example of an unsolved problem is Goldbach’s Conjecture. In 1742, Christian Goldbach remarked in a letter to Euler that it appeared that every even number greater than 2 could be written as the sum of two primes.

For example, 2012 = 991 + 1021. While computers have confirmed the conjecture holds well beyond the first quintillion (10^18) numbers, there is little hope of a proof of Goldbach’s Conjecture in the foreseeable future.
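Checking the conjecture for any particular even number is easy; proving it for all of them is the hard part. A quick brute-force check of the example above:

```python
# Goldbach check for a single even number.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pairs(n):
    return [(p, n - p) for p in range(2, n // 2 + 1) if is_prime(p) and is_prime(n - p)]

pairs = goldbach_pairs(2012)
print((991, 1021) in pairs)   # True: the decomposition quoted above
print(len(pairs))             # 2012 splits into two primes in many different ways
```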

Another intractable problem is that of breaking very large numbers into their prime factors. If a number is known to be the product of two primes, each about 200 digits long, current supercomputers would take more than the lifetime of the universe to actually find these two prime factors. This time round our inability to do better is in fact a blessing: most secure encryption methods rely heavily on our failure to carry out prime factorisation quickly. The moment someone discovers a fast algorithm to factor large numbers, the world’s financial system will collapse, making the GFC look like child’s play.

To the dismay of many security agencies, mathematicians have also failed to show that fast algorithms are impossible—the possibility of an imminent collapse of world order cannot be entirely ruled out!

Margins of error

For mathematicians, the main prime number challenge is to understand their distribution. Quoting Don Zagier, nobody can predict where the next prime will sprout; they grow like weeds among the whole numbers, seemingly obeying no other law than that of chance. At the same time the prime numbers exhibit stunning regularity: there are laws governing their behaviour, obeyed with almost military precision.

The Prime Number Theorem describes the average distribution of the primes; it was first conjectured by both Gauss and Adrien-Marie Legendre, and then rigorously established independently by Jacques Hadamard and Charles Jean de la Vallée Poussin, a hundred years later in 1896.

The Prime Number Theorem states that the number of primes less than an arbitrarily chosen number n is approximately n divided by ln(n), where ln(n) is the natural logarithm of n. The relative error in this approximation becomes arbitrarily small as n becomes larger and larger.

For example, there are 25 primes less than 100, and 100/ln(100) = 21.7…, which is around 13% short. When n is a million we are up to 78498 primes and since 10^6/ln(10^6) = 72382.4…, we are only 8% short.
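Both comparisons are easy to reproduce with a quick prime count:

```python
# Compare the actual number of primes below n with the n/ln(n) estimate.
from math import log

def count_primes(limit):
    is_prime = bytearray([1]) * limit
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p::p] = bytearray(len(range(p * p, limit, p)))
    return sum(is_prime)

for n in (100, 10 ** 6):
    actual, estimate = count_primes(n), n / log(n)
    print(n, actual, round(estimate, 1), f"{(actual - estimate) / actual:.0%} short")
```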

The Riemann Hypothesis

The Prime Number Theorem does an incredible job describing the distribution of primes, but mathematicians would love to have a better understanding of the relative errors. This leads us to arguably the most famous open problem in mathematics: the Riemann Hypothesis.

Posed by Bernhard Riemann in 1859 in his paper “Ueber die Anzahl der Primzahlen unter einer gegebenen Grösse” (On the number of primes less than a given magnitude), the Riemann Hypothesis tells us how to tighten the Prime Number Theorem, giving us a control of the errors, like the 13% or 8% computed above.

The Riemann Hypothesis does not just “do better” than the Prime Number Theorem—it is generally believed to be “as good as it gets”. That is, we, or far-superior extraterrestrial civilisations, will never be able to predict the distribution of the primes any better than the Riemann Hypothesis does. One can compare it to, say, the ultimate 100 metres world record—a record that, once set, is impossible to ever break.

Finding a proof of the Riemann Hypothesis, and thus becoming record holder for all eternity, is the holy grail of pure mathematics. While the motivation for the Riemann Hypothesis is to understand the behaviour of the primes, the atoms of multiplication, its actual formulation requires higher-level mathematics and is beyond the scope of this article.

In 1900, David Hilbert, the most influential mathematician of his time, posed a now famous list of 23 problems that he hoped would shape the future of mathematics in the 20th century. Very few of Hilbert’s problems other than the Riemann Hypothesis remain open.

Inspired by Hilbert, in 2000 the Clay Mathematics Institute announced a list of seven of the most important open problems in mathematics. For the successful solver of any one of these there awaits not only lasting fame, but also one million US dollars in prize money. Needless to say, the Riemann Hypothesis is one of the “Millennium Prize Problems”.

Hilbert himself remarked: “If I were awoken after having slept for a thousand years, my first question would be: has the Riemann Hypothesis been proven?” Judging by the current rate of progress, Hilbert may well have to sleep a little while longer.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Ole Warnaar*

 


A surprising result for a group’s optimal path to cooperation

What is the best way for a group of individuals to cooperate? This is a longstanding question with roots in game theory, a branch of science which uses mathematical models of how individuals should best strategize for the optimal result.

A simple example is the prisoner’s dilemma: Two people are arrested for an alleged bank robbery. The police take them downtown and place them in individual, isolated interrogation rooms.

The police admit they don’t have enough evidence to convict them both, and give each the same option: if he confesses and his partner does not, they will release the confessor and convict the other of the serious charge of bank robbery. But if one does not confess and the other does, the first will get a lengthy prison sentence and the other will be released. If both confess, they will both be put away for many years. If neither confesses, they will be arraigned on a lesser charge of gun possession.

What should each do to minimize their time in jail? Does an individual stay silent, trusting his partner to do the same and accept a shorter prison sentence? Or does he confess, hoping the other stays silent? But what if the other confesses too? It is an unenviable position.

There is no correct solution to the prisoner’s dilemma. Other similar problems are the game of chicken, where each driver races towards the other, risking a head-on crash, or swerving away at the last minute and risking humiliation—being called “chicken” for a lack of courage. Many other simple games exist.

Now imagine a group—they may be people, or they may be cellular organisms of some sort. What kind of cooperation gives the optimal result, when each individual is connected to some others and pays a cost (money, energy, time) to create a result that benefits all? It’s a given that individuals are selfish and act in their own best interests, but we also know that cooperation can result in a better outcome for all. Will any take the risk, or look out only for themselves?

A long-standing result is that, in a homogeneous network where all individuals have the same number of neighbours, cooperation is favoured if the ratio between the benefit provided by a cooperator and their associated cost paid exceeds the average number of neighbours.
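That rule is often written b/c > k: with four neighbours each, for instance, helping must bring more than four times its cost in benefit before cooperation is favoured. A one-line check:

```python
# Homogeneous-network rule: cooperation is favoured when benefit/cost exceeds
# the average number of neighbours k.
def cooperation_favoured(benefit, cost, k):
    return benefit / cost > k

print(cooperation_favoured(benefit=5, cost=1, k=4))   # True
print(cooperation_favoured(benefit=3, cost=1, k=4))   # False
```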

But people are not homogeneous, they’re heterogeneous: they don’t usually have the same number of links to neighbours as everyone else, nor do they change their strategies at the same rates.

It is also known that allowing each individual to update their strategy at exactly the same time, such as immediately mimicking their neighbour, significantly alters the evolution of cooperation. Previous investigations have reported that pervasive heterogeneous individual connections hinder cooperation when it’s assumed that individuals update their strategies at identical rates.

Now a group of researchers located in China, Canada and the US have found a surprising result: when individuals’ strategy update rates vary inversely with their number of connections, heterogeneous connections outperform homogeneous ones in promoting cooperation. The study is published in the journal Nature Communications.

“How to analyse the quantitative impact of the prevalent heterogeneous network structures on the emergence of group optimal strategies is a long-standing open question that has attracted much attention,” said Aming Li, a co-author and Assistant Professor in Dynamics and Control at Peking University.

Li’s team solved the problem by analytical calculations backed up by computer simulations, to find the fundamental rule for maintaining collective cooperation: “The nodes with substantial connections within the complex system should update their strategies infrequently,” he says. That is, individual strategy update rates should vary inversely with the number of connections they have in the network. In this way, a network with heterogeneous connections between individuals outperforms a network with homogeneous connections in promoting cooperation.
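The reported rule can be illustrated on a toy network: give each node an update rate inversely proportional to its number of connections, so that well-connected hubs change strategy only rarely. The network below is made up for illustration.

```python
# Toy illustration of the rule: strategy-update rate proportional to 1/degree.
neighbours = {
    "hub": ["a", "b", "c", "d", "e", "f"],
    "a": ["hub", "b"], "b": ["hub", "a"], "c": ["hub", "d"],
    "d": ["hub", "c"], "e": ["hub", "f"], "f": ["hub", "e"],
}
update_rate = {node: 1.0 / len(nbrs) for node, nbrs in neighbours.items()}
print(update_rate["hub"], update_rate["a"])   # the hub updates a third as often
```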

The team has also developed an algorithm, which they call OptUpRat, that most efficiently finds the optimal strategy update rates bringing about the group’s optimal strategies. This algorithm helps maximize collective utility in groups and, Li says, “is also essential in developing robotic collaborative systems.” The finding will be useful to researchers in such multidisciplinary fields as cybernetics, artificial intelligence, systems science, game theory and network science.

“We believe that utilizing AI-related techniques to optimize individual decisions and drive collective intelligence will be the next research hotspot.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to David Appell , Phys.org

 

 


Putting the magic into maths

Queen Mary, University of London has developed a new educational resource for teachers to help students use amazing magic tricks to learn about maths.

The web resource (www.mathematicalmagic.com), which includes the ‘Manual for Mathematical Magic’ and a series of interactive videos, was led by Queen Mary’s Professor Peter McOwan with the help of the College’s resident stand-up comedian Matt Parker and semi-professional magician and maths teacher Jason Davison.

Professor McOwan said: “It was great fun to be able to work with Matt and Jason on these new videos, showing how maths and magic can fuse together education and entertainment.

“While we explain most of the tricks, we have deliberately included a few where we leave the viewer to figure it out. It’s all just maths, but we wanted to leave some magical mystery in there too!”

Mr Davison said: “Using the fun of magic makes this a really great way to learn some of the fundamentals of maths, the links between maths and magic are strong and a brilliant way to bring excitement into the classroom.”

The educational website builds on a bank of teaching resources led by Professor McOwan, including Illusioneering (www.Illusioneering.org), a website which gives students and teachers the platform to explore science and engineering through a range of magic tricks; and cs4fn (www.cs4fn.org), a web and magazine initiative putting the fun into computer science.

The production of the videos for mathematicalmagic.com was possible due to funding from the UK National Higher Education STEM programme. The Programme supports Higher Education Institutions in the exploration of new approaches to recruiting students and delivering programmes of study within the Science, Technology, Engineering and Mathematics (STEM) disciplines.

Institute of Mathematics and its Applications project manager in HE STEM, Makhan Singh, said: “Once again we see the power of making education fun! Peter McOwan brings alive the mystery of magic whilst showcasing the power of mathematics – sheer brilliance! It’s entertaining, amusing, educational and most definitely relevant in today’s classrooms; well done!”.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Queen Mary, University of London

 


Mathematical proof reveals new insights into typhoon dynamics

In a remarkable breakthrough in the field of mathematical science, Professor Kyudong Choi from the Department of Mathematical Sciences at UNIST has provided an irrefutable proof that certain spherical vortices exist in a stable state. This discovery holds significant implications for predicting weather anomalies and advancing weather prediction technologies. The research is published in the journal Communications on Pure and Applied Mathematics.

A vortex is a rotating region of fluid, such as air or water, characterized by intense rotation. Common examples include typhoons and tornadoes frequently observed in news reports. Professor Choi’s mathematical proof establishes the stability of specific types of vortex structures that can be encountered in real-world fluid flows.

The study builds upon the foundational Euler equation, formulated by Leonhard Euler in 1757 to describe the flow of an ideal fluid. In 1894, British mathematician M. Hill mathematically demonstrated that a ball-shaped vortex could maintain its shape indefinitely while moving along its axis.

Professor Choi’s research confirms that Hill’s spherical vortex maximizes kinetic energy under certain conditions through the application of variational methods. By incorporating functional analysis and partial differential equation theory from mathematical analysis, this study extends previous investigations on two-dimensional fluid flows to encompass three-dimensional fluid dynamics with axial symmetry conditions.

One notable feature identified by Hill is the presence of strong upward airflow at the front of the spherical vortex—an attribute often observed in phenomena like typhoons and tornadoes. Professor Choi’s findings serve as a starting point for further studies involving measurements related to residual time associated with these ascending air currents.

“Research on vortex stability has gained international attention,” stated Professor Choi. “And it holds long-term potential for advancements in today’s weather forecasting technology.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to JooHyeon Heo, Ulsan National Institute of Science and Technology


How Far Away is Everybody? Climbing The Cosmic Distance Ladder

We know the universe is vast, but how do we measure the distances between things? Dave Scrimshaw.

Let’s talk numbers for a moment.

The moon is approximately 384,000 kilometres away, and the sun is approximately 150 million kilometres away. The mean distance between Earth and the sun is known as the “astronomical unit” (AU). Neptune, the most distant planet, then, is 30 AU from the sun.

The nearest stars to Earth are 1,000 times more distant, roughly 4.3 light-years away (one light-year being the distance that light travels in 365.25 days – just under 10 trillion kilometres).

The Milky Way galaxy consists of some 300 billion stars in a spiral-shaped disk roughly 100,000 light-years across.

The Andromeda Galaxy, which can be seen with many home telescopes, is 2.54 million light years away. There are hundreds of billions of galaxies in the observable universe.

At present, the most distant observed galaxy is some 13.2 billion light-years away, formed not long after the Big Bang, 13.75 billion years ago (plus or minus 0.011 billion years).

The scope of the universe was illustrated by the astrophysicist Geraint Lewis in a recent Conversation article.

He noted that, if the entire Milky Way galaxy was represented by a small coin one centimetre across, the Andromeda Galaxy would be another small coin 25 centimetres away.

Going by this scale, the observable universe would extend for 5 kilometres in every direction, encompassing some 300 billion galaxies.
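The arithmetic behind that scale model is worth a quick check; the observable-universe radius of roughly 46 billion light-years used below is the standard figure, and the rest follows from the coin analogy.

```python
# Checking the coin-sized Milky Way analogy: 100,000 light-years = 1 cm.
cm_per_lightyear = 1.0 / 100_000

print(2.54e6 * cm_per_lightyear, "cm to Andromeda")            # about 25 cm
print(46e9 * cm_per_lightyear / 100 / 1000, "km to the edge")  # about 4.6 km
```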

But how can scientists possibly calculate these enormous distances with any confidence?

Parallax

One technique is known as parallax. If you cover one eye and note the position of a nearby object, compared with more distant objects, the nearby object “moves” when you view it with the other eye. This is parallax (see below).


The same principle is used in astronomy. As Earth travels around the sun, relatively close stars are observed to move slightly, with respect to other fixed stars that are more distant.

Distance measurements can be made in this way for stars up to about 1,000 light-years away.
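At its simplest, the parallax calculation is one division: a star whose parallax angle is p arcseconds lies 1/p parsecs away, a parsec being about 3.26 light-years. The Proxima Centauri figure below is approximate.

```python
# Parallax distance: d [parsecs] = 1 / p [arcseconds].
def distance_from_parallax(parallax_arcsec):
    parsecs = 1.0 / parallax_arcsec
    return parsecs, parsecs * 3.26           # 1 parsec is roughly 3.26 light-years

pc, ly = distance_from_parallax(0.77)        # Proxima Centauri: parallax of ~0.77 arcsec
print(f"{pc:.2f} parsecs, about {ly:.1f} light-years")
```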

Standard candles

For more distant objects such as galaxies, astronomers rely on “standard candles” – bright objects that are known to have a fixed absolute luminosity (brightness).

Since light flux falls off as the square of the distance, by measuring the actual brightness observed on Earth astronomers can calculate the distance.
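In code, that inverse-square step looks like this; the luminosity and flux values are invented purely to show the calculation.

```python
# Standard-candle distance from the inverse-square law: F = L / (4 * pi * d^2).
from math import pi, sqrt

def distance_from_flux(luminosity_watts, flux_watts_per_m2):
    return sqrt(luminosity_watts / (4 * pi * flux_watts_per_m2))

# Invented example: a candle with the sun's luminosity (~3.8e26 W) seen at 1e-10 W/m^2.
d_metres = distance_from_flux(3.8e26, 1e-10)
print(d_metres / 9.46e15, "light-years away")   # one light-year is ~9.46e15 metres
```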

One type of standard candle, which has been used since the 1920s, is Cepheid variable stars.

Distances determined using this scheme are believed accurate to within about 7% for more nearby galaxies, and 15-20% for the most distant galaxies.

Type Ia supernovas

In recent years scientists have used Type Ia supernovae. These occur in a binary star system when a white dwarf star starts to attract matter from a larger red dwarf star.

As the white dwarf gains more and more matter, it eventually undergoes a runaway nuclear explosion that may briefly outshine an entire galaxy.

Because this process can occur only within a very narrow range of total mass, the absolute luminosity of Type Ia supernovas is very predictable. The uncertainty in these measurements is typically 5%.

In August, worldwide attention was focused on a Type Ia supernova that exploded in the Pinwheel Galaxy (known as M101), a beautiful spiral galaxy located just above the handle of the Big Dipper in the Northern Hemisphere. This is the closest supernova to the earth since the 1987 supernova, which was visible in the Southern Hemisphere.

These and other techniques for astronomical measurements, collectively known as the “cosmic distance ladder”, are described in an excellent Wikipedia article. Such multiple schemes lend an additional measure of reliability to these measurements.

In short, distances to astronomical objects have been measured with a high degree of reliability, using calculations that mostly employ only high-school mathematics.

Thus the overall conclusion of a universe consisting of billions of galaxies, most of them many millions or even billions of light-years away, is now considered beyond reasonable doubt.

Right tools for the job

The kind of distances we’re dealing with above do cause consternation for some since, as we peer millions of light-years into space, we are also peering millions of years into the past.

Some creationists, for instance, have theorised that, in about 4,000 BCE, a Creator placed quadrillions of photons in space en route to Earth, with patterns suggestive of supernova explosions and other events millions of years ago.

Needless to say, most observers reject this notion. Kenneth Miller of Brown University commented, “Their [Creationists’] version of God is one who has filled the universe with so much bogus evidence that the tools of science can give us nothing more than a phony version of reality.”

There are plenty of things in the universe to marvel at, and plenty of tools to help us understand them. That should be enough to keep us engaged for now.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jonathan Borwein (Jon)*