Math unlocks molecular interactions that open window to how life evolved

A “window to evolution” has opened after mathematicians uncovered the universal explanatory framework for how molecules interact with one another to adapt to new and variable conditions while maintaining tight control over key survival properties.

Landmark research published in Nature Communications by mathematicians Dr. Robyn Araujo at QUT and Professor Lance Liotta of George Mason University in the U.S. sets out the definitive picture of biological adaptation at the level of intermolecular interactions.

Dr. Araujo, from the QUT School of Mathematical Sciences, said the research findings represented a blueprint for adaptation-capable signaling networks across all domains of life and for the design of synthetic biosystems.

“Our study considers a process called robust perfect adaptation (RPA) whereby biological systems, from individual cells to entire organisms, maintain important molecules within narrow concentration ranges despite continually being bombarded with disturbances to the system,” Dr. Araujo said.

“Until now, no one had a general way to explain how this vital process was orchestrated at the molecular level through the vast, complex, often highly intricate networks of chemical reactions among different types of molecules, mostly proteins.

“We have now solved this problem, having discovered fundamental molecular-level design principles that organize all forms of biological complexity into robustness-promoting, and ultimately, survival-promoting, chemical reaction structures.”

Dr. Araujo said they had found that collections of interacting molecules in living systems cannot simply “transmit” biochemical signals but must actually perform “computations” on these signals.

“These complex intermolecular interactions must implement a special type of regulation known as integral control—a design strategy known to engineers for almost a century.

“However, signaling networks in nature are vastly different, having evolved to rely on the physical interactions between discrete molecules. So, nature’s ‘solutions’ operate through remarkable and highly intricate collections of interactions, without engineering’s specially designed, integral-computing components, and often without feedback loops.

“We show that molecular network structures use a form of integral control in which multiple independent integrals, each with a very special and simple structure, can collaborate to confer the capacity for adaptation on specific molecules.

“Using an algebraic algorithm based on this finding, we have been able to demonstrate the existence of embedded integrals in biologically important chemical reaction networks whose ability to exhibit adaptation could never before be explained by any systematic method.”
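Integral control can be sketched in a few lines of simulation. The model below is a hypothetical two-variable toy, not one of the networks analysed in the paper: an integrator state z accumulates the error between a regulated molecule y and its set point, and feeds back into y's production. Whatever constant disturbance is applied, y returns to the same set point, which is the hallmark of RPA.

```python
# Minimal sketch (hypothetical toy model, not the paper's networks):
# a one-integrator feedback loop exhibiting robust perfect adaptation.

def simulate(disturbance, setpoint=1.0, gain=2.0, t_end=200.0, dt=0.001):
    y, z = 0.0, 0.0  # y: regulated concentration, z: integrator state
    for _ in range(int(t_end / dt)):
        dz = setpoint - y                 # integrator accumulates the error
        dy = gain * z + disturbance - y   # production + disturbance - decay
        z += dz * dt
        y += dy * dt
    return y

# y settles at the set point regardless of the size of the disturbance
for d in (0.0, 0.5, 3.0):
    print(round(simulate(d), 3))
```

The key property is visible in the steady state: the integrator can only be at rest when the error is exactly zero, so y is pinned to the set point and the disturbance is absorbed by z instead.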

Professor Liotta said the quest to uncover the fundamental design principles of biological systems throughout nature is considered to be one of the most important and far-reaching grand challenges in the life sciences.

“On the basis of this ground-breaking new research, RPA currently stands alone as a keystone biological response for which there now exists a universal explanatory framework.

“It’s a framework that imposes strict and inviolable design criteria on arbitrarily large and complex networks, and one that now accounts for the subtleties of intricate intermolecular interactions at the network microscale.

“At a practical level, this discovery could provide a completely fresh approach to tackle grand challenges in personalized medicine such as cancer drug resistance, addiction, and autoimmune diseases.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Queensland University of Technology


Venn: The man behind the famous diagrams, and why his work still matters today

April 2023 marks the 100th anniversary of the death of mathematician and philosopher John Venn. You may well be familiar with Venn diagrams—the ubiquitous pictures of typically two or three intersecting circles, illustrating the relationships between two or three collections of things.

For example, during the pandemic, Venn diagrams helped to illustrate symptoms of COVID-19 that are distinct from seasonal allergies. They are also often taught to school children and are typically part of the early curriculum for logic and databases in higher education.

Venn was born in Hull, UK, in 1834. His early life in Hull was influenced by his father, an Anglican priest—it was expected John would follow in his footsteps. He did initially begin a career in the Anglican church, but later moved into academia at the University of Cambridge.

One of Venn’s major achievements was to find a way to visualize a mathematical area called set theory. Set theory is an area of mathematics which can help to formally describe properties of collections of objects.

For example, we could have a set of cars, C. Within this set, there could be subsets such as the set of electric cars, E, the set of petrol-powered cars, P, and the set of diesel-powered cars, D. Given these, we can operate on them: for example, applying car charges to the sets P and D, and a discount to the set E.
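The car example maps directly onto Python's built-in sets, which support exactly these operations (the car names below are made up for illustration).

```python
# The sets from the text: all cars C, electric E, petrol P, diesel D
C = {"leaf", "tesla", "corolla", "golf", "prius", "hilux"}
E = {"leaf", "tesla", "prius"}
P = {"corolla", "golf", "prius"}
D = {"hilux"}

charged = P | D       # union: every car subject to a charge
discounted = E        # the discount applies to the electric set
hybrids = E & P       # intersection: petrol-electric hybrids

print(sorted(charged))
print(sorted(hybrids))
```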

These sorts of operations form the basis of databases, as well as being used in many fundamental areas of science. Other major works of Venn’s include probability theory and symbolic logic. Venn had initially used diagrams developed by the Swiss mathematician Leonhard Euler to show some relationships between sets, which he then developed into his famous Venn diagrams.

Venn used the diagrams to prove a form of logical statement known as a categorical syllogism. This can be used to model reasoning. Here’s an example: “All computers need power. All AI systems are computers.” We can chain these together to the conclusion that “all AI systems need power.”
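Modelled with sets, the syllogism is just the transitivity of the subset relation; the toy sets below are hypothetical.

```python
# "All computers need power; all AI systems are computers."
need_power = {"laptop", "server", "chess_ai", "fridge"}
computers = {"laptop", "server", "chess_ai"}
ai_systems = {"chess_ai"}

# Each premise is a subset relation...
assert computers <= need_power
assert ai_systems <= computers
# ...and the conclusion follows by transitivity
print(ai_systems <= need_power)  # True
```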

Today, we are familiar with such reasoning to illustrate how different collections relate to each other. For example, the SmartArt tool in Microsoft products lets you create a Venn diagram to illustrate the relationships between different sets. In our earlier car example, we could have a diagram showing electric cars, E, and petrol powered cars, P. The set of hybrid cars that have a petrol engine would be in the intersection of P and E.

Logic and computing

The visualization of sets (and databases) is helpful, but the importance of Venn’s work then—and now—lies in the way his diagrams allowed proof of George Boole’s ideas of logic as a formal science.

Venn used his diagrams to illustrate and explore such “symbolic logic”—defending and extending it. Symbolic logic underpins modern computing, and Boolean logic is a key part of the design of modern computer systems—making his work relevant today.

Venn’s work was also crucial to the work of philosopher Bertrand Russell, who showed that there are problems that are unsolvable. We can express such problems with sets. One such unsolvable problem can be expressed with a version of the “barber paradox.” Suppose we had a Wikipedia article listing all the articles that don’t contain themselves—a set. Is this new article itself in that set?

Luckily we can visualize that with a Venn diagram with two circles, where one circle is the set of entries that don’t include themselves, A, and the other circle is the set of entries that do include themselves, B.

We can then ask the question: where do we put the article that contains all the articles that don’t contain themselves? Have a think about it, then see where you would put it.

The problem is that it cannot be on the left, as it would contain itself, and would therefore be inconsistent. And it cannot be on the right, as then it would be missing, or incomplete. And it can’t be in both. It must be in one or the other. This paradox illustrates how unsolvable statements can arise—they are valid in terms of expressing them within the logical system, but ultimately unanswerable. We could possibly extend our system to solve this, but then we would end up with another unanswerable question.

Venn’s diagrams were crucial in understanding this. And this area of science is still important, for example when considering the limitations of machine learning and AI, where we may ask questions that cannot be answered.

Venn also had an interest in building mechanical machines—including a bowling machine which proved so effective it was able to bowl out some top Australian batsmen of the day.

Following his abstract work on logic, he developed the concept of a logical-diagram machine with substantial processing power, though this brilliant idea from 1881 would take many decades to be realized in modern computers.

We remember Venn here in Hull, with a bridge close to his birthplace decorated with Venn circle inspired artwork. At the University of Hull’s main administration building, there’s an intersection of management and academia which is called the Venn building.


Credit of the article given to Neil Gordon, The Conversation


Expert reveals the fascinating link between math and card shuffling

Mathematics sometimes impacts our lives in seemingly unexpected ways, including card shuffling.

Math can answer the age-old question of how many times a deck of cards needs to be shuffled to ensure the cards are thoroughly mixed. It can even reveal the best method for dealing cards.

Jason Fulman, professor of mathematics at the USC Dornsife College of Letters, Arts and Sciences, studies card shuffling using math. He shares what is known on the topic in an upcoming book, “The Mathematics of Shuffling Cards” (American Mathematical Society), which he co-wrote with acclaimed mathematician Persi Diaconis. The book is due out in June.

Card shuffling is a numbers game

Among the many insights Fulman provides is that the number of shuffles required to thoroughly mix a deck of 52 cards depends on the shuffle type used.

The riffle shuffle—splitting the deck roughly in half then using the thumbs to quickly interleave the cards—is the most efficient. It requires just seven shuffles to mix a deck well.
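The seven-shuffle result is usually analysed under the Gilbert–Shannon–Reeds model of a riffle shuffle. A rough sketch of that model, simplified from the standard description:

```python
import random

def riffle(deck, rng=random):
    # Cut the deck at a Binomial(n, 1/2) position
    cut = sum(rng.random() < 0.5 for _ in range(len(deck)))
    left, right = deck[:cut], deck[cut:]
    out = []
    # Drop the next card from a packet with probability
    # proportional to that packet's current size
    while left or right:
        if rng.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

deck = list(range(52))
for _ in range(7):
    deck = riffle(deck)
print(len(set(deck)))  # still a permutation of all 52 cards
```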

Scattering the cards out flat on the table and randomly spreading them over each other, called “smooshing,” requires 30 to 60 seconds for thorough mixing.

The overhand method—taking sections of a stacked deck and moving them over to make a new stack—must be repeated a whopping 10,000 times to mix the cards well.

What the cards are being used for makes a difference, too. In blackjack, for example, card suits don’t matter, and certain cards are equivalent, so just four or five riffle shuffles are plenty for mixing.

Then there’s magic. Perfect shuffles can restore a deck to its original order, and specific sequences of shuffles can move a card to a desired position, enabling a magician to control the cards in a way that seems magical.

For mathematicians, fairness is a big deal

Fulman also explores card dealing, a key to ensuring fairness in card games.

Two commonly used methods of card dealing are the cyclic method and back-and-forth. In the cyclic method, cards are dealt in a repeating sequence such as one, two, three, four, one, two, three, four. Back-and-forth uses alternating directions such as one, two, three, four followed by four, three, two, one.

Back-and-forth dealing is faster and improves the cards’ randomness, thus requiring fewer shuffles for a well-mixed deck.
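The two dealing patterns are easy to make concrete. For four players, a sketch of the orders described above:

```python
# Which player receives each of the first n_cards cards, under the
# two dealing methods described in the text (players numbered 0..3).

def cyclic(n_players, n_cards):
    return [i % n_players for i in range(n_cards)]

def back_and_forth(n_players, n_cards):
    cycle = list(range(n_players)) + list(range(n_players - 1, -1, -1))
    return [cycle[i % len(cycle)] for i in range(n_cards)]

print(cyclic(4, 8))          # [0, 1, 2, 3, 0, 1, 2, 3]
print(back_and_forth(4, 8))  # [0, 1, 2, 3, 3, 2, 1, 0]
```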

Card shuffling is not just fun and games

Card shuffling has practical applications beyond card games, magic tricks and gambling.

Analysing the mixing time of shuffling helps computer scientists determine the optimal distribution of files and folders in databases. And biologists have considered the mixing time of shuffles to study the order of genes, which can help them estimate the evolutionary distance between two organisms, Fulman says.

Studying “patience sorting,” dealing cards into piles, sheds light on passenger airline boarding, and researchers study card shuffling in hopes of understanding and improving traffic flow.

But mathematicians still puzzle over many questions about card shuffling, Fulman says.

For instance, they want to know the number of shuffles required to thoroughly mix a deck using the almost perfect shuffle technique employed by Las Vegas casino dealers, who perform “neater” riffle shuffles achieving near-perfect alternation from one hand to the other.

They also remain stumped by the optimal guessing strategy to maximize the expected number of correct guesses when turning up cards one at a time after a series of riffle shuffles. An answer to this question is also of interest to gamblers, who want to be able to guess as many correct cards as possible, and to casino executives, who want gamblers to be able to guess as few correct cards as possible.

But given enough time, the odds are that mathematicians like Fulman will sort out these and many other card-shuffling conundrums.


Credit of the article given to Ileana Wachtel, University of Southern California

 


Punctuation in literature of major languages is intriguingly mathematical

A moment’s hesitation… Yes, a full stop here—but shouldn’t there be a comma there? Or would a hyphen be better? Punctuation can be a nuisance; it is often simply neglected. Wrong! The most recent statistical analyses paint a different picture: punctuation seems to “grow out” of the foundations shared by all the (examined) languages, and its features are far from trivial.

To many, punctuation appears as a necessary evil, to be happily ignored whenever possible. Recent analyses of literature written in the world’s current major languages require us to alter this opinion. In fact, the same statistical features of punctuation usage patterns have been observed in several hundred works written in seven, mainly Western, languages.

Punctuation, all ten representatives of which can be found in the introduction to this text, turns out to be a universal and indispensable complement to the mathematical perfection of every language studied. Such a remarkable conclusion about the role of mere commas, exclamation marks or full stops comes from an article by scientists from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow, published in the journal Chaos, Solitons & Fractals.

“The present analyses are an extension of our earlier results on the multifractal features of sentence length variation in works of world literature. After all, what is sentence length? It is nothing more than the distance to the next specific punctuation mark— the full stop. So now we have taken all punctuation marks under a statistical magnifying glass, and we have also looked at what happens to punctuation during translation,” says Prof. Stanislaw Drozdz (IFJ PAN, Cracow University of Technology).
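The quantity being studied can be sketched directly: count the words between consecutive punctuation marks. A minimal illustration on a toy sentence (the tokenizer here is deliberately crude):

```python
# Measure word-sequence lengths between punctuation marks,
# the quantity whose distribution the study fits with a Weibull curve.
text = ("It was a bright cold day in April, and the clocks were striking "
        "thirteen; Winston Smith, his chin nuzzled into his breast, slipped "
        "quickly through the glass doors.")

PUNCT = set(".,;:!?")
gaps, run = [], 0
for token in text.split():
    run += 1
    if token[-1] in PUNCT:   # a punctuation mark closes the current run
        gaps.append(run)
        run = 0

print(gaps)  # word counts between consecutive punctuation marks
```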

Two sets of texts were studied. The main analyses concerning punctuation within each language were carried out on 240 highly popular literary works written in seven major Western languages: English (44), German (34), French (32), Italian (32), Spanish (32), Polish (34) and Russian (32). This selection of languages was based on two criteria: no fewer than 50 million people should speak the language in question, and works written in it should have been awarded no fewer than five Nobel Prizes in Literature.

In addition, for the statistical validity of the research results, each book had to contain at least 1,500 word sequences separated by punctuation marks. A separate collection was prepared to observe the stability of punctuation in translation. It contained 14 works, each of which was available in each of the languages studied (two of the 98 language versions, however, were omitted due to their unavailability).

In total, the two collections included works by such writers as Conrad, Dickens, Doyle, Hemingway, Kipling, Orwell, Salinger, Woolf, Grass, Kafka, Mann, Nietzsche, Goethe, La Fayette, Dumas, Hugo, Proust, Verne, Eco, Cervantes, Sienkiewicz and Reymont.

The attention of the Cracow researchers was primarily drawn to the statistical distribution of the distance between consecutive punctuation marks. It soon became evident that in all the languages studied, it was best described by one of the precisely defined variants of the Weibull distribution.

A curve of this type has a characteristic shape: it grows rapidly at first and then, after reaching a maximum, descends more slowly, tailing off towards zero with ever-decreasing speed. The Weibull distribution is usually used to describe survival phenomena (e.g. population as a function of age), but also various physical processes, such as the increasing fatigue of materials.

“The concordance of the distribution of word sequence lengths between punctuation marks with the functional form of the Weibull distribution was better the more types of punctuation marks we included in the analyses; for all marks the concordance turned out to be almost complete. At the same time, some differences in the distributions are apparent between the different languages, but these merely amount to the selection of slightly different values for the distribution parameters, specific to the language in question. Punctuation thus seems to be an integral part of all the languages studied,” notes Prof. Drozdz.

After a moment he adds with some amusement: “…and since the Weibull distribution is concerned with phenomena such as survival, it can be said with not too much tongue-in-cheek that punctuation has in its nature a literally embedded struggle for survival.”

The next stage of the analyses consisted of determining the hazard function. In the case of punctuation, this describes how the conditional probability of encountering the next punctuation mark changes as the analysed sequence grows longer without one having yet appeared.
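For a discrete list of gap lengths, that conditional probability can be estimated as the fraction of still-running word sequences that end at exactly k words. A sketch on hypothetical gap data:

```python
# Empirical hazard: P(gap == k | gap >= k), on made-up gap lengths.
gaps = [3, 5, 5, 7, 8, 8, 8, 9, 12, 15]

def hazard(k, gaps):
    at_risk = sum(g >= k for g in gaps)   # sequences still unpunctuated at k
    events = sum(g == k for g in gaps)    # sequences punctuated exactly at k
    return events / at_risk if at_risk else 0.0

for k in (3, 8, 15):
    print(k, round(hazard(k, gaps), 2))
```

In this toy data the hazard rises with k: the longer a sequence has run without punctuation, the more likely a mark is to arrive, which mirrors the "tightening bowstring" picture the researchers describe later.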

The results here are clear: the language characterized by the lowest propensity to use punctuation is English, with Spanish not far behind; Slavic languages proved to be the most punctuation-dependent. The hazard function curves for punctuation marks in six of the languages studied followed a similar pattern, differing mainly in their vertical shift.

German proved to be the exception. Its hazard function is the only one that intersects most of the curves constructed for the other languages. German punctuation thus seems to combine the punctuation features of many languages, making it a kind of Esperanto punctuation.

The above observation dovetails with the next analysis, which was to see whether the punctuation features of original literary works can be seen in their translations. As expected, the language most faithfully transforming punctuation from the original language to the target language turned out to be German.

In spoken communication, pauses can be justified by human physiology, such as the need to catch one’s breath or to take a moment to structure what is to be said next in one’s mind. And in written communication?

“Creating a sentence by adding one word after another while ensuring that the message is clear and unambiguous is a bit like tightening the string of a bow: it is easy at first, but becomes more demanding with each passing moment. If there are no ordering elements in the text (and this is the role of punctuation), the difficulty of interpretation increases as the string of words lengthens. A bow that is too tight can break, and a sentence that is too long can become unintelligible. Therefore, the author is faced with the necessity of ‘freeing the arrow’, i.e. closing a passage of text with some sort of punctuation mark. This observation applies to all the languages analysed, so we are dealing with what could be called a linguistic law,” states Dr. Tomasz Stanisz (IFJ PAN), first author of the article in question.

Finally, it is worth noting that the invention of punctuation is relatively recent—punctuation marks did not occur at all in old texts. The emergence of optimal punctuation patterns in modern written languages can therefore be interpreted as the result of their evolutionary advancement. However, the excessive need for punctuation is not necessarily a sign of such sophistication.

English and Spanish, contemporarily the most universal languages, appear, in the light of the above studies, to be less strict about the frequency of punctuation use. It is likely that these languages are so formalized in terms of sentence construction that there is less room for ambiguity that would need to be resolved with punctuation marks.


Credit of the article given to The Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences


Declines in math readiness underscore the urgency of math awareness

When President Ronald Reagan proclaimed the first National Math Awareness Week in April 1986, one of the problems he cited was that too few students were devoted to the study of math.

“Despite the increasing importance of mathematics to the progress of our economy and society, enrollment in mathematics programs has been declining at all levels of the American educational system,” Reagan wrote in his proclamation.

Nearly 40 years later, the problem that Reagan lamented during the first National Math Awareness Week—which has since evolved to become “Mathematics and Statistics Awareness Month”—not only remains but has gotten worse.

Whereas 1.63%, or about 16,000, of the nearly 1 million bachelor’s degrees awarded in the U.S. in the 1985–1986 school year went to math majors, in 2020, just 1.4%, or about 27,000, of the 1.9 million bachelor’s degrees were awarded in the field of math—a small but significant decrease in the proportion.

Post-pandemic data suggests the number of students majoring in math in the U.S. is likely to decrease in the future.

A key factor is the dramatic decline in math learning that took place during the lockdown. For instance, whereas 34% of eighth graders were proficient in math in 2019, test data shows the percentage dropped to 26% after the pandemic.

These declines will undoubtedly affect how much math U.S. students can do at the college level. For instance, in 2022, only 31% of graduating high school seniors were ready for college-level math—down from 39% in 2019.

These declines will also affect how many U.S. students are able to take advantage of the growing number of high-paying math occupations, such as data scientists and actuaries. Employment in math occupations is projected to increase by 29% in the period from 2021 to 2031.

About 30,600 math jobs are expected to open up per year from growth and replacement needs. That exceeds the 27,000 or so math graduates being produced each year—and not all math degree holders go into math fields. Shortages will also arise in several other areas, since math is a gateway to many STEM fields.

For all of these reasons and more, as a mathematician who thinks deeply about the importance of math and what it means to our world—and even to our existence as human beings—I believe this year, and probably for the foreseeable future, educators, policymakers and employers need to take Mathematics and Statistics Awareness Month more seriously than ever before.

Struggles with mastery

Subpar math achievement has been endemic in the U.S. for a long time.

Data from the National Assessment of Educational Progress shows that no more than 26% of 12th graders have been rated proficient in math since 2005.

The pandemic disproportionately affected racially and economically disadvantaged groups. During the lockdown, these groups had less access to the internet and quiet studying spaces than their peers. So securing Wi-Fi and places to study are key parts of the battle to improve math learning.

Some people believe math teaching techniques need to be revamped, as they were through the Common Core, a new set of educational standards that stressed alternative ways to solve math problems. Others want a return to more traditional methods. Advocates also argue there is a need for colleges to produce better-prepared teachers.

Other observers believe the problem lies with the “fixed mindset” many students have—where failure leads to the conviction that they can’t do math—and say the solution is to foster a “growth” mindset—by which failure spurs students to try harder.

Although all these factors are relevant, none address what in my opinion is a root cause of math underachievement: our nation’s ambivalent relationship with mathematics.

Low visibility

Many observers worry about how U.S. children fare in international rankings, even though math anxiety makes many adults in the U.S. steer clear of the subject themselves.

Mathematics is not like art or music, which people regularly enjoy all over the country by visiting museums or attending concerts. It’s true that there is a National Museum of Mathematics in New York, and some science centers in the U.S. devote exhibit space to mathematics, but these can be geographically inaccessible for many.

A 2020 study on media portrayals of math found an overall “invisibility of mathematics” in popular culture. It also found that math is presented as irrelevant to the real world and of little interest to most people, while mathematicians are stereotyped as singular geniuses or socially inept nerds, and as white and male.

Math is tough and typically takes much discipline and perseverance to succeed in. It also calls for a cumulative learning approach—you need to master lessons at each level because you’re going to need them later.

While research in neuroscience shows almost everyone’s brain is equipped to take up the challenge, many students balk at putting in the effort when they don’t score well on tests. The myth that math is just about procedures and memorization can make it easier for students to give up. So can negative opinions about math ability conveyed by peers and parents, such as declarations of not being “a math person.”

A positive experience

Here’s the good news. A 2017 Pew poll found that despite the bad rap the subject gets, 58% of U.S. adults enjoyed their school math classes. It’s members of this legion who would make excellent recruits to help promote April’s math awareness. The initial charge is simple: Think of something you liked about math—a topic, a puzzle, a fun fact—and go over it with someone. It could be a child, a student, or just one of the many adults who have left school with a negative view of math.

Can something that sounds so simplistic make a difference? Based on my years of experience as a mathematician, I believe it can—if nothing else, for the person you talk to. The goal is to stimulate curiosity and convey that mathematics is much more about exhilarating ideas that inform our universe than it is about the school homework-type calculations so many dread.

Raising math awareness is a first step toward making sure people possess the basic math skills required not only for employment, but also to understand math-related issues—such as gerrymandering or climate change—well enough to be an informed and participating citizen. However, it’s not something that can be done in one month.

Given the decline in both math scores and the percentage of students studying math, it may take many years before America realizes the stronger relationship with math that President Reagan’s proclamation called for during the first National Math Awareness Week in 1986.


Credit of the article given to Manil Suri, The Conversation


From whiteboard work to random groups, these simple fixes could get students thinking more in maths lessons

Australian students’ performance and engagement in mathematics is an ongoing issue.

International studies show Australian students’ mean performance in maths has steadily declined since 2003. The latest Program for International Student Assessment (PISA) in 2018 showed only 10% of Australian teenagers scored in the top two levels, compared to 44% in China and 37% in Singapore.

Despite attempts to reform how we teach maths, it is unlikely students’ performance will improve if they are not engaging with their lessons.

What teachers, parents, and policymakers may not be aware of is that research shows students use “non-thinking behaviours” to avoid engaging with maths.

That is, when your child says they didn’t do anything in maths today, our research shows they’re probably right.

What are non-thinking behaviours?

There are four main non-thinking behaviours. These are:

  • slacking: where there is no attempt at a task. The student may talk or do nothing
  • stalling: where there is no real attempt at a task. This may involve legitimate off-task behaviours, such as sharpening a pencil
  • faking: where a student pretends to do a task, but achieves nothing. This may involve legitimate on-task behaviours such as drawing pictures or writing numbers
  • mimicking: this includes attempts to complete a task and can often involve completing it. It involves referring to others or previous examples.

Peter Liljedahl studied Canadian maths lessons in all years of school, over 15 years. This research found up to 80% of students exhibit non-thinking behaviours for 100% of the time in a typical hour-long lesson.

The most common behaviour was mimicking (53%), reflecting a trend of the teacher doing all the thinking, rather than the students.

It also found when students were given “now you try one” tasks (a teacher demonstrates something, then asks students to try it), the majority of students engaged in non-thinking behaviours.

Australian students are ‘non-thinking’ too

Tracey Muir conducted a smaller-scale study in 2021 with a Year 3/4 class.

Some 63% of students were observed engaged in non-thinking behaviours, with slacking and stalling (54%) being the most common. These behaviours included rubbing out, sharpening pencils, and playing with counters, and were especially prevalent in unsupervised small groups.

One explanation for students slacking and stalling is teachers are doing most of the talking and directing, and not providing enough opportunities for students to think.

How can we build “thinking” maths classrooms and reduce the prevalence of non-thinking behaviours?

Here are two research-based ideas.

Form random groups

Often students are placed in groups to work through new skills or lessons. Sometimes these are arranged by the teacher or by the students themselves.

Students know why they have been placed in groups with certain individuals (even if this is not explicitly stated). Here they tend to “live down” to expectations.

If they are with their friends they also tend to distract each other.

Our studies found random groupings improved students’ willingness to collaborate, reduced social stress often caused by self-selecting groups, and increased enthusiasm for mathematics learning.

As one student told us:

“I’m starting to like maths now, and working with random people is better for me so I don’t get off track.”
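The grouping itself is trivial to automate. A sketch, with hypothetical student names:

```python
import random

# Visibly random grouping, as described above: shuffle the class,
# then slice it into groups of three.
def random_groups(students, size=3, rng=random):
    shuffled = students[:]
    rng.shuffle(shuffled)
    return [shuffled[i:i + size] for i in range(0, len(shuffled), size)]

students = [f"student{i:02d}" for i in range(12)]
groups = random_groups(students)
for g in groups:
    print(g)
```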

Get kids to stand up

Classroom learning is often done at desks or sitting on the floor. This encourages passive behaviour, and we know from physiology that standing is better than sitting.

But we found groups of about three students standing together and working on a whiteboard can promote thinking behaviours. Just the physical act of standing can eliminate slacking, stalling, and faking behaviours. As one student said: “Standing helps me concentrate more because if I’m sitting down I’m just fiddling with stuff, but if I’m standing up, the only thing you can do is write and do maths.”

The additional strategy of only allowing the student with the pen to record others’ thinking, and not their own, has been shown to be especially beneficial. As one teacher told us: “The people that don’t have the pen have to do the thinking […] so it’s a real group effort and they don’t have the ability to slack off as much.”

Simple changes can work

While our studies were conducted in maths classrooms, our strategies would be transferable to other discipline areas.

So, while parents and educators may feel concerned about Australia’s declining maths results, by introducing simple changes to the classroom, we can ensure students are not only learning and thinking deeply about mathematics, but hopefully, enjoying it, too.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Tracey Muir and Peter Liljedahl, The Conversation


Mathematics: Mapping a fixed point

Mathematicians have grappled with a so-called “fixed point” theorem. An EPFL-based team has now found an elegant, one-page solution that opens up new perspectives in physics and economics.

Take a map of the world. Now put it down on the ground in Central Park, against a rock on Mount Everest, or on your kitchen table; there will always be a point on the map that sits exactly on the actual physical place it represents. Obvious? Not for mathematicians. A more complex theorem, called a “fixed point theorem,” has eluded them since 1963. “Some ideas seem evident to the human mind, but in reality involve complicated concepts that are difficult to demonstrate mathematically,” says Nicolas Monod, head of EPFL’s Chair of Ergodic and Geometric Group Theory. It turns out that the answer was there all along, simple and elegant. To reach it, the team of mathematicians had to take a different approach to the problem. Their discovery will impress their fellow mathematicians, of course; but in the longer term, it will also be of interest to physicists and economists.

Surprisingly, this theorem works for all kinds of maps, from a diagram of a metro route to a map of spaces used in quantum physics. But to prove it, a fixed point must be found for every possible case. Since the number of possible maps is infinite, the mathematicians were looking for a universal, purely mathematical method — one that would work in any situation.

The challenge for the mathematicians was to find that fixed point. It was a bit like designing a method that could pinpoint the center of gravity of any object, real or purely mathematical. It seemed like an impossible task for the specialists. “That’s why this approach hadn’t been more fully explored,” Monod explains. “It was in thinking about another space and exchanging our ideas that we realized that we actually could find that center of gravity.” It was possible to determine it in a parallel space. The center of gravity was definitely there … but outside the space you started from. It was a counterintuitive result, but one that allowed them to prove the theorem.

In 2008, a thirty-page article, full of technical jargon, almost arrived at a proof. Even Barry Edward Johnson, who formulated the theorem and worked hard to find a proof all the way up to his death in 2002, was ultimately unsuccessful. Today, the proof is only a few pages long. In addition to the indisputable intellectual satisfaction this elegant result represents, it also opens up long-term perspectives in other disciplines; theories in physics and economics, for example, both make use of the idea of fixed points.


Credit of the article given to Ecole Polytechnique Federale de Lausanne


Deepmind AI Finds New Way To Multiply Numbers And Speed Up Computers

Matrix multiplication – where two grids of numbers are multiplied together – forms the basis of many computing tasks, and an improved technique discovered by an artificial intelligence could boost computation speeds by up to 20 per cent.

Multiplying numbers is a fundamental task for computers

An artificial intelligence created by the firm DeepMind has discovered a new way to multiply numbers, the first such advance in over 50 years. The find could boost some computation speeds by up to 20 per cent, as a range of software relies on carrying out the task at great scale.

Matrix multiplication – where two grids of numbers are multiplied together – is a fundamental computing task used in virtually all software to some extent, but particularly so in graphics, AI and scientific simulations. Even a small improvement in the efficiency of these algorithms could bring large performance gains, or significant energy savings.

For centuries, it was believed that the most efficient way of multiplying matrices would require a number of multiplication operations proportional to the cube of the matrix size, meaning that the task becomes disproportionately harder for larger and larger matrices.
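To make that cost concrete, here is a minimal sketch (not from the article) of the schoolbook method, with a counter showing that multiplying two n-by-n grids this way takes exactly n³ scalar multiplications:

```python
# Naive (schoolbook) matrix multiplication: for two n-by-n matrices,
# the triple loop performs exactly n**3 scalar multiplications.
def naive_matmul(a, b):
    n = len(a)
    c = [[0] * n for _ in range(n)]
    mults = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
                mults += 1
    return c, mults

c, mults = naive_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(c)      # [[19, 22], [43, 50]]
print(mults)  # 8 multiplications for n = 2
```

Doubling the matrix size multiplies the work by eight, which is why even small percentage savings matter at scale.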

But the mathematician Volker Strassen proved in 1969 that multiplying a matrix of two rows of two numbers with another of the same size doesn’t necessarily involve eight multiplications and that, with a clever trick, it can be reduced to seven. This approach, called the Strassen algorithm, requires some extra addition, but this is acceptable because additions in a computer take far less time than multiplications.
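Strassen's trick for the 2×2 case can be written out directly. The sketch below uses the standard seven products (the article doesn't spell them out, but they are well documented); note how several extra additions and subtractions buy the saving of one multiplication:

```python
# Strassen's 2x2 scheme: seven multiplications (m1..m7) plus extra
# additions, instead of the usual eight multiplications.
def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [
        [m1 + m4 - m5 + m7, m3 + m5],
        [m2 + m4, m1 - m2 + m3 + m6],
    ]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Applied recursively to larger matrices (each quadrant treated as a block), the same seven-for-eight saving compounds, which is where the real gains come from.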

The algorithm has stood as the most efficient approach on most matrix sizes for more than 50 years, although some slight improvements that aren’t easily adapted to computer code have been found. But DeepMind’s AI has now discovered a faster technique that works perfectly on current hardware. The company’s new AI, AlphaTensor, started with no knowledge of any solutions and was presented with the problem of creating a working algorithm that completed the task with the minimum number of steps.

It found an algorithm for multiplying two matrices of four rows of four numbers using just 47 multiplications, which outperforms Strassen’s 49 multiplications. It also developed improved techniques for multiplying matrices of other sizes, 70 in total.

AlphaTensor discovered thousands of functional algorithms for each size of matrix, including 14,000 for 4×4 matrices alone. But only a small minority were better than the state of the art. The research builds on AlphaZero, DeepMind’s game-playing model, and has been two years in the making.

Hussein Fawzi at DeepMind says the results are mathematically sound, but are far from intuitive for humans. “We don’t really know why the system came up with this, essentially,” he says. “Why is it the best way of multiplying matrices? It’s unclear.”

“Somehow, the neural networks get an intuition of what looks good and what looks bad. I honestly can’t tell you exactly how that works. I think there is some theoretical work to be done there on how exactly deep learning manages to do these kinds of things,” says Fawzi.

DeepMind found that the algorithms could boost computation speed by between 10 and 20 per cent on certain hardware such as an Nvidia V100 graphics processing unit (GPU) and a Google tensor processing unit (TPU) v2, but there is no guarantee that those gains would also be seen on common devices like a smartphone or laptop.

James Knight at the University of Sussex, UK, says that a range of software run on supercomputers and powerful hardware, like AI research and weather simulation, is effectively large-scale matrix multiplication.

“If this type of approach was actually implemented there, then it could be a sort of universal speed-up,” he says. “If Nvidia implemented this in their CUDA library [a tool that allows GPUs to work together], it would knock some percentage off most deep-learning workloads, I’d say.”

Oded Lachish at Birkbeck, University of London, says the new algorithms could boost the efficiency of a wide range of software, because matrix multiplication is such a common problem – and more algorithms are likely to follow.

“I believe we’ll be seeing AI-generated results for other problems of a similar nature, albeit rarely something as central as matrix multiplication. There’s significant motivation for such technology, since fewer operations in an algorithm doesn’t just mean faster results, it also means less energy spent,” he says. If a task can be completed slightly more efficiently, then it can be run on less powerful, less power-intensive hardware, or on the same hardware in less time, using less energy.

But DeepMind’s advances don’t necessarily mean human coders are out of a job. “Should programmers be worried? Maybe in the far future. Automatic optimisation has been done for decades in the microchip design industry and this is just another important tool in the coder’s arsenal,” says Lachish.


*Credit for article given to Matthew Sparkes*


The Number That Is Too Big For The Universe

TREE(3) is a number that turns up easily from just playing a simple mathematical game. Yet, it is so colossally large that it couldn’t conceivably fit in our universe, writes Antonio Padilla.

There are many numbers that fit quite naturally into our everyday lives. For example, the number five counts the founding members of popular UK band One Direction, the number 31 million, how many followers they have on Twitter and the number zero, the number of followers that actually have decent taste in music (sorry!).

But there are also numbers which are important to mathematicians that can never fit into our everyday lives. There are even those that could never fit into the universe. Like TREE(3). Let me explain.

TREE(3) is a colossus, a number so large that it dwarfs some of its gargantuan cousins like a googol (ten to the one hundred), or a googolplex (ten to the googol), or even the dreaded Graham’s number (too big to write). TREE(3) emerges, quite spectacularly, from a mathematical game known as the Game of Trees. The idea of the game is to build a forest of trees from different combinations of seeds. Mathematically, the trees are just coloured blobs (the seeds) connected by lines (the branches). As you build the forest, your first tree must have at most one seed, your second tree must have at most two seeds, and so on. The forest dies whenever you build a tree that contains one of the older trees. There is a precise mathematical meaning to “contains one of the older trees”, but essentially you aren’t allowed to write down any combinations of blobs and branches that have gone before.

At the turn of the 1960s, the Game of Trees had piqued the interest of the great gossiping Hungarian mathematician Paul Erdős. Erdős is known for being a prolific collaborator, writing papers with over 500 other mathematicians. He was also an eccentric who would show up at the homes of his collaborators without warning. He would expect food and lodging and dismiss their children as “epsilons”, the term mathematicians often use for something infinitesimal. But Erdős would also be armed with a compendium of interesting mathematical problems, and if he had arrived at your door, chances are he thought you could solve one of them. In this particular story, Erdős was asking anyone who cared to listen if the Game of Trees could last forever. At Princeton University, a young mathematician who had just completed his doctorate was keen to take on Erdős’ latest problem. His name was Joseph Kruskal and he was able to prove that the Game of Trees could never last an eternity, but it could go on for a very long time.

So how long can the game actually last? This depends on how many different types of seed you have. If you only have one seed type, the forest cannot have more than one tree. For two types of seed, you have a maximum of three trees. As soon as we add a third type of seed, the game explodes. The maximum number of trees defies all comprehension, leaping towards a true numerical leviathan known as TREE(3).

Games like the Game of Trees are important. They can often be crucial in understanding processes that involve some sort of branching, such as decision algorithms in computer science, or the evolution of viruses and antibodies in epidemiology. And yet, despite these real-world applications, they can also generate a number that is too big for the universe.

TREE(3) really is that big. To see why, imagine you sit down with a friend and decide to play the Game of Trees with three different types of seed.  You know the game can last a while so you play as fast as you can without breaking up the space-time continuum. In other words, you draw a tree every 0.00000000000000000000000000000000000000000005 seconds. That’s equivalent to the Planck time, beyond which the fabric of space and time is overwhelmed by quantum effects.

After a year you will have drawn more than a trillion trillion trillion trillion trees, but you will be nowhere near the end of the game. You play for a lifetime before each of you is replaced by state-of-the-art artificial intelligence that shares your thoughts and personality. The game goes on. The AI mind-clones, powered using solar technology, continue playing long after humanity has destroyed itself through war or climate change or some other madness we haven’t even thought of yet.
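A quick back-of-envelope check of that claim (my arithmetic, not the article's): one tree per Planck time, sustained for a year, gives

```python
# How many trees fit in a year if you draw one per Planck time
# (about 5.4e-44 seconds)?
PLANCK_TIME = 5.4e-44              # seconds, approximate
SECONDS_PER_YEAR = 365.25 * 24 * 3600

trees_per_year = SECONDS_PER_YEAR / PLANCK_TIME
print(f"{trees_per_year:.1e}")     # roughly 5.8e50 trees

# "a trillion trillion trillion trillion" is (1e12)**4 = 1e48,
# so a year of play is indeed well past that.
assert trees_per_year > 1e48
```

And yet, as the article goes on to show, even this pace gets you nowhere near TREE(3).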

After 300 million years, with the world’s continents now merged into one supercontinent and the sun noticeably brighter than before, AI you and your AI friend continue to play at breakneck speed. After 600 million years, the brightening sun has destroyed the Earth’s carbon cycle. Trees and forests can no longer grow, and the oxygen level begins to fall. The sun’s deadly ultraviolet radiation begins to break through Earth’s atmosphere, and by 800 million years, all complex life has been destroyed, except for the two AIs, who continue to play the Game of Trees.

After about 1.5 billion years, with Earth gripped by a runaway greenhouse effect, the Milky Way and Andromeda galaxies collide. The two AIs are too engrossed in their game to notice as the solar system is kicked unceremoniously out of the galaxy as a result of the collision. Billions of years pass as the sun runs out of fuel, turning into a red giant that comes dangerously close to swallowing Earth. Its outer layers drift away and the sun ends its life as a feeble white dwarf, barely bigger than Earth is now. The AIs are now struggling for a reliable source of energy but they continue to play. After a quadrillion years, the sun stops shining altogether. The AIs, starved of energy, have been replaced by an even more advanced technology, drawing energy from the bath of photons left over from the big bang, in the cosmic microwave background radiation. This technology continues to play the Game of Trees. The game is far from over, still some way short of its limit, at TREE(3) moves.

Between around 10 to the 40 years and the googolannum (a googol years), the game continues against the backdrop of a spectacular era of black hole dominance, in which all matter has been guzzled by an army of black holes that march relentlessly across the universe. Beyond the googolannum, those black holes have decayed via a process known as Hawking radiation, leaving behind a cold and empty universe, warmed ever so slightly by a gentle bath of radiated photons. And yet, despite all that has passed, the Game of Trees continues.

Can it reach the limit of TREE(3) moves?

It cannot.

After 10 to the 10 to the 122 years, long before the Game of Trees is complete, the universe undergoes a Poincaré recurrence. It resets itself. This is because our universe is thought to be a finite system that can only exist in a finite number of quantum states. Poincaré recurrence, named after the celebrated French mathematician Henri Poincaré, is a property of any finite system, whether it’s the universe or a pack of playing cards. It says that as you move through the system at random, you will return, inevitably, to where you began. With a pack of cards, you shuffle and shuffle, and then after a long wait you eventually shuffle the pack so that all the cards are lined up just as they were when you first opened them. With our universe, it shuffles and shuffles between its various quantum states, and after around 10 to the 10 to the 122 years, it finds itself back in its primordial state.
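The card-shuffling analogy can be simulated directly. This toy sketch (an illustration, not from the article) uses a tiny three-card "universe" with only six possible states, so recurrence arrives quickly; the real universe's state count is stupendously larger, hence the 10 to the 10 to the 122 years:

```python
import random

# Toy Poincare recurrence: a "universe" with finitely many states
# (the six orderings of a 3-card deck). Random shuffling must
# eventually revisit the starting arrangement.
random.seed(1)  # fixed seed so the run is reproducible

start = (1, 2, 3)
deck = list(start)
steps = 0
while True:
    random.shuffle(deck)
    steps += 1
    if tuple(deck) == start:
        break

print(f"returned to the initial order after {steps} shuffles")
```

With more cards the expected wait grows factorially, which is the essence of why the cosmic version takes so absurdly long.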

The Game of Trees could never finish but it did demonstrate our ability to comprehend the incomprehensible, to go to places with mathematics that the physical world could never achieve. The truth is TREE(3) wasn’t too big for Erdős or Kruskal or any of the other mathematicians who contemplated it, but it was too big for the universe.


*Credit for article given to Antonio Padilla*


How Many Knots Exist? A New Computing Trick Is Untangling The Answer

Finding how many knots there are for a given number of string crossings was thought to be an impossibly complex task. Now, algorithm engineering is cracking it – and showing us how to solve other fiendishly intricate maths problems.

IT USED to be one of the most frustrating parts of any journey on public transport. You squeeze your way past the other bodies, sit down and fish your earphones out of your pocket. You didn’t bother to wind up the wires into a neat loop the last time you used them, and so – sigh – you now need to spend the next 5 minutes untangling this knot. Thank goodness for the invention of wireless earbuds.

Knots aren’t just an everyday annoyance, though. They are also a source of endless inspiration for researchers. Take mathematician Benjamin Burton, who is fascinated by one simple question: how many knots are there? “There is something tantalising about problems that you can describe to a 10-year-old, but that mathematicians have not yet solved,” he says.

Taking a census of knots is one of those problems that ought to be impossible to solve because of its complexity. There are so many ways the strings can be crossed and looped that even the fastest computer could never catalogue them all. Yet Burton has been giving it a shot, and along the way he is showing that, with a few clever computational tricks, many maths problems that seem insurmountable might not be.

Knots and science have been, ahem, entangled for quite a while. In the dying decades of the 19th century, scientists were grappling with how to understand atoms. One hypothesis saw them as little vortices of fluid that became stable when knotted. Lord Kelvin, who went on to become the president of the UK’s Royal Society, was the first to suggest that each chemical element corresponded to a different type of knot.

The idea was abandoned after the discovery of the electron, but at the time it seemed vital to understand knots. Physicist Peter Guthrie Tait was the first to make a stab at creating a comprehensive list of them. Cards on the table: there are an infinite number of possible knots, because you can keep on adding extra knotty flourishes forever. The question mathematicians are interested in is more subtle. A defining feature of a knot is its crossing number, the number of times the strings cross. The question is, for a given number of crossings, how many different knots are possible? In 1885, Tait, working with mathematician Thomas Kirkman, considered all the knots with up to and including 10 crossings. Drawing them all by hand, he tabulated 364 configurations.

Try to go further and things quickly get a lot more difficult. As you allow more crossings, the number of possible knots rapidly increases. The last major extension to the knot tables was published in 1998. Mathematicians Jim Hoste, Morwen Thistlethwaite and Jeff Weeks recorded all the knots up to and including 16 crossings – all 1.7 million of them.

Going beyond this has, until recently, been unfeasible. To see why, we need to know a bit more about how mathematicians think about knots (see “When is a knot not a knot?“). Unlike real-world knots that are often pieces of string with loose ends, mathematical knots are closed. Imagine tying a knot in a piece of spaghetti, then melding the free ends.

Stretch, twist, bend

Mathematicians treat knots according to the rules of topology, a branch of geometry. These say that a knot remains fundamentally the same if you stretch, twist or bend the strands – but making cuts or new joins is a no-no. This leads to the concept of a prime knot, a knot that can’t be mathematically broken down into two or more simpler knots.

The job of tabulating knots, then, boils down to comparing elaborate tangles to see if they are really the same prime knot. Don’t underestimate how tricky this can be. For 75 years, two knots with 10 crossings were listed separately in tables. But in 1973, Kenneth Perko, a New York lawyer who had studied maths, realised that if you take the mirror image of one, you can manipulate it to become equivalent to the other. The knots are known as the “Perko pair”.

When dealing with millions of knots, the job of identifying doppelgangers becomes so time-consuming that even the fastest supercomputers shouldn’t be theoretically capable of it in a reasonable amount of time. Still, in 2019, Burton, who is based at the University of Queensland in Australia, decided the time was ripe to take a punt.

He knew there is a difference between a knot-sorting algorithm in the abstract and how that algorithm is implemented in computer code. The art of bridging this gap is known as algorithm engineering and Burton thought that with the right tricks, the problem wouldn’t be as hard as it seemed. “Part of what motivated me was creating a showpiece to see if the tools were good enough,” he says.

He began by setting a computer the task of dreaming up all the possible ways that strings can be knotted with up to and including 19 crossings. This is a comparatively simple task and the computer spat out 21 billion knot candidates after just a few days.

The job was then to check each knot was prime and distinct. This involves storing a full topological description of the knots in a computer’s working memory or RAM. A data set of 21 billion knots would require more than 1 terabyte of RAM to store, and computers with that amount are rare. To get around this, Burton made use of a software package he has developed called Regina. It can convert knots into “knot signatures”, strings of letters that capture each tangle’s defining topological properties. The signatures can be stored far more economically than a knot description. Plus, knots that are tangled differently but are really equivalent have the same signature, making it easy to weed out duplicates. Burton managed to whittle down the candidate knots to about 7 billion in a day.
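Regina's actual knot signatures aren't reproduced here, but the deduplication idea can be sketched with a stand-in. The trick is to map every presentation of an object to a canonical signature, so equivalent presentations collide in a set. Below, hypothetical "knot codes" that differ only by rotation are canonicalised by taking the lexicographically smallest rotation:

```python
def signature(code):
    # Stand-in for a real knot signature: canonicalise a cyclic code
    # by taking its lexicographically smallest rotation, so two
    # presentations that differ only by rotation get the same key.
    rotations = [code[i:] + code[:i] for i in range(len(code))]
    return min(rotations)

# "abca" and "caab" are rotations of each other, as are "abab"/"baba",
# so five candidates collapse to three distinct signatures.
candidates = ["abca", "caab", "abab", "baba", "abcd"]
unique = {signature(c) for c in candidates}
print(sorted(unique))  # ['aabc', 'abab', 'abcd']
```

A compact string key like this is far cheaper to store and compare than a full topological description, which is what made the billions-scale weeding feasible.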

This method wasn’t powerful enough to identify every last duplicate, however. Burton’s next tactic involved calculating what is known as a knot invariant for each candidate knot. Invariants are mathematical objects that capture the essence of a knot – if two knots have different invariants, they are different knots. Invariants are more powerful than signatures, but they are harder to compute: it would have taken Burton’s computer more than a year to calculate these for all the remaining knots. By sorting them into groups and running them on parallel supercomputers, he got through them in days. The pool was down to 370 million.

Burton also had to grapple with non-hyperbolic knots, an especially challenging category. But after several more rounds of sorting, the supposedly impossible became possible. Burton’s census extended the knot tables to cover everything up to and including 19 crossings. The final tally: 352,152,252 knots.

Ian Agol at the University of California, Berkeley, says he is impressed with the calculations. “It will likely be useful to mathematicians searching for knots with specific properties, or who have a knot that they would like to identify,” he says.

He and other mathematicians think that Burton’s work is also an impressive example of algorithm engineering. “This is a growing trend in pure mathematics, where computational methods are being used in more creative ways,” says Hans Boden at McMaster University in Ontario, Canada.

Lorries and cameras

There are plenty of practical problems where algorithm engineering is used to find efficient solutions to hard problems. Some of the most important are in logistics, where optimised routing can save time and huge sums of money. Others include working out how to cover a space with CCTV cameras. In many cases, a perfect solution is impossible to obtain in a reasonable time frame.

This is also the case when it comes to mapping complex evolutionary histories, particularly for plants and bacteria. Algorithms are used to link DNA data in phylogenetic trees, graphs that group closely related species more closely than distantly related ones. However, sometimes genes don’t pass down from parent to offspring, but rather through what is known as horizontal gene transfer from one species to another. Algorithms designed to map these transfers can be very slow.

One algorithm engineering hack to get around this involves parametrising data inputs to make them smaller. For instance, if you fix the number of potential mutations you are considering, the problem can be solved efficiently. Recently, Edwin Jacox, now at the University of California, Santa Cruz, and his colleagues applied this method to cyanobacteria phylogenetic trees. Cyanobacteria played a major role in Earth’s changing biosphere more than 2 billion years ago. The researchers developed a parametrised algorithm that rebuilds the cyanobacteria phylogenetic trees in just 15 seconds.

Whether it is reconstructing evolutionary trees or listing knots, it is clear that the toughest computing problems can be made tractable. “With algorithm engineering and heuristics, things that are slow in practice turn out to be remarkably, surprisingly fast,” says Burton. It means that even the trickiest problems don’t have to leave us in an inescapable tangle.

When is a knot not a knot?

Here is a problem that needs unpicking: if you pull at a messy tangle of wires, how do you know if it will just get more tangled or come apart into a simple loop? This is a problem mathematicians call “unknot recognition” and it is one we might soon solve. Unknot recognition fascinates mathematicians and computer scientists alike because it is part of the famous “P vs NP” question.

To see how it works, take the classic travelling salesperson problem, where we must work out the shortest route for a salesperson to visit a number of cities. If a problem can be solved by an algorithm whose running time doesn’t blow up exponentially as more cities are included, it is said to be “P”. This means it is solvable in reasonable – or polynomial, in maths speak – amounts of time. Once you have an answer, you need to check it is correct. If a proposed answer is easy to check, the problem is classed as “NP”. The big question for mathematicians is whether all NP problems are also P. This is one of the Millennium Prize Problems – answer it conclusively and you will win $1 million.
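The "easy to check" half of that definition can be made concrete. This sketch (with a made-up four-city distance matrix) verifies a proposed salesperson tour in a single quick pass, even though finding the best tour is hard:

```python
# NP-style verification for the travelling salesperson problem:
# checking that a proposed tour visits every city exactly once and
# stays within a length budget takes only polynomial time.
def verify_tour(distances, tour, budget):
    n = len(distances)
    if sorted(tour) != list(range(n)):   # every city exactly once?
        return False
    length = sum(distances[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return length <= budget

# Hypothetical symmetric distances between four cities.
d = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(verify_tour(d, [0, 1, 3, 2], 25))  # True: 2 + 4 + 3 + 9 = 18
print(verify_tour(d, [0, 2, 1, 3], 25))  # False: 9 + 6 + 4 + 10 = 29
```

Checking is cheap; what nobody knows is whether *finding* a good tour is also secretly cheap, and that gap is the P vs NP question.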

Unknot recognition dwells in the twilight zone of P vs NP. We already know that this problem is definitely “NP”, or easy to check. If your algorithm can produce an unknotted loop, you can immediately see it has worked. Mathematicians also think it is likely to be P, and we are tantalisingly close to proving it.

In 2014, Benjamin Burton at the University of Queensland in Australia developed an unknotting algorithm that, in practice, solves in polynomial time. His algorithm has held up “for every knot we’ve thrown at it so far”, he says. More recently, Marc Lackenby at the University of Oxford developed an unknot recognition algorithm that is “quasi-P” (which, in maths parlance, means “almost there”). It is unlikely to be converted into executable computer code because it is so complicated, but Lackenby is confident that a simplified version “is going to be genuinely practical”.

Showing that unknot recognition is both P and NP won’t solve the wider Millennium Prize Problem, though it could give us useful pointers. Still, it is an important milestone and mathematicians will be celebrating once we get there.


*Credit for article given to Larissa Fedunik*