Technique could efficiently solve partial differential equations for numerous applications

In fields such as physics and engineering, partial differential equations (PDEs) are used to model complex physical processes to generate insight into how some of the most complicated physical and natural systems in the world function.

To solve these difficult equations, researchers use high-fidelity numerical solvers, which can be very time-consuming and computationally expensive to run. The current simplified alternative is to use data-driven surrogate models, which compute the target property of a PDE's solution rather than the whole solution. These models are trained on data generated by the high-fidelity solver and learn to predict the output of the PDEs for new inputs. This approach is data-intensive and expensive, because complex physical systems require a large number of simulations to generate enough training data.

In a new paper, “Physics-enhanced deep surrogates for partial differential equations,” published in December in Nature Machine Intelligence, a new method is proposed for developing data-driven surrogate models for complex physical systems in such fields as mechanics, optics, thermal transport, fluid dynamics, physical chemistry, and climate models.

The paper was authored by MIT’s professor of applied mathematics Steven G. Johnson along with Payel Das and Youssef Mroueh of the MIT-IBM Watson AI Lab and IBM Research; Chris Rackauckas of Julia Lab; and Raphaël Pestourie, a former MIT postdoc who is now at Georgia Tech. The authors call their method “physics-enhanced deep surrogate” (PEDS), which combines a low-fidelity, explainable physics simulator with a neural network generator. The neural network generator is trained end-to-end to match the output of the high-fidelity numerical solver.
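The general structure is easy to sketch in code. The snippet below is only an illustration of the idea, not the authors' implementation (which is written in Julia): a small neural-network generator proposes a coarse input field, a cheap differentiable "low-fidelity solver" maps it to the quantity of interest, and the whole pipeline is trained end-to-end against data from a high-fidelity solver. The network sizes, the toy smoothing "solver" and the synthetic training data are all assumptions made for this sketch.

```python
import torch
import torch.nn as nn

class LowFidelitySolver(nn.Module):
    """Toy differentiable 'physics': a fixed diffusion-like smoothing of the
    coarse field proposed by the generator, reduced to one output quantity."""
    def forward(self, field):                       # field: (batch, n_coarse)
        left = torch.roll(field, 1, dims=1)
        right = torch.roll(field, -1, dims=1)
        smoothed = (left + 2 * field + right) / 4   # crude smoothing step
        return smoothed.mean(dim=1)                 # one scalar property per sample

class PEDSSurrogate(nn.Module):
    def __init__(self, n_params=4, n_coarse=16):
        super().__init__()
        self.generator = nn.Sequential(             # design parameters -> coarse field
            nn.Linear(n_params, 64), nn.ReLU(),
            nn.Linear(64, n_coarse))
        self.solver = LowFidelitySolver()

    def forward(self, params):
        return self.solver(self.generator(params))

# Training sketch: (params, target) pairs would come from a high-fidelity solver;
# here they are synthetic stand-ins.
model = PEDSSurrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
params = torch.rand(1000, 4)
targets = params.sum(dim=1) / 4
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(params), targets)
    loss.backward()                                 # gradients flow through the solver
    opt.step()
```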

“My aspiration is to replace the inefficient process of trial and error with systematic, computer-aided simulation and optimization,” says Pestourie. “Recent breakthroughs in AI like the large language model of ChatGPT rely on hundreds of billions of parameters and require vast amounts of resources to train and evaluate. In contrast, PEDS is affordable to all because it is incredibly efficient in computing resources and has a very low barrier in terms of infrastructure needed to use it.”

In the article, they show that PEDS surrogates can be up to three times more accurate than an ensemble of feedforward neural networks with limited data (approximately 1,000 training points), and reduce the training data needed by at least a factor of 100 to achieve a target error of 5%. Developed using the MIT-designed Julia programming language, this scientific machine-learning method is thus efficient in both computing and data.

The authors also report that PEDS provides a general, data-driven strategy to bridge the gap between a vast array of simplified physical models and the corresponding brute-force numerical solvers that model complex systems. This technique offers accuracy, speed, data efficiency, and physical insights into the process.

Says Pestourie, “Since the 2000s, as computing capabilities improved, the trend of scientific models has been to increase the number of parameters to fit the data better, sometimes at the cost of a lower predictive accuracy. PEDS does the opposite by choosing its parameters smartly. It leverages the technology of automatic differentiation to train a neural network that makes a model with few parameters accurate.”

“The main challenge that prevents surrogate models from being used more widely in engineering is the curse of dimensionality—the fact that the needed data to train a model increases exponentially with the number of model variables,” says Pestourie. “PEDS reduces this curse by incorporating information from the data and from the field knowledge in the form of a low-fidelity model solver.”

The researchers say that PEDS has the potential to revive a whole body of the pre-2000 literature dedicated to minimal models—intuitive models that PEDS could make more accurate while also being predictive for surrogate model applications.

“The application of the PEDS framework is beyond what we showed in this study,” says Das. “Complex physical systems governed by PDEs are ubiquitous, from climate modeling to seismic modeling and beyond. Our physics-inspired fast and explainable surrogate models will be of great use in those applications, and play a complementary role to other emerging techniques, like foundation models.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Sandi Miller, Massachusetts Institute of Technology

 


Global symmetry found to be not completely necessary for the protection of topological boundary states

An international team led by researchers at Nankai University in China and the University of Zagreb in Croatia, together with a team at the Institut national de la recherche scientifique (INRS) in Canada led by Roberto Morandotti, has made an important breakthrough in the study of topological phases. Their findings were recently published in Nature Physics.

Over the last decade, topological photonics has attracted increasing attention thanks to its unique prospects for achieving robust and stable manipulation of light.

Discoveries in topological photonics have opened the way to the development of a new generation of photonic devices, such as topological lasers and cavities, featuring topologically protected states that are immune to disorder and defects. The concept of topology in physics is inherited from mathematics, where topology is used to study the geometric properties of an object that are preserved under continuous deformation.

Two objects are topologically identical when the surface of one can be continuously deformed into that of the other and vice versa; for example, a coffee cup and a torus are equivalent from a topological viewpoint. In physics, the concept of topology is employed to describe the characteristics of energy bands, leading to the prediction of novel topological states of matter and of various topological materials.

Different topological phases (trivial and nontrivial) are distinguished by appropriately introduced quantized topological invariants, which establish a link between the bulk properties of these materials and the emergence of features at their boundaries, a relation known as the bulk-boundary correspondence. In this regard, the most distinctive feature of a nontrivial topology is the existence of robust topological boundary states protected by specific spatial and/or intrinsic symmetries.

In general, in systems with a symmetry-protected topological (SPT) phase, it is believed that the close relationship between topological boundary states, topological invariants, and one or more overall symmetries is indispensable for maintaining topological protection against perturbations.

As a consequence, both topological invariants and topological boundary states are irretrievably affected by any distortion that breaks the underlying symmetry. In this work, the international research team has challenged this traditional belief, thereby broadening the understanding of SPT boundary states. They found that even if the system no longer has quantized topological invariants or certain kinds of global symmetry, the topological boundary states can still exist in the corresponding subspaces, protected by so-called sub-symmetries.

“Our discovery challenges the common thinking of the symmetry-protected topological phase in topology and renews the correspondence of topological invariant and boundary states,” said Domenico Bongiovanni, one of the main investigators and a postdoctoral researcher at INRS-EMT. “Our idea has the potential to explain the topological origin of many unconventional states and can find application in different platforms and physical systems.”

The researchers, by introducing and exploring the concept of sub-symmetry, found that global symmetry in the traditional sense is not completely necessary for the protection of topological boundary states. In this regard, topological boundary states are preserved as long as the symmetries of specific subspaces are satisfied, even when the overall topological invariants no longer exist.

The research team cleverly designed and fabricated photonic lattice structures using a continuous-wave laser writing technique to meet the conditions of different subspace symmetries. The experiments demonstrated a proof of concept with two of the most typical topological lattices: the one-dimensional Su-Schrieffer-Heeger (SSH) lattice and the two-dimensional kagome lattice.
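For readers who want to see what a topological boundary state looks like in the simplest setting, the sketch below diagonalises a finite SSH tight-binding chain, the textbook model behind the one-dimensional lattice used in the experiments. It shows only the standard zero-energy edge modes, not the paper's sub-symmetry analysis, and the hopping values are illustrative.

```python
import numpy as np

def ssh_hamiltonian(n_cells, t1=0.5, t2=1.0):
    """Finite SSH chain with alternating hoppings t1 (intra-cell) and t2 (inter-cell)."""
    n = 2 * n_cells
    H = np.zeros((n, n))
    for i in range(n - 1):
        t = t1 if i % 2 == 0 else t2
        H[i, i + 1] = H[i + 1, i] = t
    return H

H = ssh_hamiltonian(n_cells=20)              # t1 < t2: the topological phase
energies, states = np.linalg.eigh(H)
# Two near-zero eigenvalues signal boundary states localised at the chain ends.
zero_modes = np.argsort(np.abs(energies))[:2]
print("near-zero energies:", energies[zero_modes])
print("amplitude of one such mode on the first few sites:", states[:4, zero_modes[0]])
```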

In addition, the team innovatively introduced the concept of long-range coupling symmetry into the Kagome lattice model, which resolves the current controversies about the existence and topological protection of higher-order topological states in the Kagome lattice.

This study not only challenges the traditional understanding of symmetry-protected topological states but also provides new ideas for the research and application of topological states in different physical settings. The impact of this work is expected to further promote the development of topological photonics and its cutting-edge interdisciplinary fields, as well as the research and development of a new generation of topological photonic devices based on sub-symmetry-protected boundary states.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Institut national de la recherche scientifique – INRS


Super Models – Using Maths to Mitigate Natural Disasters

We can’t tame the oceans, but modelling can help us better understand them.

Last year will go on record as one of significant natural disasters both in Australia and overseas. Indeed, the flooding of the Brisbane River in January is still making news as the Queensland floods inquiry investigates whether water released from Wivenhoe Dam was responsible. Water modelling is being used to answer the question: could modelling have avoided the problem in the first place?

This natural disaster – as well as the Japanese tsunami in March and the flooding in Bangkok in October – involved the movement of fluids: water, mud or both. And all had a human cost – displaced persons, the spread of disease, disrupted transport, disrupted businesses, broken infrastructure and damaged or destroyed homes. With the planet now housing 7 billion people, the potential for adverse humanitarian effects from natural disasters is greater than ever.

Here in CSIRO’s division of Mathematical and Information Sciences, we’ve been working with various government agencies (in Australia and China) to model the flow of flood waters and the debris they carry. Governments are starting to realise just how powerful computational modelling is for understanding and analysing natural disasters and how to plan for them.

This power is based on two things – the power of computers and the power of the algorithms (computer processing steps) that run on the computers.

In recent years, the huge increase in computer power and speed coupled with advances in algorithm development has allowed mathematical modellers like us to make large strides in our research.

These advances have enabled us to model millions, even billions of water particles, allowing us to more accurately predict the effects of natural and man-made fluid flows, such as tsunamis, dam breaks, floods, mudslides, coastal inundation and storm surges.

So how does it work?

Well, fluids such as sea water can be represented as billions of particles moving around, filling spaces, flowing downwards, interacting with objects and in turn being interacted upon. Or they can be visualised as a mesh of the fluids’ shape.

Let’s consider a tsunami such as the one that struck the Japanese coast in March of last year. When a tsunami first emerges as a result of an earthquake, shallow water modelling techniques give us the most accurate view of the wave’s formation and early movement.

Mesh modelling of water being poured into a glass.

Once the wave is closer to the coast however, techniques known collectively as smoothed particle hydrodynamics (SPH) are better at predicting how the wave interacts with local geography. We’ve created models of a hypothetical tsunami off the northern Californian coastline to test this.

A dam break can also be modelled using SPH. The modelling shows how fast the water moves at certain times and in certain places, where water “overtops” hills and how quickly it reaches towns or infrastructure such as power stations.
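The core SPH idea is easy to sketch: the fluid is a cloud of particles, and field quantities such as density are estimated by summing a smoothing kernel over nearby particles. The toy one-dimensional example below illustrates that density estimate for a column of "dam" particles. It is not CSIRO's modelling code, and the kernel, smoothing length and particle layout are chosen purely for illustration.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1D cubic spline smoothing kernel with smoothing length h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)                       # 1D normalisation
    return sigma * np.where(q < 1.0, 1 - 1.5 * q**2 + 0.75 * q**3,
                   np.where(q < 2.0, 0.25 * (2 - q)**3, 0.0))

# A "dam" of particles occupying the left half of a 1 m domain.
n = 200
x = np.linspace(0.0, 0.5, n)                      # particle positions (m)
mass = 1000.0 * 0.5 / n                           # per-particle mass so interior density ~1000 kg/m^3
h = 2.0 * (x[1] - x[0])                           # smoothing length ~ two particle spacings

# Density at each particle: rho_i = sum_j m_j * W(x_i - x_j, h)
rho = np.array([np.sum(mass * cubic_spline_kernel(xi - x, h)) for xi in x])
print("density near the middle of the dam:", rho[n // 2])   # close to 1000 kg/m^3
```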

This can help town planners to build mitigating structures and emergency services to co-ordinate an efficient response. Our models have been validated using historical data from a real dam that broke in California in 1928 – the St. Francis Dam.

Having established that our modelling techniques work better than others, we can apply them to a range of what-if situations.

In collaboration with the Satellite Surveying and Mapping Application Centre in China we tested scenarios such as the hypothetical collapse of the massive Geheyan Dam in China.

We combined our modelling techniques with digital terrain models to get a realistic picture of how such a disaster would unfold and, therefore, what actions could mitigate it.

Our experience in developing and using these techniques over several decades allows us to combine them in unique ways for each situation.

We’ve modelled fluids not just for natural disaster planning but also movie special effects, hot metal production, water sports and even something as everyday as insurance.

Insurance companies have been looking to us for help to understand how natural disasters unfold. They cop a lot of media flak after disasters for not covering people affected. People living in low-lying areas have traditionally had difficulty accessing flood insurance and find themselves unprotected in flood situations.

Insurers are starting to realise that the modelling of geophysical flows can provide a basis for predicting localised risk of damage due to flooding and make flood coverage a viable business proposition. One Australian insurance company has been working with us to quantify risk of inundation in particular areas.

Using data from the 1974 Brisbane floods, the floods of last year and fluid modelling data, an insurance company can reliably assess residents’ exposure to particular risks and thereby determine suitable premiums.

With evidence-based tools such as fluid modelling in their arsenal, decision-makers are better prepared for the future. That may be a future of more frequent natural disasters, a future with a more-densely-populated planet, or, more likely, a combination of both.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Mahesh Prakash*


Rubik’s Cube Solution Unlocked By Memorising 3915 Final Move Sequences

For the first time, a speedcuber has demonstrated a solution to the Rubik’s cube that combines the two final steps of the puzzle’s solution into one.

A Rubik’s cube solver has become the first person to show proof of successfully combining the final two steps of solving the mechanical puzzle into one move. The feat required the memorisation of thousands of possible sequences for the final step.

Most skilled speedcubers – people who compete to solve Rubik’s cubes with the most speed and efficiency – choose to solve the final layer of the cube with two separate moves that involve 57 possible sequences for the penultimate step and 21 possible sequences for the final move.

Combining those two separate actions into a single move requires a person to memorise 3915 possible sequences. These sequences were previously known to be possible, but nobody is reported to have successfully achieved this so-called “Full 1 Look Last Layer” (Full 1LLL) move until a speedcuber going by the online username “edmarter” shared a YouTube video demonstrating that accomplishment.

Edmarter says he decided to take up the challenge after seeing notable speedcubers try and fail. Over the course of about a year, he spent 10 hours each weekend and any free time during the week practising and memorising the necessary sequences, he told New Scientist. That often involved memorising 144 movement sequences in a single day.

All that effort paid off on 4 August 2022 when edmarter uploaded a video demonstrating the Full 1LLL over the course of 100 separate puzzle solves. He also posted his accomplishment to Reddit’s r/Cubers community.

His average solve time for each Rubik’s cube over the course of that video demonstration run was 14.79 seconds. He says he had an average solve time as low as 12.50 seconds during two practice runs before recording the video.

The Rubik’s cube community has reacted with overwhelming enthusiasm and awe. The top-voted comment on his Reddit post detailing the achievement simply reads: “This is absolutely insane.”

But he is not resting on his laurels. Next up, he plans to try practising some other methods for finishing the Rubik’s cube that have only previously been mastered by a handful of people.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jeremy Hsu*


Mathematicians model a puzzling breakdown in cooperative behaviour

A model developed by evolutionary mathematicians in Canada and Europe shows that as cooperation becomes easier, it can unexpectedly break down.

Darwin was puzzled by cooperation in nature—it ran directly against natural selection and the notion of survival of the fittest. But over the past decades, evolutionary mathematicians have used game theory to better understand why mutual cooperation persists when evolution should favour self-serving cheaters.

At a basic level, cooperation flourishes when the costs of cooperation are low or the benefits large. When cooperation becomes too costly, it disappears—at least in the realm of pure mathematics. Symbiotic relationships between species, like those between pollinators and plants, are more complex but follow similar patterns.

But new modeling published today in PNAS Nexus adds a wrinkle to that theory, indicating that cooperative behaviour between species may break down in situations where, theoretically at least, it should flourish.

“As we began to improve the conditions for cooperation in our model, the frequency of mutually beneficial behaviour in both species increases, as expected,” says Dr. Christoph Hauert, a mathematician at the University of British Columbia who studies evolutionary dynamics.

“But as the frequency of cooperation in our simulation gets higher—closer to 50%—suddenly there’s a split. More cooperators pool in one species and fewer in the other—and this asymmetry continues to get stronger as the conditions for cooperation get more benign.”

While this “symmetry breaking of cooperation” between two populations has been modeled by mathematicians before, this is the first model that enables individuals in each group to interact and join forces in a more natural way.

Dr. Hauert and colleague Dr. György Szabó from the Hungarian Research Network used computational spatial models to arrange individuals from the two species on separate lattices facing one another. This enables cooperators to form clusters and reduce their exposure to (and exploitation by) cheaters by more frequently interacting with other cooperators.
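A minimal sketch of this kind of two-lattice model is given below: each site plays a donation game with the opposing site and its neighbours on the other lattice, while strategies spread by imitating neighbours on the same lattice. The update rule (a Fermi imitation rule), the payoff values and the lattice size are assumptions made for illustration, not the exact model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, b, c, K = 30, 1.0, 0.3, 0.1            # lattice size, benefit, cost, selection noise
A = rng.integers(0, 2, (L, L))            # species A: 1 = cooperate, 0 = defect
B = rng.integers(0, 2, (L, L))            # species B, on the facing lattice
directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def site_payoff(lat_self, lat_other, i, j):
    """Donation game against the opposing site and its four neighbours:
    gain b per cooperating partner, pay c per partner if you cooperate."""
    partners = [(i, j), ((i + 1) % L, j), ((i - 1) % L, j),
                (i, (j + 1) % L), (i, (j - 1) % L)]
    coop = sum(lat_other[x, y] for x, y in partners)
    return b * coop - c * lat_self[i, j] * len(partners)

for step in range(50_000):                # asynchronous imitation updates
    lat, other = (A, B) if rng.random() < 0.5 else (B, A)
    i, j = rng.integers(L), rng.integers(L)
    di, dj = directions[rng.integers(4)]
    ni, nj = (i + di) % L, (j + dj) % L   # a neighbour on the same lattice
    gain = site_payoff(lat, other, ni, nj) - site_payoff(lat, other, i, j)
    if rng.random() < 1.0 / (1.0 + np.exp(-gain / K)):   # Fermi imitation rule
        lat[i, j] = lat[ni, nj]

print("cooperation in A:", A.mean(), " in B:", B.mean())
```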

“Because we chose symmetric interactions, the level of cooperation is the same in both populations,” says Dr. Hauert. “Clusters can still form and protect cooperators but now they need to be synchronized across lattices because that’s where the interactions occur.”

“The odd symmetry breaking in cooperation shows parallels to phase transitions in magnetic materials and highlights the success of approaches developed in statistical and solid state physics,” says Dr. Szabó.

“At the same time the model sheds light on spikes in dramatic changes in behaviour that can significantly affect the interactions in complex living systems.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of British Columbia

 


How Many Knots Exist? A New Computing Trick Is Untangling The Answer

Finding how many knots there are for a given number of string crossings was thought to be an impossibly complex task. Now, algorithm engineering is cracking it – and showing us how to solve other fiendishly intricate maths problems.

IT USED to be one of the most frustrating parts of any journey on public transport. You squeeze your way past the other bodies, sit down and fish your earphones out of your pocket. You didn’t bother to wind up the wires into a neat loop the last time you used them, and so – sigh – you now need to spend the next 5 minutes untangling this knot. Thank goodness for the invention of wireless earbuds.

Knots aren’t just an everyday annoyance, though. They are also a source of endless inspiration for researchers. Take mathematician Benjamin Burton, who is fascinated by one simple question: how many knots are there? “There is something tantalising about problems that you can describe to a 10-year-old, but that mathematicians have not yet solved,” he says.

Taking a census of knots is one of those problems that ought to be impossible to solve because of its complexity. There are so many ways the strings can be crossed and looped that even the fastest computer could never catalogue them all. Yet Burton has been giving it a shot, and along the way he is showing that, with a few clever computational tricks, many maths problems that seem insurmountable might not be.

Knots and science have been, ahem, entangled for quite a while. In the dying decades of the 19th century, scientists were grappling with how to understand atoms. One hypothesis saw them as little vortices of fluid that became stable when knotted. Lord Kelvin, who went on to become the president of the UK’s Royal Society, was the first to suggest that each chemical element corresponded to a different type of knot.

The idea was abandoned after the discovery of the electron, but at the time it seemed vital to understand knots. Physicist Peter Guthrie Tait was the first to make a stab at creating a comprehensive list of them. Cards on the table: there are an infinite number of possible knots, because you can keep on adding extra knotty flourishes forever. The question mathematicians are interested in is more subtle. A defining feature of a knot is its crossing number, the number of times the strings cross. The question is, for a given number of crossings, how many different knots are possible? In 1885, Tait, working with mathematician Thomas Kirkman, considered all the knots with up to and including 10 crossings. Drawing them all by hand, he tabulated 364 configurations.

Try to go further and things quickly get a lot more difficult. As you allow more crossings, the number of possible knots rapidly increases. The last major extension to the knot tables was published in 1998. Mathematicians Jim Hoste, Morwen Thistlethwaite and Jeff Weeks recorded all the knots up to and including 16 crossings – all 1.7 million of them.

Going beyond this has, until recently, been unfeasible. To see why, we need to know a bit more about how mathematicians think about knots (see “When is a knot not a knot?“). Unlike real-world knots that are often pieces of string with loose ends, mathematical knots are closed. Imagine tying a knot in a piece of spaghetti, then melding the free ends.

Stretch, twist, bend

Mathematicians treat knots according to the rules of topology, a branch of geometry. These say that a knot remains fundamentally the same if you stretch, twist or bend the strands – but making cuts or new joins is a no-no. This leads to the concept of a prime knot, a knot that can’t be mathematically broken down into two or more simpler knots.

The job of tabulating knots, then, boils down to comparing elaborate tangles to see if they are really the same prime knot. Don’t underestimate how tricky this can be. For 75 years, two knots with 10 crossings were listed separately in tables. But in 1973, Kenneth Perko, a New York lawyer who had studied maths, realised that if you take the mirror image of one, you can manipulate it to become equivalent to the other. The knots are known as the “Perko pair”.

When dealing with millions of knots, the job of identifying doppelgangers becomes so time-consuming that even the fastest supercomputers shouldn’t be theoretically capable of it in a reasonable amount of time. Still, in 2019, Burton, who is based at the University of Queensland in Australia, decided the time was ripe to take a punt.

He knew there is a difference between a knot-sorting algorithm in the abstract and how that algorithm is implemented in computer code. The art of bridging this gap is known as algorithm engineering and Burton thought that with the right tricks, the problem wouldn’t be as hard as it seemed. “Part of what motivated me was creating a showpiece to see if the tools were good enough,” he says.

He began by setting a computer the task of dreaming up all the possible ways that strings can be knotted with up to and including 19 crossings. This is a comparatively simple task and the computer spat out 21 billion knot candidates after just a few days.

The job was then to check each knot was prime and distinct. This involves storing a full topological description of the knots in a computer’s working memory or RAM. A data set of 21 billion knots would require more than 1 terabyte of RAM to store, and computers with that amount are rare. To get around this, Burton made use of a software package he has developed called Regina. It can convert knots into “knot signatures”, strings of letters that capture each tangle’s defining topological properties. The signatures can be stored far more economically than a knot description. Plus, knots that are tangled differently but are really equivalent have the same signature, making it easy to weed out duplicates. Burton managed to whittle down the candidate knots to about 7 billion in a day.
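The deduplication step boils down to a familiar programming pattern: map every candidate to a compact canonical signature and keep one representative per signature. The sketch below shows that pattern only; the `knot_signature` argument is a hypothetical stand-in for what Regina actually computes.

```python
def deduplicate(candidates, knot_signature):
    """Keep one representative knot per canonical signature."""
    representatives = {}
    for knot in candidates:
        sig = knot_signature(knot)        # equal tangles map to equal signatures
        if sig not in representatives:    # first time this signature is seen
            representatives[sig] = knot   # keep one representative, drop the rest
    return list(representatives.values())

# Usage sketch: `candidates` would be the billions of machine-generated tangles
# (streamed from disk in practice). Because signatures are short strings, the
# set of seen signatures fits in memory far more easily than full knot data.
```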

This method wasn’t powerful enough to identify every last duplicate, however. Burton’s next tactic involved calculating what is known as a knot invariant for each candidate knot. Invariants are mathematical objects that capture the essence of a knot – if two knots have different invariants, they are different knots. Invariants are more powerful than signatures, but they are harder to compute: it would have taken Burton’s computer more than a year to calculate these for all the remaining knots. By sorting them into groups and running them on parallel supercomputers, he got through them in days. The pool was down to 370 million.

Burton also had to grapple with non-hyperbolic knots, an especially challenging category. But after several more rounds of sorting, the supposedly impossible became possible. Burton’s census extended the knot tables to cover everything up to and including 19 crossings. The final tally: 352,152,252 knots.

Ian Agol at the University of California, Berkeley, says he is impressed with the calculations. “It will likely be useful to mathematicians searching for knots with specific properties, or who have a knot that they would like to identify,” he says.

He and other mathematicians think that Burton’s work is also an impressive example of algorithm engineering. “This is a growing trend in pure mathematics, where computational methods are being used in more creative ways,” says Hans Boden at McMaster University in Ontario, Canada.

Lorries and cameras

There are plenty of practical problems where algorithm engineering is used to find efficient solutions to hard problems. Some of the most important are in logistics, where optimised routing can save time and huge sums of money. Others include working out how to cover a space with CCTV cameras. In many cases, a perfect solution is impossible to obtain in a reasonable time frame.

This is also the case when it comes to mapping complex evolutionary histories, particularly for plants and bacteria. Algorithms are used to link DNA data in phylogenetic trees, graphs that group closely related species more closely than distantly related ones. However, sometimes genes don’t pass down from parent to offspring, but rather through what is known as horizontal gene transfer from one species to another. Algorithms designed to map these transfers can be very slow.

One algorithm engineering hack to get around this involves parametrising data inputs to make them smaller. For instance, if you fix the number of potential mutations you are considering, the problem can be solved efficiently. Recently, Edwin Jacox, now at the University of California, Santa Cruz, and his colleagues applied this method to cyanobacteria phylogenetic trees. Cyanobacteria played a major role in Earth’s changing biosphere more than 2 billion years ago. The researchers developed a parametrised algorithm that rebuilds the cyanobacteria phylogenetic trees in just 15 seconds.

Whether it is reconstructing evolutionary trees or listing knots, it is clear that the toughest computing problems can be made tractable. “With algorithm engineering and heuristics, things that are slow in practice turn out to be remarkably, surprisingly fast,” says Burton. It means that even the trickiest problems don’t have to leave us in an inescapable tangle.

When is a knot not a knot?

Here is a problem that needs unpicking: if you pull at a messy tangle of wires, how do you know if it will just get more tangled or come apart into a simple loop? This is a problem mathematicians call “unknot recognition” and it is one we might soon solve. Unknot recognition fascinates mathematicians and computer scientists alike because it is part of the famous “P vs NP” question.

To see how it works, take the classic travelling salesperson problem, where we must work out the shortest route for a salesperson to visit a number of cities. If an algorithm designed to solve this problem doesn’t get exponentially harder as more cities are included, it is said to be “P”. This means it is solvable in reasonable – or polynomial, in maths speak – amounts of time. Once you have an answer, you need to check it is correct. If it is easy to check your answer, the problem is classed as “NP”. The big question for mathematicians is whether all NP problems are also P. This is one of the Millennium Prize Problems – answer it conclusively and you will win $1 million.

Unknot recognition dwells in the twilight zone of P vs NP. We already know that this problem is definitely “NP”, or easy to check. If your algorithm can produce an unknotted loop, you can immediately see it has worked. Mathematicians also think it is likely to be P, and we are tantalisingly close to proving it.

In 2014, Benjamin Burton at the University of Queensland in Australia developed an unknotting algorithm that, in practice, solves in polynomial time. His algorithm has held up “for every knot we’ve thrown at it so far”, he says. More recently, Marc Lackenby at the University of Oxford developed an unknot recognition algorithm that is “quasi-P” (which, in maths parlance, means “almost there”). It is unlikely to be converted into executable computer code because it is so complicated, but Lackenby is confident that a simplified version “is going to be genuinely practical”.

Showing that unknot recognition is both P and NP won’t solve the wider Millennium Prize Problem, though it could give us useful pointers. Still, it is an important milestone and mathematicians will be celebrating once we get there.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Larissa Fedunik*


Pascal’s Triangle and Polygonal opsgf’s

The diagram above illustrates a surprising correspondence – if you take the reciprocal of a particular polynomial whose coefficients match the d+1 row of Pascal’s Triangle, you obtain a formal power series whose coefficients are precisely the elements in the dth column of the triangle.

The case for the third row is shown above (row and column numbering starts at 0 in Pascal’s Triangle), and the case for the fourth row is shown in the diagram below. To actually work out the power series that comes from the reciprocals (well, at least the first few coefficients), the grid method of dividing polynomials works well (I think).
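If you would rather let a computer do the division, the short sketch below expands the reciprocal of the row polynomial as a power series (by repeatedly cancelling the lowest-order term, a crude stand-in for the grid method) and compares the coefficients with a column of the triangle for d = 2.

```python
from math import comb

def reciprocal_series(poly, n_terms):
    """Coefficients of 1/poly as a power series; poly is [a0, a1, ...] with a0 != 0."""
    coeffs = []
    remainder = [1.0] + [0.0] * n_terms              # numerator is 1
    for k in range(n_terms):
        c = remainder[k] / poly[0]                   # cancel the lowest-order term
        coeffs.append(c)
        for i, a in enumerate(poly):                 # subtract c * x^k * poly
            if k + i <= n_terms:
                remainder[k + i] -= c * a
    return coeffs

d = 2
row_poly = [comb(d + 1, i) * (-1) ** i for i in range(d + 2)]   # (1 - x)^(d+1)
print(reciprocal_series(row_poly, 6))                # -> 1, 3, 6, 10, 15, 21
print([comb(n + d, d) for n in range(6)])            # column d of Pascal's Triangle
```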

In this way, the rows of the triangle provide all the information required to obtain the columns – a nice correspondence between a finite sequence and an infinite one.

You’ll see this correspondence if you spend time investigating the “ordinary power series generating functions” (opsgf’s) for the higher dimensional triangular numbers. The row polynomials are just (1-x)^(d+1), while the columns correspond to formal power series whose coefficients are the d-dimensional triangular numbers. A great place to learn about these opsgf’s is H. Wilf’s book generatingfunctionology.

You can uncover the general opsgf for higher dimensional triangular numbers by starting with the rational function (1-x)^(-1) and applying Wilf’s ‘rule 5’ from chapter 2 of generatingfunctionology.

Applying a few other rules, you can obtain an opsgf for the k-polygonal numbers (k = 4 gives squares, k = 5 pentagonals, k = 6 hexagonals, etc.).
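The opsgf you end up with is the standard closed form x(1 + (k-3)x)/(1-x)^3. The quick check below expands it for k = 5 and compares the coefficients with the pentagonal numbers; the expansion code is a sketch rather than a derivation via Wilf’s rules.

```python
from math import comb

def polygonal_gf_series(k, n_terms):
    """Coefficients of x*(1 + (k-3)x) / (1-x)^3, using 1/(1-x)^3 = sum C(n+2,2) x^n."""
    inv_cubed = [comb(n + 2, 2) for n in range(n_terms)]
    out = [0] * n_terms
    for n in range(1, n_terms):
        out[n] = inv_cubed[n - 1] + (k - 3) * (inv_cubed[n - 2] if n >= 2 else 0)
    return out

print(polygonal_gf_series(5, 7))                     # -> 0, 1, 5, 12, 22, 35, 51
print([(3 * n * n - n) // 2 for n in range(7)])      # pentagonal numbers, for comparison
```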

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*


Study Finds Cooperation Can Still Evolve Even With Limited Payoff Memory

Direct reciprocity facilitates cooperation in repeated social interactions. Traditional models suggest that individuals learn to adopt conditionally cooperative strategies if they have multiple encounters with their partner. However, most existing models make rather strong assumptions about how individuals decide to keep or change their strategies. They assume individuals make these decisions based on a strategy’s average performance. This in turn suggests that individuals would remember their exact payoffs against everyone else.

In a recent study, researchers from the Max Planck Institute for Evolutionary Biology, the School of Data Science and Society, and the Department of Mathematics at the University of North Carolina at Chapel Hill examine the effects of realistic memory constraints. They find that cooperation can evolve even with minimal memory capacities. The research is published in the journal Proceedings of the Royal Society B: Biological Sciences.

Direct reciprocity is based on repeated interactions between two individuals. This concept, often described as “you scratch my back, I’ll scratch yours,” has proven to be a pivotal mechanism in maintaining cooperation within groups or societies.

While models of direct reciprocity have deepened our understanding of cooperation, they frequently make strong assumptions about individuals’ memory and decision-making processes. For example, when strategies are updated through social learning, it is commonly assumed that individuals compare their average payoffs.

This would require them to compute (or remember) their payoffs against everyone else in the population. To understand how more realistic constraints influence direct reciprocity, the current study considers the evolution of conditional behaviours when individuals learn based on more recent experiences.

Two extreme scenarios

This study first compares the classical modeling approach with another extreme approach. In the classical approach, individuals update their strategies based on their expected payoffs, considering every single interaction with each member of the population (perfect memory). Conversely, the opposite extreme is considering only the very last interaction (limited memory).
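The difference between the two update rules is easy to illustrate. In the sketch below, a focal player scores a strategy either by averaging payoffs over every interaction (perfect memory) or by recalling only the outcome of one recent interaction (limited memory). The donation-game payoffs, the strategies and the population mix are assumptions made for illustration, not the paper's full evolutionary model.

```python
import random

random.seed(1)
b, c = 3.0, 1.0                                 # donation game: benefit and cost

def play_round(my_move, partner_move):
    """Payoff for one round of the donation game (1 = cooperate, 0 = defect)."""
    return b * partner_move - c * my_move

def repeated_game(strategy, partner, rounds=20):
    """Both players map the co-player's previous move (None on round one) to a move."""
    payoffs, my_prev, their_prev = [], None, None
    for _ in range(rounds):
        me, them = strategy(their_prev), partner(my_prev)
        payoffs.append(play_round(me, them))
        my_prev, their_prev = me, them
    return payoffs

def tit_for_tat(prev):
    return 1 if prev in (None, 1) else 0

def always_defect(prev):
    return 0

# A focal tit-for-tat player against a mixed population of co-players.
population = [tit_for_tat] * 6 + [always_defect] * 4
histories = [repeated_game(tit_for_tat, partner) for partner in population]

perfect_memory = sum(sum(h) for h in histories) / sum(len(h) for h in histories)
limited_memory = random.choice(histories)[-1]   # only one recent outcome is recalled
print("perfect-memory score:", perfect_memory, " limited-memory score:", limited_memory)
```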

Comparing these two scenarios shows that individuals with limited payoff memory tend to adopt less generous strategies. They are less forgiving when someone defects against them. Yet, moderate levels of cooperation can still evolve.

Intermediate cases

The study also considers intermediate cases, where individuals consider their last two or three or four recent experiences. The results show that cooperation rates quickly approach the levels observed under perfect payoff memory.

Overall, this study contributes to a wider literature that explores which kinds of cognitive capacities are required for reciprocal altruism to be feasible. While more memory is always favourable, reciprocal cooperation can already be sustained if individuals have a record of two or three past outcomes.

This work’s results have been derived entirely within a theoretical model. The authors feel that such studies are crucial for making model-informed deductions about reciprocity in natural systems.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Michael Hesse, Max Planck Society


How Spiral Search Patterns And Lateral Thinking Cracked Our Puzzle

Rob Eastaway and Brian Hobbs take over our maths column to reveal who solved their puzzle and won a copy of their New Scientist puzzle book, Headscratchers.

When solving problems in the real world, it is rare that the solution is purely mathematical, but maths is often a key ingredient. The puzzle we set a few weeks ago (New Scientist, 30 September, p 45) embraced this by encouraging readers to come up with ingenious solutions that didn’t have to be exclusively maths-based.

Here is a reminder of the problem: Prince Golightly found himself tied to a chair near the centre of a square room, in the dark, with chained monsters in the four corners and an escape door in the middle of one wall. With him, he had a broom, a dictionary, some duct tape, a kitchen clock and a bucket of water with a russet fish.

John Offord was one of several readers to spot an ambiguity in our wording. Four monsters in each corner? Did this mean 16 monsters? John suggested the dictionary might help the captors brush up on their grammar.

The russet fish was deliberately inserted as a red herring (geddit?), but we loved that some readers found ways to incorporate it, either as a way of distracting the monsters or as a source of valuable protein for a hungry prince. Dave Wilson contrived a delightful monster detector, while Glenn Reid composed a limerick with the solution of turning off the computer game and going to bed.

And so to more practical solutions. Arlo Harding and Ed Schulz both suggested ways of creating a torch by igniting assorted materials with an electric spark from the light cable. But Ben Haller and Chris Armstrong had the cleverest mathematical approach. After locating the light fitting in the room’s centre with the broom, they used duct tape and rope to circle the centre, increasing the radius until they touched the wall at what must be its centre, and then continued circling to each wall till they found the escape door. Meanwhile, the duo of Denise and Emory (aged 11) used Pythagoras’s theorem to confirm the monsters in the corners would be safely beyond reach. They, plus Ben and Chris, win a copy of our New Scientist puzzle book Headscratchers.

It is unlikely you will ever have to escape monsters in this way, but spiral search patterns when visibility is limited are employed in various real-world scenarios: rescuers probing for survivors in avalanches, divers performing underwater searches and detectives examining crime scenes, for example. Some telescopes have automated spiral search algorithms that help locate celestial objects. These patterns allow for thorough searches while ensuring you don’t stray too far from your starting point.
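As a small illustration of the geometry involved, the sketch below generates waypoints along an Archimedean spiral, whose successive sweeps stay a fixed distance apart so the searcher covers ground thoroughly without straying far from the start. The spacing and step values are made up, not taken from any real search protocol.

```python
import math

def spiral_waypoints(spacing=1.0, max_radius=10.0, step=0.2):
    """Yield (x, y) points spiralling outwards from the origin.
    `spacing` is the gap between successive rings of the spiral."""
    theta = 0.0
    while True:
        r = spacing * theta / (2 * math.pi)   # Archimedean spiral: r grows linearly with angle
        if r > max_radius:
            break
        yield (r * math.cos(theta), r * math.sin(theta))
        theta += step

path = list(spiral_waypoints())
print(len(path), "waypoints; last point:", path[-1])
```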

Of course, like all real-world problems, mathematical nous isn’t enough. As our readers have displayed, lateral thinking and the ability to improvise are human skills that help us find the creative solutions an algorithm would miss.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Rob Eastaway and Brian Hobbs*


Here’s why you should (almost) never use a pie chart for your data

Our lives are becoming increasingly data-driven. Our phones monitor our time and internet usage, and online surveys discern our opinions and likes. These data harvests are used to tell us how well we’ve slept or what we might like to buy.

Numbers are becoming more important for everyday life, yet people’s numerical skills are falling behind. For example, the percentage of Year 12 schoolchildren in Australia taking higher and intermediate mathematics has been declining for decades.

To help the average person understand big data and numbers, we often use visual summaries, such as pie charts. But while non-numerate folk will avoid numbers, most numerate folk will avoid pie charts. Here’s why.

What is a pie chart?

A pie chart is a circular diagram that represents numerical percentages. The circle is divided into slices, with the size of each slice proportional to the category it represents. It is named because it resembles a sliced pie and can be “served” in many different ways.

An example pie chart below shows Australia’s two-party preferred vote before the last election, with Labor on 55% and the Coalition on 45%. The two near semi-circles show the relatively tight race—this is a useful example of a pie chart.

What’s wrong with pie charts?

Once we have more than two categories, pie charts can easily misrepresent percentages and become hard to read.

The three charts below are a good example—it is very hard to work out which of the five areas is the largest. The pie chart’s circularity means the areas lack a common reference point.

Pie charts also do badly when there are lots of categories. For example, this chart from a study on data sources used for COVID data visualization shows hundreds of categories in one pie.

The tiny slices, lack of clear labeling and the kaleidoscope of colors make interpretation difficult for anyone.

It’s even harder for a color blind person. For example, this is a simulation of what the above chart would look like to a person with deuteranomaly or reduced sensitivity to green light. This is the most common type of color blindness, affecting roughly 4.6% of the population.

It can get even worse if we take pie charts and make them three-dimensional. This can lead to egregious misrepresentations of data.

Below, the yellow, red and green areas are all the same size (one-third), but appear to be different based on the angle and which slice is placed at the bottom of the pie.

So why are pie charts everywhere?

Despite the well known problems with pie charts, they are everywhere. They are in journal articles, Ph.D. theses, political polling, books, newspapers and government reports. They’ve even been used by the Australian Bureau of Statistics.

While statisticians have criticized them for decades, it’s hard to argue with this logic: “If pie charts are so bad, why are there so many of them?”

Possibly they are popular because they are popular, which is a circular argument that suits a pie chart.

What’s a good alternative to pie charts?

There’s a simple fix that can effectively summarize big data in a small space and still allow creative color schemes.

It’s the humble bar chart. Remember the brain-aching pie chart example above with the five categories? Here’s the same example using bars—we can now instantly see which category is the largest.
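If you want to reproduce the comparison yourself, the short matplotlib sketch below draws the same kind of example, five similar-sized categories, as both a pie chart and a bar chart. The category values are made up for illustration.

```python
import matplotlib.pyplot as plt

labels = ["A", "B", "C", "D", "E"]
values = [21, 20, 19, 22, 18]            # similar percentages: hard to rank in a pie

fig, (ax_pie, ax_bar) = plt.subplots(1, 2, figsize=(8, 3.5))
ax_pie.pie(values, labels=labels)
ax_pie.set_title("Which slice is largest?")
ax_bar.bar(labels, values)
ax_bar.set_title("The bar chart answers at a glance")
plt.tight_layout()
plt.show()
```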

Linear bars are easier on the eye than the non-linear segments of a pie chart. But beware the temptation to make a humble bar chart look more interesting by adding a 3D effect. As you already saw, 3D charts distort perception and make it harder to find a reference point.

Below is a standard bar chart and a 3D alternative of the number of voters in the 1992 US presidential election split by family income (from under US$15k to over US$75k). Using the 3D version, can you tell the number of voters for each candidate in the highest income category? Not easily.

Is it ever okay to use a pie chart?

We’ve shown some of the worst examples of pie charts to make a point. Pie charts can be okay when there are just a few categories and the percentages are dissimilar, for example with one large and one small category.

Overall, it is best to use pie charts sparingly, especially when there is a more “digestible” alternative—the bar chart.

Whenever we see pie charts, we think one of two things: their creators don’t know what they’re doing, or they know what they are doing and are deliberately trying to mislead.

A graphical summary aims to easily and quickly communicate the data. If you feel the need to spruce it up, you’re likely reducing understanding without meaning to do so.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Adrian Barnett and Victor Oguoma, The Conversation