What toilet paper and game shows can teach us about the spread of epidemics

How can we explain and predict human behaviour? Are mathematics and probability up to the task, or are humans too complex and irrational?

Often, people’s actions take us by surprise, particularly when they seem irrational. Take the COVID pandemic: one thing nobody saw coming was a rush on toilet paper that left supermarket shelves bare in many countries.

But by combining ideas from mathematics, economics and behavioural science, researchers were eventually able to make mathematical models of how panic spreads between people, which made sense of the toilet paper panic.

In new research published in the Journal of the Royal Society Interface, we have taken a similar approach to the spread of disease – and shown that human reactions to the spread of disease can be as important as the behaviour of the disease itself when it comes to determining how an outbreak develops.

The power of context

One thing we know is that context can shape people’s behaviour in surprising ways. A nightly example of this is the popular TV game show Deal or No Deal, in which contestants regularly turn down offers of free money because they hope they will get a larger sum later.

If you carry out a rational calculation of the probabilities, most of the time the contestant’s “best” move is to accept the offer. But in practice, people often turn down a reasonable offer and hold out for a tiny chance at the big bucks.
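
A rough illustration of that calculation, using invented amounts, is to compare the offer with the average of what is still in play. This short Python sketch is only a toy; the dollar values are made up for illustration.

# Hypothetical example: three cases remain unopened (invented amounts).
remaining_cases = [5, 1_000, 10_000]

# Risk-neutral benchmark: the average of the remaining amounts.
expected_value = sum(remaining_cases) / len(remaining_cases)   # about $3,668

bank_offer = 5_000
print(f"Expected value of remaining cases: ${expected_value:,.2f}")
print("Take the deal" if bank_offer >= expected_value else "No deal")

Here the offer beats the expected value, so the risk-neutral move is to take the deal – and a risk-averse player should be even keener to accept.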

Would a person refuse $5,000 if they were offered it in any other context? In this situation, straightforward maths can’t predict how people will behave.

 

The science of irrationality

What if we go beyond maths? Behavioural science has much to say about what drives people to take specific actions.

In this case, it might suggest people behave more reasonably if they set a realistic goal (such as getting $5,000) and position the goal in a powerful motivational context (such as planning to use the money to pay for a holiday).

Yet time and again even people with clear, achievable goals are swept up by emotion and context. At the right time and place, they will believe that luck is with them and refuse a $5,000 offer in the hope of something bigger.

Nevertheless, researchers have found ways to understand the behaviour of Deal or No Deal contestants by combining ideas from mathematics, economics and the study of behaviour around risky choices.

In essence, the researchers found contestants’ decisions are “path-dependent”. This means their choice to accept a bank offer depends not only on their goal and the odds, but also the choices they have already made.

Group behaviours

Deal or No Deal, of course, is largely about individuals making decisions in a certain context. But when we’re trying to understand the spread of disease, we’re interested in how whole groups of people behave.

This is the realm of social psychology, where group behaviours and attitudes can influence individual actions. In some ways this makes groups easier to predict, and it’s where combining mathematics and behavioural science really starts to produce results.

Although some mass behaviours at the start of the COVID pandemic were highly visible – like panic-buying toilet paper – others were not. Mobility data from Google showed people were choosing to limit their own movement, for example, before any mandated restrictions were in place.

Feedback loops

Fear and perceived risk can promote self-preservation through positive mass behaviours. For example, as more sickness appears in the community, people are more likely to act to prevent themselves getting sick.

These actions in turn have a direct impact on the spread of the disease, which further affects human behaviour, and so on. Many mathematical models of how diseases spread have failed to take this feedback loop into account.
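
As a rough sketch of what such a feedback loop looks like inside a model, take a standard SIR model in Python and let the transmission rate fall as infection becomes more visible. This is a generic toy with assumed parameters, not the specific framework in our study.

# Toy SIR model with a behavioural feedback loop (illustrative assumptions only).
def simulate(days=300, beta0=0.3, gamma=0.1, caution=20.0):
    S, I, R = 0.999, 0.001, 0.0              # fractions of the population
    peak = I
    for _ in range(days):
        beta = beta0 / (1.0 + caution * I)   # more visible infection -> people reduce contact
        new_infections = beta * S * I
        recoveries = gamma * I
        S -= new_infections
        I += new_infections - recoveries
        R += recoveries
        peak = max(peak, I)
    return peak

print(f"Peak infected fraction with feedback:    {simulate():.2%}")
print(f"Peak infected fraction without feedback: {simulate(caution=0.0):.2%}")

Even this crude feedback term lowers the epidemic peak noticeably, which is exactly why leaving behaviour out of a model can overstate an outbreak.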

Our new study is a step toward combining population disease spread modelling with mass behaviour modelling, aimed at understanding the links between behaviour and infection.

Our framework accounts for dynamic and self-driven protective health behaviours in the presence of an infectious disease. This puts us in a better position to make informed choices and policy recommendations for future epidemics.

Notably, our approach allows us to understand how mass behaviours influence how great a burden the disease will impose on the population in the long term. There is still much work to be done in this area.

To better understand human behaviour from a mathematical perspective, we will need better data on human choices in the presence of an infectious disease. Such data would let us pick out patterns that can be used for prediction.

Predicting behaviour

So, to come back to the question: can we predict human behaviour? Well, it depends. Many factors contribute to our choices: emotion, context, risk perception, social observation, fear, excitement.

Understanding which of these factors to explore with mathematics is no easy feat. However, when society faces so many challenges related to changes in mass behaviour – from infectious diseases to climate change – using mathematics to describe and predict patterns is a powerful tool.

But no single discipline can provide the answer to global challenges which need changes in human behaviour at scale. We will need more interdisciplinary teams to achieve meaningful impacts.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to The Conversation

 


Mathematicians Can’t Agree What ‘Equals’ Means, And That’s A Problem

What does “equals” mean? For mathematicians, this simple question has more than one answer, which is causing issues when it comes to using computers to check proofs. The solution might be to tear up the foundations of maths.

When you see “2 + 2 = 4”, what does “=” mean? It turns out that’s a complicated question, because mathematicians can’t agree on the definition of what makes two things equal.

While this argument has been quietly simmering for decades, a recent push to make mathematical proofs checkable by computer programs, called formalisation, has given the argument new significance.

“Mathematicians use equality to mean two different things, and I was fine with that,” says Kevin Buzzard at Imperial College London. “Then I started doing maths on a computer.” Working with computer proof assistants made him realise that mathematicians must now confront what was, until recently, a useful ambiguity, he says – and it could force them to completely redefine the foundations of their subject.

The first definition of equality will be a familiar one. Most mathematicians take it to mean that each side of an equation represents the same mathematical object, which can be proven through a series of logical transformations from one side to the other. While “=”, the equals sign, only emerged in the 16th century, this concept of equality dates back to antiquity.

It was the late 19th century when things began to change, with the development of set theory, which provides the logical foundations for most modern mathematics. Set theory deals with collections, or sets, of mathematical objects, and introduced another definition of equality: if two sets contain the same elements, then they are equal, similar to the original mathematical definition. For example, the sets {1, 2, 3} and {3, 2, 1} are equal, because the order of the elements in a set doesn’t matter.

But as set theory developed, mathematicians started saying that two sets were equal if there was an obvious way to map between them, even if they didn’t contain exactly the same elements, says Buzzard.

To understand why, take the sets {1, 2, 3} and {a, b, c}. Clearly, the elements of each set are different, so the sets aren’t equal. But there are also ways of mapping between the two sets, by identifying each letter with a number. Mathematicians call this an isomorphism. In this case, there are multiple isomorphisms because you have a choice of which number to assign to each letter, but in many cases, there is only one clear choice, called the canonical isomorphism.
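
A quick way to see the two notions side by side is a short sketch in Python (ordinary programming sets, nothing to do with the proof assistants discussed below):

# The article's example sets.
A = {1, 2, 3}
B = {3, 2, 1}
C = {"a", "b", "c"}

print(A == B)   # True: element order doesn't matter, so these are the same set
print(A == C)   # False: the elements themselves differ

# Yet A and C can be matched up perfectly, element for element -- one possible bijection:
one_bijection = dict(zip(sorted(A), sorted(C)))
print(one_bijection)   # {1: 'a', 2: 'b', 3: 'c'}

The == check captures the set-theoretic definition of equality; the dictionary is one of the isomorphisms the article describes, and nothing in the language treats the two ideas as the same thing.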

Because a canonical isomorphism of two sets is the only possible way to link them, many mathematicians now take this to mean they are equal, even though it isn’t technically the same concept of equality that most of us are used to. “These sets match up with each other in a completely natural way and mathematicians realised it would be really convenient if we just call those equal as well,” says Buzzard.

Having two definitions for equality is of no real concern to mathematicians when they write papers or give lectures, as the meaning is always clear from the context, but it presents problems for computer programs that need strict, precise instructions, says Chris Birkbeck at the University of East Anglia, UK. “We’re finding that we were a little bit sloppy all along, and that maybe we should fix a few things.”

To address this, Buzzard has been investigating the way some mathematicians widely use canonical isomorphism as equality, and the problems this can cause with formal computer proof systems.

In particular, the work of Alexander Grothendieck, one of the leading mathematicians of the 20th century, is currently extremely difficult to formalise. “None of the systems that exist so far capture the way that mathematicians such as Grothendieck use the equal symbol,” says Buzzard.

The problem has its roots in the way mathematicians put together proofs. To begin proving anything, you must first make assumptions called axioms that are taken to be true without proof, providing a logical framework to build upon. Since the early 20th century, mathematicians have settled on a collection of axioms within set theory that provide a firm foundation. This means they don’t generally have to use axioms directly in their day-to-day business, because common tools can be assumed to work correctly – in the same way you probably don’t worry about the inner workings of your kitchen before cooking a recipe.

“As a mathematician, you somehow know well enough what you’re doing that you don’t worry too much about it,” says Birkbeck. That falls down, however, when computers get involved, carrying out maths in a way that is similar to building a kitchen from scratch for every meal. “Once you have a computer checking everything you say, you can’t really be vague at all, you really have to be very precise,” says Birkbeck.

To solve the problem, some mathematicians argue we should just redefine the foundations of mathematics to make canonical isomorphisms and equality one and the same. Then, we can make computer programs work around that. “Isomorphism is equality,” says Thorsten Altenkirch at the University of Nottingham, UK. “I mean, what else? If you cannot distinguish two isomorphic objects, what else would it be? What else would you call this relationship?”

Efforts are already under way to do this in a mathematical field called homotopy type theory, in which traditional equality and canonical isomorphism are defined identically. Rather than trying to contort existing proof assistants to fit canonical isomorphism, says Altenkirch, mathematicians should adopt type theory and use alternative proof assistants that work with it directly.

Buzzard isn’t a fan of this suggestion, having already spent considerable effort using current tools to formalise mathematical proofs that are needed to check more advanced work, such as a proof of Fermat’s last theorem. The axioms of mathematics should be left as they are, rather than adopting type theory, and existing systems should be tweaked instead, he says. “Probably the way to fix it is just to leave mathematicians as they are,” says Buzzard. “It’s very difficult to change mathematicians. You have to make the computer systems better.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Alex Wilkins*


How can we make good decisions by observing others? A videogame and computational model have the answer

How can disaster response teams benefit from understanding how people most efficiently pick strawberries together, or how they choose the perfect ice cream shop with friends?

All these scenarios are based on the very fundamental question of how and when human groups manage to adapt collectively to different circumstances. Two recent studies on collective dynamics by the Cluster of Excellence Science of Intelligence (SCIoI) in Berlin, Germany, lay the groundwork to promote better coordinated operations while showcasing the potential of the Cluster’s analytic-synthetic loop approach: an interconnection of a human-focused (analytic) study with a novel computer simulation (synthetic).

By understanding how individual decisions impact group performance, we can possibly enhance emergency services and everyday teamwork, and further develop effective decentralized robotic systems that could benefit society in multiple ways (think robots that explore potentially dangerous places such as a crumbling building).

How groups of people move and make collective decisions (analytic side)

Through a naturalistic immersive-reality experiment, Science of Intelligence researchers have presented new findings on the dynamics of human collective behaviour. The study “Collective incentives reduce over-exploitation of social information in unconstrained human groups,” published in Nature Communications, explores how individual decisions shape collective outcomes in realistic group settings.

In the experiment, groups of participants freely moved through a 3D virtual environment similar to a video game, searching for hidden treasures. This resembled scenarios such as hunting and gathering, extinguishing wildfires, or searching for survivors together.

The researchers varied how resources were distributed and how participants were incentivized. Individuals often benefited from staying close to others and taking advantage of their discoveries. However, on the group level, this caused poor group performance.

“It’s a bit like copying homework: You are benefitting yourself but not contributing to group performance in the long run,” said Dominik Deffner. “But it also turned out that rewards on the group level, similar to bonuses for team achievements, reduced this copying behaviour and thereby improved group performance.”

To extract individual decisions from naturalistic social interactions, the researchers developed a computational model that helped them understand key decision-making processes. This model inferred sequences of decisions from visual and movement data and showed that group rewards made people less likely to follow social information, encouraging them to become more selective over time.

The study also looked at how groups moved and acted over time and space, finding a balance between exploring new areas and using known resources at different times. These findings are important for improving group strategies in many areas, like solving problems in businesses or improving search and rescue operations.

How visual perception and embodiment shape collective decisions (synthetic side)

In a complementary study, called “Visual social information use in collective foraging” and published in PLOS Computational Biology, researchers introduced a new computational model that explores how individual decisions shape collective behaviour.

The model applies to any realistic situation where groups of people, animals, or robots are searching for rewards together. This computational model addresses two main questions: how do individuals make decisions according to visible information around them? And how do they move in a physical space at the same time?

In this study, a simulated swarm of robots searches for resources in a virtual playground very similar to the one used in the study by Deffner described above. The resources come in patches and, when depleted, reappear in new spots. The virtual robots can choose between exploring the environment to find new resource patches, following other robots that are consuming resources, or staying and consuming resources until they’re gone.
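
To make those three options concrete, here is a deliberately stripped-down sketch in Python. Every probability and patch size is an assumption chosen for illustration, not a parameter of the published model, and agents jump between patches rather than physically travelling.

import random

random.seed(1)
NUM_AGENTS, STEPS = 20, 300
patches = []                       # each patch is a counter of remaining resources
agent_patch = [None] * NUM_AGENTS  # index of the patch each agent is exploiting, or None
rewards = [0] * NUM_AGENTS

for _ in range(STEPS):
    for i in range(NUM_AGENTS):
        p = agent_patch[i]
        if p is not None and patches[p] > 0:          # exploit: stay and consume
            patches[p] -= 1
            rewards[i] += 1
        else:
            agent_patch[i] = None
            occupied = [j for j in set(agent_patch) if j is not None and patches[j] > 0]
            if occupied and random.random() < 0.5:    # follow a robot that has found food
                agent_patch[i] = random.choice(occupied)
            elif random.random() < 0.05:              # explore: small chance of a fresh patch
                patches.append(40)
                agent_patch[i] = len(patches) - 1

print("Total reward:", sum(rewards), "| patches discovered:", len(patches))

Varying the follow and explore probabilities in a sketch like this is one way to start probing the trade-off the researchers describe below.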

The findings show how simple decisions, for example where to go next, can lead to complex group behaviour.

“The environment plays an important role in how groups work efficiently together,” said David Mezey. “When resources are concentrated, working closely together and relying on shared information is the most efficient solution. However, when the resources are spread out it’s better for individuals or smaller subgroups to work independently. This explains some everyday group behaviours that many of us may be familiar with.

“Imagine a group of firefighters tasked with putting out a large fire in the forest. If the flames are concentrated in one well-defined area, the best strategy would be for all of them to work together in that specific location. But, if the fire has already spread across patches, it is more effective for the firefighters to split into smaller subgroups to find and tackle the distributed patches independently.”

The study also highlights how physical and visual limitations affect group performance. The authors included real-world limitations in their computer simulations, for example, individuals bumping into each other when too close, or blocking each other’s views.

They discovered that these limitations can fundamentally change collective behaviour and, interestingly, in some cases, even improve group performance. For example, virtual robots with restricted vision focus only on nearby individuals, improving their search strategy. Imagine strawberry picking with friends: even if a friend finds some fruits far away from you, you might want to stay in your area to avoid reaching an already empty patch.

These limitations had similar effects on virtual robots, and this study shows why it’s so important to think about such limitations when studying group behaviour.

Analysing, synthesizing and looping back again

We’ve understood certain animal collective behaviours, especially in fish, birds and sheep, through simple interaction rules often based on physical principles. However, to understand collective behaviour in humans, we need to understand all the individual decisions that people make and the cognitive processes that produce them.

In the two studies, the researchers link individual cognition to collective outcomes in realistic environments and thus explain complex group outcomes based on individual decisions. In other words, insights from the human-focused study (analytic side) are used to create computational models (synthetic side) that can be applied to better understand phenomena such as collective behaviour and social learning (loop).

This provides a fruitful path forward, hopefully making it possible to understand, predict, and guide collective outcomes in crucial areas.

Together, these studies offer a comprehensive understanding of the mechanisms linking individual cognition to collective outcomes in collective foraging tasks, providing new perspectives on optimizing collective performance across various fields. The implications for decentralized robotic systems are particularly promising.

Understanding realistic constraints on group performance might reshape how we develop efficient swarm robotic applications in the future.

For more insights like this, visit our website at www.international-maths-challenge.com.

Credit of the article given to Maria Ott, Technische Universität Berlin – Science of Intelligence


Decision-Making Analysis for a New Variant of the Classical Secretary Problem

The classic “secretary problem” involves interviewing job candidates in a random order. Candidates are interviewed one by one, and the interviewer ranks them. After each interview, the interviewer must either accept or reject the candidate. If they accept a candidate, the process stops; otherwise, the next candidate is interviewed and so on.

Of course, if a candidate is accepted, then a subsequent candidate who may well be better suited to the job will never be interviewed and so is never selected. Nevertheless, the goal is to maximize the probability of selecting the best candidate.

Since its introduction in the 1950s, this problem has been researched extensively because it is a fundamental example of optimal stopping problems. Many variants of the problem, such as multiple choices, regret-permit, and weighted versions, have been studied.
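
The flavour of the classic problem is easy to check by simulation. The well-known benchmark strategy (for the original problem, not the look-ahead variant discussed below) is to skip roughly the first n/e candidates and then accept the first one who beats everyone seen so far; a short Python sketch recovers the famous success rate of about 37%.

import math
import random

def best_is_chosen(n=50):
    ranks = random.sample(range(n), n)    # random arrival order; rank n-1 is the best candidate
    cutoff = round(n / math.e)            # observation-only phase
    benchmark = max(ranks[:cutoff], default=-1)
    for r in ranks[cutoff:]:
        if r > benchmark:
            return r == n - 1             # accepted this candidate: success only if truly the best
    return ranks[-1] == n - 1             # never accepted anyone: stuck with the last candidate

trials = 20_000
wins = sum(best_is_chosen() for _ in range(trials))
print(f"Best candidate hired in {wins / trials:.1%} of runs (theory: about 1/e, i.e. 37%)")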

Research published in the International Journal of Mathematics in Operational Research has looked at a variant of the secretary problem.

Yu Wu of Southwest Jiaotong University in Chengdu, Sichuan, China, explains that in this variant the interviewer has a “look-ahead privilege” and can see some of the details regarding subsequent candidates before making a decision about the current interviewee at each step. Wu defines the degree of look-ahead privilege as the number of candidates interviewed between the first interview and the final decision.

In one sense, this version of the problem is a more realistic sequential interviewing scenario wherein the interviewer may well have seen the resumes of all candidates or perhaps even have met them all before the interviewing process begins.

This contrasts with the blind sequential interviewing of the classic problem and allows a decision to be deferred until subsequent candidates have been interviewed.

It should therefore allow a better decision to be made regarding the choice of candidate who is offered the job. This is the first time this variant has been studied in detail in this way.

Wu has proposed a general optimal decision strategy framework to maximize the probability of selecting the best candidate. He focuses on a specific look-ahead privilege structure, applying the strategy framework to derive a closed-form probability of success.

This provides for an optimal strategy. Computational experiments have been carried out to explore the relationships between the various factors in the process and to show how this variant of the problem can be solved.

 

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to David Bradley, Inderscience

 


People Underestimate The Probability of Including at Least One Minority Member in a Group, Research Suggests

Human society includes various minority groups. However, it is often difficult to know whether someone is a minority member simply by looking at the person, as minority traits may not be visually apparent (e.g., sexual orientation, color vision deficiency). In addition, minorities may hide their minority traits or identities. Consequently, we may have been unaware of the presence of minorities in daily life. Probabilistic thinking is critical in such uncertain situations.

The people with whom we interact in our daily lives are typically a group of several dozen individuals (e.g., a school class). How do we judge the probability of including at least one minority member in such groups? For example, how does a school teacher estimate the probability of having a minority in the class?

Cognitive psychology states that humans often make unrealistic judgments about probabilities, such as risk. So, do we also misperceive the probability of minority inclusion in a group or can we accurately assess the probability through heuristics or knowledge?

Associate Professor Niimi of Niigata University demonstrates that people unrealistically underestimate such probabilities. The study is published in the Journal of Cognitive Psychology.

First, the researchers examine how the probabilities are computed mathematically. If the prevalence of the minority in question is 0.03 (3%) and the group size is 30, the probability of including one or more minority members in the group is one minus the probability that all 30 members are NOT the minority.

Because the probability that one person is not a minority is 0.97, the probability of minority inclusion is given by 1 − (0.97)^30 (if there is no other information). The computer tells us that the result is 0.60 (60%). When the minority prevalence is 7%, it increases to 89%. These mathematical probabilities appear to be higher than those of naive intuition.
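
The arithmetic is short enough to check directly; here is a plain Python version of the formula above.

def prob_at_least_one(prevalence, group_size):
    # Probability that a group contains at least one minority member,
    # assuming group members are drawn independently of one another.
    return 1 - (1 - prevalence) ** group_size

print(f"{prob_at_least_one(0.03, 30):.2f}")   # 0.60 for a 3% minority in a group of 30
print(f"{prob_at_least_one(0.07, 30):.2f}")   # 0.89 for a 7% minority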

Indeed, most respondents estimated probabilities far below the mathematical ones. Approximately 90% of the respondents gave estimates below the mathematical probabilities, and the majority of the estimates were lower than 10%. This underestimation was repeatedly observed under a variety of conditions (online worker and student samples, revised wording, etc.).

Why are the probabilities of minority inclusion underestimated? Is this a result of prejudice or stereotyping against minorities? The answer was “No.” The same underestimation occurred even when minorities unlikely to be associated with negative stereotypes were used (e.g., people with absolute pitch and fictional minorities). Of course, the mathematical calculations cannot be performed mentally. No wonder the respondents’ estimates were inaccurate.

The problem was why the estimates were not random, but strongly biased toward underestimation. Even if one does not know how to calculate it, one may have learned from daily experience that the probability of inclusion is much higher than the prevalence (e.g., the probability of including a woman in a group of randomly selected 100 individuals should be greater than 50%). However, the present results suggest that most people are unfamiliar with the concept of probability of inclusion and do not know how to think about it.

Further analysis revealed that the major source of underestimation was the use of heuristics, such as ignoring group size and reporting prevalence, or calculating the expected value of the number of minorities. Although most heuristics were erroneous, some yielded relatively reasonable estimates (e.g., assuming a high probability if the expected value exceeded one).

Underestimating the probability of minority inclusion may lead to the misconception that minorities are irrelevant in our daily lives. However, there was one promising finding in the present study.

When the respondents were given the mathematical probability of minority inclusion, their attitudes changed in favour of inclusive views about minorities compared to conditions in which mathematical probability was not given. Knowledge may compensate for cognitive bias.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Niigata University


Data scientists aim to improve humanitarian support for displaced populations

In times of crisis, effective humanitarian aid depends largely on the fast and efficient allocation of resources and personnel. Accurate data about the locations and movements of affected people in these situations is essential for this.

Researchers from the University of Tokyo, working with the World Bank, have produced a framework to analyse and visualize population mobility data, which could help in such cases. The research is published in the journal Scientific Reports.

Wars, famines, outbreaks, natural disasters—there are unfortunately many reasons why populations might be forced or feel compelled to leave their homes in search of refuge elsewhere, and these cases continue to grow.

The United Nations estimated in 2023 that there were more than 100 million forcibly displaced people in the world. More than 62 million of these individuals are considered internally displaced people (IDPs), those in particularly vulnerable situations due to being stuck within the borders of their countries, from which they might be trying to flee.

The circumstances that displace populations are inevitably chaotic, and, particularly (though not exclusively) in cases of conflict, information infrastructure can be impeded. So, authorities and agencies trying to get a handle on crises are often operating with limited data on the people they are trying to help. But the lack of data is not the only problem; being able to easily interpret data, so that nonexperts can make effective decisions based on it, is also an issue, especially in rapidly evolving situations where the stakes, and tensions, are high.

“It’s practically impossible to provide aid agencies and others with accurate real time data on affected populations. The available data will often be too fragmented to be useful directly,” said Associate Professor Yuya Shibuya from the Interfaculty Initiative in Information Studies.

“There have been many efforts to use GPS data for such things, and in normal situations, it has been shown to be useful to model population behaviour. But in times of crisis, patterns of predictability break down and the quality of data decreases.

“As data scientists, we explore ways to mitigate these problems and have developed a tracking framework for monitoring population movements by studying IDPs displaced in Russia’s invasion of Ukraine in 2022.”

Even though Ukraine has good enough network coverage throughout to acquire GPS data, the data generated is not representative of the entire population. There are also privacy concerns, and likely other significant gaps in data due to the nature of conflict itself. As such, it’s no trivial task to model the way populations move.

Shibuya and her team had access to a limited dataset which covered the period a few weeks before and a few weeks after the initial invasion on Feb. 24, 2022. This data contained more than 9 million location records from more than 100,000 anonymous IDPs who opted in to share their location data.

“From these records, we could estimate people’s home locations at the regional level based on regular patterns in advance of the invasion. To make sure this limited data could be used to represent the entire population, we compared our estimates to survey data from the International Organization for Migration of the U.N.,” said Shibuya.

“From there, we looked at when and where people moved just prior to and for some time after the invasion began. The majority of IDPs were from the capital, Kyiv, and some people left as early as five weeks before Feb. 24, perhaps in anticipation, though it was two weeks after that day that four times as many people left. However, a week later still, there was evidence some people started to return.”

That some people return to afflicted areas is just one factor that confounds population mobility models—in actual fact, people may move between locations, sometimes multiple times. Trying to represent this with a simple map with arrows to show populations could get cluttered fast. Shibuya’s team used color-coded charts to visualize its data, which allow you to see population movements in and out of regions at different times, or dynamic data, in a single image.
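
As a hedged sketch of the kind of aggregation that sits behind such charts (the column names and toy records below are hypothetical, not the team’s actual pipeline), one can assign each anonymous user a home region and then count, week by week, how many are observed away from it:

import pandas as pd

records = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u2", "u3"],
    "week":    ["2022-02-21", "2022-02-28", "2022-02-21", "2022-02-28", "2022-02-28"],
    "region":  ["Kyiv", "Lviv", "Kyiv", "Kyiv", "Kharkiv"],
})

# Crude stand-in for the study's approach: take each user's earliest observed region as "home"
# (the researchers instead used regular pre-invasion movement patterns).
home = records.sort_values("week").groupby("user_id")["region"].first().rename("home")

# Count, per week, how many users appear outside their home region.
flows = records.join(home, on="user_id")
displaced = flows[flows["region"] != flows["home"]].groupby("week").size()
print(displaced)

Swap the toy frame for millions of real location records and the same kind of grouping produces the region-by-week in and out counts that feed a colour-coded chart.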

“We want visualizations like these to help humanitarian agencies gauge how to allocate human resources and physical resources like food and medicine. As they tell you about dynamic changes in populations, not just A to B movements, we think it could mean aid gets to where it’s needed and when it’s needed more efficiently, reducing waste and overheads,” said Shibuya.

“Another thing we found that could be useful is that people’s migration patterns vary, and socioeconomic status seems to be a factor in this. People from more affluent areas tended to move farther from their homes than others. There is demographic diversity and good simulations ought to reflect this diversity and not make too many assumptions.”

The team worked with the World Bank on this study, as the international organization could provide the data necessary for the analyses. They hope to look into other kinds of situations too, such as natural disasters, political conflicts, environmental issues and more. Ultimately, by performing research like this, Shibuya hopes to produce better general models of human behaviour in crisis situations in order to alleviate some of the impacts those situations can create.

 

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of Tokyo

 

 


How science, math, and tech can propel swimmers to new heights

One hundred years ago, in the 1924 Paris Olympics, American Johnny Weissmuller won the men’s 100m freestyle with a time of 59 seconds. Nearly 100 years later, in the most recent Olympics, the delayed 2020 Games in Tokyo, Caeleb Dressel took home the same event with a time that was 12 seconds faster than Weissmuller’s.

Swimming times across the board have become much faster over the past century, a result of several factors, including innovations in training, recovery strategy, nutrition, and some equipment advances.

One component in the improvement in swimming performances over the years is the role of biomechanics—that is, how swimmers optimize their stroke, whether it’s the backstroke, breaststroke, butterfly, or freestyle.

Swimmers for decades have experimented with different techniques to gain an edge over their competitors. But in more recent years, the application of mathematics and science principles as well as the use of wearable sensor technology in training regimens has allowed some athletes to elevate their performances to new heights, including members of the University of Virginia’s swim team.

 

In a new research paper, a UVA professor who introduced these concepts and methods to the team and some of the swimmers who have embraced this novel approach to training lay out how the use of data is helping to transform how competitive swimmers become elite. The paper is published in The Mathematical Intelligencer journal.

‘Swimming in data’

Ken Ono thought his time working with swim teams was over. Ono—a UVA mathematics professor, professor of data science by courtesy, and STEM advisor to the University provost—had spent years working with competitive swimmers, first during his time at Emory University in Atlanta and then with other college teams, including Olympians, over the years.

However, he didn’t plan to continue that aspect of his work when he arrived at UVA in 2019. But after a meeting with Todd DeSorbo, who took over the UVA swim program in 2017, Ono soon found himself once again working closely with athletes, beginning his work as a consultant for the team during the 2020-21 season. The UVA women’s swim team would win their first of four consecutive national championships that year.

“One of the things that we like quite a bit about this work is that swimming is crazy hard,” Ono said. “We were never meant to be swimmers, and it is both an athletic challenge as well as a scientific challenge—it has it all.”

Last fall, following a suggestion from DeSorbo, Ono offered a class that outlined the science-focused approach to improving swimming performances that had proven so successful at UVA, but he wanted to make sure there were no misconceptions about the seriousness of the material.

“We don’t want people thinking that it’s a cupcake course that’s offered for the swimmers,” Ono said.

So, Ono teamed up with UVA students Kate Douglass, August Lamb, and Will Tenpas, as well as MIT graduate student Jerry Lu, who had worked with Ono and the UVA swim team while an undergraduate at the University, to produce a paper that covered the key elements of the class and Ono’s work with swimmers.

Tenpas and Lamb both recently completed the residential master’s program at the School of Data Science as well as their careers as competitive collegiate swimmers. Douglass, who finished her UVA swim career in 2023 as one of the most decorated swimmers in NCAA history, is a graduate student in statistics at the University and is set to compete in the Paris Olympics after winning a bronze medal in the 2020 games.

The group drafted the paper, which they titled “Swimming in Data,” over the course of two months, and it was quickly accepted by The Mathematical Intelligencer. There, Ono said, it has become one of the most-read papers on a STEM subject since tracking began. In July, a version of the paper will also be published in Scientific American.

“It seems to have taken off,” Ono said.

The impact of digital twins

After outlining the evolution of swimming over the past 100 years, the paper explains how an understanding of math and physics, combined with the use of technology to acquire individual-level data, can help maximize performances.

Essential to understanding the scientific principles involved with the swimming stroke, the paper says, are Newton’s laws of motion. The laws—which cover inertia, the idea that acceleration depends on an object’s mass and the amount of force applied, and the principle that an action exerted by an object on another elicits an equal and opposite reaction—help simplify how one should think about the many biomechanical factors involved with swimming, according to Tenpas.

“There are all sorts of flexibility limitations. You have water moving at you, you have wakes, you have currents—it’s easy to kind of get paralyzed by the number of factors,” said Tenpas, who after four years at Duke, where he studied mechanical engineering, enrolled in UVA’s data science program and joined the swim team with a fifth year of eligibility.

“We think having Newton’s laws is nice as it gives you this baseline we can all agree on,” he added.

It’s a way to understand pool mechanics given the counterintuitive motion swimmers must use to propel themselves forward, according to Ono.

“The reason that we go to great extent to recall Newton’s laws of motion is so that we can break down the factors that matter when you test a swimmer,” he said.

To conduct these tests, Ono and his team use sensors that can be placed on swimmers’ wrists, ankles, or backs to gather acceleration data recorded by inertial measurement units. That information is then used to generate what are called digital twins, which precisely replicate a swimmer’s movements.

These twins reveal strengths and weaknesses, allowing Ono and the coaching staff to make recommendations on technique and strategy—such as how to reduce drag force, a swimmer’s true opponent—that will result in immediate improvement. In fact, through the analysis of data and the use of Newton’s laws, it is possible to make an accurate prediction about how much time a swimmer can save by making a given adjustment.
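
A back-of-the-envelope version of that kind of prediction runs on Newton’s laws alone: at a fixed propulsive power, steady speed is set by drag, so shaving a little off the drag area translates directly into time saved over a race distance. All numbers in this Python sketch are rough assumptions for illustration, not measurements from the paper.

# At steady speed, propulsive power balances drag: P = 0.5 * rho * Cd * A * v**3.
RHO = 1000.0        # water density, kg/m^3
POWER = 80.0        # assumed sustained propulsive power, W
DISTANCE = 100.0    # race distance, m

def race_time(drag_area):
    v = (POWER / (0.5 * RHO * drag_area)) ** (1 / 3)   # speed at which power balances drag
    return DISTANCE / v

before, after = race_time(0.030), race_time(0.028)     # ~7% less drag area from a technique fix
print(f"{before:.1f} s -> {after:.1f} s (saving {before - after:.1f} s)")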

Lamb, who swam for UVA for five years while a computer science undergrad, then as a data science master’s student, likened digital twins to a feature in the popular Nintendo game Mario Kart where you can race against a ghost version of yourself.

“Being able to have this resource where you can test for one month and then spend a month or two making that adjustment and then test again and see what the difference is—it’s an incredibly valuable resource,” he said.

To understand the potential of digital twins, one need only look at the example of Douglass, one of the co-authors, who is cited in the paper.

A flaw was identified in her head position in the 200m breaststroke. Using her digital twin, Ono and the coaching staff were able to quantify how much time she could save per streamline glide by making a modification, given her obvious talent and aerobic capacity. She did, and the results were remarkable. In November 2020, when her technique was tested, the 200m breaststroke wasn’t even on her event list. Three years later, she held the American record.

‘Everyone’s doing it now’

Swimming will be front and center in the national consciousness this summer. First, the U.S. Olympic Team Trials will be held in Indianapolis in June, leading up to the Paris Olympics in July and August, where DeSorbo, UVA’s coach who embraced Ono’s data-driven strategic advice, will lead the women’s team.

Many aspiring swimmers will undoubtedly be watching over the coming weeks, wondering how they might realize their full athletic potential at whatever level that might be.

For those who have access to technology and data about their technique, Tenpas encourages young swimmers to take advantage.

He noted the significant amount of time a swimmer must put in to reach the highest levels of the sport, estimating that he had been swimming six times per week since he was 12 years old.

“If you’re going to put all of this work in, at least do it smart,” Tenpas said.

At the same time, Lamb urged young swimmers who may not yet have access to this technology to not lose faith in their potential to improve.

“While this is an incredibly useful tool to make improvements to your technique and to your stroke, it’s not the end all, be all,” he said.

“There are so many different ways to make improvements, and we’re hopeful that this will become more accessible as time goes on,” Lamb said of the data methods used at UVA.

As for where this is all going, with the rapidly expanding use and availability of data and wearable technology, Ono thinks his scientific approach to crafting swimming strategies will soon be the norm.

“We think five years from now, our story won’t be a story. It’ll be, ‘Oh, everyone’s doing it now,’” he said.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Cooper Allen, University of Virginia

 


A surprising result for a group’s optimal path to cooperation

What is the best way for a group of individuals to cooperate? This is a longstanding question with roots in game theory, a branch of science which uses mathematical models of how individuals should best strategize for the optimal result.

A simple example is the prisoner’s dilemma: Two people are arrested for an alleged bank robbery. The police take them downtown and place them in individual, isolated interrogation rooms.

The police admit they don’t have enough evidence to convict them both, and give each the same option: if he confesses and his partner does not, they will release the confessor and convict the other of the serious charge of bank robbery. But if one does not confess and the other does, the first will get a lengthy prison sentence and the other will be released. If both confess, they will both be put away for many years. If neither confesses, they will be arraigned on a lesser charge of gun possession.

What should each do to minimize their time in jail? Does an individual stay silent, trusting his partner to do the same and accept a shorter prison sentence? Or does he confess, hoping the other stays silent. But what if the other confesses too? It is an unenviable position.
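
To see why the position is so uncomfortable, here is a small Python sketch with made-up sentence lengths (the article gives no specific numbers, so these are assumptions chosen only to match the story):

# Years I serve, given (my choice, partner's choice).
years = {
    ("confess", "silent"):  0,   # I confess, partner stays silent: I go free
    ("silent",  "confess"): 10,  # the reverse: I get the lengthy sentence
    ("confess", "confess"): 8,   # both confess: both put away for many years
    ("silent",  "silent"):  1,   # both stay silent: lesser gun-possession charge
}

for partner in ("silent", "confess"):
    best = min(("confess", "silent"), key=lambda me: years[(me, partner)])
    print(f"If my partner plays '{partner}', my best reply is '{best}'.")

# Whatever the partner does, confessing is the better reply -- yet both confessing (8 years each)
# is worse for each of them than both staying silent (1 year each). That is the dilemma.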

There is no correct solution to the prisoner’s dilemma. Other similar problems are the game of chicken, where each driver races towards the other, risking a head-on crash, or swerving away at the last minute and risking humiliation—being called “chicken” for a lack of courage. Many other simple games exist.

Now imagine a group—they may be people, or they may be cellular organisms of some sort. What kind of cooperation gives the optimal result, when each individual is connected to some others and pays a cost (money, energy, time) to create a result that benefits all? It’s a given that individuals are selfish and act in their own best interests, but we also know that cooperation can result in a better outcome for all. Will any take the risk, or look out only for themselves?

A long-standing result is that, in a homogeneous network where all individuals have the same number of neighbours, cooperation is favoured if the ratio between the benefit provided by a cooperator and their associated cost paid exceeds the average number of neighbours.
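
In symbols: if b is the benefit a cooperator provides, c is the cost they pay, and k is the number of neighbours each individual has, this classic condition reads b/c > k. For instance, if helping costs 1 unit and delivers 6 units of benefit, cooperation can be favoured when everyone has five neighbours (6 > 5) but not when everyone has ten.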

But people are not homogeneous, they’re heterogeneous: they don’t usually have the same number of links to neighbours as everyone else, nor do they change their strategies at the same rates.

It is also known that allowing each individual to update their strategy at exactly the same time, such as immediately mimicking their neighbour, significantly alters the evolution of cooperation. Previous investigations have reported that pervasive heterogeneous individual connections hinder cooperation when it’s assumed that individuals update their strategies at identical rates.

Now a group of researchers located in China, Canada and the US have found a surprising result: when individuals’ strategy update rates vary inversely with their number of connections, heterogeneous connections outperform homogeneous ones in promoting cooperation. The study is published in the journal Nature Communications.

“How to analyse the quantitative impact of the prevalent heterogeneous network structures on the emergence of group optimal strategies is a long-standing open question that has attracted much attention,” said Aming Li, a co-author and Assistant Professor in Dynamics and Control at Peking University.

Li’s team solved the problem by analytical calculations backed up by computer simulations, to find the fundamental rule for maintaining collective cooperation: “The nodes with substantial connections within the complex system should update their strategies infrequently,” he says. That is, individual strategy update rates should vary inversely with the number of connections they have in the network. In this way, a network with heterogeneous connections between individuals outperforms a network with homogeneous connections in promoting cooperation.

The team has also developed an algorithm, which they call OptUpRat, that most efficiently finds the strategy update rates that bring about the group’s optimal strategies. This algorithm helps maximise collective utility in groups and, Li says, “is also essential in developing robotic collaborative systems.” The finding will be useful to researchers in such multidisciplinary fields as cybernetics, artificial intelligence, systems science, game theory and network science.

“We believe that utilizing AI-related techniques to optimize individual decisions and drive collective intelligence will be the next research hotspot.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to David Appell , Phys.org

 

 


The Monty Hall Problem Shows How Tricky Judging The Odds Can Be

Calculating probabilities can be complicated, as this classic “what’s behind the doors” problem shows, says Peter Rowlett.

Calculating probabilities can be tricky, with subtle changes in context giving quite different results. I was reminded of this recently after setting BrainTwister #10 for New Scientist readers, which was about the odds of seating two pairs of people adjacently in a row of 22 chairs.

Several readers wrote to say my solution was wrong. I had figured out all the possible seating arrangements and counted the ones that had the two groups adjacent. The readers, meanwhile, seated one pair first and then counted the ways of seating the second pair adjacently. Neither approach was wrong, depending on how you read the question.

This subtlety with probability is illustrated nicely by the Monty Hall problem, which is based on the long-running US game show Let’s Make a Deal. A contestant tries to guess which of three doors conceals a big prize. They guess at random, with ⅓ probability of finding the prize. In the puzzle, host Monty Hall doesn’t open the chosen door. Instead, he opens one of the other doors to reveal a “zonk”, an item of little value. He then offers the contestant the opportunity to switch to the remaining door or stick with their first choice.

Hall said in 1991 that the game is designed so contestants make the mistaken assumption that, since there are now two choices, their ⅓ probability has increased to ½. This, combined with a psychological preference to avoid giving up a prize already won, means people tend to stick.

Marilyn vos Savant published the problem in her column in Parade magazine in 1990 along with the answer that you are much more likely to win if you switch. She received thousands of letters, many from mathematicians and scientists, telling her she was wrong.

Imagine the host opened one of the unchosen doors at random: one-third of the time, they would reveal the prize and spoil the game. In the remaining cases, the prize would be behind the contestant’s chosen door half the time – in that scenario the ½ intuition really would be correct, and switching would gain nothing.

But that isn’t really the problem being solved. The missing piece of information is that the host knows where the prize is, and of course the show must go on. There is a ⅓ probability that the prize is behind the chosen door, and therefore a ⅔ probability that it is behind one of the other two. Being shown a zonk behind one of the other two hasn’t changed this set-up – the door chosen still has a probability of ⅓, so the other door carries a ⅔ probability. You should switch.
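
If the argument still feels slippery, a short simulation settles it (assuming, as the puzzle does, that the host always opens a zonk door and always offers the switch):

import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        choice = random.randrange(3)
        opened = next(d for d in range(3) if d != choice and d != prize)   # host reveals a zonk
        if switch:
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print(f"Stick:  {play(False):.3f}")   # close to 1/3
print(f"Switch: {play(True):.3f}")    # close to 2/3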

Probability problems depend on the precise question more than people realise. This is why it might seem surprising when you run into a friend, because you aren’t considering the number of people you walked past and how many friends you might see. And for scientists, it is why they have to be very careful about what their evidence is really telling them.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Peter Rowlett*


New Mathematical Proof Helps to Solve Equations with Random Components

Whether it’s physical phenomena, share prices or climate models—many dynamic processes in our world can be described mathematically with the aid of partial differential equations. Thanks to stochastics—an area of mathematics which deals with probabilities—this is even possible when randomness plays a role in these processes.

Something researchers have been working on for some decades now are so-called stochastic partial differential equations. Working together with other researchers, Dr. Markus Tempelmayr at the Cluster of Excellence Mathematics Münster at the University of Münster has found a method which helps to solve a certain class of such equations.

The results have been published in the journal Inventiones mathematicae.

The basis for their work is a theory developed in 2014 by Prof. Martin Hairer, recipient of the Fields Medal, together with international colleagues. It is seen as a great breakthrough in the research field of singular stochastic partial differential equations. “Up to then,” Tempelmayr explains, “it was something of a mystery how to solve these equations. The new theory has provided a complete ‘toolbox,’ so to speak, on how such equations can be tackled.”

The problem, Tempelmayr continues, is that the theory is relatively complex, with the result that applying the ‘toolbox’ and adapting it to other situations is sometimes difficult.

“So, in our work, we looked at aspects of the ‘toolbox’ from a different perspective and found and proved a method which can be used more easily and flexibly.”

The study, in which Tempelmayr was involved as a doctoral student under Prof. Felix Otto at the Max Planck Institute for Mathematics in the Sciences, was first published in 2021 as a pre-print. Since then, several research groups have successfully applied this alternative approach in their research work.

Stochastic partial differential equations can be used to model a wide range of dynamic processes, for example, the surface growth of bacteria, the evolution of thin liquid films, or interacting particle models in magnetism. However, these concrete areas of application play no role in basic research in mathematics as, irrespective of them, it is always the same class of equations which is involved.
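
To give a flavour of the class of equations involved (a standard textbook example, not necessarily one treated in the new paper), the growth of a rough surface is often modelled by the Kardar–Parisi–Zhang (KPZ) equation,

∂ₜh = ∂ₓ²h + (∂ₓh)² + ξ,

where h(t, x) is the height of the surface at position x and time t, and ξ is space-time white noise – the random term that makes the equation both stochastic and singular.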

The mathematicians are concentrating on solving the equations in spite of the stochastic terms and the resulting challenges such as overlapping frequencies which lead to resonances.

Various techniques are used for this purpose. In Hairer’s theory, methods are used which result in illustrative tree diagrams. “Here, tools are applied from the fields of stochastic analysis, algebra and combinatorics,” explains Tempelmayr. He and his colleagues instead chose an analytical approach. What interests them in particular is the question of how the solution of the equation changes if the underlying stochastic process is changed slightly.

The approach they took was not to tackle the solution of complicated stochastic partial differential equations directly, but, instead, to solve many different simpler equations and prove certain statements about them.

“The solutions of the simple equations can then be combined—simply added up, so to speak—to arrive at a solution for the complicated equation which we’re actually interested in.” This knowledge is something which is used by other research groups who themselves work with other methods.

For more insights like this, visit our website at www.international-maths-challenge.com.

Credit of the article given to Kathrin Kottke, University of Münster