The first validation of the Lillo-Mike-Farmer model on a large financial market dataset

Economics and physics are distinct fields of study, yet some researchers have been bridging the two to tackle complex economic problems in innovative ways. This has given rise to an interdisciplinary research field, known as econophysics, which specializes in solving problems rooted in economics using the theories and experimental methods of physics.

Researchers at Kyoto University carried out an econophysics study aimed at understanding financial market behaviour using a statistical physics framework known as the Lillo, Mike, and Farmer (LMF) model. Their paper, published in Physical Review Letters, outlines the first quantitative validation of a key prediction of this model, which the team tested against microscopic data on fluctuations in the Tokyo Stock Exchange spanning nine years.

“If you observe the high-frequency financial data, you can find a slight predictability of the order signs regarding buy or sell market orders at a glance,” Kiyoshi Kanazawa, one of the researchers who carried out the study, told Phys.org.

“Lillo, Mike, and Farmer hypothetically modeled this appealing character in 2005, but the empirical validation of their model was absent due to a lack of large, microscopic datasets. We decided to solve this long-standing problem in econophysics by analysing large, microscopic data.”

The LMF model is a simple statistical physics model that describes so-called order-splitting behaviour. A key prediction of this model is that the long memory in the sequence of buy and sell order signs is tied to the microscopic distribution of metaorder sizes: the heavier the tail of that distribution, the more slowly the correlations between order signs decay.
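
The mechanism behind this prediction can be illustrated with a toy simulation (a simplified sketch for intuition, not the authors' analysis): if large hidden metaorders with heavy-tailed sizes are split into runs of unit orders of the same sign, the resulting sign series shows a slowly decaying autocorrelation. In the Python sketch below, the Pareto exponent and all other parameters are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_order_signs(n_orders=200_000, alpha=1.5):
        """Toy LMF-style order flow: each metaorder gets a random buy/sell sign and
        a heavy-tailed (Pareto) size, and is executed as that many unit orders.
        (The real model interleaves many traders; this is a bare-bones caricature.)"""
        signs = []
        while len(signs) < n_orders:
            sign = rng.choice([-1, 1])                  # +1 = buy metaorder, -1 = sell
            size = int(np.ceil(rng.pareto(alpha) + 1))  # heavy-tailed metaorder size
            signs.extend([sign] * size)
        return np.array(signs[:n_orders])

    def sign_autocorrelation(signs, max_lag=500):
        """Sample autocorrelation of the order-sign series at lags 1..max_lag."""
        s = signs - signs.mean()
        var = s.var()
        return np.array([np.mean(s[:-lag] * s[lag:]) / var for lag in range(1, max_lag + 1)])

    acf = sign_autocorrelation(simulate_order_signs())
    print("sign autocorrelation at lags 1, 10, 100:", acf[0], acf[9], acf[99])

With a heavier tail (a smaller alpha), the printed correlations decay more slowly, which is the qualitative link that the LMF model turns into a quantitative prediction.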

This hypothesis has been widely debated within econophysics. Until now, validating it was not feasible, because doing so requires microscopic datasets that capture financial market behaviour over several years at high resolution.

“The first key aspect of our study is that we used a large, microscopic dataset of the Tokyo Stock Exchange,” Kanazawa said. “Without such a unique dataset, it is challenging to validate the LMF model’s hypothesis. The second key point for us was to remove the statistical bias due to the long-memory character of the market-order flow. While statistical estimation is challenging regarding long-memory processes, we did our best to remove such biases using computational statistical methods.”

Kanazawa and his colleagues were the first to perform a quantitative test of the LMF model on a large microscopic financial market dataset. Notably, the results of their analyses were aligned with this model’s predictions, thus highlighting its promise for tackling economic problems and studying the financial market’s microstructure.

“Our work shows that the long memory in the market-order flows has microscopic information about the latent market demand, which might be used for designing new metrics for liquidity measurements,” Kanazawa said.

“We showed the quantitative power of statistical physics in clarifying financial market behaviour with large, microscopic datasets. By analysing this microscopic dataset further, we would now like to establish a unifying theory of financial market microstructure, parallel to the statistical physics program of building theory from microscopic dynamics.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Ingrid Fadelli, Phys.org


AI can teach math teachers how to improve student skills

When middle school math teachers completed an online professional development program that uses artificial intelligence to improve their math knowledge and teaching skills, their students’ math performance improved.

My colleagues and I developed this online professional development program, which relies on a virtual facilitator that can—among other things—present problems to the teacher around teaching math and provide feedback on the teacher’s answers.

Our goal was to enhance teachers’ mastery of knowledge and skills required to teach math effectively. These include understanding why the mathematical rules and procedures taught in school work. The program also focuses on common struggles students have as they learn a particular math concept and how to use instructional tools and strategies to help them overcome these struggles.

We then conducted an experiment in which 53 middle school math teachers were randomly assigned either to this AI-based professional development or to no additional training. On average, teachers spent 11 hours completing the program. We then gave 1,727 of their students a math test. While the students of the two groups of teachers started off with no difference in their math performance, the students taught by teachers who completed the program improved their mathematics performance by 0.18 of a standard deviation more, on average. This is a statistically significant gain, equal to the average difference in math performance between sixth and seventh graders in the study.
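
For readers unfamiliar with effect sizes, the figure reported here is a standardized mean difference: the gap between the group averages expressed in units of the spread of the scores. The sketch below computes that quantity on made-up numbers; it is only an illustration of the metric, not a reanalysis of the study, which used a more elaborate statistical model.

    import numpy as np

    def standardized_mean_difference(treatment, control):
        """Difference in group means divided by the pooled standard deviation
        (a Cohen's-d-style effect size)."""
        t, c = np.asarray(treatment, float), np.asarray(control, float)
        pooled_var = (((len(t) - 1) * t.var(ddof=1) + (len(c) - 1) * c.var(ddof=1))
                      / (len(t) + len(c) - 2))
        return (t.mean() - c.mean()) / np.sqrt(pooled_var)

    rng = np.random.default_rng(1)
    treated = rng.normal(0.18, 1.0, size=900)   # hypothetical z-scored test results
    control = rng.normal(0.00, 1.0, size=800)
    print(f"standardized effect size ≈ {standardized_mean_difference(treated, control):.2f}")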

Why it matters

This study demonstrates the potential for using AI technologies to create effective, widely accessible professional development for teachers. This is important because teachers often have limited access to high-quality professional development programs to improve their knowledge and teaching skills. Time conflicts or living in rural areas that are far from in-person professional development programs can prevent teachers from receiving the support they need.

Additionally, many existing in-person professional development programs for teachers have been shown to enhance participants’ teaching knowledge and practices but to have little impact on student achievement.

Effective professional development programs include opportunities for teachers to solve problems, analyse students’ work and observe teaching practices, with real-time support from program facilitators. Providing that kind of real-time support is often a challenge for asynchronous online programs.

Our program addresses this limitation of asynchronous programs because the AI-supported virtual facilitator acts much like a human instructor. It gives teachers authentic teaching activities to work on, asks questions to gauge their understanding and provides real-time feedback and guidance.

What’s next

Advancements in AI technologies will allow researchers to develop more interactive, personalized learning environments for teachers. For example, the language processing systems used in generative AI programs such as ChatGPT can improve the ability of these programs to analyse teachers’ responses more accurately and provide more personalized learning opportunities. Also, AI technologies can be used to develop new learning materials so that programs similar to ours can be developed faster.

More importantly, AI-based professional development programs can collect rich, real-time interaction data. Such data makes it possible to investigate how learning from professional development occurs and therefore how programs can be made more effective. Despite billions of dollars being spent each year on professional development for teachers, research suggests that how teachers learn through professional development is not yet well understood.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Yasemin Copur-Gencturk, The Conversation

 


New math approach provides insight into memory formation

The simple activity of walking through a room jumpstarts the neurons in the human brain. An explosion of electrochemical events or “neuronal spikes” appears at various times during the action. These spikes in activity, otherwise known as action potentials, are electrical impulses that occur when neurons communicate with one another.

Researchers have long thought that spike rates are connected to behaviour and memory. When an animal moves through a corridor, neuronal spikes occur in the hippocampus—an area of the brain involved in memory formation—in a manner resembling a GPS map. However, the timing of these spikes and its connection to events in real time were thought to be random, until it was discovered that the spikes occur in a specific and precise pattern.

Developing a new approach to studying this phenomenon, Western University neuroscientists are now able to analyse the timing of neuronal spikes. Their research found that spike timing may be just as important as spike rate for behaviour and memory.

“More and more experimental evidence is accumulating for the importance of spike times in sensory, motor, and cognitive systems,” said Lyle Muller, senior author of the paper and assistant professor in the Faculty of Science.

“Yet, the exact computations that are being done through spike times remain unclear. One reason for this may be that there isn’t a clear mathematical language for talking about spike-time patterns across neurons—which is what we set out to develop.”

Published recently in the journal Physical Review E, the paper outlines a new mathematical technique to study the neural codes taking place during spike-time sequences.

“Neurons fire at really specific times with respect to an ‘internal clock,’ and we wanted to know why,” said Alex Busch, co-first author of the paper and a Western BrainsCAN Scholar. “If neurons are already keeping track of the animal’s position through spike rates, why do we need to have specific times on top of that? What additional information does that provide?”
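
One simple way to make “firing at specific times with respect to an internal clock” concrete is to express each spike as a phase of an ongoing reference rhythm (in the hippocampus, the roughly 8 Hz theta oscillation). The sketch below does only that bookkeeping step; it illustrates the general idea of spike-time coding and is not the mathematical technique introduced in the paper.

    import numpy as np

    def spike_phases(spike_times, clock_freq_hz):
        """Map each spike time (seconds) to a phase in [0, 2*pi) of a reference
        oscillation with the given frequency, e.g. an ~8 Hz theta rhythm."""
        spike_times = np.asarray(spike_times, float)
        return (2 * np.pi * clock_freq_hz * spike_times) % (2 * np.pi)

    # Hypothetical spike train: each spike arrives slightly earlier in its cycle,
    # so the phases drift steadily downward (the flavour of "phase precession").
    spikes = [0.149, 0.270, 0.391, 0.512, 0.633]
    print(np.round(spike_phases(spikes, clock_freq_hz=8.0), 2))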

Busch, along with co-first author Federico Pasini, assistant professor in the department of mathematics at Huron College, identified spike times from known experimental data. Studying the patterns as a code, the researchers were able to translate the spike times into a mathematical equation.

“This is the first time we are able to ask what computation could be done with these spike times. What we found was that it’s more than just current location; the pattern of spike times actually creates a link between the recent past and future predictions that’s encoded in the timing of spikes itself,” said Busch, a Ph.D. student in the department of mathematics now working to create new mathematical approaches to analyse and understand spike times. “These are the sorts of patterns that may be important for learning and memory.”

Beyond giving researchers a method to study spike times and their relation to behaviour and memory, this study also paves the way for studying deficits found in neurodegenerative diseases. A better understanding of the significance of spike times may lead to a better understanding of what happens when spike patterns break down in Alzheimer’s disease and other memory disorders.

“If we have a language for spike times, we can understand the computations that might be occurring. If we can understand the computations, we can understand how they break down and suggest new techniques to fix them,” said Muller.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Maggie MacLellan, University of Western Ontario


‘Math anxiety’ causes students to disengage, says study

A new Sussex study has revealed that “math anxiety” can lead to disengagement and create significant barriers to learning. According to charity National Numeracy, more than one-third of adults in the U.K. report feeling worried or stressed when faced with math, a condition known as math anxiety.

The new paper, titled “Understanding mathematics anxiety: loss aversion and student engagement” and published in Teaching Mathematics and its Applications, finds that teaching which relies on negative framing, such as punishing students for failure or humiliating them for being disengaged, is more likely to exacerbate math anxiety and disengagement.

The paper says that in order to engage students in math successfully, educators and parents must build a safe environment for trial and error, allowing students space to make mistakes and stopping learners from reaching the point where the threat of failure becomes debilitating.

Author Dr. C. Rashaad Shabab, Reader in Economics at the University of Sussex Business School, said, “As the government seeks to implement universal math education throughout higher secondary school, potentially a million more people will be required to study math who might otherwise have chosen not to.

“The results of this study deliver important guiding principles and interventions to educators and parents alike who face the prospect of teaching math to children who might be a little scared of it and so are at heightened risk of developing mathematics anxiety.

“Teachers should tell students to look at math as a puzzle, or a game. If we put a piece of a puzzle in the wrong place, we just pick it up and try again. That’s how math should feel. Students should be told that it’s okay to get it wrong, and in fact that getting it wrong is part of how we learn math. They should be encouraged to track their own improvement over time, rather than comparing their achievements with other classmates.

“All of these interventions basically take the ‘sting’ out of getting it wrong, and it’s the fear of that ‘sting’ that keeps students from engaging. The findings could pave the way for tailored interventions to support students who find themselves overwhelmed by the fear of failure.”

Using behavioural economics, which combines elements of economics and psychology to understand how and why people behave the way they do, the research, from the University of Sussex’s Business School, identifies math anxiety as a reason why even dedicated students can become disengaged. This often results in significant barriers to learning, both for the individual in question and others in the classroom.

The paper goes on to say that modern technology and elements of video game design can help those struggling with mathematics anxiety through a technique called “dynamic difficulty adjustment.” This would allow the development of specialist mathematics education computer programs to match the difficulty of math exercises to the ability of each student. Such a technique, if adopted, would keep the problems simple enough to avoid triggering anxiety, but challenging enough to improve learning.
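
Dynamic difficulty adjustment itself is a simple feedback rule: track the learner's recent success rate and nudge the difficulty of the next exercise up or down to keep that rate inside a comfortable band. The sketch below shows one such rule; the window length, thresholds and step size are arbitrary illustrative choices, not values from the paper.

    from collections import deque

    def adjust_difficulty(level, recent_results, target_low=0.6, target_high=0.8,
                          min_level=1, max_level=10):
        """Raise the difficulty if the student is succeeding too often, lower it
        if they are failing too often, otherwise leave it unchanged."""
        if not recent_results:
            return level
        success_rate = sum(recent_results) / len(recent_results)
        if success_rate > target_high:
            level += 1        # too easy: add challenge
        elif success_rate < target_low:
            level -= 1        # too hard: back off before anxiety sets in
        return max(min_level, min(max_level, level))

    # Usage: keep a sliding window of the last few answers (True = correct).
    history, level = deque(maxlen=5), 3
    for correct in [True, True, True, True, False, False, False, True, False]:
        history.append(correct)
        level = adjust_difficulty(level, history)
        print(f"answered {'right' if correct else 'wrong'} -> next level {level}")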

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Tom Walters, University of Sussex

 


New theory links topology and finance

In a new study published in The Journal of Finance and Data Science, a researcher from the International School of Business at HAN University of Applied Sciences in the Netherlands introduced the topological tail dependence theory—a new methodology for predicting stock market volatility in times of turbulence.

“The research bridges the gap between the abstract field of topology and the practical world of finance. What’s truly exciting is that this merger has provided us with a powerful tool to better understand and predict stock market behaviour during turbulent times,” said Hugo Gobato Souto, sole author of the study.

Through empirical tests, Souto demonstrated that the incorporation of persistent homology (PH) information significantly enhances the accuracy of non-linear and neural network models in forecasting stock market volatility during turbulent periods.

“These findings signal a significant shift in the world of financial forecasting, offering more reliable tools for investors, financial institutions and economists,” added Souto.

Notably, the approach sidesteps the barrier of dimensionality, making it particularly useful for detecting complex correlations and nonlinear patterns that often elude conventional methods.
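
To give a flavour of what “persistent homology information” can mean in this setting (a generic illustration, not necessarily the pipeline used in the study): stocks in a time window are treated as points whose pairwise distances are derived from return correlations, and the zero-dimensional part of the persistence diagram records the scales at which clusters of stocks merge, which can be read directly off a minimum spanning tree. Summaries of that diagram are then supplied to a forecasting model as additional features.

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    def h0_persistence(returns):
        """Zero-dimensional persistence 'death' values for a window of asset returns.

        Distances between assets i and j are d_ij = sqrt(2 * (1 - corr_ij)); the
        H0 features of the Rips filtration die exactly at the edge lengths of the
        minimum spanning tree of this distance matrix."""
        corr = np.corrcoef(returns, rowvar=False)             # columns = assets
        dist = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))
        mst = minimum_spanning_tree(dist).toarray()
        return np.sort(mst[mst > 0])                          # births are all zero

    # Hypothetical example: 60 days of returns for 8 assets.
    rng = np.random.default_rng(7)
    returns = rng.normal(0.0, 0.01, size=(60, 8))
    deaths = h0_persistence(returns)
    print("H0 total persistence:", deaths.sum())              # one possible feature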

“It was fascinating to observe the consistent improvements in forecasting accuracy, particularly during the 2020 crisis,” said Souto.

The findings are not confined to one specific type of model. They hold across various models, from linear to non-linear, and even advanced neural network models. This opens the door to improved financial forecasting across the board.

“The findings confirm the theory’s validity and encourage the scientific community to delve deeper into this exciting new intersection of mathematics and finance,” concluded Souto.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to KeAi Communications Co.

 


Coping with uncertainty in customer demand: How mathematics can improve logistics processes

How do you distribute drinking water fairly across an area recently hit by a natural disaster? Or how can you make sure you have enough bottles of water, granola bars and fruit in your delivery van to refill all the vending machines at a school when you don’t know how full they are?

Eindhoven University of Technology researcher Natasja Sluijk has developed mathematical models to address these challenges in transportation planning. On Thursday 23 November she successfully defended her dissertation at the Department of Industrial Engineering & Innovation Sciences.

Sluijk obtained her master’s degree at Erasmus University in the field of Operations Research, an area of research focused on the application of mathematical methods in order to optimize processes.

“I’ve always been interested in mathematics and I decided I wanted to do something with it,” she says. On top of that, her father and grandfather, both of whom used to be truck drivers, fueled her interest in transportation and logistics. “That’s how the seed was planted.” The Ph.D. candidate is also very intrigued by uncertainty. “Well, in my research that is, not in my life,” she adds with a laugh. Her doctoral research is where these worlds meet.

Reducing emissions

Her dissertation can be divided into two parts. The first part focuses on so-called two-echelon distribution. “First, you transport the goods in big trucks, because that way you can take many items at once, so you need fewer drivers and you reduce the costs,” she explains.

However, due to environmental zones and emissions regulations, trucks cannot enter cities, which is why smaller vehicles take over the goods at the city limits and bring them to their final destination. These include bicycle couriers or electric vans, which are smaller and more compact.

By dividing the distribution chain into two steps, you can keep costs low while still complying with regulations. Not only does the use of greener vehicles in cities reduce emissions, it also reduces noise pollution and parking problems. “These are the reasons why more and more research is being conducted on two-echelon distribution, on how to optimize it and how to plan routes efficiently,” says Sluijk.

Customer demand uncertainty

The primary focus of her doctoral research is dealing with uncertain customer demand. Normally, a route plan is drawn up for a set of customers with known locations and demands. But what if you don’t know in advance exactly how much you need to deliver?

Sluijk did not include home package deliveries in her research, but rather focused on deliveries from companies to other companies, the so-called B2B market. “Think, for example, of deliveries to locations that require product restocking, such as vending machines,” she explains.

“What you can see in advance is how much has been sold, but it’s only when you arrive at the vending machine that you can see the current demand. Basically, between the time of planning and the time of delivery, the demand can change.” As such, the challenge here is to meet all demands without being left with a surplus of goods.

Sluijk has developed exact mathematical models and algorithms that allow for better handling of uncertain customer demand and for optimal route planning within a two-echelon distribution system. This makes it possible to improve the structure of the two-echelon distribution system, making it more sustainable and cost-efficient.

“The most optimal solution ultimately depends on the company’s exact goals,” she emphasizes. Do they want as many satisfied customers as possible or do they prioritize low costs? The mathematical models make it possible to calculate different scenarios and, for example, accurately assess how enhancing customer service affects costs.

Fair distribution

In the second part of her dissertation, she focuses on situations where the total demand exceeds the capacity, in other words, the amount you can supply. Besides cost and efficiency, fairness is another important consideration here.

“For example, I arrive at a customer who asks for eight items, but I decide to supply only six so that I have enough left for the other customers in the delivery route. If I don’t do this, I disadvantage the customers later in the route,” she explains.

The key question here is: how do you ensure a fair distribution of goods when the customer demand is uncertain? Sluijk developed mathematical models that ensure everyone is treated equally. “This is something that has to be done proportionally, because if a customer asks for a hundred items, supplying one fewer item is much less of an issue than if they asked for only five items. So that’s how we factor that in,” she explains.
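
The proportional rule she describes can be stated directly: when requests exceed the stock on the vehicle, every customer receives the same fraction of what they asked for (rounded to whole items), so the relative shortfall is shared equally. A minimal sketch of that allocation rule, as an illustration of the fairness principle rather than the exact models in the dissertation:

    def proportional_allocation(demands, capacity):
        """Split a limited supply across customers in proportion to their demands.

        Each customer first gets floor(demand * capacity / total_demand); items left
        over after rounding go, one each, to the customers with the largest
        fractional remainders."""
        total = sum(demands)
        if total <= capacity:
            return list(demands)                  # enough stock for everyone
        shares = [d * capacity / total for d in demands]
        alloc = [int(s) for s in shares]
        leftover = capacity - sum(alloc)
        by_remainder = sorted(range(len(demands)),
                              key=lambda i: shares[i] - alloc[i], reverse=True)
        for i in by_remainder[:leftover]:
            alloc[i] += 1
        return alloc

    # A 100-item request gives up proportionally as much as a 5-item request.
    print(proportional_allocation([100, 5, 8, 40], capacity=120))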

Humanitarian organizations

The models are applicable not only in B2B supply chains, but also in non-commercial sectors, such as humanitarian organizations. “Suppose there has been a natural disaster and you need to deliver water to different locations, but you don’t know exactly how much to deliver to each location,” she says.

“The same thing applies to food banks; they often collect the food at a central location and then distribute it among the regions.” In these situations, it is crucial to fairly distribute the available resources between the different locations.

Here, the exact methods she has developed can be of great help. “However, we still need to bridge the gap between theory and practice; but in principle, the models are widely applicable and provide a good starting point in the search for desirable solutions. Not only do mathematical models help you arrive at solutions, they also allow you to properly substantiate the decisions made. That is the most transparent approach and also prevents arguments,” she concludes.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Eindhoven University of Technology


New research demonstrates more effective method for measuring impact of scientific publications

Newly published research reexamines the evaluation of scientific findings, proposing a network-based methodology for contextualizing a publication’s impact.

This new method, which is laid out in an article co-authored by Alex Gates, an assistant professor with the University of Virginia’s School of Data Science, will allow the scientific community to more fairly measure the impact of interdisciplinary scientific discoveries across different fields and time periods.

The findings are published in the journal Proceedings of the National Academy of Sciences.

The impact of a scientific publication has long been quantified by citation count. However, this approach is vulnerable to variations in citation practices, limiting the ability of researchers to accurately appraise the true importance of a scientific achievement.

Recognizing this shortcoming, Gates and his co-authors—Qing Ke of the School of Data Science at City University of Hong Kong and Albert-László Barabási of Northeastern University—propose a network-normalized impact measure. By normalizing citation counts, their approach will help the scientific community avoid biases when assessing a diverse body of scientific findings—both going forward and retrospectively.
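
The intuition behind normalized impact can be shown with a much simpler, label-based version: divide a paper's citation count by the average count of comparable papers from the same field and year, so that a value above 1 means above-par impact for its context. The sketch below implements only this crude baseline for intuition; the authors' measure instead normalizes over the citation network itself, which avoids relying on field labels.

    from collections import defaultdict

    def field_year_normalized_citations(papers):
        """papers: list of dicts with 'id', 'field', 'year' and 'citations'.
        Returns each paper's citations divided by the mean citations of papers
        from the same field and year (a crude, label-based normalization)."""
        groups = defaultdict(list)
        for p in papers:
            groups[(p["field"], p["year"])].append(p["citations"])
        means = {k: sum(v) / len(v) for k, v in groups.items()}
        return {p["id"]: p["citations"] / means[(p["field"], p["year"])] for p in papers}

    # Hypothetical records: a math paper with 30 citations can outperform a biology
    # paper with 90 once each is compared to the norm of its own field and year.
    papers = [
        {"id": "math-A", "field": "math", "year": 2015, "citations": 30},
        {"id": "math-B", "field": "math", "year": 2015, "citations": 10},
        {"id": "bio-A",  "field": "bio",  "year": 2015, "citations": 90},
        {"id": "bio-B",  "field": "bio",  "year": 2015, "citations": 110},
    ]
    print(field_year_normalized_citations(papers))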

In addition to the published findings, the authors have also implemented the method in an open-source package where anyone who is interested can find instructions on how to try this approach themselves on different examples of scientific research.

Gates joined UVA’s School of Data Science in 2022.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of Virginia

 


How master chess players choose their opening gambits

What influences the choices we make, and what role does the behaviour of others have on these choices? These questions underlie many aspects of human behaviour, including the products we buy, fashion trends, and even the breed of pet we choose as our companion.

Now, a new Stanford study that used population and statistical models to analyse the frequency of specific moves in 3.45 million chess games helps reveal the factors that influence chess players’ decisions. The researchers’ analysis of chess games revealed three types of biases described by the field of cultural evolution, which uses ideas from biology to explain how behaviours are passed from person to person. Specifically, they found evidence of players copying winning moves (success bias), choosing atypical moves (anti-conformity bias), and copying moves by celebrity players (prestige bias).

The study summarizing their results was published Nov. 15 in the Proceedings of the Royal Society B: Biological Sciences.

“We are all subject to biases,” said Marcus Feldman, the Burnet C. and Mildred Finley Wohlford Professor in the Stanford School of Humanities and Sciences and senior author. “Most biases are acquired from our parents or learned from our teachers, peers, or relatives.”

Feldman, a professor of biology, co-founded the field of cultural evolution 50 years ago with the late Luca Cavalli-Sforza, professor of genetics at Stanford School of Medicine, as a framework for studying changes in human behaviour that can be learned and transmitted between people. In the past, many studies of cultural evolution were theoretical because large datasets of cultural behaviour didn’t exist. But now they do.

The way chess is played has evolved over time too.

“Over the last several hundred years, paintings of chess playing show a change from crowded, disorganized scenes to the quiet concentration we associate with the game today,” said Noah Rosenberg, the Stanford Professor in Population Genetics and Society in the School of Humanities and Sciences.

“In the 18th century, players subscribed to a knightly sort of behaviour,” said Egor Lappo, lead author and a graduate student in Rosenberg’s lab. “Even if a move obviously led to a win, if it could be interpreted as cowardly, the player would reject it. Today, this is no longer the case.”

“The thesis of the paper is that when an expert player makes a move, many factors could influence move choice,” Rosenberg said. “The baseline is to choose a move randomly among the moves played recently by other expert players. Any deviations from this random choice are known in the field of cultural evolution as cultural biases.”

“In the mid-century players eschewed the Queen’s Gambit,” Feldman said. “There didn’t seem to be anything rational about this choice. In a large database of chess games by master-level players, the players’ biases can change over time, and that makes chess an ideal subject to use to explore cultural evolution.”

Playing the game

Chess is often called a game of perfect information because all pieces and their positions are clearly visible to both players. Yet simply knowing the present location of all pieces won’t win a chess game. Games are won by visualizing the future positions of pieces, and players develop this skill by studying the moves made by top chess players in different situations.

Fortunately for chess players (and researchers), the moves and game outcomes of top-level chess matches are recorded in books and, more recently, online chess databases.

In chess, two players take turns moving white (player 1) and black (player 2) pieces on a board checkered with 64 positions. The player with the white pieces makes the first move, each piece type (e.g., knight, pawn) moves a specific way, and (except for a special move called castling) each player moves one piece each turn.

There are few move options in the opening (beginning) of a chess game, and players often stick to tried-and-true sequences of moves, called lines, which are frequently given names like Ruy Lopez and the Frankenstein-Dracula Variation. The opening lines of master and grandmaster (top-level) players are often memorized by other players for use in their own games.

The researchers considered chess matches of master-level players between 1971 and 2019, millions of which have been digitized and are publicly available for analysis by enthusiasts.

“We used a population genetics model that treats all chess games played in a year as a population,” Lappo said. “The population of games in the following year is produced by players picking moves from the previous year to play in their own games.”

To search for possible cultural biases in the dataset of chess moves and games, the researchers used mathematical models to describe patterns that correspond to each kind of bias. Then they used statistical methods to see if the data matched (“fit”) the patterns corresponding to those cultural biases.

A value consistent with players choosing randomly from the moves played the year before indicated there was no cultural bias; this was the “baseline” strategy. Success bias (copying winning moves) corresponded to choices skewed toward the moves played by winning players in the previous year. Prestige bias (copying celebrity moves) corresponded to choices that matched the frequencies of lines and moves played by the top 50 players in the previous year. Anti-conformity bias (unpopular moves) corresponded to choosing moves that were played infrequently in the previous year.
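
Those biases can be written as small modifications of the baseline copying rule. In the sketch below (an illustration consistent with the description above, not the authors' code), each bias reweights last year's move frequencies before renormalizing them into this year's expected choice probabilities; the strength parameter and example numbers are hypothetical.

    import numpy as np

    def next_year_probabilities(freqs, win_rates, top50_freqs, bias="baseline", strength=1.0):
        """Expected probability of each candidate move next year.

        freqs       -- how often each move was played last year (all games)
        win_rates   -- fraction of last year's games won after each move
        top50_freqs -- how often the top-50 players chose each move last year"""
        freqs = np.asarray(freqs, float)
        if bias == "baseline":                      # copy moves in proportion to use
            weights = freqs
        elif bias == "success":                     # over-copy moves that won more
            weights = freqs * np.asarray(win_rates, float) ** strength
        elif bias == "prestige":                    # lean toward the elite's choices
            weights = freqs * np.asarray(top50_freqs, float) ** strength
        elif bias == "anti-conformity":             # favour rarely played moves
            weights = freqs ** (1.0 - strength)
        else:
            raise ValueError(bias)
        return weights / weights.sum()

    freqs, wins, top50 = [0.60, 0.30, 0.10], [0.48, 0.55, 0.40], [0.50, 0.45, 0.05]
    for b in ["baseline", "success", "prestige", "anti-conformity"]:
        print(b, np.round(next_year_probabilities(freqs, wins, top50, b, strength=2.0), 3))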

In the paper, the researchers focused on three frequently played moves at different depths of the opening to explore possible biases in early game play—the Queen’s Pawn opening, the Caro-Kann opening, and the Najdorf Sicilian opening.

Before the Queen’s Gambit was cool

For a game that is synonymous with strategy, relatively little is known about the factors that could affect a player’s choice of strategy. This study revealed evidence of cultural biases in the openings of master-level games played between 1971 and 2019.

In the Queen’s Pawn opening, players sometimes choose outlandish moves to rattle their opponents (anti-conformity bias). In the Caro-Kann opening, the study found that players mimic moves associated with winning chess games more often than expected by chance (success bias). And in the Najdorf Sicilian, players copy moves played by top players in famous games (prestige bias).

“The way people get their information about chess games changed between 1971 and 2019,” Rosenberg said. “It is easier now for players to see recent games of master- and grandmaster-level players.”

“The data also show that over time it is increasingly hard for the player with white pieces to make use of their first-move advantage,” Lappo said.

Many of the results align with ideas that are common knowledge among chess players, such as the concept that playing well-known lines is generally preferable to in-the-moment strategies in the opening. The researchers suggest that their statistical approach could be applied to other games and cultural trends in areas where long-term data on choices exist.

“This dataset makes questions related to the theory of cultural evolution useful and applicable in a way that wasn’t possible before,” Feldman said. “The big questions are what behaviour is transmitted, how is it transmitted, and to whom is it transmitted. With respect to the moves we analysed, Egor has the answers, and that is very satisfying.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Holly Alyssa MacCormick, Stanford University

 


The math problem that took nearly a century to solve

We’ve all been there: staring at a math test with a problem that seems impossible to solve. What if finding the solution to a problem took almost a century? For mathematicians who dabble in Ramsey theory, this is very much the case. In fact, little progress had been made in solving Ramsey problems since the 1930s.

Now, University of California San Diego researchers Jacques Verstraete and Sam Mattheus have found the answer to r(4,t), a longstanding Ramsey problem that has perplexed the math world for decades.

What was Ramsey’s problem, anyway?

In mathematical parlance, a graph is a series of points and the lines in between those points. Ramsey theory suggests that if the graph is large enough, you’re guaranteed to find some kind of order within it—either a set of points with no lines between them or a set of points with all possible lines between them (these sets are called “cliques”). This is written as r(s,t), where s is the size of the set with all lines between its points and t is the size of the set with no lines.

To those of us who don’t deal in graph theory, the most well-known Ramsey problem, r(3,3), is sometimes called “the theorem on friends and strangers” and is explained by way of a party: in a group of six people, you will find at least three people who all know each other or three people who all don’t know each other. The answer to r(3,3) is six.

“It’s a fact of nature, an absolute truth,” Verstraete states. “It doesn’t matter what the situation is or which six people you pick—you will find three people who all know each other or three people who all don’t know each other. You may be able to find more, but you are guaranteed that there will be at least three in one clique or the other.”
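
The six-person claim is small enough to verify exhaustively on a computer: every way of colouring the 15 acquaintance relationships among six people contains a single-coloured triangle, while five people do not suffice because a triangle-free colouring exists. The short brute-force check below is just an illustration of the statement, not part of the study.

    from itertools import combinations, product

    def has_monochromatic_triangle(n, colouring):
        """colouring maps each pair (i, j), i < j, to 0 ('know each other') or 1 ('strangers')."""
        return any(colouring[(a, b)] == colouring[(a, c)] == colouring[(b, c)]
                   for a, b, c in combinations(range(n), 3))

    def ramsey_property_holds(n):
        """True if every 2-colouring of the pairs among n people has a one-colour triangle."""
        pairs = list(combinations(range(n), 2))
        return all(has_monochromatic_triangle(n, dict(zip(pairs, colours)))
                   for colours in product((0, 1), repeat=len(pairs)))

    print(ramsey_property_holds(5))   # False: a 5-person counterexample exists
    print(ramsey_property_holds(6))   # True, matching r(3,3) = 6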

What happened after mathematicians found that r(3,3) = 6? Naturally, they wanted to know r(4,4), r(5,5), and r(4,t) where the number of points that are not connected is variable. The solution to r(4,4) is 18 and is proved using a theorem created by Paul Erdös and George Szekeres in the 1930s.

Currently r(5,5) is still unknown.

A good problem fights back

Why is something so simple to state so hard to solve? It turns out to be more complicated than it appears. Let’s say you knew the solution to r(5,5) was somewhere between 40–50. If you started with 45 points, there would be more than 10^234 graphs to consider.

“Because these numbers are so notoriously difficult to find, mathematicians look for estimations,” Verstraete explained. “This is what Sam and I have achieved in our recent work. How do we find not the exact answer, but the best estimates for what these Ramsey numbers might be?”

Math students learn about Ramsey problems early on, so r(4,t) has been on Verstraete’s radar for most of his professional career. In fact, he first saw the problem in print in Erdös on Graphs: His Legacy of Unsolved Problems, written by two UC San Diego professors, Fan Chung and the late Ron Graham. The problem is a conjecture from Erdös, who offered $250 to the first person who could solve it.

“Many people have thought about r(4,t)—it’s been an open problem for over 90 years,” Verstraete said. “But it wasn’t something that was at the forefront of my research. Everybody knows it’s hard and everyone’s tried to figure it out, so unless you have a new idea, you’re not likely to get anywhere.”

Then about four years ago, Verstraete was working on a different Ramsey problem with a mathematician at the University of Illinois-Chicago, Dhruv Mubayi. Together they discovered that pseudorandom graphs could advance the current knowledge on these old problems.

In 1947, Erdös discovered that using random graphs could give good lower bounds on Ramsey problems. What Verstraete and Mubayi discovered was that sampling from pseudorandom graphs frequently gives better bounds on Ramsey numbers than random graphs. These bounds—upper and lower limits on the possible answer—tightened the range of estimations they could make. In other words, they were getting closer to the truth.

In 2019, to the delight of the math world, Verstraete and Mubayi used pseudorandom graphs to solve r(3,t). However, Verstraete struggled to build a pseudorandom graph that could help solve r(4,t).

He began pulling in different areas of math outside of combinatorics, including finite geometry, algebra and probability. Eventually he joined forces with Mattheus, a postdoctoral scholar in his group whose background was in finite geometry.

“It turned out that the pseudorandom graph we needed could be found in finite geometry,” Verstraete stated. “Sam was the perfect person to come along and help build what we needed.”

Once they had the pseudorandom graph in place, they still had to puzzle out several pieces of math. It took almost a year, but eventually they realized they had a solution: r(4,t) is close to a cubic function of t. If you want a party where there will always be four people who all know each other or t people who all don’t know each other, you will need roughly t^3 people present. There is a small asterisk (actually an o) because, remember, this is an estimate, not an exact answer. But t^3 is very close to the exact answer.

The findings are currently under review with the Annals of Mathematics. A preprint can be viewed on arXiv.

“It really did take us years to solve,” Verstraete stated. “And there were many times where we were stuck and wondered if we’d be able to solve it at all. But one should never give up, no matter how long it takes.”

Verstraete emphasizes the importance of perseverance—something he reminds his students of often. “If you find that the problem is hard and you’re stuck, that means it’s a good problem. Fan Chung said a good problem fights back. You can’t expect it just to reveal itself.”

Verstraete knows such dogged determination is well-rewarded: “I got a call from Fan saying she owes me $250.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of California – San Diego


Mathematician creates mass extinction model regarding climate change and adaptation

A RUDN University mathematician and a colleague developed a theoretical model of mass extinction. For the first time, the model took into account two important factors—the feedback effect of vegetation on climate change and the evolutionary adaptation of species. The results were published in Chaos, Solitons & Fractals.

Over the past half-billion years, there have been five known major mass extinctions, when the number of species dropped by more than half. Several dozen smaller extinctions also occurred. There is debate about the causes of mass extinctions of species. Among them are global warming and cooling. However, exactly what climate change factors lead to extinction and what processes occur is unknown.

A RUDN mathematician and a colleague have built a theoretical model of mass extinction due to climate change taking into account important parameters that have so far been overlooked.

“Mass extinctions are an important part of the history of life on Earth. It is widely believed that the main cause of mass extinction is climate change. A significant change in the Earth’s average temperature leads to global warming or cooling and triggers various mechanisms that may lead to species extinction.”

“Over the past two decades, significant progress has been made in understanding the underlying causes and triggers, but many questions remain open. For example, it is well known that not every climate change in Earth’s history has resulted in a mass extinction. Therefore, there must be factors or feedback that weaken the impact of climate change,” said Sergei Petrovsky, professor at RUDN University.

The mathematicians took into account that some key players in climate change, such as vegetation, contribute to active feedback. The ratio of solar radiation reflected by the Earth to the total (albedo) depends, among other things, on the properties of the surface, that is, on its coverage with vegetation. A second important factor that is commonly overlooked is how species adapt to climate change.

Analysis of the mathematical model showed that whether a species goes extinct depends on the delicate balance between the scale of climate change and the speed of the evolutionary response. It also turned out that adaptation of species can lead to a so-called false extinction, in which population density remains low for a long time but then recovers to a safe value.
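
The flavour of that balance can be conveyed by a deliberately crude toy calculation (not the model from the paper, whose equations are far richer): let temperature rise steadily, let a population's preferred temperature track it at some evolutionary speed, and let growth fall off with the mismatch between the two. Whether the population persists then depends on the ratio of the warming rate to the adaptation rate.

    import numpy as np

    def survives(warming_rate, adaptation_rate, years=400, tolerance=2.0):
        """Toy check: does a population persist under a steady temperature ramp?

        The population's optimum temperature relaxes toward the current climate at
        'adaptation_rate'; growth declines with the squared mismatch; extinction is
        declared if density drops below 1% of carrying capacity."""
        temp, optimum, density = 0.0, 0.0, 1.0
        for _ in range(years):
            temp += warming_rate                                # steady climate change
            optimum += adaptation_rate * (temp - optimum)       # evolutionary response
            growth = 1.0 - ((temp - optimum) / tolerance) ** 2  # mismatch penalty
            density *= np.exp(0.5 * (growth - density))         # logistic-style update
            if density < 0.01:
                return False
        return True

    for adaptation_rate in (0.02, 0.2):
        outcomes = [survives(rate, adaptation_rate) for rate in (0.005, 0.02, 0.08)]
        print(f"adaptation rate {adaptation_rate}: survival under slow/medium/fast warming ->", outcomes)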

Mathematicians also verified the adequacy of the model by comparing its predictions with paleontological data. Extinction frequency distributions are consistent with data obtained from fossil analysis.

“Our model shows how climate-vegetation interactions and the evolutionary response of individual species affect extinction. These two factors are important but are practically not studied. The model’s predictions about the extent of extinction are generally consistent with paleontological data.”

“Fossil evidence, however, provides at best only a partial picture of the true scale of the extinction, with softer-bodied species typically disappearing without leaving any trace. The question of how the picture will change if data on soft-bodied species is included in the analysis remains open. This may partly explain the discrepancy between our model and fossil data,” said Sergei Petrovsky, professor at RUDN University.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Scientific Project Lomonosov