What game theory can teach us about standing up to bullies

In a time of income inequality and ruthless politics, people with outsized power or an unrelenting willingness to browbeat others often seem to come out ahead.

New research from Dartmouth, however, shows that being uncooperative can help people on the weaker side of the power dynamic achieve a more equal outcome—and even inflict some loss on their abusive counterpart.

The findings provide a tool based in game theory—the field of mathematics focused on optimizing competitive strategies—that could be applied to help equalize the balance of power in labor negotiations or international relations, and could even be used to integrate cooperation into interconnected artificial intelligence systems such as driverless cars.

Published in PNAS Nexus, the study takes a fresh look at what are known in game theory as “zero-determinant strategies,” developed by renowned scientists William Press, now at the University of Texas at Austin, and the late Freeman Dyson at the Institute for Advanced Study in Princeton, New Jersey.

Zero-determinant strategies dictate that “extortionists” control situations to their advantage by becoming less and less cooperative—though just cooperative enough to keep the other party engaged—and by never being the first to concede when there’s a stalemate. Theoretically, they will always outperform their opponent by demanding and receiving a larger share of what’s at stake.
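(For readers who want the formal version: in Press and Dyson’s original framing of the repeated prisoner’s dilemma, an extortionate zero-determinant strategy pins the two players’ long-run payoffs to the linear relation s_X − P = χ(s_Y − P), where s_X is the extortioner’s average payoff, s_Y the opponent’s, P the payoff when both defect, and χ > 1 the chosen extortion factor. The relation guarantees the extortioner a χ-fold share of any surplus above mutual defection, but it also means that any payoff the opponent deliberately sacrifices costs the extortioner χ times as much.)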

But the Dartmouth paper uses mathematical models of interactions to uncover an “Achilles heel” to these seemingly uncrackable scenarios, said senior author Feng Fu, an associate professor of mathematics. Fu and first author Xingru Chen, who received her Ph.D. in mathematics from Dartmouth in 2021, discovered an “unbending strategy” in which resistance to being steamrolled not only causes an extortionist to ultimately lose more than their opponent but can result in a more equal outcome as the overbearing party compromises in a scramble to get the best payoff.

“Unbending players who choose not to be extorted can resist by refusing to fully cooperate. They also give up part of their own payoff, but the extortioner loses even more,” said Chen, who is now an assistant professor at the Beijing University of Posts and Telecommunications.

“Our work shows that when an extortioner is faced with an unbending player, their best response is to offer a fair split, thereby guaranteeing an equal payoff for both parties,” she said. “In other words, fairness and cooperation can be cultivated and enforced by unbending players.”
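The effect is easy to see in a toy calculation. The short script below is a minimal sketch rather than the authors’ actual model: it assumes the standard iterated prisoner’s dilemma payoffs (R = 3, T = 5, S = 0, P = 1), a Press–Dyson extortion strategy with extortion factor χ = 3 and scaling constant φ = 0.05, and an opponent who simply cooperates with a fixed probability q each round as a crude stand-in for an unbending resister. Cutting q from 1.0 to 0.5 lowers both long-run payoffs, but the extortioner’s falls roughly three times as far.

import numpy as np

R, S, T, P = 3.0, 0.0, 5.0, 1.0              # classic prisoner's dilemma payoffs
payoff_X = np.array([R, S, T, P])            # extortioner's payoff in states CC, CD, DC, DD
payoff_Y = np.array([R, T, S, P])            # opponent's payoff in the same states

# Press-Dyson extortionate strategy (chi = 3, phi = 0.05): probability the extortioner
# cooperates given the previous round's outcome (CC, CD, DC, DD).
p_extort = np.array([0.80, 0.35, 0.35, 0.00])

def long_run_payoffs(q):
    """Average payoffs when the opponent cooperates with fixed probability q each round."""
    M = np.zeros((4, 4))                     # Markov transition matrix over the four states
    for s in range(4):
        pc = p_extort[s]
        M[s] = [pc * q, pc * (1 - q), (1 - pc) * q, (1 - pc) * (1 - q)]
    pi = np.full(4, 0.25)                    # stationary distribution by power iteration
    for _ in range(10_000):
        pi = pi @ M
    return payoff_X @ pi, payoff_Y @ pi

for q in (1.0, 0.5):
    sx, sy = long_run_payoffs(q)
    print(f"opponent cooperates with prob {q}: extortioner {sx:.2f}, opponent {sy:.2f}")

# opponent cooperates with prob 1.0: extortioner 3.73, opponent 1.91
# opponent cooperates with prob 0.5: extortioner 2.56, opponent 1.52
# Withholding cooperation costs the opponent about 0.39 per round but costs the
# extortioner about 1.17, which is why the extortioner is pushed toward a fairer offer.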

These scenarios frequently play out in the real world, Fu said. Labor relations provide a poignant model. A large corporation can strong-arm suppliers and producers such as farmworkers into accepting lower prices for their effort by threatening to replace them and cut them off from a lucrative market. But a strike or protest can tip the balance of power back in the workers’ favour and result in more fairness and cooperation, such as when a labor union wins some concessions from an employer.

While the power dynamic in these scenarios is never equal, Fu said, his and Chen’s work shows that unbending players can reap benefits by defecting from time to time and sabotaging what extortioners are truly after—the highest payoff for themselves.

“The practical insight from our work is for weaker parties to be unbending and resist being the first to compromise, thereby transforming the interaction into an ultimatum game in which extortioners are incentivized to be fairer and more cooperative to avoid ‘lose-lose’ situations,” Fu said.

“Consider the dynamics of power between dominant entities such as Donald Trump and the lack of unbending from the Republican Party, or, on the other hand, the military and political resistance to Russia’s invasion of Ukraine that has helped counteract incredible asymmetry,” he said. “These results can be applied to real-world situations, from social equity and fair pay to developing systems that promote cooperation among AI agents, such as autonomous driving.”

Chen and Fu’s paper expands the theoretical understanding of zero-determinant interactions while also outlining how the outsized power of extortioners can be checked, said mathematician Christian Hilbe, leader of the Dynamics of Social Behaviour research group at the Max Planck Institute for Evolutionary Biology in Germany.

“Among the technical contributions, they stress that even extortioners can be outperformed in some games. I don’t think that has been fully appreciated by the community before,” said Hilbe, who was not involved in the study but is familiar with it. “Among the conceptual insights, I like the idea of unbending strategies, behaviours that encourage an extortionate player to eventually settle at a fairer outcome.”

Behavioural research involving human participants has shown that extortionate behaviour may feature in a significant portion of our everyday interactions, said Hilbe, who published a 2016 paper in the journal PLOS ONE reporting just that. He also co-authored a 2014 study in Nature Communications that found people playing against a computerized opponent strongly resisted when the computer engaged in threatening conduct, even when resisting reduced their own payout.

“The empirical evidence to date suggests that people do engage in these extortionate behaviours, especially in asymmetric situations, and that the extorted party often tries to resist it, which is then costly to both parties,” Hilbe said.


Credit: Morgan Kelly, Dartmouth College.


Ninth Dedekind number discovered: Scientists solve long-known problem in mathematics

Making history with 42 digits, scientists at Paderborn University and KU Leuven have unlocked a decades-old mystery of mathematics with the so-called ninth Dedekind number.

Experts worldwide have been searching for the value since 1991. The Paderborn scientists arrived at the exact sequence of digits with the help of the Noctua supercomputer located there. The results will be presented in September at the International Workshop on Boolean Functions and their Applications (BFA) in Norway.

What started as a master’s thesis project by Lennart Van Hirtum, then a computer science student at KU Leuven and now a research associate at the University of Paderborn, has become a huge success. The scientists join an illustrious group with their work. Earlier numbers in the series were found by mathematician Richard Dedekind himself when he defined the problem in 1897, and later by greats of early computer science such as Randolph Church and Morgan Ward. “For 32 years, the calculation of D(9) was an open challenge, and it was questionable whether it would ever be possible to calculate this number at all,” Van Hirtum says.

The previous number in the Dedekind sequence, the 8th Dedekind number, was found in 1991 using a Cray 2, the most powerful supercomputer at the time. “It therefore seemed conceivable to us that it should be possible by now to calculate the 9th number on a large supercomputer,” says Van Hirtum, describing the motivation for the ambitious project, which he initially implemented jointly with the supervisors of his master’s thesis at KU Leuven.

Grains of sand, chess and supercomputers

The main subject of Dedekind numbers is so-called monotone Boolean functions. Van Hirtum explains, “Basically, you can think of a monotone Boolean function in two, three, and infinite dimensions as a game with an n-dimensional cube. You balance the cube on one corner and then color each of the remaining corners either white or red. There is only one rule: you must never place a white corner above a red one. This creates a kind of vertical red-white intersection.

“The object of the game is to count how many different cuts there are. Their number is what is defined as the Dedekind number. Even if it doesn’t seem like it, the numbers quickly become gigantic in the process: the 8th Dedekind number already has 23 digits.”
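For small cases the definition can be checked directly. The brute-force script below is only an illustration of what is being counted, not the method used in the study: it enumerates every Boolean function on n variables and keeps the monotone ones, reproducing the known values D(0) through D(4) before exhaustive search becomes hopeless.

def dedekind(n):
    """Count monotone Boolean functions on n variables by exhaustive enumeration."""
    num_inputs = 1 << n                       # 2^n possible input vectors
    count = 0
    for f in range(1 << num_inputs):          # every truth table, encoded as a bitmask
        monotone = True
        for x in range(num_inputs):
            if not ((f >> x) & 1):
                continue                      # f(x) = 0 can never violate monotonicity upward
            for i in range(n):                # raising any input bit must not lower the output
                y = x | (1 << i)
                if y != x and not ((f >> y) & 1):
                    monotone = False
                    break
            if not monotone:
                break
        if monotone:
            count += 1
    return count

for n in range(5):
    print(n, dedekind(n))                     # expected: 2, 3, 6, 20, 168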

Comparably large—but incomparably easier to calculate—numbers are known from a legend concerning the invention of the game of chess. “According to this legend, the inventor of the chess game asked the king for only a few grains of rice on each square of the chess board as a reward: one grain on the first square, two grains on the second, four on the third, and twice as many on each of the following squares. The king quickly realized that this request was impossible to fulfill, because so much rice does not exist in the whole world.

“The number of grains of rice on the complete board would have 20 digits—an unimaginable amount, but still less than D(8). When you realize these orders of magnitude, it is obvious that both an efficient computational method and a very fast computer would be needed to find D(9),” Van Hirtum said.
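Written out, the doubling in the legend gives 1 + 2 + 4 + … + 2⁶³ = 2⁶⁴ − 1 ≈ 1.8 × 10¹⁹ grains of rice, a 20-digit total that is indeed still well short of the 23-digit D(8).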

Milestone: Years become months

To calculate D(9), the scientists used a technique developed by master’s thesis advisor Patrick De Causmaecker known as the P-coefficient formula. It provides a way to calculate Dedekind numbers not by counting, but by evaluating a very large sum. This makes it possible to compute D(8) in just eight minutes on a normal laptop. But, “What takes eight minutes for D(8) becomes hundreds of thousands of years for D(9). Even if you used a large supercomputer exclusively for this task, it would still take many years to complete the calculation,” Van Hirtum points out.

The main problem is that the number of terms in this formula grows incredibly fast. “In our case, by exploiting symmetries in the formula, we were able to reduce the number of terms to ‘only’ 5.5×10¹⁸—an enormous amount. By comparison, the number of grains of sand on Earth is about 7.5×10¹⁸, which is nothing to sneeze at, but for a modern supercomputer, 5.5×10¹⁸ operations are quite manageable,” the computer scientist said.

The problem: calculating these terms on normal processors is slow, and GPUs, currently the fastest hardware accelerator technology for many AI applications, are not efficient for this algorithm.

The solution: Application-specific hardware using highly specialized and parallel arithmetic units—so-called FPGAs (field programmable gate arrays). Van Hirtum developed an initial prototype for the hardware accelerator and began looking for a supercomputer that had the necessary FPGA cards. In the process, he became aware of the Noctua 2 computer at the “Paderborn Center for Parallel Computing (PC2)” at the University of Paderborn, which has one of the world’s most powerful FPGA systems.

Prof. Dr. Christian Plessl, head of PC2, explains, “When Lennart Van Hirtum and Patrick De Causmaecker contacted us, it was immediately clear to us that we wanted to support this moonshot project. Solving hard combinatorial problems with FPGAs is a promising field of application and Noctua 2 is one of the few supercomputers worldwide with which the experiment is feasible at all. The extreme reliability and stability requirements also pose a challenge and test for our infrastructure. The FPGA expert consulting team worked closely with Lennart to adapt and optimize the application for our environment.”

After several years of development, the program ran on the supercomputer for about five months. And then the time had come: on March 8, the scientists found the 9th Dedekind number: 286386577668298411128469151667598498812366.

Today, three years after the start of the Dedekind project, Van Hirtum is working as a fellow of the NHR Graduate School at the Paderborn Center for Parallel Computing to develop the next generation of hardware tools in his Ph.D. The NHR (National High Performance Computing) Graduate School is the joint graduate school of the NHR centers. He will report on his extraordinary success together with Patrick De Causmaecker on June 27 at 2 p.m. in Lecture Hall O2 of the University of Paderborn.


Credit: Universität Paderborn.


Bridging traditional economics and econophysics

In a new study, researchers of the Complexity Science Hub highlight the connecting elements between traditional financial market research and econophysics. “We want to create an overview of the models that exist in financial economics and those that researchers in physics and mathematics have developed so that everybody can benefit from it,” explains Matthias Raddant from the Complexity Science Hub and the University for Continuing Education Krems.

Scientists from both fields try to classify or even predict how the market will behave. They aim to create a large-scale correlation matrix describing how each stock correlates with all the others. “Progress, however, is often barely noticed, if at all, by researchers in other disciplines. Researchers in finance hardly know that physicists are researching similar topics and just call it something different. That’s why we want to build a bridge,” says Raddant.

What are the differences?

Experts in the traditional financial markets field are very concerned with accurately describing how volatile stocks are statistically. However, their fine-grained models no longer work adequately when the data set becomes too large and includes tens of thousands of stocks.

Physicists, on the other hand, can handle large amounts of data very well. Their motto is: “The more data I have, the nicer it is because then I can see certain regularities better,” explains Raddant. They also work based on correlations, but they model financial markets as evolving complex networks.

These networks describe dependencies that can reveal asset comovement, i.e., which stocks behave fundamentally similarly and therefore group together. However, physicists and mathematicians may not know what insights already exist in the finance literature and what factors need to be considered.
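A minimal sketch of this network view, using made-up numbers rather than the specific models the study surveys, is to compute the full correlation matrix of returns, convert correlations into distances, and filter the dense matrix into a sparse graph such as a minimum spanning tree, whose branches group stocks that move together. The random data and the choice of filter below are illustrative assumptions, not taken from the paper.

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_stocks, n_days = 20, 500
returns = rng.normal(size=(n_days, n_stocks))      # placeholder daily returns; real price data in practice

corr = np.corrcoef(returns, rowvar=False)          # stock-by-stock correlation matrix
dist = np.sqrt(2.0 * (1.0 - corr))                 # common correlation-to-distance transform

G = nx.Graph()
for i in range(n_stocks):
    for j in range(i + 1, n_stocks):
        G.add_edge(i, j, weight=dist[i, j])

mst = nx.minimum_spanning_tree(G)                  # keeps only the strongest comovement links
print(sorted(mst.edges(data="weight"))[:5])        # a few of the retained stock pairs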

Different language

In their study, Raddant and his co-author, CSH external faculty member Tiziana Di Matteo of King’s College London, note that the mechanical parts that go into these models are often relatively similar, but their language is different. On the one hand, researchers in finance try to discover companies’ connecting features.

On the other hand, physicists and mathematicians are working on creating order out of many time series of stocks, where certain regularities occur. “What physicists and mathematicians call regularities, economists call properties of companies, for example,” says Raddant.

Avoiding research that gets lost

“Through this study, we wish to sensitize young scientists, in particular, who are working on an interdisciplinary basis in financial markets, to the connecting elements between the disciplines,” says Raddant. The aim is for researchers who do not come from financial economics to know the vocabulary and the essential research questions they need to address. Otherwise, there is a risk of producing research that is of no interest to anyone in finance and financial economics.

On the other hand, scientists from the disciplines traditionally involved with financial markets must understand how to describe large data sets and statistical regularities with methods from physics and network science.


Credit: Complexity Science Hub Vienna.