What does a physicist see when looking at the NFT market?

The market for collectible digital assets, or non-fungible tokens, is an interesting example of a complex system with non-trivial dynamics and its own distinctive logic of financial transactions. Researchers at the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow have now analysed its global statistical features in detail.

In the past, the value of money was determined by the amount of precious metals it contained. Today, we attribute it to certain sequences of digital zeros and ones, simply agreeing that they correspond to coins or banknotes. Non-fungible tokens (NFTs) operate by a similar convention: their owners assign a measurable value to certain sets of ones and zeros, treating them as virtual equivalents of assets such as works of art or properties.

NFTs are closely linked to the cryptocurrency markets but change their holders in a different way to, for example, bitcoins. While each bitcoin is exactly the same and has the same value, each NFT is a unique entity with an individually determined value, integrally linked to information about its current owner.

“Trading in digital assets treated in this way is not guided by the logic of typical currency markets, but by the logic of markets trading in objects of a collector’s nature, such as paintings by famous painters,” explains Prof. Stanislaw Drozdz (IFJ PAN, Cracow University of Technology).

“We have already become familiar with the statistical characteristics of cryptocurrency markets through previous analyses. The question of the characteristics of a new, very young and at the same time fundamentally different market, also built on blockchain technology, therefore arose very naturally.”

The market for NFTs was initiated in 2017 with the blockchain created for the Ethereum cryptocurrency. The popularization of the idea and the rapid growth of trading took place during the pandemic. At that time, a record-breaking transaction was made at an auction organized by the famous English auction house Christie’s, when the art token Everydays: The First 5000 Days, created by Mike Winkelmann, was sold for $69 million.

Tokens are generally grouped into collections of different sizes, and the less frequently certain characteristics of a token occur in a collection, the higher its value tends to be. Statisticians from IFJ PAN examined publicly available data from the CryptoSlam (cryptoslam.io) and Magic Eden (magiceden.io) portals on five popular collections running on the Solana cryptocurrency blockchain.

These were sets of images and animations known as Blocksmith Labs Smyths, Famous Fox Federation, Lifinity Flares, Okay Bears, and Solana Monkey Business, each containing several thousand tokens with an average transaction value of close to a thousand dollars.

“We focused on analysing changes in the financial parameters of a collection such as its capitalization, minimum price, the number of transactions executed on individual tokens per unit of time (hour), the time interval between successive transactions, or the value of transaction volume. The data covered the period from the launch date of a particular collection up to and including August 2023,” says Dr. Marcin Watorek (PK).

Stabilized financial markets are characterized by certain power laws, which signal that large events are more likely than a typical Gaussian probability distribution would suggest. Such laws are already evident in the fluctuations of NFT market parameters, for example in the distribution of times between individual trades or in volume fluctuations.

Among the statistical parameters analysed by the researchers from the IFJ PAN was the Hurst exponent, which describes a system’s reluctance to change its trend. The exponent falls below 0.5 when the system is anti-persistent: a rise increases the probability of a subsequent fall (and vice versa).

In contrast, values above 0.5 indicate the existence of a certain long-term memory: after a rise, another rise is more likely; after a fall, another fall is more likely. For the token collections studied, the values of the Hurst exponent were between 0.6 and 0.8, a level characteristic of mature markets. In practice, this means that the trading prices of tokens from a given collection often fluctuate in a similar manner.
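For readers curious about how such an exponent is obtained, here is a minimal sketch of rescaled-range (R/S) analysis, one standard estimator of the Hurst exponent. The window choices and the synthetic input series are illustrative assumptions, and this is not necessarily the exact procedure used by the IFJ PAN team.

```python
import numpy as np

def hurst_rs(series, min_window=8):
    """Estimate the Hurst exponent of a 1-D series via rescaled-range (R/S) analysis."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    window_sizes = np.unique(
        np.logspace(np.log10(min_window), np.log10(n // 2), 20).astype(int)
    )
    rs_values = []
    for w in window_sizes:
        rs_per_window = []
        for start in range(0, n - w + 1, w):
            chunk = series[start:start + w]
            dev = chunk - chunk.mean()        # deviations from the window mean
            cumdev = np.cumsum(dev)           # cumulative deviate series
            r = cumdev.max() - cumdev.min()   # range of the cumulative deviations
            s = chunk.std(ddof=1)             # standard deviation of the window
            if s > 0:
                rs_per_window.append(r / s)
        rs_values.append(np.mean(rs_per_window))
    # R/S scales roughly as w**H, so H is the slope of log(R/S) versus log(w)
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_values), 1)
    return slope

# Illustrative use on synthetic "returns": uncorrelated noise gives H near 0.5,
# persistent series give H > 0.5, anti-persistent ones H < 0.5.
rng = np.random.default_rng(0)
returns = rng.normal(size=5000)
print(f"Estimated Hurst exponent: {hurst_rs(returns):.2f}")
```

Running the sketch on uncorrelated noise yields an estimate near 0.5, while a persistent series, like the token price series described above, would land in the 0.6 to 0.8 range.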

The existence of a certain long-term memory of the system, reaching up to two months in the NFT market, may indicate the presence of multifractality. When we magnify a fragment of an ordinary fractal, sooner or later we see a structure resembling the initial object, always at the same magnification. In multifractals, by contrast, different fragments have to be magnified by different factors to reveal similar structures.

It is precisely this non-linear nature of self-similarity that has also been observed in the digital collectors’ market, among other quantities in minimum prices, numbers of transactions per unit of time, and intervals between transactions. However, this multifractality was not fully developed and was best revealed in those situations where the greatest fluctuations were observed in the system under study.

“Our research also shows that the price of the cryptocurrency for which collections are sold directly affects the volume they generate. This is an important observation, as cryptocurrency markets are already known to show many signs of statistical maturity,” notes Pawel Szydlo, first author of the article in Chaos: An Interdisciplinary Journal of Nonlinear Science.

The analyses carried out at IFJ PAN lead to the conclusion that, despite its young age and slightly different trading mechanisms, the NFT market is beginning to function in a manner that is statistically similar to established financial markets. This fact seems to indicate the existence of a kind of universalism among financial markets, even of a significantly different nature. However, its closer understanding will require further research.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Polish Academy of Sciences


Use of decimal point is 1.5 centuries older than historians thought

A mathematical historian at Trinity Western University in Canada has found a use of the decimal point by a Venetian merchant 150 years before its first known use by German mathematician Christopher Clavius. In his paper published in the journal Historia Mathematica, Glen Van Brummelen describes how he found the evidence of decimal use in a volume called “Tabulae,” and its significance to the history of mathematics.

The invention of the decimal point led to the development of the decimal system, and that in turn made it easier for people working in multiple fields to calculate non-whole numbers (fractions) as easily as whole numbers. Prior to this new discovery, the earliest known use of the decimal point was by Christopher Clavius as he was creating astronomical tables—the resulting work was published in 1593.

The new discovery was made in a part of a manuscript written by Giovanni Bianchini in the 1440s—Van Brummelen was discussing a section of trigonometric tables with a colleague when he noticed some of the numbers included a dot in the middle. One example was 10.4, which Bianchini then multiplied by 8 in the same way as is done with modern mathematics. The finding shows that the use of a decimal point to represent non-whole numbers occurred approximately 150 years earlier than math historians previously thought.

Giovanni Bianchini worked as a merchant in Venice for many years before being appointed to an administrative role with a major estate owned by the powerful d’Este family. In this role, he also managed assets and investments, giving him a strong background in mathematics. He also published astronomy texts, demonstrating his ability to plot planetary motion and to predict when an eclipse would occur.

The finding suggests that Bianchini played a more important role in the development of math fundamentals than previously known. Van Brummelen notes that, as a merchant, Bianchini would have traveled extensively, including to places in the Islamic world, where math concepts were being developed, possibly influencing his use of non-whole numbers and ways to represent them more easily.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Bob Yirka, Phys.org

 


Why prices are so high—8 ways retail pricing algorithms gouge consumers

The just-released report of the inquiry into price gouging and unfair pricing conducted by Allan Fels for the Australian Council of Trade Unions does more than identify the likely offenders.

It finds the biggest are supermarkets, banks, airlines and electricity companies.

It’s not enough to know their tricks. Fels wants to give the Australian Competition and Consumer Commission more power to investigate and more power to prohibit mergers.

But it helps to know how they try to trick us, and how technology has enabled them to get better at it. After reading the report, I’ve identified eight key maneuvers.

  1. Asymmetric price movements

Otherwise known as Rocket and Feather, this is where businesses push up prices quickly when costs rise, but cut them slowly or late after costs fall.

It seems to happen for petrol and mortgage rates, and the Fels inquiry was presented with evidence suggesting it happens in supermarkets.

Brendan O’Keeffe from NSW Farmers told the inquiry wholesale lamb prices had been falling for six months before Woolworths announced a cut in the prices of lamb it was selling as a “Christmas gift.”

  2. Punishment for loyal customers

A loyalty tax is what happens when a business imposes higher charges on customers who have been with it for a long time, on the assumption that they won’t move.

The Australian Securities and Investments Commission has alleged a big insurer does it, setting premiums not only on the basis of risk, but also on the basis of what a computer model tells them about the likelihood of each customer tolerating a price hike. The insurer disputes the claim.

It’s often done by offering discounts or new products to new customers and leaving existing customers on old or discontinued products.

It happens a lot in the electricity industry. The plans look good at first, and then less good as providers bank on customers not making the effort to shop around.

Loyalty taxes appear to be less common among mobile phone providers. Australian laws make it easy to switch and keep your number.

  3. Loyalty schemes that provide little value

Fels says loyalty schemes can be a “low-cost means of retaining and exploiting consumers by providing them with low-value rewards of dubious benefit.”

Their purpose is to lock in (or at least bias) customers to choices already made.

Examples include airline frequent flyer points, cafe cards that give you your tenth coffee free, and supermarket points programs.

The Australian Competition and Consumer Commission has found many require users to spend a lot of money or time to earn enough points for a reward.

Others allow points to expire or rules to change without notice or offer rewards that are not worth the effort to redeem.

They also enable businesses to collect data on spending habits, preferences, locations, and personal information that can be used to construct customer profiles, which in turn allow them to target advertising, offers and higher prices at some customers and not others.

  4. Drip pricing that hides true costs

The Competition and Consumer Commission describes drip pricing as “when a price is advertised at the beginning of an online purchase, but then extra fees and charges (such as booking and service fees) are gradually added during the purchase process.”

The extras can add up quickly and make final bills much higher than expected.

Airlines are among the best-known users of the strategy. They often offer initially attractive base fares, but then add charges for baggage, seat selection, in-flight meals and other extras.

  5. Confusion pricing

Related to drip pricing is confusion pricing, where a provider offers a range of plans, discounts and fees so complex that they are overwhelming.

Financial products like insurance have convoluted fee structures, as do electricity providers. Supermarkets do it by bombarding shoppers with “specials” and “sales.”

When prices change frequently and without notice, it adds to the confusion.

  6. Algorithmic pricing

Algorithmic pricing is the practice of using algorithms to set prices automatically, taking into account competitor responses, which is something akin to computers talking to each other.

When computers get together in this way they can act as if they were colluding, even if the humans involved in running the businesses never talk to each other.

The effect can be even stronger when multiple competitors use the same third-party pricing algorithm, effectively allowing a single company to influence prices.
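To make the mechanism concrete, here is a toy simulation, not taken from the Fels report, in which two sellers delegate pricing to the same simple reactive rule. The cost, cap and step values are invented for illustration; the point is only that prices can ratchet upward without any explicit agreement.

```python
# Toy illustration (not from the Fels report): two sellers let the same
# third-party rule set their prices. The rule never "agrees" with the rival,
# it only reacts to the rival's last posted price -- yet prices drift to the cap.

COST = 5.0   # hypothetical unit cost
CAP = 20.0   # hypothetical price the algorithm will not exceed
STEP = 0.5   # hypothetical markup added when the rival is not undercutting

def shared_pricing_rule(my_price, rival_price):
    """If the rival is at or above my price, nudge upward; otherwise match them."""
    if rival_price >= my_price:
        return min(my_price + STEP, CAP)
    return max(rival_price, COST)

price_a, price_b = 6.0, 6.0
for week in range(40):
    price_a, price_b = (shared_pricing_rule(price_a, price_b),
                        shared_pricing_rule(price_b, price_a))
    if week % 10 == 0:
        print(f"week {week:2d}: seller A = {price_a:5.2f}, seller B = {price_b:5.2f}")

print(f"final prices: A = {price_a:.2f}, B = {price_b:.2f} (unit cost is {COST})")
```

After a few dozen rounds both sellers sit at the cap, even though neither algorithm ever looked at anything except the rival's last posted price.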

  7. Price discrimination

Price discrimination involves charging different customers different prices for the same product, setting each price in accordance with how much each customer is prepared to pay.

Banks do it when they offer better rates to customers likely to leave them, electricity companies do it when they offer better prices for business customers than households, and medical specialists do it when they offer vastly different prices for the same service to consumers with different incomes.

It is made easier by digital technology and data collection. While it can make prices lower for some customers, it can make them much higher for customers in a hurry or in urgent need of something.

  8. Excuse-flation

Excuse-flation is where general inflation provides “cover” for businesses to raise prices without justification, blaming nothing other than inflation itself.

It means that in times of general high inflation businesses can increase their prices even if their costs haven’t increased by as much.

On Thursday, Reserve Bank Governor Michele Bullock seemed to confirm that she thought some firms were doing this, saying that when inflation had been brought back to the Bank’s target, it would be “much more difficult, I think, for firms to use high inflation as cover for this sort of putting up their prices.”

A political solution is needed

Ultimately, our own vigilance won’t be enough. We will need political help. The government’s recently announced competition review might be a step in this direction.

The legislative changes should police business practices and prioritize fairness. Only then can we create a marketplace where ethics and competition align, ensuring both business prosperity and consumer well-being.

This isn’t just about economics, it’s about building a fairer, more sustainable Australia.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to David Tuffley, The Conversation


Studies recommend increased research into achievement, engagement to raise student math scores

A new study of classroom practices, led by Dr. Steve Murphy, has found that the extensive research literature fails to show teachers how to remedy poor student engagement while also helping students perform well in math.

More than 3,000 research papers were reviewed over the course of the study, but only 26 contained detailed steps for teachers to improve both student engagement and results in math. The review is published in the journal Teaching and Teacher Education.

Dr. Murphy said the scarcity of research involving young children was concerning.

“Children’s engagement in math begins to decline from the beginning of primary school while their mathematical identity begins to solidify,” Dr. Murphy said.

“We need more research that investigates achievement and engagement together to give teachers good advice on how to engage students in mathematics and perform well.

“La Trobe has developed a model for research that can achieve this.”

While teachers play an important role in making decisions that impact the learning environment, Dr. Murphy said parents are also highly influential in children’s math education journeys.

“We often hear parents say, ‘It’s OK, I was never good at math,’ but they’d never say that to their child about reading or writing,” Dr. Murphy said.

La Trobe’s School of Education is determined to improve mathematical outcomes for students, arguing it’s an important school subject that is highly applicable in today’s technologically rich society.

Previous research led by Dr. Murphy published in Educational Studies in Mathematics found many parents were unfamiliar with the modern ways of teaching math and lacked self-confidence to independently assist their children learning math during the COVID-19 pandemic.

“The implication for parents is that you don’t need to be a great mathematician to support your children in math, you just need to be willing to learn a little about how schools teach math today,” Dr. Murphy said.

“It’s not all bad news for educators and parents. Parents don’t need to teach math; they just need to support what their children’s teacher is doing.

“Keeping positive, being encouraging and interested in their children’s math learning goes a long way.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to La Trobe University

 


Children’s positive attitude toward mathematics fades during the early school years, finds study

Children’s interest in, and competence perceptions of, mathematics are generally quite positive as they begin school, but turn less positive during the first three years. Changes in interest and self-concept are also associated with each other. In other words, if a child’s interest fades, so does their competence perception, and vice versa.

This is shown by a recent study from Finland published in the British Journal of Educational Psychology that explores the development of children’s motivation for mathematics during the early school years and how that development is associated with their mathematics competence. The researchers followed nearly 300 children for three years.

“A significant observation was that both school beginners’ higher initial motivation, and less decline in motivation during the follow-up, predicted better competence in the third grade, after accounting for initial differences in competence,” says Professor Markku Niemivirta of the University of Eastern Finland.

There were no gender differences in school beginners’ motivation and competence, but at the end of the follow-up, girls’ motivation had, on average, declined more than that of boys.

Gendered development is starting to show

The study shows that children are able to assess their motivation for mathematics rather accurately already when beginning school. In addition, children’s assessments of their interest and competence are already differentiated, despite being closely related.

“It is only natural that children are more interested in things they feel good at. And vice versa, they may do better in something they’re interested in.”

On average, however, school beginners’ positive motivation starts to decline during the early school years, and the scale of this decline is associated with later differences in competence. Although there are no gender differences in competence, girls’ more negative change in motivation on average reflects an unfortunate gendered development, the traces of which remain visible until much later.

Practices for maintaining interest and having experiences of success

Although the negative change observed in the study may partly reflect children’s more realistic self-assessment over time, the researchers suspect that a role is also played by mathematics gradually getting more difficult, and an emphasis being placed on performance.

“The observed association between a change in motivation and competence shows, however, the added value of positive interest and self-concept. It would be important to develop and apply teaching practices that support and maintain children’s interest in mathematics and strengthen their experiences of success,” Niemivirta says.

In the three-year study conducted by the Motivation, Learning and Well-being research collective, MoLeWe, children assessed their interest in, and competence perceptions of, mathematics annually. Mathematics competence was assessed by tests and teacher evaluations.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of Eastern Finland

 


Sports deflation: Study shows NBA 3-point shot has lost its value

When the NBA celebrated the start of its 75th season in fall 2021, it was clear that the 3-point shot adopted by the league in 1979-80 had transformed the sport.

The share of shot attempts taken from beyond the arc had increased in each of the previous 10 seasons, from 22.2% in 2010-11 to 39.2% in 2020-21, and it had been nearly five years since a team won a game without making at least one 3-pointer (that streak is now up to eight years). Led by 3-point specialists Steph Curry and Klay Thompson, the Golden State Warriors had won three of the previous seven NBA titles and were about to win a fourth in 2022.

It appeared that the 3-point revolution would never end. But a recent study by Falk College of Sport and Human Dynamics sport analytics professor Shane Sanders and associate professor Justin Ehrlich shows that while the number of 3-point shots continues to increase, the average expected value of 3-pointers has become less than 2-pointers since the 2017-18 season.

“When taking fouled shots and made free throws into consideration, we found that what had long been a premium for the 3-point shot started to become a dispremium in the 2017-18 season and that trend is continuing,” Ehrlich says. “The implication of these findings is enormous in terms of potential impact on roster construction and offensive philosophies.”

The research preprint from Sanders and Ehrlich, “Estimating NBA Team Shot Selection Efficiency from Aggregations of True, Continuous Shot Charts: A Generalized Additive Model Approach,” is available through the Social Science Research Network website. Sanders and Ehrlich will present their paper as one of seven finalists in the research competition at the NBA-centric MIT Sloan Sports Analytics Conference March 1-2 in Boston, Massachusetts.

“In past conferences, there has been a lot of discussion among NBA executives about how basketball analytics created the 3-point ‘moneyball’ era of basketball and how this has impacted the popularity of the game,” Sanders says. “Perhaps ironically, our research uses basketball analytics, along with a fully specified team offensive objective function, to say there is now too much 3-point shooting for a point-maximizing offense.”

To conduct their research, Sanders and Ehrlich developed a new shot chart that uses a generalized additive model to estimate total shot proficiency continuously in the half-court. Their shot chart incorporates missed shots that draw a shooting foul—and shot-pursuant free throw scoring—to determine total scoring yield following a shot decision.

Current expected value formulas fall short by not including this additional information, which, when combined with the outcome of the initial shot attempt, results in what Sanders and Ehrlich call the “true point value” of a shot. For the 2022-23 NBA season:

  • True Value from 2-point shot attempts = 1.181
  • True Value from 3-point shot attempts = 1.094

Even without factoring in free throws, the researchers found that the expected value of 3-point shots is now lower than that of 2-point shots (the arithmetic is reproduced in the short sketch after these figures). For the 2022-23 NBA season:

  • Expected value from 2P field goal attempt = 2P% * 2 = 0.548 * 2 = 1.096
  • Expected value from 3P field goal attempt = 3P% * 3 = 0.361 * 3 = 1.083
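As a quick check, the following snippet, a minimal sketch rather than the authors' code, reproduces the expected-value arithmetic quoted above; the paper's full "true value" adjustment for fouled shots and shot-pursuant free throws is not reproduced here.

```python
# Reproduces the simple expected-value arithmetic quoted above for 2022-23.
# The "true value" correction for fouled shots and free throws is described
# in the paper and is not implemented in this illustrative snippet.

def expected_value(fg_pct: float, points: int) -> float:
    """Expected points per field goal attempt: make probability times shot value."""
    return fg_pct * points

print(f"2P expected value: {expected_value(0.548, 2):.3f}")   # 1.096
print(f"3P expected value: {expected_value(0.361, 3):.3f}")   # 1.083
print(f"gap (2P minus 3P): {expected_value(0.548, 2) - expected_value(0.361, 3):.3f}")  # 0.013
```

Adding the free-throw adjustment described above is what turns these simple figures into the “true value” numbers reported by the researchers.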

The researchers have made the true value data available in an online dashboard, and an accompanying graph shows the expected and true values of 2- and 3-point shots from 2016 to 2022.

According to this research, the expected value from average 2-point field goal attempts (FGA) is now worth 0.013 points more than average 3-point FGA, even before factoring in shot-pursuant free throw scoring. In other words, if you multiply the probability of making a 3-point FGA by the value of a 3-point FGA, the result is lower than if you multiply the probability of making a 2-point FGA by the value of a 2-point FGA.

When discussing true point value, the researchers use the term “shot attempts” instead of “field goal attempts” because their formula includes missed shots when a player is fouled, which is not included in standard field-goal attempt statistics. So, when including made and missed free throws, the disparity based on this new true value metric is even greater as average 2-point shot attempts are now worth 0.087 more points than 3-point shot attempts.

Officials from NBA teams and the league have discussed moving the 3-point line back from its current distance of 23 feet, 9 inches (22 feet in the corners). But as this study shows, the value of a 3-pointer is decreasing at the current distance, and teams are already starting to alter their shot selection to emphasize more high-percentage 2-point shots.

“These research findings do not coincide completely with the unresearched musings of NBA analysts Charles Barkley and Shaquille O’Neal,” Sanders says.

“For example, our findings do not suggest that such perimeter stars as Stephen Curry or Damian Lillard should not shoot a lot of threes. It means marginal stretch fours and other marginal outside shooters should not pull up for a 3 as often and that some marginal outside shooters should not extend their range to 25-26 feet or more. Players can still achieve the offensive spacing benefits of positioning on the perimeter without some players shooting from there quite as often.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Matt Michael, Syracuse University

 


Air pollution found to impair performance in matriculation exams in mathematical subjects

Researchers from the University of Oulu, Finland, investigated how air pollution affects students’ performance in matriculation exams, particularly in mathematical subjects. The study revealed that performance declines in exams involving thinking and memorization when fine particulate matter (PM2.5) levels in the school’s vicinity increase even slightly.

The research is the first to compare the same student’s performance on exams measuring the same skill within a short time frame. The skills in question are linguistic and mathematical abilities, measured by exams in the Finnish language, writing, reading, mathematics, and physics.

Researchers from the University of Oulu Business School examined the effects of very short-term exposure to air pollution on students’ performance in matriculation exams in Finland from 2006 to 2016.

According to the study, a one-unit increase in PM2.5 particle concentration (particles smaller than 2.5 micrometers) reduced the average student’s performance in a mathematical exam by approximately 0.13 percentage points compared to performance in a similar exam with lower fine particulate concentrations.

The study found no impact on linguistic skills due to an increase in fine particulate matter, and there were no gender differences observed.

Researchers were surprised to find significant effects on matriculation exam performance in Finland, a country with relatively low air pollution levels. This is the first time such effects have been demonstrated in Finland. The researchers emphasize that even in countries like Finland, where air pollution levels generally comply with the World Health Organization’s recommendations, reducing air pollution remains crucial.

“Increasing evidence suggests that exposure to air pollution during exams may have a decisive impact on the progression of students into further studies, especially if matriculation exam results are used as a significant selection criterion,” says University Researcher Marko Korhonen.

The primary data for the study came from Statistics Finland, covering all matriculation exams in Finland from spring 2006 to autumn 2016, including 22 academic terms. The study included over 370,000 final exams from Finnish high schools, involving 172,414 students from 253 schools in 54 municipalities.

Student performance was assessed using hourly air quality measurements from monitoring points located near the exam venues. The structure of Finnish high school final exams, where students take multiple exams in different courses, allowed the examination of each student’s test results in various final exams. Exams were conducted on different days in the same schools, and air quality was measured during the exams near each school.

The study, titled “The impact of ambient PM2.5 air pollution on student performance: Evidence from Finnish matriculation examinations,” has been published in Economics Letters.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of Oulu

 


Driving the best possible bargain now isn’t the best long-term strategy, according to game theory

Conventional wisdom says that you should never leave money on the table when negotiating. But research in my field suggests this could be exactly the wrong approach.

There’s mounting evidence that a short-term win at the bargaining table can mean a loss in terms of overall trust and cooperation. That can leave everyone—including the “winner”—worse off.

As a former executive, I’ve managed large contracts as both a buyer and a seller. Now, as a business professor, I study these trading partner relationships, exploring what works in practice. My work supports what economic theorists and social scientists have been arguing for years: The best results come when people collaborate to create long-term value instead of fighting for short-term wins.

What game are you playing?

Research into the art, science and practice of collaborative approaches dates back to the 1940s, when the mathematician John von Neumann and the economist Oskar Morgenstern used mathematical analysis to model competition and cooperation in living things.

Interest in collaborative approaches grew when researchers John Nash, John C. Harsanyi and Reinhard Selten won a Nobel Memorial Prize in Economic Sciences in 1994. Their work inspired academics around the world to delve deeper into what’s known as game theory.

Game theory is the study of the outcome of strategic interactions among decision makers. By using rigorous statistical methods, researchers can model what happens when people choose to cooperate or choose to take an aggressive, power-based approach to negotiation.

Many business leaders are taught strategies focusing on using their power and playing to win—often at the other party’s expense. In game theory, this is known as a zero-sum game, and it’s an easy trap to fall into.

But not every game has a clear winner or loser. In economics, a win-win game is called a nonzero-sum game. In this sort of situation, people aren’t fighting over whose slice of a pie will be larger. They’re working to grow the pie for everyone.


A second dimension of game theory is whether people are playing a one-shot or a repeated game. Think of a one-shot game as being like a trip to the flea market: you probably won’t see your trading partner again, so if you’re a jerk to them, the risk of facing consequences is low.

An interesting twist uncovered by studying repeated games is that when one party uses their power in a negotiation, it creates the urge for the other party to retaliate.

The University of Michigan’s Robert Axelrod, a mathematician turned game theorist, dubbed this a “tit-for-tat” strategy. His research, perhaps best known in the book “The Evolution of Cooperation,” uses statistics to show that when individuals cooperate, they come out better than when they don’t.
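A minimal sketch of that idea, using the conventional Axelrod-tournament payoffs (an assumption, since the article quotes no numbers), shows why: two tit-for-tat players who keep cooperating score far better than a cooperator facing a defector.

```python
# Minimal iterated prisoner's dilemma with the conventional Axelrod payoffs
# (an assumption; the article gives no numbers). "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """A power-based player who never cooperates."""
    return "D"

def play(strategy_a, strategy_b, rounds=50):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print("tit-for-tat vs tit-for-tat:  ", play(tit_for_tat, tit_for_tat))
print("tit-for-tat vs always-defect:", play(tit_for_tat, always_defect))
```

Over 50 rounds the two cooperating tit-for-tat players earn 150 points each, while the defector squeezes out a one-off gain and then both sides grind along at the punishment payoff.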

The case for leaving money on the table

Another Nobel laureate, American economist Oliver Williamson, has offered negotiating advice that most would call a paradigm shift—and some, a heresy.

That advice? Always leave money on the table—especially when you’ll be returning to the same “game” again. Why? According to Williamson, it sends a powerful signal of trustworthiness and credibility to one’s negotiating partner when someone consciously chooses to cooperate and build trust.

The opposite approach leads to lost trust and what the Nobel laureate economist Oliver Hart calls “shading.” This is a retaliatory behaviour that happens when a party isn’t getting the outcome it expected from a deal and feels the other party is to blame.

Simply put, noncollaborative approaches cause distrust and create friction, which adds transaction costs and inefficiencies.

The million-dollar question is whether collaborative approaches work in practice. And from my vantage point as a scholar, the answer is yes. In fields ranging from health care to high-tech, I see growing real-world evidence backing up the insights of game theory.

The lessons are simple yet profound: Playing a game together to achieve mutual interests is better than playing exclusively with self-interest in mind.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Kate Vitasek, The Conversation

 


Is there a typical rate of cultural evolution?

Are cultural evolution rates similar across human societies? The emerging field of Cliodynamics uses mathematical models to study history.

Tobias Wand and colleagues used a data-driven approach to estimate the rates of cultural evolution in 23 geographic areas, using data from Seshat: Global History Databank, which records nine “complexity characteristics” for 370 polities over 10,000 years, ending in the nineteenth century. The work is published in the journal PNAS Nexus.

The complexity characteristics are polity population; extent of polity territory; the size of the largest urban center; hierarchical complexity; the presence of professional soldiers, officers, priests, bureaucrats, and judges; the variety of observable structures; characteristics of writing and recordkeeping; variety of types of texts; and monetary system.

The authors focused on 23 Natural Geographic Areas (NGAs) and restricted their analysis to the central part of each NGA’s timeline so that they could focus on periods of steady, uninterrupted growth rather than moments of invasion or civilization-ending crisis. A universal logistic growth model was fitted to a variable combining all nine complexity characteristics.
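As an illustration of what fitting a logistic growth model involves, here is a minimal sketch using synthetic data; the parameter values and the single "complexity" variable are assumptions for demonstration, not the Seshat data or the authors' fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: complexity saturates at K, grows at rate r, with midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic "complexity characteristic" trajectory (illustrative, not Seshat data):
# 0 to 3,000 years, saturation level 1.0, noisy observations every 100 years.
rng = np.random.default_rng(42)
years = np.arange(0, 3000, 100, dtype=float)
observed = logistic(years, K=1.0, r=0.004, t0=1250.0) + rng.normal(0, 0.03, size=years.size)

params, _ = curve_fit(logistic, years, observed, p0=[1.0, 0.001, 1500.0])
K_hat, r_hat, t0_hat = params
print(f"fitted saturation K = {K_hat:.2f}, growth rate r = {r_hat:.4f}, midpoint t0 = {t0_hat:.0f} years")
```

The study fits curves of this general form to the combined complexity variable for each NGA, and the fitted timescales underlie the average figure quoted below.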

On average, it takes societies that develop in isolation about 2,500 years to reach high levels of social complexity, according to the authors. Full development of complexity was spread out over multiple cultural or institutional phases in every NGA.

According to the authors, the data suggest that there are common pathways in human social evolution across cultures.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to PNAS Nexus

 


Mathematical model connects innovation and obsolescence to unify insights across diverse fields

In Lewis Carroll’s Through the Looking-Glass, the Red Queen tells Alice, “It takes all the running you can do, to keep in the same place.” The race between innovation and obsolescence is like this.

Recent evidence of slowing technological and scientific progress, set against accelerating epidemiological risks in a globalized world, points to the importance of the relative rates of innovation and obsolescence.

When does innovation outpace, or fail to outpace, obsolescence? Understanding of this dynamic is still nascent, and the way innovation is discussed remains largely fragmented across fields. Despite some qualitative efforts to bridge this gap, insights are rarely transferred between them.

In research led by Complexity Science Hub (CSH), Eddie Lee and colleagues have taken an important step towards building those bridges with a quantitative mathematical theory that models this dynamic.

The paper, “Idea engines: Unifying innovation & obsolescence from markets & genetic evolution to science,” is published in Proceedings of the National Academy of Sciences.

“You could say this is an exercise in translation,” says Lee, the first author of the paper. “There’s a plethora of theories on innovation and obsolescence in different fields: from economist Joseph Schumpeter’s theory of innovation, to other ideas proposed by theoretical biologist Stuart Kauffman, or philosopher of science Thomas Kuhn. Through our work, we try to open the doors to the scientific process and connect aspects of the different theories into one mathematical model,” explains Lee, a postdoc researcher at CSH.

Space of the possible, and its boundaries

Lee, together with Geoffrey West and Christopher Kempes at the Santa Fe Institute, conceives of innovation as expanding the space of the possible while obsolescence shrinks it. The “space of the possible” encompasses the set of all realized potentialities within a system.

“Within the space of the possible, you might think of different manufacturing technologies available in firms. All the living mutation species would be a good example in biology. In science, you might think of scientific theories that are feasible and empirically supported,” says Lee.

The space of the possible grows as innovations are pulled in from the “adjacent possible,” Stuart Kauffman’s term for the set of all things that lie one step away from what is possible. Lee and his co-authors compare this with an obsolescent front, which is the set of all things that are on the verge of being discarded.

Three possible scenarios

Based on this picture of the space of the possible, the team modeled a general dynamics of innovation and obsolescence and identified three possible scenarios. There is an ever-expanding scenario, in which the possibilities available to agents grow without end. A Schumpeterian dystopia is the opposite of this world, where innovation fails to outpace obsolescence. A third scenario follows the original Schumpeterian concept of creation and destruction, in which new ways of production survive by eliminating old ones.
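A deliberately crude toy dynamic, not the authors' equations, can illustrate how the balance of the two rates separates these regimes: the space of the possible gains items at an assumed innovation rate and loses them at an assumed obsolescence rate.

```python
# A deliberately crude toy dynamic, NOT the model of Lee, Kempes and West:
# the "space of the possible" gains items at an innovation rate and loses items
# at an obsolescence rate, and the ratio of the two rates picks the regime.

def simulate(space_size, innovation_rate, obsolescence_rate, steps=100):
    for _ in range(steps):
        gained = innovation_rate * space_size   # pulled in from the adjacent possible
        lost = obsolescence_rate * space_size   # discarded at the obsolescent front
        space_size = max(space_size + gained - lost, 0.0)
    return space_size

start = 100.0
print("ever-expanding:        ", round(simulate(start, 0.05, 0.02), 1))  # grows without end
print("Schumpeterian dystopia:", round(simulate(start, 0.02, 0.05), 1))  # shrinks toward zero
print("creative destruction:  ", round(simulate(start, 0.04, 0.04), 1))  # turnover, size roughly steady
```

With invented rates of 5% and 2% per step the space explodes, with the rates reversed it collapses, and with equal rates it turns over while staying roughly the same size, mirroring the three scenarios.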

The model was tested with real-world data from a variety of fields, from measures of firm productivity to COVID-19 mutations and scientific citations. Thus, the researchers were able to bring together examples that have heretofore been considered in isolation from one another. Both the model and the data are for the average set of dynamics rather than focusing on specific innovations, which allows for the generalization emphasized in the paper.

“We saw a remarkable similarity between all the data, from economics, biology, and science of science,” states the CSH researcher. One key discovery is that all the systems seem to live around the innovative frontier. “Moreover, agents at the boundary of innovative explosion, whether close to it or far away, share the same characteristic profile,” adds Lee. In this profile, few agents are innovative and many are near obsolescence. West likens this to systems living on the “edge of chaos,” where a small change in the dynamics can lead to a large change in the state of the system.

Universal phenomenon

The novel approach could transform our understanding of the dynamics of innovation in complex systems. By trying to capture the essence of innovation and obsolescence as a universal phenomenon, the work brings divergent viewpoints together into a unified mathematical theory. “Our framework provides a way of unifying a phenomenon that has so far been studied separately with a quantitative theory,” say the authors.

“Given the critical role that innovation in all its multiple manifestations plays in society, it’s quite surprising that our work appears to be the first attempt to develop a sort of grand unified mathematical theory which is testable to understand its dynamics,” says West. “It’s still very crude but hopefully can provide a point of departure for developing a more detailed realistic theory that can help inform policy and practitioners.”

“We provide an average model of the combined dynamics of innovation and obsolescence,” says Kempes. “In the future it is exciting and important to think about how this average model meets up with detailed theories of how innovations actually occur. For example, how do current objects or technologies get combined to form new things in something like the recently proposed Assembly Theory?”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Complexity Science Hub