What are ‘multiplication facts’? Why are they essential to your child’s success in math?

One of the essential skills students need to master in primary school mathematics is “multiplication facts.”

What are they? Why are they so important? And how can you help your child master them?

What are multiplication facts?

Multiplication facts are the answers to multiplication sums up to 10×10. They are called “facts” because students are expected to recall them quickly and easily. You may remember learning them in school from a list of times tables.

The shift from “times tables” to “multiplication facts” is not just about language. It stems from teachers wanting children to see how multiplication facts can be used to solve a variety of problems beyond the finite times table format.

For example, if you learned your times tables in school (which typically went up to 12×12 and no further), you might be stumped by being asked to solve 15×8 off the top of your head. In contrast, we hope today’s students can use their multiplication facts knowledge to quickly see how 15×8 is equivalent to 10×8 plus 5×8.
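This fact-splitting uses the distributive property, and can be written as a quick check (a trivial sketch, using only the numbers from the example):

```python
# 15×8 is beyond the times tables, but it splits into two known facts:
# 15 × 8 = (10 + 5) × 8 = 10×8 + 5×8
assert 10 * 8 + 5 * 8 == 15 * 8
print(15 * 8)  # 120
```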

The shift in terminology also means we are encouraging students to think about the connections between facts. For example, when presented only in separate tables, it is tricky to see how 4×3 and 3×4 are directly connected.

Math education has changed

In a previous piece, we talked about how mathematics education has changed over the past 30 years.

In today’s mathematics classrooms, teachers still focus on developing students’ mathematical accuracy and fast recall of essential facts, including multiplication facts.

But we also focus on developing essential problem-solving skills. This helps students form connections between concepts, and learn how to reason through a variety of real-world mathematical tasks.

Why are multiplication facts so important?

By the end of primary school, it is expected students will know multiplication facts up to 10×10 and be able to recall the related division facts (for example, 10×9=90, therefore 90÷10=9).

Learning multiplication facts is also essential for developing “multiplicative thinking.” This is an understanding of the relationships between quantities, and is something we use on a daily basis.

When we are deciding whether it is better to purchase a 100g product for $3 or a 200g product for $4.50, we use multiplicative thinking to consider that 100g for $3 is equivalent to 200g for $6—not the best deal!
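That comparison can be sketched as a tiny unit-price calculation (a minimal illustration using the numbers from the example; the helper function is ours, not from the article):

```python
# Compare two pack sizes by their cost per 100g (numbers from the example).
def price_per_100g(price, grams):
    """Cost of 100g at the given pack price and pack size."""
    return price / grams * 100

small_pack = price_per_100g(3.00, 100)   # $3.00 per 100g
large_pack = price_per_100g(4.50, 200)   # $2.25 per 100g
assert large_pack < small_pack           # the 200g product is the better deal
```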

Multiplicative thinking is needed in nearly all math topics in high school and beyond. It is used in many topics across algebra, geometry, statistics and probability.

This kind of thinking is profoundly important. Research shows students who are more proficient in multiplicative thinking perform significantly better in mathematics overall.

In 2001, an extensive RMIT study found there can be as much as a seven-year difference in student ability within one mathematics class due to differences in students’ ability to access multiplicative thinking.

These findings have been confirmed in more recent studies, including a 2021 paper.

So, supporting your child to develop their confidence and proficiency with multiplication is key to their success in high school mathematics. How can you help?

Below are three research-based tips to help support children from Year 2 and beyond to learn their multiplication facts.

  1. Discuss strategies

One way to help your child’s confidence is to discuss strategies for when they encounter new multiplication facts.

Prompt them to think of facts they already know and how those can be used to work out the new fact.

For example, once your child has mastered the x2 multiplication facts, you can discuss how 3×6 (3 sixes) can be calculated by doubling 6 (2×6) and adding one more 6. We’ve now realized that x3 facts are just x2 facts “and one more”!

Strategies can be individual: students should use the strategy that makes the most sense to them. So you could ask a question such as “if you’ve forgotten 6×7, how could you work it out?” (We might personally think of 6×6=36 and add one more 6, but your child might do something different and equally valid.)

This is a great activity for any quiet car trip. It can also be a great drawing activity where you both have a go at drawing your strategy and then compare. Identifying multiple strategies develops flexible thinking.

  2. Help them practice

Practicing recalling facts under a friendly time crunch can be helpful in achieving what teachers call “fluency” (that is, answering quickly and easily).

A great game you could play with your children is “multiplication heads up.” Using a deck of cards, your child holds a card against their forehead so that you can see it but they cannot. You then flip over the top card of the deck and reveal it to your child. Using the revealed card and the card on your child’s head, you tell them the result of the multiplication (for example, if you flip a 2 and they have a 3, you tell them “6!”).

Based on knowing the result, your child then guesses what their card was.

If it is challenging to organize time to pull out cards, you can make an easier game by simply quizzing your child. Try to mix it up, asking questions about facts they know well alongside ones they are still learning.

Repetition and rehearsal will mean things become stored in long-term memory.

  3. Find patterns

Another great activity to do at home is print some multiplication grids and explore patterns with your child.

A good first step might be to give your child a blank or partially blank multiplication grid which they can practice completing.

Then, using colored pencils, they can color in patterns they notice. For example, each answer in the ×6 column is double the answer in the same row of the ×3 column. Another pattern they might see is that every answer in the ×2, ×4, ×6, ×8 and ×10 columns is even. They can also notice that the grid is mirrored along its diagonal.
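For readers who like to verify, the grid patterns described here can be checked programmatically (a small sketch of ours, not part of the original article):

```python
# Build a 10×10 multiplication grid and confirm the patterns above.
grid = [[row * col for col in range(1, 11)] for row in range(1, 11)]

# The ×6 column is always double the ×3 column in the same row.
assert all(row[5] == 2 * row[2] for row in grid)

# Every answer in the even-numbered columns (×2, ×4, ..., ×10) is even.
assert all(row[c] % 2 == 0 for row in grid for c in (1, 3, 5, 7, 9))

# The grid is mirrored along its diagonal (e.g. 4×3 equals 3×4).
assert all(grid[r][c] == grid[c][r] for r in range(10) for c in range(10))
```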

This also helps your child become a mathematical thinker, not just a calculator.

The importance of multiplication for developing your child’s success and confidence in mathematics cannot be overstated. We believe these ideas will give you the tools you need to help your child develop these essential skills.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Bronwyn Reid O’Connor and Benjamin Zunica, The Conversation

 


Why prices are so high—8 ways retail pricing algorithms gouge consumers

The just-released report of the inquiry into price gouging and unfair pricing conducted by Allan Fels for the Australian Council of Trades Unions does more than identify the likely offenders.

It finds the biggest are supermarkets, banks, airlines and electricity companies.

It’s not enough to know their tricks. Fels wants to give the Australian Competition and Consumer Commission more power to investigate and more power to prohibit mergers.

But it helps to know how they try to trick us, and how technology has enabled them to get better at it. After reading the report, I’ve identified eight key maneuvers.

  1. Asymmetric price movements

Otherwise known as Rocket and Feather, this is where businesses push up prices quickly when costs rise, but cut them slowly or late after costs fall.

It seems to happen for petrol and mortgage rates, and the Fels inquiry was presented with evidence suggesting it happens in supermarkets.

Brendan O’Keeffe from NSW Farmers told the inquiry wholesale lamb prices had been falling for six months before Woolworths announced a cut in the price of the lamb it was selling as a “Christmas gift.”

  2. Punishment for loyal customers

A loyalty tax is what happens when a business imposes higher charges on customers who have been with it for a long time, on the assumption that they won’t move.

The Australian Securities and Investments Commission has alleged a big insurer does it, setting premiums not only on the basis of risk, but also on the basis of what a computer model tells them about the likelihood of each customer tolerating a price hike. The insurer disputes the claim.

It’s often done by offering discounts or new products to new customers and leaving existing customers on old or discontinued products.

It happens a lot in the electricity industry. The plans look good at first, and then less good as providers bank on customers not making the effort to shop around.

Loyalty taxes appear to be less common among mobile phone providers. Australian laws make it easy to switch and keep your number.

  3. Loyalty schemes that provide little value

Fels says loyalty schemes can be a “low-cost means of retaining and exploiting consumers by providing them with low-value rewards of dubious benefit.”

Their purpose is to lock in (or at least bias) customers to choices already made.

Examples include airline frequent flyer points, cafe cards that give you your tenth coffee free, and supermarket points programs.

The Australian Competition and Consumer Commission has found many require users to spend a lot of money or time to earn enough points for a reward.

Others allow points to expire or rules to change without notice or offer rewards that are not worth the effort to redeem.

They also enable businesses to collect data on spending habits, preferences, locations and personal information, which can be used to construct customer profiles that let them target advertising, offers and higher prices at some customers and not others.

  4. Drip pricing that hides true costs

The Competition and Consumer Commission describes drip pricing as “when a price is advertised at the beginning of an online purchase, but then extra fees and charges (such as booking and service fees) are gradually added during the purchase process.”

The extras can add up quickly and make final bills much higher than expected.

Airlines are among the best-known users of the strategy. They often offer initially attractive base fares, but then add charges for baggage, seat selection, in-flight meals and other extras.
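As a rough illustration of how quickly the drips add up, here is a sketch with invented figures (none of these numbers come from the report):

```python
# Hypothetical airline checkout: fees accumulate on an attractive base fare.
base_fare = 79.00
added_fees = {"baggage": 30.00, "seat selection": 12.00, "in-flight meal": 15.00}

final_price = base_fare + sum(added_fees.values())
markup = (final_price - base_fare) / base_fare

print(final_price)       # 136.0
print(round(markup, 2))  # 0.72, i.e. 72% above the advertised fare
```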

  5. Confusion pricing

Related to drip pricing is confusion pricing where a provider offers a range of plans, discounts and fees so complex they are overwhelming.

Financial products like insurance have convoluted fee structures, as do electricity providers. Supermarkets do it by bombarding shoppers with “specials” and “sales.”

When prices change frequently and without notice, it adds to the confusion.

  6. Algorithmic pricing

Algorithmic pricing is the practice of using algorithms to set prices automatically, taking into account competitors’ responses, which is something akin to computers talking to each other.

When computers get together in this way, they can act as if they were colluding even if the humans involved in running the businesses never talk to each other.

The effect is even stronger when multiple competitors use the same third-party pricing algorithm, effectively allowing a single company to influence prices.

  7. Price discrimination

Price discrimination involves charging different customers different prices for the same product, setting each price in accordance with how much each customer is prepared to pay.

Banks do it when they offer better rates to customers likely to leave them, electricity companies do it when they offer better prices for business customers than households, and medical specialists do it when they offer vastly different prices for the same service to consumers with different incomes.

It is made easier by digital technology and data collection. While it can make prices lower for some customers, it can make them much higher for customers in a hurry or in urgent need of something.

  8. Excuse-flation

Excuse-flation is where general inflation provides “cover” for businesses to raise prices without justification.

It means that in times of general high inflation businesses can increase their prices even if their costs haven’t increased by as much.

On Thursday, Reserve Bank Governor Michele Bullock seemed to confirm that she thought some firms were doing this, saying that once inflation was brought back to the Bank’s target it would be “much more difficult, I think, for firms to use high inflation as cover for this sort of putting up their prices.”

A political solution is needed

Ultimately, our own vigilance won’t be enough. We will need political help. The government’s recently announced competition review might be a step in this direction.

The legislative changes should police business practices and prioritize fairness. Only then can we create a marketplace where ethics and competition align, ensuring both business prosperity and consumer well-being.

This isn’t just about economics, it’s about building a fairer, more sustainable Australia.


Credit of the article given to David Tuffley, The Conversation


Studies recommend increased research into achievement, engagement to raise student math scores

A new study into classroom practices, led by Dr. Steve Murphy, has found that extensive research fails to uncover how teachers can both remedy poor student engagement and help students perform well in math.

More than 3,000 research papers were reviewed over the course of the study, but only 26 contained detailed steps for teachers to improve both student engagement and results in math. The review is published in the journal Teaching and Teacher Education.

Dr. Murphy said the scarcity of research involving young children was concerning.

“Children’s engagement in math begins to decline from the beginning of primary school while their mathematical identity begins to solidify,” Dr. Murphy said.

“We need more research that investigates achievement and engagement together to give teachers good advice on how to engage students in mathematics and perform well.

“La Trobe has developed a model for research that can achieve this.”

While teachers play an important role in making decisions that impact the learning environment, Dr. Murphy said parents are also highly influential in children’s math education journeys.

“We often hear parents say, ‘It’s OK, I was never good at math,’ but they’d never say that to their child about reading or writing,” Dr. Murphy said.

La Trobe’s School of Education is determined to improve mathematical outcomes for students, arguing it’s an important school subject that is highly applicable in today’s technologically rich society.

Previous research led by Dr. Murphy published in Educational Studies in Mathematics found many parents were unfamiliar with the modern ways of teaching math and lacked self-confidence to independently assist their children learning math during the COVID-19 pandemic.

“The implication for parents is that you don’t need to be a great mathematician to support your children in math, you just need to be willing to learn a little about how schools teach math today,” Dr. Murphy said.

“It’s not all bad news for educators and parents. Parents don’t need to teach math; they just need to support what their children’s teacher is doing.

“Keeping positive, being encouraging and interested in their children’s math learning goes a long way.”


Credit of the article to be given to La Trobe University

 


Children’s positive attitude toward mathematics fades during the early school years, finds study

Children’s interest in, and competence perceptions of, mathematics are generally quite positive as they begin school, but turn less positive during the first three years. Changes in interest and self-concept are also associated with each other. In other words, if a child’s interest fades, so does their competence perception, and vice versa.

This is shown by a recent study from Finland published in the British Journal of Educational Psychology that explores the development of children’s motivation for mathematics during the early school years and how that development is associated with their mathematics competence. The researchers followed nearly 300 children for three years.

“A significant observation was that both school beginners’ higher initial motivation, and less decline in motivation during the follow-up, predicted better competence in the third grade, after accounting for initial differences in competence,” says Professor Markku Niemivirta of the University of Eastern Finland.

There were no gender differences in school beginners’ motivation and competence, but at the end of the follow-up, girls’ motivation had, on average, declined more than that of boys.

Gendered development is starting to show

The study shows that children are able to assess their motivation for mathematics quite accurately even as they begin school. In addition, children’s assessments of their interest and competence are already differentiated, despite being closely related.

“It is only natural that children are more interested in things they feel good at. And vice versa, they may do better in something they’re interested in.”

On average, however, school beginners’ positive motivation starts to decline during the early school years, and the scale of this decline is associated with later differences in competence. Although there are no gender differences in competence, girls’ more negative change in motivation on average reflects an unfortunate gendered development, the traces of which remain visible until much later.

Practices for maintaining interest and having experiences of success

Although the negative change observed in the study may partly reflect children’s more realistic self-assessment over time, the researchers suspect that a role is also played by mathematics gradually getting more difficult, and an emphasis being placed on performance.

“The observed association between a change in motivation and competence shows, however, the added value of positive interest and self-concept. It would be important to develop and apply teaching practices that support and maintain children’s interest in mathematics and strengthen their experiences of success,” Niemivirta says.

In the three-year study conducted by the Motivation, Learning and Well-being research collective, MoLeWe, children assessed their interest in, and competence perceptions of, mathematics annually. Mathematics competence was assessed by tests and teacher evaluations.


Credit of the article to be given University of Eastern Finland

 


Sports deflation: Study shows NBA 3-point shot has lost its value

When the NBA celebrated the start of its 75th season in fall 2021, it was clear that the 3-point shot adopted by the league in 1979-80 had transformed the sport.

The share of shot attempts taken from beyond the arc had increased in each of the previous 10 seasons, from 22.2% in 2010-11 to 39.2% in 2020-21, and it had been nearly five years since a team won a game without making at least one 3-pointer (that streak is now up to eight years). Led by 3-point specialists Steph Curry and Klay Thompson, the Golden State Warriors had won three of the previous seven NBA titles and were about to win a fourth in 2022.

It appeared that the 3-point revolution would never end. But a recent study by Falk College of Sport and Human Dynamics sport analytics professor Shane Sanders and associate professor Justin Ehrlich shows that while the number of 3-point shots continues to increase, the average expected value of a 3-pointer has been lower than that of a 2-pointer since the 2017-18 season.

“When taking fouled shots and made free throws into consideration, we found that what had long been a premium for the 3-point shot started to become a dispremium in the 2017-18 season and that trend is continuing,” Ehrlich says. “The implication of these findings is enormous in terms of potential impact on roster construction and offensive philosophies.”

The research preprint from Sanders and Ehrlich, “Estimating NBA Team Shot Selection Efficiency from Aggregations of True, Continuous Shot Charts: A Generalized Additive Model Approach,” is available through the Social Science Research Network website. Sanders and Ehrlich will present their paper as one of seven finalists in the research competition at the NBA-centric MIT Sloan Sports Analytics Conference March 1-2 in Boston, Massachusetts.

“In past conferences, there has been a lot of discussion among NBA executives about how basketball analytics created the 3-point ‘moneyball’ era of basketball and how this has impacted the popularity of the game,” Sanders says. “Perhaps ironically, our research uses basketball analytics, along with a fully specified team offensive objective function, to say there is now too much 3-point shooting for a point-maximizing offense.”

To conduct their research, Sanders and Ehrlich developed a new shot chart that uses a generalized additive model to estimate total shot proficiency continuously in the half-court. Their shot chart incorporates missed shots that draw a shooting foul—and shot-pursuant free throw scoring—to determine total scoring yield following a shot decision.

Current expected value formulas fall short by not including this additional information, which, when combined with the outcome of the initial shot attempt, results in what Sanders and Ehrlich call the “true point value” of a shot. For the 2022-23 NBA season:

  • True Value from 2-point shot attempts = 1.181
  • True Value from 3-point shot attempts = 1.094

And even without factoring in free throws, the researchers found that the expected value of 3-point shots is now lower than that of 2-point shots. For the 2022-23 NBA season:

  • Expected value from 2P field goal attempt = 2P% × 2 = 0.548 × 2 = 1.096
  • Expected value from 3P field goal attempt = 3P% × 3 = 0.361 × 3 = 1.083

The true value data can be found in this dashboard, and the graph above shows the expected and true values of 2- and 3-point shots from 2016-22.

According to this research, the expected value from average 2-point field goal attempts (FGA) is now worth 0.013 points more than average 3-point FGA, even before factoring in shot-pursuant free throw scoring. In other words, the probability of making a 3-point FGA multiplied by its value is now less than the probability of making a 2-point FGA multiplied by its value.

When discussing true point value, the researchers use the term “shot attempts” instead of “field goal attempts” because their formula includes missed shots when a player is fouled, which is not included in standard field-goal attempt statistics. So, when including made and missed free throws, the disparity based on this new true value metric is even greater as average 2-point shot attempts are now worth 0.087 more points than 3-point shot attempts.
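The two disparities follow directly from the season figures quoted above, and can be reproduced in a few lines (assuming only those published numbers):

```python
# Expected value per field goal attempt, 2022-23 season (figures from the text).
ev_2p = 0.548 * 2   # 2P% × 2 points
ev_3p = 0.361 * 3   # 3P% × 3 points
assert round(ev_2p - ev_3p, 3) == 0.013   # 2-pointers ahead, before free throws

# "True value" per shot attempt, which also counts shooting fouls and
# shot-pursuant free throws (figures as reported by Sanders and Ehrlich).
tv_2p = 1.181
tv_3p = 1.094
assert round(tv_2p - tv_3p, 3) == 0.087   # the gap widens with free throws
```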

Officials from NBA teams and the league have discussed moving the 3-point line back from its current distance of 23 feet, 9 inches (22 feet in the corners). But as this study shows, the value of a 3-pointer is decreasing at the current distance, and teams are already starting to alter their shot selection to emphasize more high-percentage 2-point shots.

“These research findings do not coincide completely with the unresearched musings of NBA analysts Charles Barkley and Shaquille O’Neal,” Sanders says.

“For example, our findings do not suggest that such perimeter stars as Stephen Curry or Damian Lillard should not shoot a lot of threes. It means marginal stretch fours and other marginal outside shooters should not pull up for a 3 as often and that some marginal outside shooters should not extend their range to 25-26 feet or more. Players can still achieve the offensive spacing benefits of positioning on the perimeter without some players shooting from there quite as often.”


Credit of the article given to Matt Michael, Syracuse University

 


Air pollution found to impair performance in matriculation exams in mathematical subjects

Researchers from the University of Oulu, Finland, investigated how air pollution affects students’ performance in matriculation exams, particularly in mathematical subjects. The study revealed that performance declines in exams involving thinking and memorization when fine particulate matter (PM2.5) levels in the school’s vicinity increase even slightly.

The research is the first to examine the same student’s performance in a test measuring the same skill in a short time frame. Skills refer to linguistic and mathematical abilities, measured by exams in the Finnish language, writing, reading, mathematics, and physics.

Researchers from the University of Oulu Business School examined the effects of very short-term exposure to air pollution on students’ performance in matriculation exams in Finland from 2006 to 2016.

According to the study, a one-unit increase in PM2.5 particle concentration (particles smaller than 2.5 micrometers) reduced the average student’s performance in a mathematical exam by approximately 0.13 percentage points compared to performance in a similar exam with lower fine particulate concentrations.
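The estimate implies a simple linear dose-response, which can be sketched as follows (the coefficient is the article’s 0.13 figure; the function and example inputs are ours):

```python
# Effect on mathematical-exam performance implied by the study's estimate.
EFFECT_PER_UNIT = -0.13  # percentage points per one-unit rise in PM2.5

def expected_change(delta_pm25):
    """Expected change in exam performance, in percentage points."""
    return EFFECT_PER_UNIT * delta_pm25

print(round(expected_change(1), 2))  # -0.13
print(round(expected_change(5), 2))  # -0.65 for a hypothetical five-unit spike
```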

The study found no impact on linguistic skills due to an increase in fine particulate matter, and there were no gender differences observed.

Researchers were surprised to find significant effects on matriculation exam performance in Finland, a country with relatively low air pollution levels. This is the first time such effects have been demonstrated in Finland. The researchers emphasize that even in countries like Finland, where air pollution levels generally comply with the World Health Organization’s recommendations, reducing air pollution remains crucial.

“Increasing evidence suggests that exposure to air pollution during exams may have a decisive impact on the progression of students into further studies, especially if matriculation exam results are used as a significant selection criterion,” says University Researcher Marko Korhonen.

The primary data for the study came from Statistics Finland, covering all matriculation exams in Finland from spring 2006 to autumn 2016, including 22 academic terms. The study included over 370,000 final exams from Finnish high schools, involving 172,414 students from 253 schools in 54 municipalities.

Student performance was assessed using hourly air quality measurements from monitoring points located near the exam venues. The structure of Finnish high school final exams, where students take multiple exams in different courses, allowed the examination of each student’s test results in various final exams. Exams were conducted on different days in the same schools, and air quality was measured during the exams near each school.

The study, titled “The impact of ambient PM2.5 air pollution on student performance: Evidence from Finnish matriculation examinations,” has been published in Economics Letters.


Credit of the article given to University of Oulu

 


Driving the best possible bargain now isn’t the best long-term strategy, according to game theory

Conventional wisdom says that you should never leave money on the table when negotiating. But research in my field suggests this could be exactly the wrong approach.

There’s mounting evidence that a short-term win at the bargaining table can mean a loss in terms of overall trust and cooperation. That can leave everyone—including the “winner”—worse off.

As a former executive, I’ve managed large contracts as both a buyer and a seller. Now, as a business professor, I study these trading partner relationships, exploring what works in practice. My work supports what economic theorists and social scientists have been arguing for years: The best results come when people collaborate to create long-term value instead of fighting for short-term wins.

What game are you playing?

Research into the art, science and practice of collaborative approaches dates back to the 1940s, when the mathematician John von Neumann and the economist Oskar Morgenstern used mathematical analysis to model competition and cooperation in living things.

Interest in collaborative approaches grew when researchers John Nash, John C. Harsanyi and Reinhard Selten won a Nobel Memorial Prize in Economic Sciences in 1994. Their work inspired academics around the world to delve deeper into what’s known as game theory.

Game theory is the study of the outcome of strategic interactions among decision makers. By using rigorous statistical methods, researchers can model what happens when people choose to cooperate or choose to take an aggressive, power-based approach to negotiation.

Many business leaders are taught strategies focusing on using their power and playing to win—often at the other party’s expense. In game theory, this is known as a zero-sum game, and it’s an easy trap to fall into.

But not every game has a clear winner or loser. In economics, a win-win game is called a nonzero-sum game. In this sort of situation, people aren’t fighting over whose slice of a pie will be larger. They’re working to grow the pie for everyone.


A second dimension of game theory is whether people are playing a one-shot or a repeated game. Think of a one-shot game as being like a trip to the flea market: you probably won’t see your trading partner again, so if you’re a jerk to them, the risk of facing the consequences is low.

An interesting twist uncovered by studying repeated games is that when one party uses their power in a negotiation, it creates the urge for the other party to retaliate.

The University of Michigan’s Robert Axelrod, a mathematician turned game theorist, dubbed this the “tit-for-tat” strategy. His research, perhaps best known from the book “The Evolution of Cooperation,” uses statistics to show that when individuals cooperate, they come out better than when they don’t.
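Tit-for-tat is easy to simulate. Below is a minimal iterated prisoner’s dilemma sketch using the standard textbook payoffs (T=5, R=3, P=1, S=0); this is an illustration of the idea, not Axelrod’s tournament code:

```python
# Payoff table: (my score, their score) for each pair of moves (C/D).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    """Play two strategies against each other; return their total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each player sees the opponent's past
        move_b = strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

# Mutual cooperation pays better than meeting a defector, for both sides.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
print(play(tit_for_tat, always_defect))  # (9, 14)
```

Note how the defector “wins” the head-to-head, yet earns less (14) than either cooperator in the cooperative pairing (30), which is the point the research makes.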

The case for leaving money on the table

Another Nobel laureate, American economist Oliver Williamson, has offered negotiating advice that most would call a paradigm shift—and some, a heresy.

That advice? Always leave money on the table—especially when you’ll be returning to the same “game” again. Why? According to Williamson, it sends a powerful signal of trustworthiness and credibility to one’s negotiating partner when someone consciously chooses to cooperate and build trust.

The opposite approach leads to lost trust and what the Nobel laureate economist Oliver Hart calls “shading.” This is a retaliatory behaviour that happens when a party isn’t getting the outcome it expected from a deal and feels the other party is to blame.

Simply put, noncollaborative approaches cause distrust and create friction, which adds transaction costs and inefficiencies.

The million-dollar question is whether collaborative approaches work in practice. And from my vantage point as a scholar, the answer is yes. In fields ranging from health care to high-tech, I see growing real-world evidence backing up the insights of game theory.

The lessons are simple yet profound: Playing a game together to achieve mutual interests is better than playing exclusively with self-interest in mind.


Credit of the article given to Kate Vitasek, The Conversation

 


Is there a typical rate of cultural evolution?

Are cultural evolution rates similar across human societies? The emerging field of Cliodynamics uses mathematical models to study history.

Tobias Wand and colleagues used a data-driven approach to estimate the rates of cultural evolution in 23 geographic areas, using data from Seshat: Global History Databank, which records nine “complexity characteristics” for 370 polities over 10,000 years, ending in the nineteenth century. The work is published in the journal PNAS Nexus.

The complexity characteristics are polity population; extent of polity territory; the size of the largest urban center; hierarchical complexity; the presence of professional soldiers, officers, priests, bureaucrats, and judges; the variety of observable structures; characteristics of writing and recordkeeping; variety of types of texts; and monetary system.

The authors focused on 23 Natural Geographic Areas (NGAs) and restricted their analysis to the central part of each NGA’s timeline so that they could focus on periods of steady, uninterrupted growth rather than moments of invasion or civilization-ending crisis. A universal logistic growth model was fitted to a variable combining all nine complexity characteristics.
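To illustrate the kind of fit involved, here is a minimal sketch of fitting a logistic growth curve with SciPy. The data below are synthetic and invented for illustration; the study itself fits a combined complexity variable derived from the Seshat databank, which is not reproduced here.

```python
# Illustrative sketch only: fitting a logistic growth model to a
# synthetic "social complexity" time series (made-up data, not Seshat).
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: complexity saturates at carrying capacity K."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic observations: slow start, rapid growth, then saturation.
t = np.linspace(0, 3000, 60)                      # years
truth = logistic(t, K=1.0, r=0.004, t0=1500)
rng = np.random.default_rng(0)
obs = truth + rng.normal(0, 0.02, t.size)

(K, r, t0), _ = curve_fit(logistic, t, obs, p0=[1.0, 0.001, 1000])
print(f"carrying capacity {K:.2f}, growth rate {r:.4f}, midpoint {t0:.0f} years")
```

The fitted midpoint `t0` plays the role of the timescale the authors estimate: how long a society takes to pass through its period of fastest growth in complexity.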

On average, it takes societies that develop in isolation about 2,500 years to reach high levels of social complexity, according to the authors. Full development of complexity was spread out over multiple cultural or institutional phases in every NGA.

According to the authors, the data suggest that there are common pathways in human social evolution across cultures.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to PNAS Nexus

 


Mathematical model connects innovation and obsolescence to unify insights across diverse fields

In Lewis Carroll’s Through the Looking-Glass, the Red Queen tells Alice, “It takes all the running you can do, to keep in the same place.” The race between innovation and obsolescence is like this.

Recent evidence of slowing technological and scientific progress, set against accelerating epidemiological risks in a globalized world, underlines the importance of the relative rates of innovation and obsolescence.

When does innovation outpace, or fail to outpace, obsolescence? Our understanding of this dynamic is nascent, and the way innovation is discussed is largely fragmented across fields. Despite some qualitative efforts to bridge this gap, insights are rarely transferred.

In research led by Complexity Science Hub (CSH), Eddie Lee and colleagues have taken an important step towards building those bridges with a quantitative mathematical theory that models this dynamic.

The paper, “Idea engines: Unifying innovation & obsolescence from markets & genetic evolution to science,” is published in Proceedings of the National Academy of Sciences.

“You could say this is an exercise in translation,” says Lee, the first author of the paper. “There’s a plethora of theories on innovation and obsolescence in different fields: from economist Joseph Schumpeter’s theory of innovation, to other ideas proposed by theoretical biologist Stuart Kauffman, or philosopher of science Thomas Kuhn. Through our work, we try to open the doors to the scientific process and connect aspects of the different theories into one mathematical model,” explains Lee, a postdoc researcher at CSH.

Space of the possible, and its boundaries

Lee, together with Geoffrey West and Christopher Kempes at the Santa Fe Institute, conceives of innovation as expanding the space of the possible while obsolescence shrinks it. The “space of the possible” encompasses the set of all realized potentialities within a system.

“Within the space of the possible, you might think of different manufacturing technologies available in firms. All the living mutation species would be a good example in biology. In science, you might think of scientific theories that are feasible and empirically supported,” says Lee.

The space of the possible grows as innovations are pulled in from the “adjacent possible,” Stuart Kauffman’s term for the set of all things that lie one step away from what is possible. Lee and his co-authors compare this with an obsolescent front, which is the set of all things that are on the verge of being discarded.

Three possible scenarios

Based on this picture of the space of the possible, the team modeled the general dynamics of innovation and obsolescence and identified three possible scenarios. In the ever-expanding scenario, the possibilities agents are capable of realizing grow without end. A Schumpeterian dystopia is the opposite: a world where innovation fails to outpace obsolescence. A third scenario follows Schumpeter’s original concept of creative destruction, in which new ways of production survive by eliminating old ones.
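The three regimes can be caricatured with a toy simulation — an illustration of the idea only, not the authors’ actual model. Treat the space of the possible as an interval whose innovative front advances with probability p per step and whose obsolescent front advances with probability q:

```python
# Toy caricature, not the paper's model: the space of the possible is an
# interval between an obsolescent front (trail) and an innovative front
# (lead); its width grows, collapses, or hovers depending on p vs q.
import random

def simulate_width(p, q, steps=10000, seed=1):
    """Return the final width of the space of the possible."""
    random.seed(seed)
    lead, trail = 1, 0
    for _ in range(steps):
        if random.random() < p:
            lead += 1                  # innovation expands the space
        if random.random() < q and trail < lead:
            trail += 1                 # obsolescence shrinks it from behind
    return lead - trail

print(simulate_width(p=0.6, q=0.3))   # ever-expanding: width grows steadily
print(simulate_width(p=0.3, q=0.6))   # dystopia: width collapses toward zero
print(simulate_width(p=0.5, q=0.5))   # marginal case: width fluctuates
```

The balanced case, where the two fronts advance at comparable rates, loosely mirrors the paper’s observation that real systems tend to live near the innovative frontier.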

The model was tested with real-world data from a variety of fields, from measures of firm productivity to COVID-19 mutations and scientific citations. Thus, the researchers were able to bring together examples that have heretofore been considered in isolation from one another. Both the model and the data are for the average set of dynamics rather than focusing on specific innovations, which allows for the generalization emphasized in the paper.

“We saw a remarkable similarity between all the data, from economics, biology, and science of science,” states the CSH researcher. One key discovery is that all the systems seem to live around the innovative frontier. “Moreover, agents at the boundary of innovative explosion, whether close to it or far away, share the same characteristic profile,” adds Lee, where few agents are innovative and many are near obsolescence. West likens this to systems living on the “edge of chaos,” where a small change in the dynamics can lead to a large change in the state of the system.

Universal phenomenon

The novel approach could transform our understanding of the dynamics of innovation in complex systems. By trying to capture the essence of innovation and obsolescence as a universal phenomenon, the work brings divergent viewpoints together into a unified mathematical theory. “Our framework provides a way of unifying a phenomenon that has so far been studied separately with a quantitative theory,” say the authors.

“Given the critical role that innovation in all its multiple manifestations plays in society, it’s quite surprising that our work appears to be the first attempt to develop a sort of grand unified mathematical theory which is testable to understand its dynamics,” says West. “It’s still very crude but hopefully can provide a point of departure for developing a more detailed realistic theory that can help inform policy and practitioners.”

“We provide an average model of the combined dynamics of innovation and obsolescence,” says Kempes. “In the future it is exciting and important to think about how this average model meets up with detailed theories of how innovations actually occur. For example, how do current objects or technologies get combined to form new things in something like the recently proposed Assembly Theory?”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Complexity Science Hub

 


Mathematicians Have Finally Proved That Bach Was a Great Composer

Converting hundreds of compositions by Johann Sebastian Bach into mathematical networks reveals that they store lots of information and convey it very effectively.

Johann Sebastian Bach is considered one of the great composers of Western classical music. Now, researchers are trying to figure out why – by analysing his music with information theory.

Suman Kulkarni at the University of Pennsylvania and her colleagues wanted to understand how the ability to recall or anticipate a piece of music relates to its structure. They chose to analyse Bach’s opus because he produced an enormous number of pieces with many different structures, including religious hymns called chorales and fast-paced, virtuosic toccatas.

First, the researchers translated each composition into an information network by representing each note as a node and each transition between notes as an edge connecting them. Using these networks, they compared the quantity of information in each composition. Toccatas, which were meant to entertain and surprise, contained more information than chorales, which were composed for more meditative settings like churches.
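The basic construction is easy to sketch. The toy example below — invented note sequences, not the researchers’ exact pipeline — counts the transitions in a short melody and computes the Shannon entropy of those transitions, one common proxy for information content:

```python
# Sketch of the basic idea, not the study's exact pipeline: a melody
# becomes a network of note-to-note transitions, and the Shannon entropy
# of the transition distribution serves as a proxy for information.
from collections import Counter
from math import log2

def transition_entropy(notes):
    """Entropy (bits) of the distribution of note-to-note transitions."""
    edges = Counter(zip(notes, notes[1:]))       # each adjacent pair = one edge
    total = sum(edges.values())
    return -sum((n / total) * log2(n / total) for n in edges.values())

chorale_like = ["C", "D", "E", "D", "C", "D", "E", "D", "C"]   # repetitive
toccata_like = ["C", "G", "E", "A", "F", "B", "D", "C", "A"]   # varied

print(transition_entropy(chorale_like))   # 2.0 bits
print(transition_entropy(toccata_like))   # 3.0 bits
```

The more varied, surprising sequence spreads its probability over more distinct transitions and so carries higher entropy, mirroring the toccata-versus-chorale finding.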

Kulkarni and her colleagues also used information networks to compare Bach’s music with listeners’ perception of it. They started with an existing computer model based on experiments in which participants reacted to a sequence of images on a screen. The researchers then measured how surprising an element of the sequence was. They adapted information networks based on this model to the music, with the links between each node representing how probable a listener thought it would be for two connected notes to play successively – or how surprised they would be if that happened. Because humans do not learn information perfectly, networks showing people’s presumed note changes for a composition rarely line up exactly with the network based directly on that composition. Researchers can then quantify that mismatch.
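The study quantifies this mismatch with its own perception model; one standard textbook measure of the same kind of gap is the Kullback-Leibler divergence between the true transition probabilities and the listener’s inferred ones. The two tiny networks below are invented purely for illustration:

```python
# Hypothetical example: KL divergence between a composition's true
# transition probabilities and a listener's (imperfectly learned) ones.
# This is a standard mismatch measure, not necessarily the study's own.
from math import log2

def kl_divergence(p, q):
    """D(p || q) in bits; p and q map transitions to probabilities."""
    return sum(p[k] * log2(p[k] / q[k]) for k in p if p[k] > 0)

true_net     = {("C", "D"): 0.7, ("C", "E"): 0.3}
inferred_net = {("C", "D"): 0.6, ("C", "E"): 0.4}   # imperfect learner

print(kl_divergence(true_net, inferred_net))  # small value -> little mismatch
```

A divergence near zero means the listener’s internal network tracks the piece closely — the situation the researchers report for Bach.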

In this case, the mismatch was low, suggesting Bach’s pieces convey information rather effectively. However, Kulkarni hopes to fine-tune the computer model of human perception to better match real brain scans of people listening to the music.

“There is a missing link in neuroscience between complicated structures like music and how our brains respond to it, beyond just knowing the frequencies [of sounds]. This work could provide some nice inroads into that,” says Randy McIntosh at Simon Fraser University in Canada. However, there are many more factors that affect how someone perceives music – for example, how long a person listens to a piece and whether or not they have musical training. These still need to be accounted for, he says.

Information theory also has yet to reveal whether Bach’s composition style was exceptional compared with other types of music. McIntosh says his past work found some general similarities between musicians as different from Bach as the rock guitarist Eddie Van Halen, but more detailed analyses are needed.

“I would love to perform the same analysis for different composers and non-Western music,” says Kulkarni.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Karmela Padavic-Callaghan