Researchers develop online hate speech ‘shockwave’ formula

A George Washington University research team has created a novel formula that describes how, why, and when hate speech spreads throughout social media. The researchers put forth a first-principles dynamical theory that draws on a new realm of physics to represent the shockwave effect that bigoted content creates across online communities.

This effect is evident both on lightly moderated websites, such as 4chan, and on highly regulated social platforms like Facebook. Furthermore, hate speech ripples through online communities in a pattern that non-hateful content typically does not follow.

The new theory builds on recently gained knowledge about the pivotal role of in-built communities in the growth of online extremism. The formula weighs the competing forces of fusion and fission, accounting for the spontaneous emergence of in-built communities through the absorption of other communities and interested individuals (fusion) and for the disciplinary measures moderators take against users and groups that violate a given platform’s rules (fission).
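
To get a feel for these competing forces, consider the toy simulation below. It is a minimal Python sketch of a generic coalescence-fragmentation process of our own devising, inspired by the fusion/fission description above rather than taken from the paper itself; the parameter values are arbitrary.

```python
import random
from collections import Counter

def step(clusters, p_fuse=0.97):
    """One update of a toy coalescence-fragmentation process.

    `clusters` holds the sizes of online communities. Fusion merges two
    communities picked with probability proportional to size; fission
    shatters one back into isolated users, mimicking a moderator takedown.
    """
    if random.random() < p_fuse and len(clusters) >= 2:
        i = random.choices(range(len(clusters)), weights=clusters)[0]
        j = random.choices(range(len(clusters)), weights=clusters)[0]
        if i != j:  # merge two distinct communities
            lo, hi = sorted((i, j))
            clusters[lo] += clusters.pop(hi)
    else:
        i = random.choices(range(len(clusters)), weights=clusters)[0]
        clusters.extend([1] * clusters.pop(i))

random.seed(1)
clusters = [1] * 500            # start from isolated users
for _ in range(20_000):
    step(clusters)
print(Counter(clusters).most_common(5))  # a few large communities dominate
```

Even this crude model shows the qualitative point: when fusion dominates, large communities form abruptly rather than gradually.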

Researchers hope the formula can serve as a tool for moderators to project the shockwave-like spread of hateful content and develop methods to delay, divert, and prevent it from spiraling out of control. The novel theory could also be applied beyond social media platforms and online message boards, potentially powering moderation strategies on blockchain platforms, in generative AI, and in the metaverse.

“This study presents the missing science of how harms thrive online and, hence, how they can be overcome,” Neil Johnson, professor of physics at the George Washington University and co-author of the study, said. “This missing science is a new form of shockwave physics.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to George Washington University


‘Butterfly chaos effect’ discovered in swarms and herds of animals

Researchers at the Universidad Carlos III de Madrid (UC3M) and the Universidad Complutense de Madrid (UCM) have discovered a phase shift between chaotic states that can appear in herds of animals and, in particular, in swarms of insects. This advance may help to better understand their behaviour or be applied to the study of the movement of cells or tumors.

A phase shift occurs when the conditions of a system change drastically, for example, when water changes from a liquid to a solid state when it freezes. In this research, recently published in the journal Physical Review E, this group of mathematicians has found such a phenomenon in swarms. Related research is also available on the arXiv preprint server.

“The insects in the swarm stay in a limited volume, even if they’re in a park or an open space. To explain this, we assume that there is a harmonic potential, a kind of recuperative force that confines them (like that of a spring that tries to return to its resting position when we stretch or contract it),” explains one of the study’s authors, Luis L. Bonilla, director of UC3M’s Gregorio Millán Barbany Institute.

The strength of this confinement is set by a constant of proportionality between force and displacement. The researchers have found that for low confinement values, the movement of the insects in the swarm is chaotic (their movements change drastically if the initial conditions are changed slightly). In this context, the phase shift occurs when the swarm splits into several swarms that nevertheless remain closely related to each other, because insects move from one to another.
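
A minimal sketch of such a confined swarm can be written in a few lines of Python. This is our own illustrative toy model, not the equations from the paper: particles feel a harmonic restoring force with confinement constant k, relax towards the mean velocity of the group, and receive random kicks.

```python
import numpy as np

# Toy 2D swarm: harmonic confinement + velocity alignment + noise.
# Illustrative sketch only; parameter values are arbitrary.
rng = np.random.default_rng(0)
N, k, mu, eta, dt = 200, 0.05, 0.5, 0.3, 0.1  # k is the confinement constant

x = rng.normal(size=(N, 2))   # positions
v = rng.normal(size=(N, 2))   # velocities

for _ in range(2000):
    noise = rng.normal(scale=eta, size=(N, 2))
    # spring force confines the swarm; relaxation towards the mean velocity aligns it
    a = -k * x + mu * (v.mean(axis=0) - v) + noise
    v = v + dt * a
    x = x + dt * v

print("mean distance from centre:", np.linalg.norm(x, axis=1).mean())
```

In a model of this kind, lowering k weakens the confinement; it is in that low-confinement regime that the researchers report chaotic motion and the splitting into coupled sub-swarms.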

At the critical line between the phases of this shift, the distance between two insects in the swarm that influence each other is proportional to the size of the swarm, even if the number of insects in the swarm grows indefinitely. This is called “scale-free chaos” and had not been observed until now, according to the researchers.

“As the number of insects increases, the critical line moves towards zero confinement. What happens is that the maximum distance between two insects that still feel each other’s influence is proportional to the size of the swarm. It doesn’t matter how many insects we put in it. And that represents an absolute novelty that we have discovered,” explains Bonilla.

Specifically, what these mathematicians predict through numerical simulations is that certain swarms of insects (specifically a class of small flies) have scale-free chaotic behaviour, which translates into certain power laws with exponents similar to those measured in nature. They have also found a simplified mean-field theory that corroborates the scale-free chaos phase shift. “It would be good to look for and find the phase shift between chaotic phases that we predict, either in observations in nature or in controlled laboratory studies,” says another of the authors of the research, UCM mathematician Rafael González Albaladejo, who is also linked to UC3M’s Gregorio Millán Barbany Institute.

The formation of herds is one of the manifestations of so-called “active matter,” made up of something like self-propelled individuals that form a whole, the researchers explain. It can be a swarm of insects, a flock of sheep, a flock of birds, a school of fish, but also bacteria in motion, melanocytes (the cells that distribute pigments in the skin) or artificial systems such as periodically shaken irregular grains or seeds. “Herd formation mechanisms play a role in some of these systems, so the results we have obtained can be linked to biology, to the study of cells, and beyond that, to the study of tumors and other diseases,” adds Albaladejo.

How do so many animals move in unison? These researchers explain that each individual only senses its neighbours and moves accordingly, even though it has no perspective on the movement of the whole herd. And depending on whether they use sight, hearing or the vibrations of the fluid in which they are immersed, the concept of neighbour can change quite a bit.

Sheep moving together see and sense those around them, while birds in a flock see their nearest neighbours, even if they are quite far apart. “Moving accordingly may mean that they move in the same direction as their neighbours (the norm) or they may adopt different strategies depending on the situation. For example, if a crowd is trying to get out of a crowded pen with more than one gate, there are times when not following neighbours is advantageous,” they explain.

It has taken the mathematicians about two years to carry out this research work. Initially, they set out to explain experiments by studying the conventional phase shift, in which a crowd of insects filling a space at constant density becomes ordered once a control parameter passes a critical value (e.g., when the noise is decreased). But then they decided to add a harmonic potential to confine the swarm and explore what happens when the attractive force between individuals decreases.

“We discovered many periodic, quasi-periodic and finally chaotic states for a fixed number of insects that we increased. The surprising thing is the transition between chaotic states that we didn’t know or assume existed, and we were able to find the correct arguments and tests to support their existence,” says another of the study’s authors, Ana Carpio, from UCM’s Department of Mathematical Analysis and Applied Mathematics, who points out that there is still a lot to be done based on this work.

“From experimentally seeking confirmation of our predictions and better adapting the model to experimental observations, to carrying out theoretical and mathematical research that goes beyond our numerical simulations,” she concludes.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Carlos III University of Madrid


Implications of no-free-lunch theorems

In the 18th century, the philosopher David Hume observed that induction—inferring the future based on what’s happened in the past—can never be reliable. In 1997, SFI Professor David Wolpert with his colleague Bill Macready made Hume’s observation mathematically precise, showing that it’s impossible for any inference algorithm (such as machine learning or genetic algorithms) to be consistently better than any other for every possible real-world situation.

Over the next decade, the pair proved a series of theorems about this that were dubbed the “no-free-lunch” theorems. These proved that one algorithm could, in fact, be a bit better than another in most circumstances—but only at the cost of being far worse in the remaining circumstances.
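
The flavour of the result can be demonstrated in a few lines of code. The Python sketch below is our own toy illustration, not Wolpert and Macready’s formal setting: it enumerates every Boolean objective function on a three-point domain and shows that two different deterministic search orders produce exactly the same performance histogram once averaged over all possible functions.

```python
from itertools import product

domain = [0, 1, 2]
algo_a = [0, 1, 2]   # queries inputs left to right
algo_b = [2, 0, 1]   # a different deterministic query order

def queries_to_first_one(order, f):
    """Evaluations needed until the algorithm first sees output 1
    (len(domain) if it never does)."""
    for k, x in enumerate(order, start=1):
        if f[x] == 1:
            return k
    return len(domain)

def histogram(order):
    """Performance histogram over *every* Boolean function on the domain."""
    hist = {}
    for outputs in product([0, 1], repeat=len(domain)):
        f = dict(zip(domain, outputs))
        c = queries_to_first_one(order, f)
        hist[c] = hist.get(c, 0) + 1
    return hist

print(histogram(algo_a))  # {3: 2, 2: 2, 1: 4}
print(histogram(algo_b))  # same counts (order aside): no free lunch
```

Averaged over all problems, neither query order wins; an algorithm can only buy an advantage on some functions by paying for it on others.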

These theorems have been extremely controversial since their inception, because they punctured the claims of many researchers that the algorithms they had developed were superior to other algorithms. As part of the controversy, in 2019, the philosopher Gerhard Schurz wrote a book wrestling with the implications of Hume’s and Wolpert’s work.

A special issue of the Journal for General Philosophy of Science published in March 2023 is devoted to Schurz’s book, and includes an article by Wolpert himself, in which he reviews the “no-free-lunch” theorems, pointing out that there are also many “free-lunch” theorems.

He states that the meta-induction algorithms that Schurz advocates as a “solution to Hume’s problem” are simply examples of such a free lunch, based on correlations among the generalization errors of induction algorithms. Wolpert concludes that the prior that Schurz advocates, which is uniform over bit frequencies rather than bit patterns, is contradicted by thousands of experiments in statistical physics and by the great success of the maximum entropy procedure in inductive inference.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Santa Fe Institute


Exploring ‘compellingness’ in mechanism design

Consider an auction. You have two types of main protagonists or agents: a seller (or auctioneer) and many potential buyers. There are, of course, certain ground rules. For instance, one objective may be to design the auction in such a way that the person buying the item(s) up for sale is the buyer “who values that good the most.”

But, says Singapore Management University Associate Professor of Economics Takashi Kunimoto, what if you do not know which potential buyer does give the item the highest valuation?

“For example, I want to design a scheme in such a way that they’re willing to tell the truth about their valuation, and I can choose the person who values it the most. So that’s the kind of framework I have in mind to determine who’s going to be a winner, with what probability, and who pays how much.”

Professor Kunimoto, whose research interests include game theory, mechanism design, and macroeconomic theory, has written a paper in conjunction with two other SMU researchers, Professor of Economics Shurojit Chatterji, and Research Fellow Paulo Ramos, titled “Compellingness in Nash Implementation.”

John Nash, who featured in the Russell Crowe movie “A Beautiful Mind,” was awarded the Nobel Prize for Economics in 1994 for his work on the mathematics of game theory. Professor Kunimoto says that, when it comes to a group of agents interacting with each other—whether it be in an auction or in an institution or organization—he has to make a “fundamental assumption about where their interactions will lead,” often referred to as “Nash equilibrium.” (This is when no player can benefit by unilaterally changing their strategy and consequently is “somehow locked in and therefore cannot find anything better.”)

One potential issue with this framework, however, is that there may be many different Nash equilibria. “As a designer I don’t really know which one is going to be played,” says Professor Kunimoto.

This is where Nash implementation comes in. “Can I design a mechanism in such a way that every Nash equilibrium delivers an outcome I want to implement?”

Another Nobel laureate, Eric Maskin, had already established this basic framework, Professor Kunimoto told the Office of Research, but “one issue was omitted in the literature. Even though you’re looking at many equilibria, somehow, they only focus on ‘pure strategy’ equilibria so they don’t resort to randomization.” (A pure strategy means that the player deterministically chooses the same strategy each time.)

This is where the classic example of a penalty shoot-out in football comes in. There is a striker and a goalkeeper. They can go left or right (although, in reality, there are other options). It is, as Professor Kunimoto points out, a zero-sum game. Either the striker scores or does not. “So, my best strategy is the worst strategy for the other. There’s a complete conflict of interest.”

In this zero-sum game, the equilibrium play entails randomization, implying that the striker does not always try to place the ball in the same part of the net. “But implementing such a randomizing strategy might be quite sophisticated.”
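
For a concrete sense of what such a randomizing strategy looks like, consider the 2x2 zero-sum game sketched below. The scoring probabilities are invented for illustration only; the standard indifference conditions of game theory then pin down each player’s equilibrium mix.

```python
import numpy as np

# Striker's scoring probabilities (invented for illustration):
# rows = striker aims Left/Right, columns = keeper dives Left/Right.
A = np.array([[0.3, 0.9],    # striker Left: often saved if the keeper also goes Left
              [0.9, 0.3]])   # striker Right: the mirror image

# In a 2x2 zero-sum game, each player mixes so as to leave the opponent
# indifferent between their two options.
den = A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1]
q = (A[1, 1] - A[0, 1]) / den   # keeper's probability of diving Left
p = (A[1, 1] - A[1, 0]) / den   # striker's probability of aiming Left

print(f"striker aims Left with p = {p:.2f}, keeper dives Left with q = {q:.2f}")
# With these symmetric payoffs both mix 50/50: equilibrium play is genuinely random.
```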

“When I say that with Nash implementation, every Nash equilibrium delivers the right outcome, somehow I implicitly assume they’re going to play some pure strategy which involves no randomization.”

Professor Kunimoto then opts for a mixed strategy approach which does involve randomization and “should be even better” at predicting outcomes. “If that’s the case, maybe I’d better be more careful about how I design the mechanism.”

Of course, the mixed strategy equilibrium could be less likely to be played than the pure strategy equilibrium. If this is the case, one can call such a mixed strategy equilibrium ‘not compelling’. Conversely, the mixed strategy equilibrium could be more likely to be played than the pure strategy equilibrium. In this case, one can call such a mixed strategy equilibrium ‘compelling’.

But, on the assumption that more than one mixed strategy could be played, “one might be called ‘compelling’ and the other might not.” (In the paper, the researchers call a mixed strategy equilibrium ‘compelling’ if its outcome Pareto-dominates any pure strategy outcome, that is, if it makes at least one party better off without leaving anyone worse off.)

To what extent then does the design of the mechanism need to be modified?

“The question is fundamental and was already addressed in the literature,” says Professor Kunimoto, “but somehow it was overlooked and that’s the context of the question I’m interested in.”

As we have seen, many assumptions are made in understanding the plausibility of both pure and mixed strategy equilibria in Nash implementation. However, isn’t a study of just two participating parties or agents somewhat limited in its focused approach?

“Yes, but we encountered difficulty characterizing what mixed strategy equilibria look like in the mechanism with more than two agents. To avoid some of the technical issues, we decided to confine our attention to the case of two agents.”

“When I design a mechanism, I do not necessarily look at one that works in the real world. To do that, maybe I have to come up with some robustness requirement, which, I hope, leads us to more natural mechanisms.”

“I just hope that finding a more natural mechanism might lead to a mechanism which might work in the real world, and I think my contribution is going to be somehow pushing this agenda towards finding more effective institutions—but it’s still a long way off.”

“Hopefully if we provide the set of guidelines, policymakers and others may find some of the applications useful, although given what I’ve said in the paper, it’s still a long shot.”

As for the paper itself, it has yet to be published and is likely to need some revision.

For instance, in the abstract, the researchers state that they “illustrate the difficulty of extending our result to the case of more than two agents.”

“When we extend our results to environments with three or more agents in a straightforward manner,” the paper concludes, “the class of environments in which compelling implementation is possible becomes very small.”

Professor Kunimoto says, however, they can handle more than two agents as this, he acknowledges, “was a significant limitation in the draft paper. Fortunately, we’re almost able to overcome that deficiency.”

To do that, they are now considering “mini versions of the two-person case, but in many pairs.”

In essence, it all comes down to ‘reverse engineering’ game theory. Instead of trying to make predictions about how the game is going to be played, “we want to go the other way round this,” Professor Kunimoto says.

“Somehow, I really want a particular prediction to be consistent with the objective I want to achieve. I want to design a mechanism, but the outcome is going to be exactly the one I want implemented.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Stuart Pallister, Singapore Management University


Theoretical study offers proof that one parallel world cannot be extremely different from the other

String theory in theoretical physics predicts the existence of parallel worlds (the mirror symmetry prediction). These two worlds (the A-side and the B-side) are supposed to differ in terms of the six-dimensional spaces (A and B) hidden in each world.

However, as these spaces are extremely similar and invisible, we cannot theoretically distinguish them from the world that we live in. Considerable research has been conducted on the properties of space A, and it is known that extreme changes (i.e., blowing up) do not occur in it under certain conditions.

Recently, however, it has been discovered that spaces A and B are transformed into each other in certain ways, with their apparently different objects corresponding to one another. The nature and extent of this transformation are not well understood, and research on the properties of space B has not yet progressed as far.

In this new study, published in the Asian Journal of Mathematics, researchers mathematically investigated whether space B has the same properties as space A. They transferred a known phenomenon from the A-side to the B-side and proved that blowing up does not occur in space B either, under certain conditions.

This achievement provides mathematical proof for one of the previously intuitively expected similarities between the A-side and B-side. Although the researchers made some assumptions to prove this theorem, in the future, they aim to clarify whether the theorem holds even without these assumptions.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of Tsukuba


Human crowds are best modelled by a ‘visual neighbourhood’

Human crowd dynamics are best predicted by a visual neighbourhood model, based on the visual fields of each person in the crowd. Birds flock, fish school, and human crowds, too, move in a collective motion pattern. Understanding human crowd behaviour can be useful for preventing jams, crushes, and stampedes. Mathematical models of collective motion are typically based on characterizing the local interactions between individuals.

One popular approach, called a metric model, is to quantify forces of attraction, repulsion, and velocity alignment for all neighbours within a fixed radius from the focal individual. Alternatively, in a topological model the focal individual might be influenced by a fixed number of near neighbours, regardless of the distance to the focal individual.
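
In code, the two definitions differ only in how the neighbour set is cut off. The helper functions below are a hypothetical sketch of ours, not the authors’ implementation:

```python
import numpy as np

def metric_neighbours(positions, focal, radius):
    """Metric model: everyone within a fixed radius of the focal individual."""
    d = np.linalg.norm(positions - positions[focal], axis=1)
    return [i for i in range(len(positions)) if i != focal and d[i] <= radius]

def topological_neighbours(positions, focal, k):
    """Topological model: the k nearest individuals, however far away they are."""
    d = np.linalg.norm(positions - positions[focal], axis=1)
    return [int(i) for i in np.argsort(d) if i != focal][:k]

# A random crowd of 30 pedestrians in a 10 m x 10 m area (synthetic data).
crowd = np.random.default_rng(3).uniform(0, 10, size=(30, 2))
print(metric_neighbours(crowd, 0, radius=2.0))
print(topological_neighbours(crowd, 0, k=5))
```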

For their study published in PNAS Nexus, Trenton Wirth and colleagues asked participants to walk in real and virtual crowds of varying densities, then changed the walking direction of some neighbours to see how the participants responded. The authors found that the data produced was better predicted by the metric model than by the topological model.

But the best model was based on the visual motions of the neighbours the focal individual could see. In dense crowds, near neighbours may partially or completely block the view of more distant neighbours, removing the distant neighbours from the focal pedestrian’s input. Pursuing a visual model promises more realistic simulations of crowd dynamics, according to the authors.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to PNAS Nexus


Machine learning model analyses why couples break up

What does artificial intelligence offer that goes beyond traditional statistical models, such as regression analysis, to investigate the behaviour of households, in particular the factors that cause the separation of couples and dissolution of the conjugal bond?

Together with Bruno Arpino (University of Florence) and Marco Le Moglie (Catholic University of Milan), we analysed data on over 2,000 German married or cohabiting couples, who were followed for a dozen years on average by the annual GSOEP survey (German Socio-Economic Panel); more than 900 of these unions ended in separation.

By adopting a machine learning approach (specifically, Random Survival Forests), the procedure found on its own the relationships between the various factors contained in the database. In this case it considered more than 40 factors, from age to education level, from health to personality traits. The mass of raw data was fed to the ML algorithm without precise hypotheses, simply indicating the break-up of the union as the event of interest, and the algorithm estimated the influence of each factor contained in the data. The variables that pose the greatest threat to the stability of a union were identified with an accuracy of 70% (a predictive ability that outperforms the 50% achieved by traditional regression methods).

Not only was ML able to discover the factors behind the breakup of couples, it was also able to use this knowledge to predict the end of a union before it happens. This is also because, instead of feeding all the available data to an ad hoc algorithm, half was used to train the algorithm and the validity of the results was verified on the other half of the dataset.
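
As a schematic of this kind of workflow, the sketch below uses the open-source scikit-survival library on synthetic data. All variable names and numbers here are placeholders of ours; the actual GSOEP variables and the authors’ exact pipeline are not reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 40))              # ~40 covariates, as in the study
time = rng.exponential(scale=12, size=n)  # years of observation (synthetic)
event = rng.random(n) < 0.45              # True = the union ended in separation

y = Surv.from_arrays(event=event, time=time)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)  # half to train, half to validate

rsf = RandomSurvivalForest(n_estimators=200, random_state=0)
rsf.fit(X_train, y_train)
print(rsf.score(X_test, y_test))          # concordance index on held-out couples
```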

The results of the analysis are very interesting, above all because the ML methodology is able to weigh the relative importance of the various factors in causing the breakup. Factors that were particularly influential in previous studies, such as unemployment and a partner’s high level of education and income, lost their relevance here.

The four major risk factors that emerged from the study are, in descending order: personal satisfaction, the woman’s quantity of paid work, some personality factors, and age.

The strongest predictor of separation is personal satisfaction: if both partners are dissatisfied, obviously the couple won’t last. Less obvious is that a strong drop in conjugal stability emerges when the woman is very satisfied with the union but the man much less so, while the reverse effect is less evident. If the woman works many hours outside the home, the risk of separation or divorce is higher, even when the man is more involved in domestic chores (but this result is nothing new and, according to the existing literature, it depends on the greater agency and independence of working women).

As for personality traits, high extraversion in men (classically linked to higher infidelity) and low openness in women (making them less adaptable to the changes brought about by cohabitation) are the traits most strongly associated with the end of a couple. A low level of conscientiousness in both partners (understood as organizational capacity in daily life, and therefore, if low, as disorder and inability to respect commitments) also does not help partners stay together. A level of neuroticism that is too high or too low can likewise be a problem: suffering from excessive anxiety, jealousy, guilt, worry or anger clearly complicates the relationship.

This is true above all for women; on the other hand, a partner who feels none of these emotions may lead the other (men, in this case) to read that personality trait as a lack of interest. However, no pairing of personality types was found to be more strongly associated with the breakup of the relationship. Finally, considering age, very young couples tend to be more unstable, but for women stability in relationships intensifies after the age of 40, while this is not the case for men.

ML analysis is not without limitations. A major one here is that the study refers only to Germany and has few details on the psychological profiles of the two partners. From a methodological point of view, however, the study demonstrates the great potential of ML techniques in demographic and sociological research in general. It highlights their ability to monitor and analyse a large number of predictive factors and to automatically find linear or non-linear, additive or non-additive relations between these factors and the outcome of interest, with greater precision and more robust estimates in the presence of collinearity than commonly used methods.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Letizia Mencarini, Bocconi University


Study: German youth show weaker performance in math, reading and science compared to 2018

Young people in Germany are less proficient in mathematics, reading and science than their peers were in 2018. This is revealed by the latest PISA study. About one-third of the 15-year-olds tested achieved only a very low level of proficiency in at least one of the three subjects. The results confirm a downward trend already in evidence in the preceding PISA studies. The mathematics and reading scores of German students are now only at OECD average levels; they remain above average only in the natural sciences.

The PISA studies are regular assessments of the ability of 15-year-old students to solve problems in mathematics, reading and science in real-world contexts as they approach the end of compulsory schooling. The current study, coordinated by the Organization for Economic Cooperation and Development (OECD) and conducted in Germany by the Center for International Student Assessment (ZIB) at the Technical University of Munich (TUM), was carried out in the spring of 2022.

In many OECD countries the average scores in mathematics and reading were lower than in the previous PISA study in 2018. Scores were also down in natural sciences, although to a lesser extent.

In Germany the decrease in scores was larger than average in all three subjects. As a result, Germany is now significantly above the OECD average (492 vs. 485 points) only in natural sciences. In mathematics (475 vs. 472 points) and reading (480 vs. 476 points), the results now match the OECD average, which has also fallen in both subjects.

After the first PISA study in 2000, Germany initially achieved improvements in its results and was able to maintain them at a high level. In the most recent PISA rounds, however, there were signs of a negative trend. The scores in mathematics and natural sciences are now below those of the PISA studies in the 2000s, when those subjects were assessed in detail for the first time (mathematics: PISA 2003; natural sciences: PISA 2006). The reading scores of the current study are around the same as in PISA 2000, when that subject was a focal point for the first time.

Only a few OECD countries were able to improve some of their results between 2018 and 2022, for example Japan in reading and Italy, Ireland and Latvia in science. In mathematics, students in Japan and Korea show the highest average performance. The top countries in reading are Ireland, Japan, Korea and Estonia. Japan, Korea, Estonia and Canada have the best results in natural sciences.

Focus of the eighth PISA study: Mathematics

In the eighth Program for International Student Assessment (PISA) study, the skills of a representative sample of approximately 6,100 15-year-old secondary school students, attending around 260 schools of all types in Germany, were assessed. Students also answered a questionnaire about learning conditions, attitudes and social background. School principals, teachers and parents answered questions on lesson structure, classroom resources and the role of learning in the home. Approximately 690,000 students took part in the study worldwide. Every PISA cycle has a focus subject; this time it was mathematics.

The German part of the study was headed by the ZIB on behalf of the Standing Conference of the Ministers of Education and Cultural Affairs (KMK) and the Federal Ministry of Education and Research. Partners in the ZIB, alongside TUM, are the Leibniz Institute for Research and Information in Education (DIPF) and the Leibniz Institute for Pedagogy of Natural Sciences and Mathematics (IPN).

More students at low proficiency levels

According to the test scores, the study classifies the students in six proficiency levels. Students whose skills do not exceed proficiency level 1 require additional support in order to meet the demands of vocational training or further schooling and participate fully in society.

About one-third of the 15-year-olds achieved only these very low proficiency levels in at least one of the three tested subjects. Around one in six have significant deficiencies in all three subjects. The percentages of these particularly low-performing students have increased significantly since 2018. They now stand at 30% in mathematics, 26% in reading and 23% in science.

At the other end of the scale are the highly proficient students. In mathematics and reading they now represent only 9% and 8% of the total, respectively, while in natural sciences their share of the total remained unchanged, at around 10%.

The pandemic factor

The students’ answers to the questionnaires point to possible reasons for the lower scores: First, the researchers believe that school closures during the COVID pandemic negatively affected the ability to learn skills. In Germany schools made less use of digital media than the OECD average and relied more on materials mailed to students. “By comparison with other countries, Germany was not well prepared for distance learning in terms of digital hardware—but then caught up,” says study head Prof. Doris Lewalter, an educational researcher at TUM and managing director of the ZIB. Fewer than half of the low-proficiency students made use of available remedial options.

However, the analysis of the international data shows no systematic link between the decreases in proficiency between 2018 and 2022 and the duration of school closures. Some countries with relatively few school closure days have significantly lower scores than in 2018 while others, with higher numbers of days lost, show only small decreases or even slightly higher scores.

Language difficulties as a factor

A second possible factor to explain the results in the study’s focal area of mathematics: in Germany there is still a strong link between students’ proficiency in mathematics and the socio-economic status of their families and their immigrant background. Today’s 15-year-olds who themselves have immigrated to Germany are significantly less proficient at mathematics than the same group in 2012, when this question was last investigated. German is spoken less often in the homes of these students than in those of comparable students in 2012.

“This conclusion is only a partial explanation of the overall results, however,” says Prof. Lewalter. “The mathematical scores of non-immigrant students are also lower than in 2012—and even more so than for German-born children of immigrant parents.”

The factor of interest and motivation

To explain the long-term negative trend, the researchers therefore also take into account the students’ responses to the questionnaire regarding motivation, attitudes and classroom conditions. Compared to 2012, the students showed less enjoyment of and interest in mathematics. The subject also caused them more anxiety. In addition, the 15-year-olds see fewer potential benefits from learning mathematics.

“The results also show that the students feel less supported by their mathematics teachers. But this support is a key prerequisite for good instruction. In addition, the students have only limited awareness of teachers’ efforts to make lessons relevant to real-world contexts. This makes it more difficult for them to recognize the importance of mathematics in their lives—which can in turn decrease their motivation for the subject,” says Prof. Lewalter.

‘A big push’

As key conclusions from the PISA results, the educational researchers recommend:

  • A systematic diagnosis and development of linguistic and reading proficiency from pre-school to the secondary level. “German language skills are the foundation of success at school,” says Prof. Lewalter.
  • Ongoing development of instruction and the inclusion of digital media. “The students’ living realities are constantly changing and, along with them, the baseline conditions for applying mathematics, reading and science skills,” says Prof. Lewalter.
  • Needs-oriented resource allocation to better equip schools with large numbers of students from disadvantaged families and with immigrant backgrounds.

“After the first PISA study in 2000, Germany was able to significantly improve students’ skills with effective support programs,” says Prof. Lewalter. “With a big push, combining the efforts of policy makers, schools and society, we can do it again.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Technical University Munich


Mathematical model based on psychology predicts who will buy trendy products

It’s often risky to introduce new products to the market. In fact, statistics show that between 40 and 90 percent of new products fail. A key component of product adoption is consumer psychology. While a few theories attempt to explain why certain people are unlikely to accept novelties, a new study takes a slightly different approach.

Florida Atlantic University and collaborators developed and introduced a new mathematical innovation model, grounded in psychology, to provide both qualitative and quantitative predictions of adoption trends for new products.

The objective of the study was to test the researchers’ hypothesis that each adopter group responds to a different feature of the adoption trend:

  • the most innovative people make purchasing decisions independently of others;
  • highly innovative people are more likely to adopt when the adoption trend has an increasing rate of growth (an accelerating trend);
  • moderately innovative people are more likely to adopt when the adoption trend has a positive rate of growth (an increasing trend); and
  • the least innovative people pay attention only to the total number of adoptions when making the purchasing decision.

Researchers divided adopters into four categories: innovator, early adopter, majority and laggard. They applied their innovation model to predict sales data for 200 products of a supermarket chain over four years. They assigned particular preferences for various adoption trends based on the adopters’ psychological profiles and generated forecasts for retail sales. They then compared the performance of their innovation model in predicting sales with that of two other commonly used innovation models and one financial time series model.

Results of the study, published in the journal Physica A: Statistical Mechanics and its Applications, offer a new psychological interpretation of probably the most popular, existing adoption model used widely in marketing today.

Key findings reveal that different adopter groups look for particular properties of adoption trends to inform their purchasing decisions and that those properties can be formulated mathematically with testable predictions. Importantly, their innovation adoption model outperformed two behavioural models as well as the Bass model, which has become the standard model in business innovation forecasts.

Findings provide evidence that customers’ behaviour falls into these four groups but, more importantly, illustrate the usefulness of quantifying psychological behaviour in a general social context, especially with respect to innovation.

“We assumed that individuals’ decisions regarding the purchase of a new product are driven by the perceived type of adoption trend,” said Andrzej Nowak, Ph.D., co-author and a professor of psychology in FAU’s Charles E. Schmidt College of Science.

“Innovators look for new products and try them out first, regardless of what anyone else is doing. Early adopters, who look for new future successful products, try to get them early. They react to the value of the second derivative of cumulative sales when making decisions. The majority are interested in products quickly gaining popularity and are more likely to buy when the first derivative of cumulative sales is high. The laggards see only the total number of adopters, which is cumulative sales, as a convincing reason to buy.”
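
In code, the three signals Nowak describes are simply successive differences of the cumulative sales series. The numbers below are made up purely for illustration:

```python
import numpy as np

# Cumulative sales S(t) of a hypothetical product (numbers made up).
S = np.array([0, 2, 5, 10, 20, 38, 60, 80, 92, 98], dtype=float)

new_adoptions = np.diff(S)       # first derivative: what the majority watch
acceleration = np.diff(S, n=2)   # second derivative: what early adopters watch
total_adopters = S[-1]           # cumulative sales: what the laggards watch

print(new_adoptions)    # [ 2.  3.  5. 10. 18. 22. 20. 12.  6.]
print(acceleration)     # [ 1.  2.  5.  8.  4. -2. -8. -6.]
print(total_adopters)   # 98.0
```

Note how the acceleration peaks and turns negative well before cumulative sales level off, which is why the early adopters in this account move first and the laggards move last.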

The authors say that companies that want to survive and flourish in today’s fast-changing market need to constantly innovate and adapt to the changing conditions, customers’ sentiment, and emerging trends.

“Further research in the area of retail product sales forecasting is extremely important since accurate models of product adoption can help companies reduce waste from unsold products, which in many cases also can affect the environment and decrease storage costs,” said Nowak.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Florida Atlantic University


Math teachers hold a bias against girls when the teachers think gender equality has been achieved, says study

Math teachers who believe women no longer face discrimination tend to be biased against girls’ ability in math. This is what we found through an experiment we conducted with over 400 elementary and middle school math teachers across the United States. Our findings were published in a peer-reviewed article that appeared in April 2023 in the International Journal of STEM Education.

For our experiment, we asked teachers to evaluate a set of student solutions to math problems. The teachers didn’t know that gender- and race-specific names, such as Tanisha and Connor, had been randomly assigned to the solutions. We did this so that if they evaluated identical student work differently, it would be because of the gender- and race-specific names they saw, not the differences in student work. The idea was to see if the teachers had any unconscious biases.

After the teachers evaluated the student solutions, we asked a series of questions about their beliefs and experiences. We asked if they felt society had achieved gender equality. We asked them whether they felt anxious about doing math. We asked whether they felt students’ ability in math was fixed or could be improved. We also asked teachers to think about their own experience as math students and to report how frequently they experienced feelings of unequal treatment because of their race or gender.

We then investigated if these beliefs and experiences were related to how they evaluated the math ability of students of different genders or racial groups.

Consistent with our prior work, we found that implicit bias against girls arises in ambiguous situations—in this case, when student solutions were not completely correct.

Further, teachers who believed that U.S. society had achieved gender equality tended to rate a student’s ability higher when they saw a male student name than when they saw a female student name attached to the same student work.

Teachers’ unconscious gender biases in math classes have been documented repeatedly.

Our study identifies factors that underlie such biases; namely, that biases are stronger among teachers who believe that gender discrimination is not a problem in the United States. Understanding the relationship between teachers’ beliefs and biases can help teacher educators create effective and targeted interventions to remove such biases from classrooms.

Our findings also shed light on potential reasons that males tend to have higher confidence in math and stick with math-intensive college majors even when they’re not high performers.

One big remaining question is how to create targeted interventions to help teachers overcome such biases. Evidence suggests that unconscious biases come into play in situations where stereotypes might emerge. Further, research suggests that these unconscious biases can be suppressed only when people are aware of them and motivated to restrain them.

Since bias may take on different forms in different fields, a one-time, one-size-fits-all anti-bias training may not have a lasting effect. We think it’s worthwhile to investigate if it’s more effective to provide implicit bias training programs that are specific to the areas where bias is revealed.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Yasemin Copur-Gencturk, Ian Thacker and Joseph Cimpian, The Conversation