What math tells us about social dilemmas

Human coexistence depends on cooperation. Individuals have different motivations and reasons to collaborate, resulting in social dilemmas, such as the well-known prisoner’s dilemma. Scientists from the Chatterjee group at the Institute of Science and Technology Austria (ISTA) now present a new mathematical principle that helps to understand the cooperation of individuals with different characteristics. The results, published in PNAS, can be applied to economics or behavioural studies.

A group of neighbours shares a driveway. Following a heavy snowstorm, the entire driveway is covered in snow, requiring clearance for daily activities. The neighbours have to collaborate. If they all put on their down jackets, grab their snow shovels, and start digging, the road will be free in a very short amount of time. If only one or a few of them take the initiative, the task becomes more time-consuming and labour-intensive. If nobody does it, the driveway stays covered in snow. How can the neighbours overcome this dilemma and cooperate in their shared interest?

Scientists in the Chatterjee group at the Institute of Science and Technology Austria (ISTA) deal with questions of cooperation like this on a regular basis. They use game theory to lay the mathematical foundation for decision-making in such social dilemmas.

The group’s latest publication delves into the interactions between different types of individuals in a public goods game. Their new model, published in PNAS, explores how resources should be allocated for the best overall well-being and how cooperation can be maintained.

The game of public goods

For decades, the public goods game has been a proven method to model social dilemmas. In this setting, participants decide how much of their own resources they wish to contribute for the benefit of the entire group. Most existing studies considered homogeneous individuals, assuming that they do not differ in their motivations and other characteristics.

“In the real world, that’s not always the case,” says Krishnendu Chatterjee. To account for this, Valentin Hübner, a Ph.D. student, Christian Hilbe, and Maria Kleshina, both former members of the Chatterjee group, started modeling settings with diverse individuals.

A 2019 analysis of social dilemmas among unequals laid the foundation for their work, which now presents a more general model that even allows multi-player interactions.

“The public good in our game can be anything, such as environmental protection or combating climate change, to which everybody can contribute,” Hübner explains. The players have different levels of skills. In public goods games, skills typically refer to productivity.

“It’s the ability to contribute to a particular task,” Hübner continues. Resources, technically called endowment or wealth, on the other hand, refer to the actual things that participants contribute to the common good.
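
To make the setup concrete, here is a minimal sketch of a heterogeneous public goods game in Python. It illustrates the general framework only; the synergy factor and the way productivity scales contributions are assumptions chosen for illustration, not the exact model in the PNAS paper.

```python
# Minimal sketch of a heterogeneous public goods game (illustrative;
# not the exact model from the paper). Each player i has an endowment
# e_i (resources) and a productivity p_i (skill). A contribution c_i
# is scaled by p_i before entering the common pool; the pool is
# multiplied by a synergy factor r and shared equally.

def payoffs(endowments, productivities, contributions, r=1.6):
    n = len(endowments)
    pool = r * sum(p * c for p, c in zip(productivities, contributions))
    share = pool / n
    return [e - c + share for e, c in zip(endowments, contributions)]

# Two neighbours clearing a driveway: player 0 is twice as productive,
# and both contribute half of an equal endowment.
print(payoffs(endowments=[10, 10], productivities=[2.0, 1.0],
              contributions=[5, 5]))  # -> [17.0, 17.0]
```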

In the snowy driveway scenario, the neighbours vary significantly in their available resources and in their abilities to use them. Solving the problem requires them to cooperate. But what role does their inequality play in such a dilemma?

The two sides of inequality

Hübner’s new model provides answers to this question. Intuitively, it proposes that for diverse individuals to sustain cooperation, a more equal distribution of resources is necessary. Surprisingly, though, more equality does not lead to maximum general welfare. To maximise welfare, resources should be allocated to the more skilled individuals, resulting in a slightly uneven distribution.

“Efficiency benefits from unequal endowment, while robustness always benefits from equal endowment,” says Hübner. Put simply, for cooperation to remain stable, resources should be distributed almost evenly. Yet if efficiency is the goal, resources should be concentrated in the hands of those more willing to participate, but only to a certain extent.

What is more important—cooperation efficiency or stability? The scientists’ further simulations of learning processes suggest that individuals balance the trade-off between the two. Whether this is also the case in the real world remains to be seen. Numerous interpersonal nuances also contribute to these dynamics, including reciprocity, morality, and ethical considerations.

Hübner’s model solely focuses on cooperation from a mathematical standpoint. Yet, due to its generality, it can be applied to any social dilemma with diverse individuals, like climate change, for instance. Testing the model in the real world and applying it to society are very interesting experimental directions.

“I’m quite sure that there will be behavioural experiments benefiting from our work in the future,” says Chatterjee. The study could potentially also be interesting for economics, where the new model’s principles can help to better inform economic systems and policy recommendations.


For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Institute of Science and Technology Austria



Make mine a double: Moore’s Law and the future of mathematics

What do iPhones, Twitter, Netflix, cleaner cities, safer cars, state-of-the-art environmental management and modern medical diagnostics have in common? They are all made possible by Moore’s Law.

Moore’s Law stems from a seminal 1965 article by Intel co-founder Gordon Moore. He wrote:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year … Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least ten years. That means, by 1975, the number of components per integrated circuit for minimum cost will be 65,000.”

Moore noted that in 1965 engineering advances were enabling a doubling in semiconductor density every 12 months, but this rate was later modified to roughly 18 months. Informally, we may think of this as doubling computer performance.
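
Informally, the law is compound doubling. A minimal sketch, treating “performance” as transistor count and assuming a clean exponential:

```python
# Moore's Law as compound doubling: N(t) = N0 * 2**(t / T), where T is
# the doubling period in years (12 months in Moore's 1965 article,
# later revised to roughly 18 months).

def transistor_count(n0, years, doubling_period=1.5):
    return n0 * 2 ** (years / doubling_period)

# 45 years of 18-month doublings is 2**30, about a billion-fold growth.
print(transistor_count(1, 45))  # ~1.07e9
```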

In any event, Moore’s Law has now continued unabated for 45 years, defying several confident predictions that it would soon come to a halt, and represents a sustained exponential rate of progress without peer in the history of human technology. Graphed as the transistor count of successive computer processors over time, the trend appears as a straight line on a logarithmic scale.

Where we’re at with Moore’s Law

At the present time, researchers are struggling to keep Moore’s Law on track. Processor clock rates have stalled, as chip designers have struggled to control energy costs and heat dissipation, but the industry’s response has been straightforward — simply increase the number of processor “cores” on a single chip, together with associated cache memory, so that aggregate performance continues to track or exceed Moore’s Law projections.

The capacity of leading-edge DRAM main memory chips continues to advance apace with Moore’s Law. The current state of the art in computer memory devices is a 3D design, which will be jointly produced by IBM and Micron Technology, according to a December 2011 announcement by IBM representatives.

As things stand, the best bet for the future of Moore’s Law is carbon nanotubes — submicroscopic tubes of carbon atoms with remarkable properties.

According to a recent New York Times article, Stanford researchers have created prototype electronic devices by first growing billions of carbon nanotubes on a quartz surface, then coating them with an extremely fine layer of gold atoms. They then used a piece of tape (literally!) to pick the gold atoms up and transfer them to a silicon wafer. The researchers believe that commercial devices could be made with these components as early as 2017.

Moore’s Law in science and maths

So what does this mean for researchers in science and mathematics?

Plenty, as it turns out. A scientific laboratory typically uses hundreds of high-precision devices that rely crucially on electronic designs, and with each step of Moore’s Law, these devices become ever cheaper and more powerful. One prominent case is DNA sequencers. When scientists first completed sequencing a human genome in 2001, at a cost of several hundred million US dollars, observers were jubilant at the advances in equipment that had made this possible.

Now, only ten years later, researchers expect the cost to fall to US$1,000 within two years, and genome sequencing may well become a standard part of medical practice. This astounding improvement is even faster than Moore’s Law!

Applied mathematicians have benefited from Moore’s Law in the form of scientific supercomputers, which typically employ hundreds of thousands of state-of-the-art components. These systems are used for tasks such as climate modelling, product design and biological structure calculations.

Today, the world’s most powerful system is a Japanese supercomputer that recently ran the industry-standard Linpack benchmark test at more than ten “petaflops,” or, in other words, 10 quadrillion floating-point operations per second.

Below is a graph of the Linpack performance of the world’s leading-edge systems over the time period 1993-2011, courtesy of the website Top 500. Note that over this 18-year period, the performance of the world’s number one system has advanced more than five orders of magnitude. The current number one system is more powerful than the sum of the world’s top 500 supercomputers just four years ago.

Linpack performance over time.

Pure mathematicians have been relative latecomers to the world of high-performance computing. The present authors well remember the era, just a decade or two ago, when the prevailing opinion in the community was that “real mathematicians don’t compute.”

But thanks to a new generation of mathematical software tools, not to mention the ingenuity of thousands of young, computer-savvy mathematicians worldwide, remarkable progress has been achieved in this arena as well (see our 2011 AMS Notices article on exploratory experimentation in mathematics).

In 1963 Daniel Shanks, who had calculated pi to 100,000 digits, declared that computing one billion digits would be “forever impossible.” Yet this level was reached in 1989. In 1989, famous British physicist Roger Penrose, in the first edition of his best-selling book The Emperor’s New Mind, declared that humankind would likely never know whether a string of ten consecutive sevens occurs in the decimal expansion of pi. Yet this was found just eight years later, in 1997.
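
As a toy illustration of what such digit hunts involve, the sketch below uses the mpmath library to search the first 100,000 digits of pi for runs of consecutive sevens. (The actual run of ten sevens sits tens of billions of digits in, far beyond this range.)

```python
# Toy search for runs of consecutive sevens in the decimal expansion
# of pi, using mpmath for arbitrary-precision arithmetic.
from mpmath import mp

mp.dps = 100_000                       # work with 100,000 decimal digits
digits = mp.nstr(mp.pi, 100_000)[2:]   # drop the leading "3."

for run in range(1, 11):
    pos = digits.find("7" * run)
    if pos == -1:
        print(f"{run} consecutive sevens: not found in this range")
        break
    print(f"{run} consecutive sevens first appear at digit {pos + 1}")
```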

Computers are certainly being used for more than just computing and analysing digits of pi. In 2003, the American mathematician Thomas Hales completed a computer-based proof of Kepler’s conjecture, namely the long-hypothesised fact that the simple way the grocer stacks oranges is in fact the optimal packing for equal-diameter spheres. Many other examples could be cited.

Future prospects

So what does the future hold? Assuming that Moore’s Law continues unabated at approximately the same rate as the present, and that obstacles in areas such as power management and system software can be overcome, we will see, by the year 2021, large-scale supercomputers that are 1,000 times more powerful and capacious than today’s state-of-the-art systems — “exaflops” computers (see NAS Report). Applied mathematicians eagerly await these systems for calculations, such as advanced climate models, that cannot be done on today’s systems.

Pure mathematicians will use these systems as well to intuit patterns, compute integrals, search the space of mathematical identities, and solve intricate symbolic equations. If, as one of us discussed in a recent Conversation article, such facilities can be combined with machine intelligence, such as a variation of the hardware and software that enabled an IBM system to defeat the top human contestants in the North American TV game show Jeopardy!, we may see a qualitative advance in mathematical discovery and even theory formation.

It is not a big leap to imagine that within the next ten years tailored and massively more powerful versions of Siri (Apple’s new iPhone assistant) will be an integral part of mathematics, not to mention medicine, law and just about every other part of human life.

Some observers, such as those in the Singularity movement, are even more expansive, predicting a time just a few decades hence when technology will advance so fast that we cannot, from today’s vantage point, conceive or predict the outcome.

The present authors do not subscribe to such optimistic projections, but even if more conservative predictions are realised, it is clear that the digital future looks very bright indeed. We will likely look back at the present day with the same technological disdain with which we currently view the 1960s.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Jonathan Borwein (Jon), University of Newcastle and David H. Bailey, University of California, Davis



Mathematical models may help shed light on body clock disruptions

Researchers are using mathematical models to better understand the effects of disruptions like daylight saving time, working night shifts, jet lag or even late-night phone scrolling on the body’s circadian rhythms.

Researchers at the University of Waterloo and the University of Oxford have developed a new model to help scientists better understand the resilience of the brain’s master clock: the cluster of neurons in the brain that coordinates the body’s other internal rhythms. They also hope to suggest ways to improve this resilience in individuals with weak or impaired circadian rhythms. The study, “Can the Clocks Tick Together Despite the Noise? Stochastic Simulations and Analysis,” appears in the SIAM Journal on Applied Dynamical Systems.

Sustained disruptions to circadian rhythm have been linked to diabetes, memory loss, and many other disorders.

“Current society is experiencing a rapid increase in demand for work outside of traditional daylight hours,” said Stéphanie Abo, a Ph.D. student in applied mathematics and the study’s lead author. “This greatly disrupts how we are exposed to light, as well as other habits such as eating and sleeping patterns.”

Humans’ circadian rhythms, or internal clocks, are the roughly 24-hour cycles many body systems follow, usually alternating between wakefulness and rest. Scientists are still working to understand the cluster of neurons known as the suprachiasmatic nucleus (SCN), or master clock.

Using mathematical modeling techniques and differential equations, the team of applied mathematics researchers modeled the SCN as a macroscopic, or big-picture, system composed of an effectively infinite number of neurons. They were especially interested in understanding the system’s couplings—the connections between neurons in the SCN that allow it to achieve a shared rhythm.
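
For intuition, here is a minimal sketch of that kind of system: noisy coupled phase oscillators of the classic Kuramoto type, in which a coupling strength K pulls neurons toward a shared rhythm while a noise term sigma pushes them apart. The equations and parameters are illustrative assumptions, not the authors’ model.

```python
# Minimal sketch of noisy coupled phase oscillators (Kuramoto-type):
# can the clocks tick together despite the noise? Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, K, sigma, dt, steps = 500, 1.0, 0.3, 0.01, 20_000
omega = rng.normal(2 * np.pi / 24, 0.1, n)   # ~24-hour intrinsic rhythms
theta = rng.uniform(0, 2 * np.pi, n)         # initial phases

for _ in range(steps):
    z = np.exp(1j * theta).mean()            # mean field of the network
    r, psi = np.abs(z), np.angle(z)
    theta += (omega + K * r * np.sin(psi - theta)) * dt
    theta += sigma * np.sqrt(dt) * rng.normal(size=n)   # stochastic kicks

# Order parameter r lies in [0, 1]: near 1, the neurons share a rhythm.
print("synchrony r =", np.abs(np.exp(1j * theta).mean()))
```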

In the model, frequent and sustained disturbances to the body’s circadian rhythms eliminated the shared rhythm, implying a weakening of the signals transmitted between SCN neurons.

Abo said they were surprised to find that “a small enough disruption can actually make the connections between neurons stronger.”

“Mathematical models allow you to manipulate body systems with specificity that cannot be easily or ethically achieved in the body or a petri dish,” Abo said. “This allows us to do research and develop good hypotheses at a lower cost.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of Waterloo



Math teachers hold a bias against girls when the teachers think gender equality has been achieved, says study

Math teachers who believe women no longer face discrimination tend to be biased against girls’ ability in math. This is what we found through an experiment we conducted with over 400 elementary and middle school math teachers across the United States. Our findings were published in a peer-reviewed article that appeared in April 2023 in the International Journal of STEM Education.

For our experiment, we asked teachers to evaluate a set of student solutions to math problems. The teachers didn’t know that gender- and race-specific names, such as Tanisha and Connor, had been randomly assigned to the solutions. We did this so that if they evaluated identical student work differently, it would be because of the gender- and race-specific names they saw, not the differences in student work. The idea was to see if the teachers had any unconscious biases.

After the teachers evaluated the student solutions, we asked a series of questions about their beliefs and experiences. We asked if they felt society had achieved gender equality. We asked them whether they felt anxious about doing math. We asked whether they felt students’ ability in math was fixed or could be improved. We also asked teachers to think about their own experience as math students and to report how frequently they experienced feelings of unequal treatment because of their race or gender.

We then investigated if these beliefs and experiences were related to how they evaluated the math ability of students of different genders or racial groups.

Consistent with our prior work, we found that implicit bias against girls arises in ambiguous situations—in this case, when student solutions were not completely correct.

Further, teachers who believed that U.S. society had achieved gender equality tended to rate a student’s ability higher when the same work carried a male student name than when it carried a female student name.

Teachers’ unconscious gender biases in math classes have been documented repeatedly.

Our study identifies factors that underlie such biases; namely, that biases are stronger among teachers who believe that gender discrimination is not a problem in the United States. Understanding the relationship between teachers’ beliefs and biases can help teacher educators create effective and targeted interventions to remove such biases from classrooms.

Our findings also shed light on potential reasons that males tend to have higher confidence in math and stick with math-intensive college majors even when they’re not high performers.

One big remaining question is how to create targeted interventions to help teachers overcome such biases. Evidence suggests that unconscious biases come into play in situations where stereotypes might emerge. Further, research suggests that these unconscious biases can be suppressed only when people are aware of them and motivated to restrain them.

Since bias may take on different forms in different fields, a one-time, one-size-fits-all anti-bias training may not have a lasting effect. We think it’s worthwhile to investigate if it’s more effective to provide implicit bias training programs that are specific to the areas where bias is revealed.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Yasemin Copur-Gencturk, Ian Thacker and Joseph Cimpian, The Conversation


New Research Disproves a Long-Held ‘Cognitive Illusion’ That Hockey Goaltenders Improve Under Pressure

The good news is that—statistically speaking—there is reason to believe Edmonton Oilers goalie Stuart Skinner will improve against the Florida Panthers in the Stanley Cup final.

The bad news is it may not be enough to make a difference.

That’s according to a new study, “Do NHL goalies get hot in the playoffs?” by Likang Ding, a doctoral student studying operations and information systems in the Alberta School of Business. The study is published on the arXiv preprint server.

Ding’s statistical analysis—in the final stage of review for publication—disproves the long-held and prevailing “hot hand” theory that if a goalie is performing well, he’ll continue to perform as well or better as pressure intensifies.

The term “hot hand” derives from basketball, where it is believed a shooter is more likely to score if their previous attempts were successful.

“Our main finding is the nonexistence of the hot-hand phenomenon (for hockey goaltenders),” says Ding. “That is, no positive influence of recent save performance on the save probability for the next shot.”

Instead, Ding and co-authors Ivor Cribben, Armann Ingolfsson and Monica Tran found that, by a small margin, “better past performance may result in a worse future performance.”

That could mean Panthers goaltender Sergei Bobrovsky is due for a slight slump, given his relatively hot streak of late. But according to Ding, that decline may amount to no more than about 1%—certainly nothing to count on.

The reverse is also true, says Ding. If a goalie is underperforming, as Skinner has on occasion during the playoffs, statistics would forecast a slight uptick in his save percentage.

The explanation in that case might be the “motivation effect”; when a goaltender’s recent save performance has been below his average, his effort and focus increase, “causing the next-shot save probability to be higher.”

Here Ding quotes Hall of Fame goaltender Ken Dryden, who once said, “If a shot beats you, make sure you stop the next one, even if it is harder to stop than the one before.”

Though it wasn’t part of his current study, Ding says he reviewed Skinner’s stats before the finals and found a worse-than-average performance, “so I’m hoping he will come back eventually.”

Ding wanted to take a closer look at the hot hand theory because it is crucial in understanding coaches’ decisions about which goaltender to start in a given game. It could mean the second goalie deserves a chance to enter the fray, get used to the pace and stay fresh, even if it might seem risky.

Ding’s data set includes information about all shots on goal in the NHL playoffs from 2008 to 2016, amounting to 48,431 shots faced by 93 goaltenders over 795 games and nine playoff seasons.
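
As a rough illustration of the kind of shot-level analysis such a data set supports, the sketch below fits a logistic regression of “was the next shot saved?” on the goalie’s recent form. The data here are simulated with a small negative effect, mimicking the direction the study reports; the variable names and the recent-form measure are assumptions, not the authors’ specification.

```python
# Illustrative hot-hand test: regress next-shot save outcomes on a
# goalie's recent form. A positive coefficient would indicate a hot
# hand; the study reports a small negative effect instead.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_shots = 48_431                         # size of the playoff data set
recent_form = rng.normal(0, 1, n_shots)  # recent saves vs. goalie average
# Simulate a save probability around 90% with a small negative link.
p_save = 1 / (1 + np.exp(-(2.2 - 0.05 * recent_form)))
saved = rng.binomial(1, p_save)

model = LogisticRegression().fit(recent_form.reshape(-1, 1), saved)
print("estimated effect of recent form:", model.coef_[0][0])  # ~ -0.05
```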

The hot hand theory has been around for at least as long as professional sports and is often applied to a range of human endeavour to support the notion that “success breeds success”—an appealing, almost intuitive assumption.

And yet, a series of studies in the 1980s focused on basketball shooting percentages showed there was no statistical evidence to support the theory, says Ding, attributing it instead to a psychological tendency to see patterns in random data.

The hot hand theory remained controversial because the statistical methods used in those studies were later shown to be biased, says Ding. But even once the bias was corrected, the theory has been largely disproven.

Nobel Prize-winning cognitive scientist Daniel Kahneman once called the phenomenon “a massive and widespread cognitive illusion.” Ding’s study is one more confirming the consensus that the hot hand is no more than wishful thinking.

For more insights like this, visit our website at www.international-maths-challenge.com.

Credit of the article given to Geoff McMaster, University of Alberta


What game theory can teach us about standing up to bullies

In a time of income inequality and ruthless politics, people with outsized power or an unrelenting willingness to browbeat others often seem to come out ahead.

New research from Dartmouth, however, shows that being uncooperative can help people on the weaker side of the power dynamic achieve a more equal outcome—and even inflict some loss on their abusive counterpart.

The findings provide a tool based in game theory—the field of mathematics focused on optimizing competitive strategies—that could be applied to help equalize the balance of power in labor negotiations or international relations, and could even be used to integrate cooperation into interconnected artificial intelligence systems such as driverless cars.

Published in PNAS Nexus, the study takes a fresh look at what are known in game theory as “zero-determinant strategies,” developed by renowned scientists William Press, now at the University of Texas at Austin, and the late Freeman Dyson at the Institute for Advanced Study in Princeton, New Jersey.

Zero-determinant strategies dictate that “extortionists” control situations to their advantage by becoming less and less cooperative—though just cooperative enough to keep the other party engaged—and by never being the first to concede when there’s a stalemate. Theoretically, they will always outperform their opponent by demanding and receiving a larger share of what’s at stake.

But the Dartmouth paper uses mathematical models of interactions to uncover an “Achilles heel” to these seemingly uncrackable scenarios, said senior author Feng Fu, an associate professor of mathematics. Fu and first author Xingru Chen, who received her Ph.D. in mathematics from Dartmouth in 2021, discovered an “unbending strategy” in which resistance to being steamrolled not only causes an extortionist to ultimately lose more than their opponent but can result in a more equal outcome as the overbearing party compromises in a scramble to get the best payoff.

“Unbending players who choose not to be extorted can resist by refusing to fully cooperate. They also give up part of their own payoff, but the extortioner loses even more,” said Chen, who is now an assistant professor at the Beijing University of Posts and Telecommunications.

“Our work shows that when an extortioner is faced with an unbending player, their best response is to offer a fair split, thereby guaranteeing an equal payoff for both parties,” she said. “In other words, fairness and cooperation can be cultivated and enforced by unbending players.”
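
To see the mechanics, here is a sketch of an iterated prisoner’s dilemma pitting the chi = 3 extortionate strategy from Press and Dyson’s 2012 paper against a player who resists by cooperating only with probability q. The fixed-q resister is a simple stand-in for resistance, not the paper’s precise class of unbending strategies.

```python
# Iterated prisoner's dilemma: a Press-Dyson extortioner (chi = 3)
# versus a player who cooperates with fixed probability q. Payoffs are
# the classic (T, R, P, S) = (5, 3, 1, 0).
import numpy as np

rng = np.random.default_rng(2)
payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
# Extortioner's cooperation probability after (own, other) last moves;
# the chi = 3 example from Press & Dyson (PNAS, 2012).
p_ext = {("C", "C"): 11/13, ("C", "D"): 1/2,
         ("D", "C"): 7/26, ("D", "D"): 0.0}

def average_payoffs(q, rounds=200_000):
    x, y, totals = "C", "C", np.zeros(2)
    for _ in range(rounds):
        totals += payoff[(x, y)]
        x = "C" if rng.random() < p_ext[(x, y)] else "D"
        y = "C" if rng.random() < q else "D"
    return totals / rounds

for q in (1.0, 0.5, 0.1):   # less and less cooperative resistance
    ext, res = average_payoffs(q)
    print(f"q = {q}: extortioner {ext:.2f}, resister {res:.2f}")
```

With q = 1 the extortioner keeps roughly three times the resister’s surplus over the punishment payoff; as q falls, both averages sink toward 1, but the extortioner loses about three times as much along the way, which is exactly the lever an unbending player pulls.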

These scenarios frequently play out in the real world, Fu said. Labor relations provide a poignant model. A large corporation can strong-arm suppliers and producers such as farmworkers into accepting lower prices for their effort by threatening to replace them and cut them off from a lucrative market. But a strike or protest can turn the balance of power back toward the workers’ favour and result in more fairness and cooperation, such as when a labor union wins concessions from an employer.

While the power dynamic in these scenarios is never equal, Fu said, his and Chen’s work shows that unbending players can reap benefits by defecting from time to time and sabotaging what extortioners are truly after—the highest payoff for themselves.

“The practical insight from our work is for weaker parties to be unbending and resist being the first to compromise, thereby transforming the interaction into an ultimatum game in which extortioners are incentivized to be fairer and more cooperative to avoid ‘lose-lose’ situations,” Fu said.

“Consider the dynamics of power between dominant entities such as Donald Trump and the lack of unbending from the Republican Party, or, on the other hand, the military and political resistance to Russia’s invasion of Ukraine that has helped counteract incredible asymmetry,” he said. “These results can be applied to real-world situations, from social equity and fair pay to developing systems that promote cooperation among AI agents, such as autonomous driving.”

Chen and Fu’s paper expands the theoretical understanding of zero-determinant interactions while also outlining how the outsized power of extortioners can be checked, said mathematician Christian Hilbe, leader of the Dynamics of Social Behaviour research group at the Max Planck Institute for Evolutionary Biology in Germany.

“Among the technical contributions, they stress that even extortioners can be outperformed in some games. I don’t think that has been fully appreciated by the community before,” said Hilbe, who was not involved in the study but is familiar with it. “Among the conceptual insights, I like the idea of unbending strategies, behaviours that encourage an extortionate player to eventually settle at a fairer outcome.”

Behavioural research involving human participants has shown that extortioners may constitute a significant portion of our everyday interactions, said Hilbe, who published a 2016 paper in the journal PLOS ONE reporting just that. He also co-authored a 2014 study in Nature Communications that found people playing against a computerized opponent strongly resisted when the computer engaged in threatening conduct, even when it reduced their own payout.

“The empirical evidence to date suggests that people do engage in these extortionate behaviours, especially in asymmetric situations, and that the extorted party often tries to resist it, which is then costly to both parties,” Hilbe said.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Morgan Kelly, Dartmouth College


New research analyses ‘Finnegans Wake’ for novel spacing between punctuation marks

Sequences of consecutive breakpoint distances for “Gates of Paradise” and “Finnegans Wake” in the same scale. Credit: Stanisław Drożdż

Statistical analysis of classic literature has shown that the way punctuation breaks up text obeys certain universal mathematical relationships. James Joyce’s tome “Finnegans Wake,” however, famously breaks the rules of normal prose through its unusual, dreamlike stream of consciousness. New work in chaos theory, published in the journal Chaos, takes a closer look at how Joyce’s challenging novel stands out, mathematically.

Researchers have compared the distribution of punctuation marks in various experimental novels to determine the underlying order of “Finnegans Wake.” By statistically analysing the texts, the team has found that the tome exhibits an unusual but statistically identifiable structure.

“‘Finnegans Wake’ exhibits the type of narrative that makes it possible to continue longer strings of words without the need for punctuation breaks,” said author Stanisław Drożdż. “This may indicate that this type of narrative is less taxing on the human perceptual and respiratory systems or, equivalently, that it resonates better with them.”

The longer a sequence of words runs without a punctuation mark, the higher the probability that one appears next. Such a relationship is described by a Weibull distribution. Weibull distributions apply to anything from human diseases to “The Gates of Paradise,” a Polish novel written almost entirely as a single sentence spanning nearly 40,000 words.
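
The measurement behind such claims is straightforward: count the words between consecutive punctuation marks and fit a Weibull distribution to those counts. A sketch, in which the input file and the punctuation set are placeholder assumptions:

```python
# Count words between consecutive punctuation marks and fit a Weibull
# distribution to the resulting distances.
import re
from scipy import stats

text = open("novel.txt", encoding="utf-8").read()   # any plain-text novel
segments = re.split(r"[.,;:!?]", text)               # break at punctuation
distances = [len(s.split()) for s in segments if s.split()]

# Fix the location at zero and fit shape and scale. Shape k > 1 means
# the hazard rises with length: the longer a sequence runs, the more
# likely a punctuation mark becomes, as in ordinary prose.
shape, loc, scale = stats.weibull_min.fit(distances, floc=0)
print(f"Weibull shape k = {shape:.2f}, scale = {scale:.2f}")
```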

Enter “Finnegans Wake,” which weaves together puns, phrases, and portmanteaus from up to 70 languages into a dreamlike stream of consciousness. The book typifies Joyce’s later works, some of the only known examples to appear to not adhere to the Weibull distribution in punctuation.

The team broke down 10 experimental novels by word counts between punctuation marks. These sets of numbers were compiled into a singularity spectrum for each book that described how orderly sentences of different lengths are proportioned. “Finnegans Wake” has a notoriously broad range of sentence lengths, making for a wide spectrum.

While most punctuation distributions skew toward shorter word sequences, the wide singularity spectrum in “Finnegans Wake” was perfectly symmetrical, meaning sentence length variability follows an orderly curve.

This level of symmetry is a rare feat in the real world, implying a well-organized, complex hierarchical structure that aligns perfectly with a phenomenon known as multifractality, systems represented by fractals within fractals.

“‘Finnegans Wake’ appears to have the unique property that the probability of interrupting a sequence of words with a punctuation character decreases with the length of the sequence,” Drożdż said. “This makes the narrative more flexible to create perfect, long-range correlated cascading patterns that better reflect the functioning of nature.”

Drożdż hopes the work helps large language models better capture long-range correlations in text. The team next looks to apply their work in this domain.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to American Institute of Physics



Improved information and communication technology infrastructure leads to better math skills, research suggests

Students who are more digitally skilled also perform better in math. New research from Renae Loh and others at Radboud University shows that in countries with better availability of information and communication technology (ICT) in schools, math performance benefits greatly. It further suggests that improving the ICT environment in schools can reduce inequality in education between countries. The paper is published today in the European Educational Research Journal.

For anyone growing up today, ICT skills play a tremendously important role. Today’s youth constantly come into contact with technology throughout their lives, both in work and leisure. Though previous studies have shown the importance of ICT skills for students’ learning outcomes, this new study focuses specifically on their relevance to math and how that differs between countries.

“Both ICT and math rely on structural and logical thinking, which is why ICT skills overlap with and boosts math learning. But we were also curious to find out how much of that depends on a country’s ICT environment,” says Renae Loh, primary author of the paper and a sociologist at Radboud University.

Benefits of a strong ICT infrastructure

Loh and her colleagues used data from the 2018 PISA study, comparing 248,720 students aged 15 to 16 across 43 countries. Included in this data is information about the ICT skills of these students. They were asked whether they read new information on digital devices, and whether they would try to solve problems with those devices themselves, among other questions. The more positively students responded to these questions, the more skilled in ICT the researchers judged them to be.

Loh says, “What we found is that students get more educational benefit out of their digital skills in countries with a strong ICT infrastructure in education. This is likely because the more computers and other digital tools are available to them in their studies, the more they were able to put those skills to use, and the more valued these skills were. It is not a negligible difference either.”

“A strong ICT infrastructure in education could boost the math performance benefit students gain from their digital skills by about 60%. Differences in ICT infrastructure in education accounted for 25% of the between-country differences in how much math benefit students gain from their digital skills. It is also a better indicator than, for example, a more general measure of country wealth, because it is more pinpointed and more actionable.”

Reducing inequality

Especially notable to Loh and her colleagues was the difference that was apparent between countries with a strong ICT infrastructure, and countries without. “It was surprisingly straightforward, in some ways: the higher the computer-to-student ratio in a country, the stronger the math performance. This is consistent with the idea that these skills serve as a learning and signaling resource, at least for math, and students need opportunities to put these resources to use.”

Loh points out that there are limits to the insight offered by the data, however. “Our study doesn’t look at the process of how math is taught in these schools, specifically. Or how the ICT infrastructure is actually being used. Future research might also puzzle over how important math teachers themselves believe ICT skills to be, and if that belief and their subsequent teaching style influences the development of students, too.”

“There is still vast inequality in education around the world,” warns Loh. “And now there’s an added ICT dimension. Regardless of family background, gender, and so on, having limited access to ICT or a lack in digital skills is a disadvantage in schooling. What is clear is that the school environment is important here. More targeted investments in a robust ICT infrastructure in education would help in bridging the educational gap between countries and may also help to address inequalities in digital skills among students in those countries.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Radboud University


Window Patterns

Start with a square piece of paper (like a Post it note), fold and unfold it in half along a mid-line or along a diagonal. Take another identical square, and fold and unfold it the same way. Decide on some way to place the second square on the first, so that the second is somewhat rotated. Use only the edges of the square and the creases that you made to determine the placement. Make your placement precise so that your “rule” can be described exactly in terms of the edges and creases. Repeat this process, placing a third square on top of your second square using exactly the same rule. Repeat until your placing of papers leads you back to the first piece.

The resulting construction might look something like the one shown on the left below. If you take your papers, set them in place with some careful light gluing, and place them on a window, the sunlight passing through the overlapping papers creates a stained-glass effect that shows a variety of shapes.

This sort of construction is a simplified version of what William Gibbs describes in his book “Window Patterns.” In Gibbs’ treatment, the pattern is partially planned in advance, and then the dimensions of the rectangular pieces of paper that make up the pattern are determined using a little trigonometry. This process can be simplified by starting with a more limited range of options for paper dimension and placement. It turns out that a surprising number of window patterns can be created by only using squares, their mid-lines, and their diagonals, and that these patterns invariably have “special triangles” and related regular polygons and star-polygons embedded within them.

Here are two more “placement rules” and the patterns that they give rise to.

The diagrams were created using Geometer’s Sketchpad – if you construct the rule using translations applied to a constructed square, you can use the iteration feature to create the final pattern. GSP provides a good environment for planning out the patterns prior to constructing them with paper, and building the plans in GSP is enjoyable and instructive as well.
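
For those without Sketchpad, the same iteration can be sketched in a few lines of Python with matplotlib. The placement rule below (successive 15-degree rotations about a shared centre) is an assumption chosen for simplicity rather than one of the crease-based rules described above, but it produces the same kind of overlapping, stained-glass pattern:

```python
# Draw squares rotated in 15-degree steps about a common centre.
# Since a square repeats every 90 degrees, six placements return it
# to the start. Translucent fills darken where papers overlap, like
# sunlight through layered paper on a window.
import numpy as np
import matplotlib.pyplot as plt

corners = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]])
fig, ax = plt.subplots(figsize=(5, 5))

for k in range(6):                   # 6 * 15 = 90 degrees: back to start
    a = np.deg2rad(15 * k)
    rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    square = corners @ rot.T
    ax.fill(square[:, 0], square[:, 1], alpha=0.15, edgecolor="black")

ax.set_aspect("equal")
ax.axis("off")
plt.show()
```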

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*


Researchers find best routes to self-assembling 3-D shapes

This shows a few of the 2.3 million possible 2-D designs — planar nets — for a truncated octahedron (right column). The question is: Which net is best to make a self-assembling shape at the nanoscale?

Materials chemists and engineers would love to figure out how to create self-assembling shells, containers or structures that could be used as tiny drug-carrying containers or to build 3-D sensors and electronic devices.

There have been some successes with simple 3-D shapes such as cubes, but the list of possible starting points that could yield the ideal self-assembly for more complex geometric configurations gets long fast. For example, while there are 11 2-D arrangements for a cube, there are 43,380 for a dodecahedron (12 equal pentagonal faces). Creating a truncated octahedron (14 total faces – six squares and eight hexagons) has 2.3 million possibilities.

“The issue is that one runs into a combinatorial explosion,” said Govind Menon, associate professor of applied mathematics at Brown University. “How do we search efficiently for the best solution within such a large dataset? This is where math can contribute to the problem.”

In a paper published in the Proceedings of the National Academy of Sciences, researchers from Brown and Johns Hopkins University determined the best 2-D arrangements, called planar nets, to create self-folding polyhedra with dimensions of a few hundred microns, the size of a small dust particle. The strength of the analysis lies in the combination of theory and experiment. The team at Brown devised algorithms to cut through the myriad possibilities and identify the best planar nets to yield the self-folding 3-D structures. Researchers at Johns Hopkins then confirmed the nets’ design principles with experiments.

“Using a combination of theory and experiments, we uncovered design principles for optimum nets which self-assemble with high yields,” said David Gracias, associate professor of chemical and biomolecular engineering at Johns Hopkins and a co-corresponding author on the paper. “In doing so, we uncovered striking geometric analogies between natural assembly of proteins and viruses and these polyhedra, which could provide insight into naturally occurring self-assembling processes and is a step toward the development of self-assembly as a viable manufacturing paradigm.”

“This is about creating basic tools in nanotechnology,” said Menon, co-corresponding author on the paper. “It’s important to explore what shapes you can build. The bigger your toolbox, the better off you are.”

While the approach has been used elsewhere to create smaller particles at the nanoscale, the researchers at Brown and Johns Hopkins used larger sizes to better understand the principles that govern self-folding polyhedra.

The researchers sought to figure out how to self-assemble structures that resemble the protein shells viruses use to protect their genetic material. As it turns out, the shells used by many viruses are shaped like dodecahedra (a simplified version of a geodesic dome like the Epcot Center at Disney World). But even a dodecahedron can be cut into 43,380 planar nets. The trick is to find the nets that yield the best self-assembly. Menon, with the help of Brown undergraduate students Margaret Ewing and Andrew “Drew” Kunas, sought to winnow the possibilities. The group built models and developed a computer code to seek out the optimal nets, finding just six that seemed to fit the algorithmic bill.

The students got acquainted with their assignment by playing with a set of children’s toys in various geometric shapes. They progressed quickly into more serious analysis. “We started randomly generating nets, trying to get all of them. It was like going fishing in a lake and trying to count all the species of fish,” said Kunas, whose concentration is in applied mathematics. After tabulating the nets and establishing metrics for the most successful folding maneuvers, “we got lists of nets with the best radius of gyration and vertex connections, discovering which nets would be the best for production for the icosahedron, dodecahedron, and truncated octahedron for the first time.”
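
One plausible reading of the compactness criterion mentioned above is the radius of gyration of a net’s face centres: how spread out the unfolded net is around its centroid. The sketch below computes it for two of the 11 cube nets; the face coordinates are illustrative, and the paper’s exact metric may differ.

```python
# Radius of gyration of a planar net, computed from the centres of its
# unit-square faces. Compact nets (small radius of gyration) tended to
# fold into the target shape more reliably.
import numpy as np

def radius_of_gyration(face_centres):
    pts = np.asarray(face_centres, dtype=float)
    return np.sqrt(((pts - pts.mean(axis=0)) ** 2).sum(axis=1).mean())

# Two of the 11 planar nets of the cube, as face-centre coordinates:
cross = [(1, 0), (1, 1), (1, 2), (1, 3), (0, 1), (2, 1)]    # compact cross
zigzag = [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (3, -1)]  # straggly net

print("cross :", radius_of_gyration(cross))    # smaller: more compact
print("zigzag:", radius_of_gyration(zigzag))
```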

Gracias and colleagues at Johns Hopkins, who have been working with self-assembling structures for years, tested the configurations from the Brown researchers. The nets are nickel plates with hinges that have been soldered together in various 2-D arrangements. Using the options presented by the Brown researchers, the Johns Hopkins group heated the nets to around 360 degrees Fahrenheit, the point at which surface tension between the solder and the nickel plate causes the hinges to fold upward, rotate and eventually form a polyhedron. “Quite remarkably, just on heating, these planar nets fold up and seal themselves into these complex 3-D geometries with specific fold angles,” Gracias said.

“What’s amazing is we have no control over the sequence of folds, but it still works,” Menon added.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Karolina Grabowska/Pexels