Maths makes finding bat roosts much easier, our research shows

Finding bats is hard. They are small, fast and primarily fly at night. But our new research could improve the way conservationists find bat roosts. We’ve developed a new algorithm that significantly reduces the area that needs to be searched, which could save time and cut labour costs.

Of course, you may wonder why we would want to find bats in the first place. But these flying mammals are natural pest controllers and pollinators, and they help disperse seeds. So they are extremely useful in contributing to the health of our environment.

Despite their importance though, bat habitats are threatened by human activities such as increased lighting, noise and land use. To ensure that we can study and enhance the health of our bat population, we need to locate their roosts. But finding bat roosts is a bit like finding a needle in a haystack.

Our previous work measured and modelled the motion of greater horseshoe bats in flight. Having such a model means we can predict where bats will be, depending on their roost position. But the position of the roost is something we often don’t know.

Our new research combines our previous mathematical model of bat motion with data gathered from acoustic recorders known as “bat detectors”. These bat detectors are placed around the environment and left there for several nights.

Seeing with sound

Bats use echolocation, which allows them to “see with sound” when they’re flying. If these ultrasonic calls are made within 10 to 15 metres of a bat detector, the device is triggered to make a recording, providing an accurate record of where and when a bat was present.

The sound recordings also provide clues about the identity of the species. Greater horseshoe bats make a very distinctive “warbling” call at almost exactly 82 kHz, so we can easily tell whether the species is present or not.

Assuming that a bat detector’s batteries last for a few nights, its memory card is not full, and the units are not stolen or vandalised, then we can use the bat call data to generate a map that shows the proportion of bat calls at each detector location.

Our model can also be used to predict the proportion of bat calls based on a given roost location. So, we split the environment up into a grid and simulate bats flying from each grid square. The grid square, or squares, whose simulations best reproduce the bat detector data will then be the most likely locations of the roost.
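
To make the idea concrete, here is a minimal sketch of that grid search in Python. The function names and the toy simulator are illustrative stand-ins for our actual flight model, which is far more sophisticated; the point is only the scoring loop.

```python
import numpy as np

def roost_score_map(observed_props, simulate_props, grid_shape):
    """For each grid square, compare the simulated proportion of calls at
    each detector (for a roost placed in that square) with the observed
    proportions. Lower score = better match = more likely roost square."""
    scores = np.empty(grid_shape)
    for ix in range(grid_shape[0]):
        for iy in range(grid_shape[1]):
            sim = simulate_props(ix, iy)          # one proportion per detector
            scores[ix, iy] = np.sum((sim - observed_props) ** 2)
    return scores

# Toy usage: 3 detectors on a 10 x 10 grid, with a fake simulator that
# simply sends more calls to detectors near the candidate roost square.
rng = np.random.default_rng(0)
detectors = rng.uniform(0, 10, size=(3, 2))

def fake_simulator(ix, iy):
    d = np.linalg.norm(detectors - np.array([ix, iy]), axis=1)
    w = 1.0 / (1.0 + d)                           # closer detectors hear more
    return w / w.sum()

observed = fake_simulator(4, 7)                   # pretend the roost is at (4, 7)
scores = roost_score_map(observed, fake_simulator, (10, 10))
print("most likely roost square:", np.unravel_index(scores.argmin(), scores.shape))
```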

This simple algorithm can then be applied to whole terrains, meaning that we can create a map of likely roost locations. Cutting out the regions that are least likely to contain the roost can mean we shrink the search space to less than 1% of the initially surveyed area. Simplifying the process of finding bat roosts allows more of an ecologist’s time to be spent on conservation projects, rather than laborious searching.

In 2022, we developed an app that uses publicly available data to predict bat flight lines. At the moment the app can help ecologists, developers or local authority planners know how the environment is used by bats. However, it needs a roost location to be specified first, and this information is not always known. Our new research removes this barrier, making the app easier to use.

Our work offers a way of identifying likely roost locations. These estimates can then be verified either by directly observing particular features, or by capturing bats at a nearby location and following them back home, using radiotracking.

Over the past two decades, bat detectors have gone from simple hand-held machines to high-performance devices that can collect data for days at a time. Yet they are usually deployed only to identify bat species. We have shown they can be used to identify the areas most likely to contain bat roosts, uncovering critical information about these most secretive of animals.

We hope that this will provide further tools for ecologists to optimise the initial microphone detector locations, thereby providing a holistic way of detecting bat roosts.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Thomas Woolley and Fiona Mathews, The Conversation

 


Merging AI and Human Efforts to Tackle Complex Mathematical Problems

By rapidly analysing large amounts of data and making accurate predictions, artificial intelligence (AI) tools could help to answer many long-standing research questions. For instance, they could help to identify new materials to fabricate electronics or the patterns in brain activity associated with specific human behaviours.

One area in which AI has so far been rarely applied is number theory, a branch of mathematics focusing on the study of integers and arithmetic functions. Most research questions in this field are solved by human mathematicians, often years or decades after their initial introduction.

Researchers at the Israel Institute of Technology (Technion) recently set out to explore the possibility of tackling long-standing problems in number theory using state-of-the-art computational models.

In a recent paper, published in the Proceedings of the National Academy of Sciences, they demonstrated that such a computational approach can support the work of mathematicians, helping them to make new exciting discoveries.

“Computer algorithms are increasingly dominant in scientific research, a practice now broadly called ‘AI for Science,'” Rotem Elimelech and Ido Kaminer, authors of the paper, told Phys.org.

“However, in fields like number theory, advances are often attributed to creativity or human intuition. In these fields, questions can remain unresolved for hundreds of years, and while finding an answer can be as simple as discovering the correct formula, there is no clear path for doing so.”

Elimelech, Kaminer and their colleagues have been exploring the possibility that computer algorithms could automate or augment mathematical intuition. This inspired them to establish the Ramanujan Machine research group, a new collaborative effort aimed at developing algorithms to accelerate mathematical research.

Their research group for this study also included Ofir David, Carlos de la Cruz Mengual, Rotem Kalisch, Wolfgang Berndt, Michael Shalyt, Mark Silberstein, and Yaron Hadad.

“On a philosophical level, our work explores the interplay between algorithms and mathematicians,” Elimelech and Kaminer explained. “Our new paper indeed shows that algorithms can provide the necessary data to inspire creative insights, leading to discoveries of new formulas and new connections between mathematical constants.”

The first objective of the recent study by Elimelech, Kaminer and their colleagues was to make new discoveries about mathematical constants. While working toward this goal, they also set out to test and promote alternative approaches for conducting research in pure mathematics.

“The ‘conservative matrix field’ is a structure analogous to the conservative vector field that every math or physics student learns about in first year of undergrad,” Elimelech and Kaminer explained. “In a conservative vector field, such as the electric field created by a charged particle, we can calculate the change in potential using line integrals.

“Similarly, in conservative matrix fields, we define a potential over a discrete space and calculate it through matrix multiplications rather than using line integrals. Traveling between two points is equivalent to calculating the change in the potential and it involves a series of matrix multiplications.”
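
As a toy numerical illustration of this idea (not the construction used in the paper), one can place a matrix “potential” at each point of a small 2D lattice, build a step matrix for every edge, and check that travelling between two points yields the same matrix product whichever path is taken:

```python
import numpy as np

rng = np.random.default_rng(1)

# A matrix "potential": one invertible 2x2 matrix at each lattice point.
N = 4
P = rng.uniform(-1, 1, size=(N, N, 2, 2)) + 2 * np.eye(2)

def step_right(x, y):  # matrix attached to the edge (x, y) -> (x + 1, y)
    return P[x + 1, y] @ np.linalg.inv(P[x, y])

def step_up(x, y):     # matrix attached to the edge (x, y) -> (x, y + 1)
    return P[x, y + 1] @ np.linalg.inv(P[x, y])

# Travel from (0, 0) to (2, 2) along two different paths.
path_a = step_up(2, 1) @ step_up(2, 0) @ step_right(1, 0) @ step_right(0, 0)
path_b = step_right(1, 2) @ step_right(0, 2) @ step_up(0, 1) @ step_up(0, 0)

# Both products telescope to P[2, 2] @ inv(P[0, 0]): the "change in
# potential" between the two points, independent of the path.
print(np.allclose(path_a, path_b))  # True
```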

In contrast with the conservative vector field, the so-called conservative matrix field is a new discovery. An important advantage of this structure is that it can generalize the formulas of each mathematical constant, generating infinitely many new formulas of the same kind.

“The way by which the conservative matrix field creates a formula is by traveling between two points (or actually, traveling from one point all the way to infinity inside its discrete space),” Elimelech and Kaminer said. “Finding non-trivial matrix fields that are also conservative is challenging.”

As part of their study, Elimelech, Kaminer and their colleagues used large-scale distributed computing, which entails the use of multiple interconnected nodes working together to solve complex problems. This approach allowed them to discover new rational sequences that converge to fundamental constants (i.e., formulas for these constants).

“Each sequence represents a path hidden in the conservative matrix field,” Elimelech and Kaminer explained. “From the variety of such paths, we reverse-engineered the conservative matrix field. Our algorithms were distributed using BOINC, an infrastructure for volunteer computing. We are grateful to the contribution by hundreds of users worldwide who donated computation time over the past two and a half years, making this discovery possible.”
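
The rational sequences in question can be computed with the standard convergent recurrence for generalised continued fractions. The sketch below is a generic evaluator; it is demonstrated on the classical continued fraction for the golden ratio, a deliberately safe example rather than one of the project’s discoveries:

```python
from fractions import Fraction

def continued_fraction(a, b, depth):
    """Evaluate a(0) + b(1)/(a(1) + b(2)/(a(2) + ...)) to `depth` terms,
    returning the exact rational convergent p/q via the recurrence
    p[n] = a(n) p[n-1] + b(n) p[n-2] (and likewise for q)."""
    p_prev, p = Fraction(1), Fraction(a(0))
    q_prev, q = Fraction(0), Fraction(1)
    for n in range(1, depth):
        p, p_prev = a(n) * p + b(n) * p_prev, p
        q, q_prev = a(n) * q + b(n) * q_prev, q
    return p / q

# Sanity check: all partial terms equal to 1 gives the golden ratio.
phi = continued_fraction(lambda n: 1, lambda n: 1, 40)
print(float(phi))  # ~1.618033988749895
```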

The recent work by the research team at the Technion demonstrates that mathematicians can benefit more broadly from the use of computational tools and algorithms to provide them with a “virtual lab.” Such labs provide an opportunity to try out ideas experimentally on a computer, resembling the real experiments available in physics and other fields of science. Specifically, algorithms can carry out mathematical experiments that provide formulas, which can in turn be used to formulate new mathematical hypotheses.

“Such hypotheses, or conjectures, are what drives mathematical research forward,” Elimelech and Kaminer said. “The more examples supporting a hypothesis, the stronger it becomes, increasing the likelihood to be correct. Algorithms can also discover anomalies, pointing to phenomena that are the building-blocks for new hypotheses. Such discoveries would not be possible without large-scale mathematical experiments that use distributed computing.”

Another interesting aspect of this recent study is that it demonstrates the advantages of building communities to tackle problems. In fact, the researchers published their code online from their project’s early days and relied on contributions by a large network of volunteers.

“Our study shows that scientific research can be conducted without exclusive access to supercomputers, taking a substantial step toward the democratization of scientific research,” Elimelech and Kaminer said. “We regularly post unproven hypotheses generated by our algorithms, challenging other math enthusiasts to try proving these hypotheses, which when validated are posted on our project website. This happened on several occasions so far. One of the community contributors, Wolfgang Berndt, got so involved that he is now part of our core team and a co-author on the paper.”

The collaborative and open nature of this study allowed Elimelech, Kaminer and the rest of the team to establish new collaborations with other mathematicians worldwide. In addition, their work attracted the interest of some children and young people, showing them how algorithms and mathematics can be combined in fascinating ways.

In their next studies, the researchers plan to further develop the theory of conservative matrix fields. These matrix fields are a highly powerful tool for generating irrationality proofs for fundamental constants, which Elimelech, Kaminer and the team plan to continue experimenting with.

“Our current aim is to address questions regarding the irrationality of famous constants whose irrationality is unknown, sometimes remaining an open question for over a hundred years, like in the case of the Catalan constant,” Elimelech and Kaminer said.

“Another example is the Riemann zeta function, central in number theory, with its zeros at the heart of the Riemann hypothesis, which is perhaps the most important unsolved problem in pure mathematics. There are many open questions about the values of this function, including the irrationality of its values. Specifically, whether ζ(5) is irrational is an open question that attracts the efforts of great mathematicians.”

The ultimate goal of this team of researchers is to successfully use their experimental mathematics approach to prove the irrationality of one of these constants. In the future, they also hope to systematically apply their approach to a broader range of problems in mathematics and physics. Their physics-inspired, hands-on research style arises from the interdisciplinary nature of the team, which combines people specialized in computer science, electrical engineering, mathematics, and physics.

“Our Ramanujan Machine group can help other researchers create search algorithms for their important problems and then use distributed computing to search over large spaces that cannot be attempted otherwise,” Elimelech and Kaminer added. “Each such algorithm, if successful, will help point to new phenomena and eventually new hypotheses in mathematics, helping to choose promising research directions. We are now considering pushing forward this strategy by setting up a virtual user facility for experimental mathematics,” inspired by the long history and impact of user facilities for experimental physics.

For more insights like this, visit our website at www.international-maths-challenge.com.

Credit of the article given to Ingrid Fadelli, Phys.org


Study Finds Cooperation Can Still Evolve Even With Limited Payoff Memory

Direct reciprocity facilitates cooperation in repeated social interactions. Traditional models suggest that individuals learn to adopt conditionally cooperative strategies if they have multiple encounters with their partner. However, most existing models make rather strong assumptions about how individuals decide to keep or change their strategies: they assume these decisions are based on a strategy’s average performance, which in turn implies that individuals remember their exact payoffs against everyone else.

In a recent study, researchers from the Max Planck Institute for Evolutionary Biology, the School of Data Science and Society, and the Department of Mathematics at the University of North Carolina at Chapel Hill examine the effects of realistic memory constraints. They find that cooperation can evolve even with minimal memory capacities. The research is published in the journal Proceedings of the Royal Society B: Biological Sciences.

Direct reciprocity is based on repeated interactions between two individuals. This concept, often described as “you scratch my back, I’ll scratch yours,” has proven to be a pivotal mechanism in maintaining cooperation within groups or societies.

While models of direct reciprocity have deepened our understanding of cooperation, they frequently make strong assumptions about individuals’ memory and decision-making processes. For example, when strategies are updated through social learning, it is commonly assumed that individuals compare their average payoffs.

This would require them to compute (or remember) their payoffs against everyone else in the population. To understand how more realistic constraints influence direct reciprocity, the current study considers the evolution of conditional behaviours when individuals learn based on more recent experiences.

Two extreme scenarios

This study first compares the classical modeling approach with another extreme approach. In the classical approach, individuals update their strategies based on their expected payoffs, considering every single interaction with each member of the population (perfect memory). Conversely, the opposite extreme is considering only the very last interaction (limited memory).

Comparing these two scenarios shows that individuals with limited payoff memory tend to adopt less generous strategies. They are less forgiving when someone defects against them. Yet, moderate levels of cooperation can still evolve.
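
A minimal simulation conveys the flavour of this comparison. It is only a sketch, not the paper’s model: a population of tit-for-tat and always-defect players updates strategies by imitation, comparing payoffs recalled from either the full history (perfect memory) or only the most recent game (limited memory).

```python
import math
import random

# One-shot prisoner's dilemma payoffs for the row player.
R, S, T, P = 3, 0, 5, 1  # reward, sucker's payoff, temptation, punishment

STRATEGIES = {
    "ALLD": lambda opp_moves: False,                                # always defect
    "TFT": lambda opp_moves: opp_moves[-1] if opp_moves else True,  # tit-for-tat
}

def repeated_game(s1, s2, rounds=20):
    """Play a repeated game; return player 1's per-round average payoff."""
    h1, h2, total = [], [], 0
    for _ in range(rounds):
        m1 = STRATEGIES[s1](h2)  # each player reacts to the other's past moves
        m2 = STRATEGIES[s2](h1)
        total += (R if m2 else S) if m1 else (T if m2 else P)
        h1.append(m1)
        h2.append(m2)
    return total / rounds

def recalled_payoff(log, memory):
    """Average payoff over the last `memory` games (None = perfect memory)."""
    recent = log if memory is None else log[-memory:]
    return sum(recent) / len(recent)

def evolve(memory, pop_size=50, generations=5000, beta=5.0):
    pop = [random.choice(list(STRATEGIES)) for _ in range(pop_size)]
    logs = [[] for _ in range(pop_size)]
    for _ in range(generations):
        i, j = random.sample(range(pop_size), 2)
        for k in (i, j):  # each plays one repeated game against a random partner
            partner = random.choice([x for x in range(pop_size) if x != k])
            logs[k].append(repeated_game(pop[k], pop[partner]))
        diff = recalled_payoff(logs[j], memory) - recalled_payoff(logs[i], memory)
        if random.random() < 1 / (1 + math.exp(-beta * diff)):
            pop[i] = pop[j]  # i imitates j's strategy
    return sum(s == "TFT" for s in pop) / pop_size

print("share of reciprocators, perfect memory:", evolve(memory=None))
print("share of reciprocators, last game only:", evolve(memory=1))
```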

Intermediate cases

The study also considers intermediate cases, where individuals consider their last two, three or four experiences. The results show that cooperation rates quickly approach the levels observed under perfect payoff memory.

Overall, this study contributes to a wider literature that explores which kinds of cognitive capacities are required for reciprocal altruism to be feasible. While more memory is always favourable, reciprocal cooperation can already be sustained if individuals have a record of two or three past outcomes.

This work’s results have been derived entirely within a theoretical model. The authors feel that such studies are crucial for making model-informed deductions about reciprocity in natural systems.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Michael Hesse, Max Planck Society


Students’ Awareness of Their Cognitive Processes Facilitates The Learning of Math, Finds Study

The purpose of education is to ensure that students acquire the skills necessary for succeeding in a world that is constantly changing. Self-assessment, or teaching students how to examine and evaluate their own learning and cognitive processes, has proven to be an effective method, and this competence is partly based on metacognitive knowledge.

A new study conducted at the University of Eastern Finland shows that metacognitive knowledge, i.e., awareness of one’s cognitive processes, is also a key factor in the learning of mathematics. The work is published in the journal Cogent Education.

The study explored thinking skills and possible grade-level differences in children attending comprehensive school in Finland. The researchers investigated 6th, 7th and 9th graders’ metacognitive knowledge in the context of mathematics.

“The study showed that ninth graders excelled at explaining their use of learning strategies, while 7th graders demonstrated proficiency in understanding when and why certain strategies should be used. No other differences between grade levels were observed, which highlights the need for continuous support throughout the learning path,” says Susanna Toikka of the University of Eastern Finland, the first author of the article.

The findings emphasize the need to incorporate elements that support metacognitive knowledge into mathematics learning materials, as well as into teachers’ pedagogical practices.

Self-assessment and understanding of one’s own learning help to face new challenges

Metacognitive knowledge helps students not only to learn mathematics, but also more broadly in self-assessment and lifelong learning. Students who can assess their own learning and understanding are better equipped to face new challenges and adapt to changing environments. Such skills are crucial for lifelong learning, as they enable continuous development and learning throughout life.

“Metacognitive knowledge is a key factor in learning mathematics and problem-solving, but its significance also extends to self-assessment and lifelong learning,” says Toikka.

In schools, metacognitive knowledge can be effectively developed as part of education. Based on earlier studies, Toikka and colleagues have developed a combination of frameworks for metacognitive knowledge, which helps to identify students’ development needs by offering an alternative perspective to that of traditional developmental psychology.

“This also supports teachers in promoting students’ metacognitive knowledge. Teachers can use the combination of frameworks to design and implement targeted interventions that support students’ skills in lifelong learning.”

According to Toikka, the combination of frameworks enhances understanding of metacognitive knowledge and helps to identify areas where individual support is needed: “This type of understanding is crucial for the development of metacognitive knowledge among diverse learners.”

For more insights like this, visit our website at www.international-maths-challenge.com.

Credit of the article given to University of Eastern Finland


Why expanding access to algebra is a matter of civil rights

Bob Moses, who helped register Black residents to vote in Mississippi during the Civil Rights Movement, believed civil rights went beyond the ballot box. To Moses, who was a teacher as well as an activist, math literacy is a civil right: a requirement to earning a living wage in modern society. In 1982, he founded the Algebra Project to ensure that “students at the bottom get the math literacy they need.”

As a researcher who studies ways to improve the math experiences of students, I believe a new approach that expands access to algebra may help more students get the math literacy Moses, who died in 2021, viewed as so important. It’s a goal districts have long been struggling to meet.

Efforts to improve student achievement in algebra have been taking place for decades. Unfortunately, the math pipeline in the United States is fraught with persistent opportunity gaps. According to the Nation’s Report Card – a congressionally mandated project administered by the Department of Education – in 2022 only 29% of U.S. fourth graders and 20% of U.S. eighth graders were proficient in math. Low-income students, students of color and multilingual learners, who tend to have lower scores on math assessments, often do not have the same access as others to qualified teachers, high-quality curriculum and well-resourced classrooms.

A new approach

The Dallas Independent School District – or Dallas ISD – is gaining national attention for increasing opportunities to learn by raising expectations for all students. Following in the footsteps of more than 60 districts in the state of Washington, in 2019 the Dallas ISD implemented an innovative approach, automatically enrolling students in middle school honours math rather than requiring them to opt in.

Under an opt-in policy, students need a parent or teacher recommendation to take honours math in middle school and Algebra 1 in eighth grade. That policy led to both low enrollment and very little diversity in honours math. Some parents, especially those who are Black or Latino, were not aware of how to enroll their students in advanced classes due to a lack of communication in many districts.

In addition, implicit bias, which exists in all demographic groups, may influence teachers’ perceptions of the behaviour and academic potential of students, and therefore their subsequent recommendations. Public school teachers in the U.S. are far less racially and ethnically diverse than the students they serve.

Dallas ISD’s policy overhaul aimed to foster inclusivity and bridge educational gaps among students. Through this initiative, every middle school student, regardless of background, was enrolled in honours math, the pathway that leads to taking Algebra 1 in eighth grade, unless they opted out.

Flipping the switch from opt-in to opt-out led to a dramatic increase in the number of Black and Latino learners, who constitute the majority of Dallas students. And the district’s overall math scores remained steady. About 60% of Dallas ISD eighth graders are now taking Algebra 1, triple the prior level. Moreover, more than 90% are passing the state exam.

Civil rights activist Bob Moses believed math literacy was critical for students to be able to make a living.

Efforts spread

Other cities are taking notice of the effects of Dallas ISD’s shifting policy. The San Francisco Unified School District, for example, announced plans in February 2024 to implement Algebra 1 in eighth grade in all schools by the 2026-27 school year.

In fall 2024, the district will pilot three programs to offer Algebra 1 in eighth grade. The pilots range from an opt-out program for all eighth graders – with extra support for students who are not proficient – to a program that automatically enrolls proficient students in Algebra 1, offered as an extra math class during the school day. Students who are not proficient can choose to opt in.

Nationwide, however, districts that enroll all students in Algebra 1 and allow them to opt out are still in the minority. And some stopped offering eighth grade Algebra 1 entirely, leaving students with only pre-algebra classes. Cambridge, Massachusetts – the city in which Bob Moses founded the Algebra Project – is among them.

Equity concerns linger

Between 2017 and 2019, district leaders in the Cambridge Public Schools phased out the practice of placing middle school students into “accelerated” or “grade-level” math classes. Few middle schools in the district now offer Algebra 1 in eighth grade.

The policy shift, designed to improve overall educational outcomes, was driven by concerns over significant racial disparities in advanced math enrollment in high school. Completion of Algebra 1 in eighth grade allows students to climb the math ladder to more difficult classes, like calculus, in high school. In Cambridge, the students who took eighth grade Algebra 1 were primarily white and Asian; Black and Latino students enrolled, for the most part, in grade-level math.

Some families and educators contend that the district’s decision made access to advanced math classes even more inequitable. Now, advanced math in high school is more likely to be restricted to students whose parents can afford to help them prepare with private lessons, after-school programs or private schooling, they said.

While the district has tried to improve access to advanced math in high school by offering a free online summer program for incoming ninth graders, achievement gaps have remained persistently wide.

Perhaps striking a balance between top-down policy and bottom-up support will help schools across the U.S. realize the vision Moses dreamed of in 1982 when he founded the Algebra Project: “That in the 21st century every child has a civil right to secure math literacy – the ability to read, write and reason with the symbol systems of mathematics.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Liza Bondurant, The Conversation


Mathematicians Find Odd Shapes That Roll Like A Wheel In Any Dimension

Not content with shapes in two or three dimensions, mathematicians like to explore objects in any number of spatial dimensions. Now they have discovered shapes of constant width in any dimension, which roll like a wheel despite not being round.

A 3D shape of constant width as seen from three different angles. The middle view resembles a 2D Reuleaux triangle

Mathematicians have reinvented the wheel with the discovery of shapes that can roll smoothly when sandwiched between two surfaces, even in four, five or any higher number of spatial dimensions. The finding answers a question that researchers have been puzzling over for decades.

Such objects are known as shapes of constant width, and the most familiar in two and three dimensions are the circle and the sphere. These aren’t the only such shapes, however. One example is the Reuleaux triangle, which is a triangle with curved edges, while people in the UK are used to handling equilateral curve heptagons, otherwise known as the shape of the 20 and 50 pence coins. In this case, being of constant width allows them to roll inside coin-operated machines and be recognised regardless of their orientation.

Crucially, all of these shapes have a smaller area or volume than a circle or sphere of the equivalent width – but, until now, it wasn’t known if the same could be true in higher dimensions. The question was first posed in 1988 by mathematician Oded Schramm, who asked whether constant-width objects smaller than a higher-dimensional sphere might exist.

While shapes with more than three dimensions are impossible to visualise, mathematicians can define them by extending 2D and 3D shapes in logical ways. For example, just as a circle or a sphere is the set of points that sits at a constant distance from a central point, the same is true in higher dimensions. “Sometimes the most fascinating phenomena are discovered when you look at higher and higher dimensions,” says Gil Kalai at the Hebrew University of Jerusalem in Israel.

Now, Andrii Arman at the University of Manitoba in Canada and his colleagues have answered Schramm’s question and found a set of constant-width shapes, in any dimension, that are indeed smaller than an equivalent dimensional sphere.

Arman and his colleagues had been working on the problem for several years in weekly meetings, trying to come up with a way to construct these shapes before they struck upon a solution. “You could say we exhausted this problem until it gave up,” he says.

The first part of the proof involves considering a sphere with n dimensions and then dividing it into 2^n equal parts – so four parts for a circle, eight for a 3D sphere, 16 for a 4D sphere and so on. The researchers then mathematically stretch and squeeze these segments to alter their shape without changing their width. “The recipe is very simple, but we understood that only after all of our elaboration,” says team member Andriy Bondarenko at the Norwegian University of Science and Technology.

The team proved that it is always possible to do this distortion in such a way that you end up with a shape that has a volume at most 0.9^n times that of the equivalent dimensional sphere, where n is the number of dimensions. This means that as you move to higher and higher dimensions, the shape of constant width gets proportionally smaller and smaller compared with the sphere: in 100 dimensions, for example, the bound is already below 0.003 per cent of the sphere’s volume.

Visualising this is difficult, but one trick is to imagine the lower-dimensional silhouette of a higher-dimensional object. When viewed at certain angles, the 3D shape appears as a 2D Reuleaux triangle (see the middle image above). In the same way, the 3D shape can be seen as a “shadow” of the 4D one, and so on. “The shapes in higher dimensions will be in a certain sense similar, but will grow in complexity as [the] dimension grows,” says Arman.

Having identified these shapes, mathematicians now hope to study them further. “Even with the new result, which takes away some of the mystery about them, they are very mysterious sets in high dimensions,” says Kalai.

For more such insights, log into www.international-maths-challenge.com.

Credit of the article given to Alex Wilkins


New Research Disproves a Long-Held ‘Cognitive Illusion’ That Hockey Goaltenders Improve Under Pressure

The good news is that—statistically speaking—there is reason to believe Edmonton Oilers goalie Stuart Skinner will improve against the Florida Panthers in the Stanley Cup final.

The bad news is it may not be enough to make a difference.

That’s according to a new study, “Do NHL goalies get hot in the playoffs?” by Likang Ding, a doctoral student studying operations and information systems in the Alberta School of Business. The study is published on the arXiv preprint server.

Ding’s statistical analysis—in the final stage of review for publication—disproves the long-held and prevailing “hot hand” theory that if a goalie is performing well, he’ll continue to perform as well or better as pressure intensifies.

The term “hot hand” derives from basketball, where it is believed a shooter is more likely to score if their previous attempts were successful.

“Our main finding is the nonexistence of the hot-hand phenomenon (for hockey goaltenders),” says Ding. “That is, no positive influence of recent save performance on the save probability for the next shot.”

Instead, Ding and co-authors Ivor Cribben, Armann Ingolfsson and Monica Tran found that, by a small margin, “better past performance may result in a worse future performance.”
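
One simple way to probe for a hot hand in shot-level data is to compare empirical save rates conditional on whether the previous few shots were all saved. The sketch below applies that check to synthetic data; it illustrates the general approach, not the authors’ statistical model.

```python
import random

def conditional_save_rates(shots, k=3):
    """Empirical save rate after k straight saves vs. otherwise.
    `shots` is a chronological sequence of 1 (save) / 0 (goal)."""
    hot, cold = [], []
    for t in range(k, len(shots)):
        (hot if all(shots[t - k:t]) else cold).append(shots[t])
    rate = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return rate(hot), rate(cold)

# A synthetic goalie with a constant 0.91 save probability: no hot hand
# by construction. Note that on short sequences this naive conditional
# estimate is biased -- the very bias that muddied the 1980s basketball
# studies discussed below -- so a long sequence is used here.
random.seed(42)
shots = [1 if random.random() < 0.91 else 0 for _ in range(100_000)]
after_streak, otherwise = conditional_save_rates(shots)
print(f"save rate after 3 straight saves: {after_streak:.3f}")  # ~0.910
print(f"save rate otherwise:              {otherwise:.3f}")     # ~0.910
```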

That could mean Panthers goaltender Sergei Bobrovsky is due for a slight slump, given his relatively hot streak of late. But according to Ding, that decline may amount to no more than about 1%—certainly nothing to count on.

The reverse is also true, says Ding. If a goalie is underperforming, as Skinner has on occasion during the playoffs, statistics would forecast a slight uptick in his save percentage.

The explanation in that case might be the “motivation effect”; when a goaltender’s recent save performance has been below his average, his effort and focus increase, “causing the next-shot save probability to be higher.”

Here Ding quotes Hall of Fame goaltender Ken Dryden, who once said, “If a shot beats you, make sure you stop the next one, even if it is harder to stop than the one before.”

Though it wasn’t part of his current study, Ding says he reviewed Skinner’s stats before the finals and found a worse-than-average performance, “so I’m hoping he will come back eventually.”

Ding wanted to take a closer look at the hot hand theory because it is crucial in understanding coaches’ decisions about which goaltender to start in a given game. It could mean the second goalie deserves a chance to enter the fray, get used to the pace and stay fresh, even if it might seem risky.

Ding’s data set includes information about all shots on goal in the NHL playoffs from 2008 to 2016, amounting to 48,431 shots faced by 93 goaltenders over 795 games and nine playoff seasons.

The hot hand theory has been around for at least as long as professional sports and is often applied to a range of human endeavour to support the notion that “success breeds success”—an appealing, almost intuitive assumption.

And yet, a series of studies in the 1980s focused on basketball shooting percentages showed there was no statistical evidence to support the theory, says Ding, attributing it instead to a psychological tendency to see patterns in random data.

The hot hand theory remained controversial after the statistical methods used in those studies were later shown to be biased, says Ding. But even once the bias was corrected, the theory has since been largely disproven.

Nobel Prize-winning cognitive scientist Daniel Kahneman once called the phenomenon “a massive and widespread cognitive illusion.” Ding’s study is one more confirming the consensus that the hot hand is no more than wishful thinking.

For more insights like this, visit our website at www.international-maths-challenge.com.

Credit of the article given to Geoff McMaster, University of Alberta


Mathematicians Discover Impossible Problem In Super Mario Games

Using the tools of computational complexity, researchers have discovered it is impossible to figure out whether certain Super Mario Bros levels can be beaten without playing them, even if you use the world’s most powerful supercomputer.

Figuring out whether certain levels in the Super Mario Bros series of video games can be completed before you play them is mathematically impossible, even if you had several years and the world’s most powerful supercomputer to hand, researchers have found.

“We don’t know how to prove that a game is fun, we don’t know what that means mathematically, but we can prove that it’s hard and that maybe gives some insight into why it’s fun,” says Erik Demaine at the Massachusetts Institute of Technology. “I like to think of hard as a proxy for fun.”

To prove this, Demaine and his colleagues use tools from the field of computational complexity – the study of how difficult and time-consuming various problems are to solve algorithmically. They have previously proven that figuring out whether it is possible to complete certain levels in Mario games is a task that belongs to a group of problems known as NP-hard, for which the best known algorithms take time that grows exponentially with the size of the problem. This category is extremely difficult to compute for all but the smallest problems.

Now, Demaine and his team have gone one step further by showing that, for certain levels in Super Mario games, answering this question is not only hard, but impossible. This is the case for several titles in the series, including New Super Mario Bros and Super Mario Maker. “You can’t get any harder than this,” he says. “Can you get to the finish? There is no algorithm that can answer that question in a finite amount of time.”

While it may seem counterintuitive, problems in this undecidable category, known as RE-complete, simply cannot be solved by a computer, no matter how powerful, no matter how long you let it work.

Demaine concedes that a small amount of trickery was needed to make Mario levels fit this category. Firstly, the research looks at custom-made levels that allowed the team to place hundreds or thousands of enemies on a single spot. To do this they had to remove the limits placed by the game publishers on the number of enemies that can be present in a level.

They were then able to use the placement of enemies within the level to create an abstract mathematical tool called a counter machine, essentially creating a functional computer within the game.

That trick allowed the team to invoke another conundrum known as the halting problem, which says that, in general, there is no way to determine if a given computer program will ever terminate, or simply run forever, other than running it and seeing what happens.
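
To see what such a machine looks like, here is a minimal interpreter for a two-counter (Minsky) machine, the kind of abstract computer the enemy placements emulate. This is a sketch of the concept, not the paper’s construction; deciding whether an arbitrary program of this form halts is precisely the undecidable question at play.

```python
def run_counter_machine(program, max_steps=None):
    """Interpret a two-counter Minsky machine. Instructions are
    ("inc", counter, next_pc) or ("jzdec", counter, pc_if_zero, pc_else);
    the machine halts when the program counter runs off the program."""
    counters, pc, steps = [0, 0], 0, 0
    while 0 <= pc < len(program):
        if max_steps is not None and steps >= max_steps:
            return None  # gave up -- no general test can decide halting
        op = program[pc]
        if op[0] == "inc":
            counters[op[1]] += 1
            pc = op[2]
        else:  # "jzdec": jump if the counter is zero, else decrement it
            if counters[op[1]] == 0:
                pc = op[2]
            else:
                counters[op[1]] -= 1
                pc = op[3]
        steps += 1
    return counters

# Toy program: set counter 0 to 3, then move its contents into counter 1.
prog = [
    ("inc", 0, 1), ("inc", 0, 2), ("inc", 0, 3),  # counter 0 = 3
    ("jzdec", 0, 5, 4),                           # while counter 0 > 0:
    ("inc", 1, 3),                                #     counter 1 += 1
]
print(run_counter_machine(prog))  # [0, 3] -- this particular machine halts
```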

These layers of mathematical concepts finally allowed the team to prove that no analysis of the game level can say for sure whether or not it can ever be completed. “The idea is that you’ll be able to solve this Mario level only if this particular computation will terminate, and we know that there’s no way to determine that, and so there’s no way to determine whether you can solve the level,” says Demaine.

For more such insights, log into www.international-maths-challenge.com.

Credit of the article given to Matthew Sparkes


Study Shows the Power of Social Connections to Predict Hit Songs

Ever wondered how your friends shape your music taste? In a recent study, researchers at the Complexity Science Hub (CSH) demonstrated that social networks are a powerful predictor of a song’s future popularity. By analysing friendships and listening habits, they’ve boosted machine learning prediction precision by 50%.

“Our findings suggest that the social element is as crucial in music spread as the artist’s fame or genre influence,” says Niklas Reisz from CSH. By using information about listener social networks, along with common measures used in hit song prediction, such as how well-known the artist is and how popular the genre is, the researchers improved the precision of predicting hit songs from 14% to 21%. The study, published in Scientific Reports, underscores the power of social connections in music trends.

A deep dive into data

The CSH team analysed data from the music platform last.fm, covering 2.7 million users, 10 million songs, and 300 million plays. With users able to friend each other and share music preferences, the researchers gained anonymized insights into who listens to what and who influences whom, according to Reisz.

For their model, the researchers worked with two networks: one mapping friendships and another capturing influence dynamics—who listens to a song and who follows suit. “Here, the nodes of the network are also people, but the connections arise when one person listens to a song and shortly afterwards another person listens to the same song for the first time,” explains Stefan Thurner from CSH.
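
The article does not publish the study’s data pipeline, but the influence network it describes can be sketched as follows: scan a timestamped play log and add weight to an edge from one user to another whenever the second plays a song for the first time shortly after the first played it. The one-day window and the field layout below are assumptions for illustration only.

```python
from collections import defaultdict

def influence_edges(plays, window=86_400):
    """Build influence edges from a play log given as (timestamp, user,
    song) tuples sorted by timestamp. Each earlier play of a song within
    `window` seconds of another user's first play of that song adds
    weight to an edge from the earlier listener to the later one."""
    first_play = set()          # (user, song) pairs already seen
    recent = defaultdict(list)  # song -> [(timestamp, user), ...]
    edges = defaultdict(int)    # (A, B) -> how often A preceded B
    for t, user, song in plays:
        if (user, song) not in first_play:
            first_play.add((user, song))
            for t_prev, other in recent[song]:
                if other != user and t - t_prev <= window:
                    edges[(other, user)] += 1
        recent[song].append((t, user))
    return edges

log = [
    (0, "alice", "songX"),
    (3_600, "bob", "songX"),      # within a day of alice: edge alice -> bob
    (500_000, "carol", "songX"),  # too late: no edge
]
print(dict(influence_edges(log)))  # {('alice', 'bob'): 1}
```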

Examining the first 200 plays of a new song, they predicted its chances of becoming a hit—defined as being in the top 1% most played songs on last.fm.

User influence

The study found that a song’s spread hinges on user influence within their social network. Individuals with a strong influence and large, interconnected friend circles accelerate a song’s popularity. According to the study, information about social networks and the dynamics of social influence enable much more precise predictions as to whether a song will be a hit or not.

“Our results also show how influence flows both ways—people who influence their friends are also influenced by them,” explains CSH researcher Vito Servedio. “In this way, multi-level cascades can develop within a very short time, in which a song can quickly reach many other people, starting with just a few people.”

Social power in the music industry

Predicting hit songs is crucial for the music industry, offering a competitive edge. Existing models often focus on artist fame and listening metrics, but the CSH study highlights the overlooked social aspect—musical homophily, which is the tendency for friends to listen to similar music. “It was particularly interesting for us to see that the social aspect, musical homophily, has so far received very little attention—even though music has always had a strong social aspect,” says Reisz.

The study quantifies this social influence, providing insights that extend beyond music to areas like political opinion and climate change attitudes, according to Thurner.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Complexity Science Hub Vienna

 


Wire-Cut Forensic Examinations Currently Too Unreliable For Court, New Study Says

A research article published June 10 in the Proceedings of the National Academy of Sciences highlights the importance of careful application of high-tech forensic science to avoid wrongful convictions.

In a study with implications for an array of forensic examinations that rely on “vast databases and efficient algorithms,” researchers found the odds of a false match significantly increase when examiners make millions of comparisons in a quest to match wires found at a crime scene with the tools allegedly used to cut them.

The rate of mistaken identifications could be as high as one in 10 or more, concluded the researchers, who are affiliated with the Center for Statistics and Applications in Forensic Evidence (CSAFE), based in Ames, Iowa.

“It is somewhat of a counterintuition,” said co-author Susan VanderPlas, an assistant professor of statistics at the University of Nebraska-Lincoln. “You are more likely to find the right match—but you’re also more likely to find the wrong match.”

VanderPlas worked as a research professor at CSAFE before moving to Nebraska in 2020. Co-authors of the study, “Hidden Multiple Comparisons Increase Forensic Error Rates,” were Heike Hoffmann and Alicia Carriquiry, both affiliated with CSAFE and Iowa State University’s Department of Statistics.

Wire cuts and tool marks are used frequently as evidence in robberies, bombings, and other crimes. In the case of wire cuts, tiny striations on the cut ends of a wire may be matched to one of many available tools in a toolbox or garage. Comparing the evidence to more tools increases the chances that similar striations may be found on unrelated tools, resulting in a false accusation and conviction.

Wire-cutting evidence has been at issue in at least two cases that garnered national attention, including one where the accused was linked to a bombing based on a small piece of wire, a tiny fraction of an inch in diameter, that was matched to a tool found among the suspect’s belongings.

“Wire-cutting evidence is used in court and, based on our findings, it shouldn’t be—at least not without presenting additional information about how many comparisons were made,” VanderPlas said.

Wire-cutting evidence is evaluated by comparing the striations found on the cut end of a piece of wire against the cutting blades of tools suspected to have been used in the crime. In a manual test, the examiner slides the end of the wire along a cut made in another piece of material by the same tool, to see where the patterns of striations match.

An automated process uses a comparison microscope and pattern-matching algorithms to find possible matches pixel by pixel.
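
In spirit, this sliding comparison is a cross-correlation. A toy one-dimensional version, with synthetic striation profiles standing in for real microscopy data, shows how quickly the individual comparisons add up:

```python
import numpy as np

# Treat each surface as a 1-D "striation profile" and slide the wire's
# profile along the blade's, scoring the match at every offset.
rng = np.random.default_rng(7)
blade = rng.normal(size=500)                        # profile along the blade
wire = blade[210:260] + 0.1 * rng.normal(size=50)   # noisy cut made at offset 210

scores = [np.corrcoef(wire, blade[o:o + len(wire)])[0, 1]
          for o in range(len(blade) - len(wire))]
print("best-matching offset:", int(np.argmax(scores)))  # ~210

# Every offset tried -- 450 here, and vastly more with real blade lengths,
# wire diameters and whole toolboxes -- is one more comparison.
```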

This can result in thousands upon thousands of individual comparisons, depending upon the length of the cutting blade, diameter of the wire, and even the number of tools checked.

For example, VanderPlas said she and her husband tallied the various tin snips, wire cutters, pliers and similar tools stored in their garage and came up with a total of 7 meters in blade length.

Examiners may not even be aware of the number of comparisons they are making as they search for a matching pattern, because those comparisons are hidden in the algorithms.
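
The statistics behind this concern can be illustrated with the standard family-wise error calculation. The per-comparison false-match rate below is hypothetical, not a figure from the study:

```python
# If each individual comparison has a small false-match probability p,
# then after m independent comparisons:
#     P(at least one false match) = 1 - (1 - p)^m
p = 1e-4  # hypothetical per-comparison false-match rate

for m in (100, 10_000, 1_000_000):
    print(f"{m:>9,} comparisons -> P(false match) = {1 - (1 - p) ** m:.4f}")
# roughly 0.01 after 100 comparisons, 0.63 after 10,000,
# and effectively 1.0 after 1,000,000
```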

“This often-ignored issue increases the false discovery rate, and can contribute to the erosion of public trust in the justice system through conviction of innocent individuals,” the study authors wrote.

Forensic examiners typically testify based upon subjective rules about how much similarity is required to make an identification, the study explained. The researchers could not obtain error rate studies for wire-cut examinations and used published error rates for ballistics examinations to estimate possible false discovery rates for wire-cut examinations.

Before wire-cut examinations are used as evidence in court, the researchers recommended that:

  • Examiners report the overall length or area of materials used in the examination process, including blade length and wire diameter. This would enable examination-wide error rates to be calculated.
  • Studies be conducted to assess both false discovery and false elimination error rates when examiners are making difficult comparisons. Studies should link the length and area of comparison to error rates.
  • The number of items searched, comparisons made and results returned should be reported when a database is used at any stage of the forensic evidence evaluation process.

The VanderPlas article joins other reports calling for improvements in forensic science in America. The National Academies Press, publisher of the PNAS journal and other publications of the National Academies of Sciences, Engineering and Medicine, also published the landmark 2009 report “Strengthening Forensic Science in the United States: A Path Forward.”

 

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of Nebraska-Lincoln