Smart learning software helps students study math during lockdowns and beyond

Intelligent tutoring systems for math problems helped pupils maintain or even improve their performance during the pandemic. This is the conclusion of a new study led by the Martin Luther University Halle-Wittenberg (MLU) and Loughborough University in the U.K.

As part of their work, the researchers analysed data from 5 million exercises done by about 2,700 pupils in Germany over a period of five years. The study found that lower-performing children in particular benefit if they use the software regularly. The paper was published in the journal Computers and Education Open.

Intelligent tutoring systems are digital learning platforms that children can use to complete math problems. “The advantage of those rapid learning aids is that pupils receive immediate feedback after they submit their solution. If a solution is incorrect, the system will provide further information about the pupil’s mistake.

“If certain errors are repeated, the system recognizes a deficit and provides further problem sets that address the issue,” explains Assistant Professor Dr. Markus Spitzer, a psychologist at MLU. Teachers could also use the software to discover possible knowledge gaps in their classes and adapt their lessons accordingly.

For the new study, Spitzer and his colleague Professor Korbinian Moeller from Loughborough University used data from “Bettermarks,” a large commercial provider of such tutoring systems in Germany. The team analysed the performance of pupils before, during and after the first two coronavirus lockdowns.

Their analysis included data from about 2,700 children who solved more than 5 million problems. The data was collected between January 2017 and the end of May 2021. “This longer timeframe allowed us to observe the pupils’ performance trajectories over several years and analyse them in a wider context,” says Spitzer.

The students’ performance was shown to remain constant throughout the period. “The fact that their performance didn’t drop during the lockdowns is a win in and of itself. But our analysis also shows that lower-performing children even managed to narrow the gap between themselves and higher-achieving pupils,” Spitzer concludes.

According to the psychologist, intelligent tutoring systems are a useful addition to conventional math lessons. “The use of tutoring systems varies greatly from state to state. However, our study suggests that their use should be expanded across the board,” explains Spitzer. The systems could also help during future school closures, for example in the event of extreme weather conditions, transport strikes or similar events.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Tom Leonhardt, Martin Luther University Halle-Wittenberg

 


‘Models of everything’ created to improve accuracy of long-term weather forecasting

People love to complain about the weather – and especially about weather forecasters. But real, accurate forecasting beyond five to seven days is immensely complicated, due to the sheer volume of atmospheric processes and factors. Fortunately for us, advances in computing are making it possible for mathematicians, atmospheric scientists and statisticians to create “models of everything,” which may lead to accurate long-range weather forecasts.

NC State mathematician John Harlim is working on one such “model of everything,” specifically for longer-range weather and climate prediction. He’s part of a five-year project led by NYU’s Andrew Majda that is creating simpler, less expensive stochastic models (a model that includes random variables) for extended range weather and climate prediction.

One major stumbling block to extending and improving weather predictions beyond seven-day forecasts is a lack of understanding of the tropical weather dynamics that drive global weather patterns. The mix of factors in these patterns is amazingly complex. According to Harlim, “The dynamics in the tropics involve hierarchies of processes on both huge scales – like 10,000 km – and much smaller scales over many months. Physical processes in individual clouds can affect these larger processes in the long run.

“In terms of a model, then, you would have to resolve the entire globe in one-kilometer chunks, look at every possible weather pattern that could possibly occur over every moment given all sorts of variables, and then scale it up,” Harlim adds. Since this approach is very expensive, computationally speaking, Harlim and his colleagues hope to develop simpler, cheaper models that can capture tropical dynamics and understand their interactions with extratropical weather patterns.
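
To get a feel for the scale Harlim describes, here is a rough back-of-envelope count in Python. It is purely illustrative: the number of vertical levels and of variables per grid cell are assumptions of ours, not figures from the project.

```python
# Back-of-envelope only: how many values a 1 km global grid would have to track.
# The vertical-level and variable counts below are illustrative assumptions.

EARTH_SURFACE_KM2 = 510e6             # Earth's surface area, roughly 510 million km^2
horizontal_cells = EARTH_SURFACE_KM2  # one atmospheric column per 1 km x 1 km cell
vertical_levels = 100                 # assumed number of model levels per column
variables_per_cell = 10               # assumed fields: wind, temperature, humidity, ...

cells = horizontal_cells * vertical_levels
values_per_step = cells * variables_per_cell

print(f"~{cells:.1e} grid cells, ~{values_per_step:.1e} values per time step")
# Roughly 5.1e+10 cells and 5.1e+11 values, before any physics is computed,
# and the whole thing must be re-evaluated at every forecast time step.
```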

Says Harlim, “Understanding tropical dynamics is the Holy Grail of atmospheric modeling, and if we’re successful, you’ll be able to get accurate weather forecasting for months, not just days, in advance.”

Atmospheric scientist Sukanta Basu is part of a team working on a “model of everything” for atmospheric turbulence by studying airflow over complex terrain, including islands. The team wants to understand how atmospheric turbulence affects laser propagations, but their work could have other applications as well – such as predicting microbursts for aircraft safety or estimating evaporation rates for water management in agriculture. And just like Harlim’s, Basu’s model will have to take a huge number of factors into account.

“We’ll be looking at 10-meter terrain maps, finding out every spatial location and time and what the atmospheric field may look like,” Basu says. “The amount of computational power needed is huge – one simulation can fill up a terabyte disk – so we’re looking at petascale computing, which can do a quadrillion operations per second. We didn’t have computing on this scale ten years ago, so projects like this were impossible.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Tracey Peake, North Carolina State University


Theorem of Everything: The Secret That Links Numbers and Shapes

For millennia mathematicians have struggled to unify arithmetic and geometry. Now one young genius could have brought them in sight of the ultimate prize.

IF JOEY was Chloe’s age when he was twice as old as Zoe was, how many times older will Zoe be when Chloe is twice as old as Joey is now?

Or try this one for size. Two farmers inherit a square field containing a crop planted in a circle. Without knowing the exact size of the field or crop, or the crop’s position within the field, how can they draw a single line to divide both the crop and field equally?

You’ve either fallen into a cold sweat or you’re sharpening your pencil (if you can’t wait for the answer, you can check the bottom of this page). Either way, although both problems count as “maths” – or “math” if you insist – they are clearly very different. One is arithmetic, which deals with the properties of whole numbers: 1, 2, 3 and so on as far as you can count. It cares about how many separate things there are, but not what they look like or how they behave. The other is geometry, a discipline built on ideas of continuity: of lines, shapes and other objects that can be measured, and the spatial relationships between them.

Mathematicians have long sought to build bridges between these two ancient subjects, and construct something like a “grand unified theory” of their discipline. Just recently, one brilliant young researcher might have brought them decisively closer. His radical new geometrical insights might not only unite mathematics, but also help solve one of the deepest number problems of them all: the riddle of the primes. With the biggest prizes in mathematics, the Fields medals, to be awarded this August, he is beginning to look like a shoo-in.

The ancient Greek philosopher Aristotle once wrote, “We cannot… prove geometrical truths by arithmetic.” He left little doubt he believed geometry couldn’t help with numbers, either. It was hardly a controversial thought for the time. The geometrical proofs of Aristotle’s near-contemporary Euclid, often called the father of geometry, relied not on numbers, but on logical axioms extended into proofs by drawing lines and shapes. Numbers existed on an entirely different, more abstract plane, inaccessible to geometers’ tools.

And so it largely remained until, in the 1600s, the Frenchman René Descartes used the techniques of algebra – of equation-solving and the manipulation of abstract symbols – to put Euclid’s geometry on a completely new footing. By introducing the notion that geometrical points, lines and shapes could all be described by numerical coordinates on an underlying grid, he allowed geometers to make use of arithmetic’s toolkit, and solve problems numerically.

This was a moonshot that let us, eventually, do things like send rockets into space or pinpoint positions to needle-sharp accuracy on Earth. But to a pure mathematician it is only a halfway house. A circle, for instance, can be perfectly encapsulated by an algebraic equation. But a circle drawn on graph paper, produced by plotting out the equation’s solutions, would only ever capture a fragment of that truth. Change the system of numbers you use, for example – as a pure mathematician might do – and the equation remains valid, while the drawing may no longer be helpful.
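
As a concrete version of the circle example (our own illustration, not taken from the article), the same equation can be read over different number systems:

```latex
% The unit circle as one algebraic equation, read over two number systems.
\[
  x^{2} + y^{2} = 1
\]
% Over the real numbers, the solutions trace the familiar curve on graph paper.
% Over the rational numbers, every solution except $(-1,0)$ has the form
\[
  \left( \frac{1 - t^{2}}{1 + t^{2}}, \; \frac{2t}{1 + t^{2}} \right),
  \qquad t \in \mathbb{Q},
\]
% and these points correspond to Pythagorean triples: the equation is unchanged,
% but the graph-paper drawing no longer tells the whole arithmetic story.
```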

Wind forward to 1940 and another Frenchman was deeply exercised by the divide between geometry and numbers. André Weil was being held as a conscientious objector in a prison just outside Rouen, having refused to enlist in the months preceding the German occupation of France – a lucky break, as it turned out. In a letter to his wife, he wrote: “If it’s only in prison that I work so well, will I have to arrange to spend two or three months locked up every year?”

Weil hoped to find a Rosetta stone between algebra and geometry, a reference work that would allow truths in one field to be translated into the other. While behind bars, he found a fragment.

It had to do with the Riemann hypothesis, a notorious problem concerning how those most fascinating numbers, the primes, are distributed (see below). There had already been hints that the hypothesis might have geometrical parallels. Back in the 1930s, a variant had been proved for objects known as elliptic curves. Instead of trying to work out how prime numbers are distributed, says mathematician Ana Caraiani at Imperial College London, “you can relate it to asking how many points a curve has”.

Weil proved that this Riemann-hypothesis equivalent applied for a range of more complicated curves too. The wall that had stood between the two disciplines since Ancient Greek times finally seemed to be crumbling. “Weil’s proof marks the beginning of the science with the most un-Aristotelian name of arithmetic geometry,” says Michael Harris of Columbia University in New York.

The Riemann Hypothesis: The million-dollar question

The prime numbers are the atoms of the number system, integers indivisible into smaller whole numbers other than one. There are an infinite number of them and there is no discernible pattern to their appearance along the number line. But their frequency can be measured – and the Riemann hypothesis, formulated by Bernhard Riemann in 1859, predicts that this frequency follows a simple rule set out by a mathematical expression now known as the Riemann zeta function.
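
For reference, the expression in question is the standard zeta function; the definition below is textbook material rather than something spelled out in the article.

```latex
% The Riemann zeta function (for real part of s greater than 1),
% with Euler's product linking it to the primes.
\[
  \zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}}
           \;=\; \prod_{p\ \text{prime}} \frac{1}{1 - p^{-s}}
\]
% The Riemann hypothesis asserts that every non-trivial zero of the analytic
% continuation of $\zeta$ lies on the line $\operatorname{Re}(s) = \tfrac{1}{2}$,
% which pins down how far the prime counts can stray from their expected frequency.
```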

Since then, the validity of Riemann’s hypothesis has been verified for the first 10 trillion non-trivial zeros of the zeta function, but an absolute proof has yet to emerge. As a mark of the problem’s importance, it was included in the list of seven Millennium Problems set by the Clay Mathematics Institute in New Hampshire in 2000. Any mathematician who can tame it stands to win $1 million.

In the post-war years, in the more comfortable setting of the University of Chicago, Weil tried to apply his insight to the broader riddle of the primes, without success. The torch was taken up by Alexander Grothendieck, a mathematician ranked as one of the greatest of the 20th century. In the 1960s, he redefined arithmetic geometry.

Among other innovations, Grothendieck gave the set of whole numbers what he called a “spectrum”, for short Spec(Z). The points of this undrawable geometrical entity were intimately connected to the prime numbers. If you could ever work out its overall shape, you might gain insights into the prime numbers’ distribution. You would have built a bridge between arithmetic and geometry that ran straight through the Riemann hypothesis.

The shape Grothendieck was seeking for Spec(Z) was entirely different from any geometrical form we might be familiar with: Euclid’s circles and triangles, or Descartes’s parabolas and ellipses drawn on graph paper. In a Euclidean or Cartesian plane, a point is just a dot on a flat surface, says Harris, “but a Grothendieck point is more like a way of thinking about the plane”. It encompasses all the potential uses to which a plane could be put, such as the possibility of drawing a triangle or an ellipse on its surface, or even wrapping it map-like around a sphere.

If that leaves you lost, you are in good company. Even Grothendieck didn’t manage to work out the geometry of Spec(Z), let alone solve the Riemann hypothesis. That’s where Peter Scholze enters the story.

“Even the majority of mathematicians find most of the work unintelligible”

Born in Dresden in what was then East Germany in 1987, Scholze is currently, at the age of 30, a professor at the University of Bonn. He laid the first bricks for his bridge linking arithmetic and geometry in his PhD dissertation, published in 2012 when he was 24. In it, he introduced an extension of Grothendieck-style geometry, which he termed perfectoid geometry. His construction is built on a system of numbers known as the p-adics that are intimately connected with the prime numbers (see “The p-adics: A different way of doing numbers”). The key point is that in Scholze’s perfectoid geometry, a prime number, represented by its associated p-adics, can be made to behave like a variable in an equation, allowing geometrical methods to be applied in an arithmetical setting.

It’s not easy to explain much more. Scholze’s innovation represents “one of the most difficult notions ever introduced in arithmetic geometry, which has a long tradition of difficult notions”, says Harris. Even the majority of working mathematicians find most of it unintelligible, he adds.

Be that as it may, in the past few years, Scholze and a few initiates have used the approach to solve or clarify many problems in arithmetic geometry, to great acclaim. “He’s really unique as a mathematician,” says Caraiani, who has been collaborating with him. “It’s very exciting to be a mathematician working in the same field.”

This August, the world’s mathematicians are set to gather in Rio de Janeiro, Brazil, for their latest international congress, a jamboree held every four years. A centrepiece of the event is the awarding of the Fields medals. Up to four of these awards are given each time to mathematicians under the age of 40, and this time round there is one name everyone expects to be on the list. “I suspect the only way he can escape getting a Fields medal this year is if the committee decides he’s young enough to wait another four years,” says Marcus du Sautoy at the University of Oxford.

 

Peter Scholze, 30, looks like a shoo-in for mathematics’s highest accolade this summer

With so many grand vistas opening up, the question of Spec(Z) and the Riemann hypothesis almost becomes a sideshow. But Scholze’s new methods have allowed him to study the geometry, in the sense Grothendieck pioneered, that you would see if you examined the curve Spec(Z) under a microscope around the point corresponding to a prime number p. That is still a long way from understanding the curve as a whole, or proving the Riemann hypothesis, but his work has given mathematicians hope that this distant goal might yet be reached. “Even this is a huge breakthrough,” says Caraiani.

Scholze’s perfectoid spaces have enabled bridges to be built in entirely different directions, too. A half-century ago, in 1967, the then 30-year-old Princeton mathematician Robert Langlands wrote a tentative letter to Weil outlining a grand new idea. “If you are willing to read it as pure speculation I would appreciate that,” he wrote. “If not – I am sure you have a waste basket handy.”

In his letter, Langlands suggested that two entirely distinct branches of mathematics, number theory and harmonic analysis, might be related. It contained the seeds of what became known as the Langlands program, a vastly influential series of conjectures some mathematicians have taken to calling a grand unified theory capable of linking the three core mathematical disciplines: arithmetic, geometry and analysis, a broad field that we encounter in school in the form of calculus. Hundreds of mathematicians around the world, including Scholze, are committed to its completion.

The full slate of Langlands conjectures is no more likely than the original Riemann hypothesis to be proved soon. But spectacular discoveries could lie in store: Fermat’s last theorem, which took 350 years to prove before the British mathematician Andrew Wiles finally did so in 1994, represents just one particular consequence of its conjectures. Recently, the French mathematician Laurent Fargues proposed a way to build on Scholze’s work to understand aspects of the Langlands program concerned with p-adics. It is rumoured that a partial solution could appear in time for the Rio meeting.

In March, Langlands won the other great mathematical award, the Abel prize, for his lifetime’s work. “It took a long time for the importance of Langlands’s ideas to be recognised,” says Caraiani, “and they were overdue for a major award.” Scholze seems unlikely to have to wait so long.

The p-adics: A different way of doing numbers

Key to the latest work in unifying arithmetic and geometry are p-adic numbers.

These are an alternative way of representing numbers in terms of any given prime number p. To make a p-adic number from any positive integer, for example, you write that number in base p, and reverse it. So to write 20 in 2-adic form, say, you take its binary, or base-2, representation – 10100 – and write it backwards, 00101. Similarly 20’s 3-adic equivalent is 202, and as a 4-adic it is written 011.

The rules for manipulating p-adics are a little different, too. Most notably, numbers become closer as their difference grows more divisible by whatever p is. In the 5-adic numbers, for example, the equivalents of 11 and 36 are very close because their difference is divisible by 5, whereas the equivalents of 10 and 11 are further apart.
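
Both ideas, the reversed base-p digits and closeness measured by divisibility, can be tried out directly. The short Python sketch below is our own illustration; the digit-reversal picture above is an informal shorthand for the usual base-p expansion.

```python
def base_p_digits_reversed(n, p):
    """Write n in base p and reverse the digits, as in the examples above."""
    digits = []
    while n > 0:
        n, d = divmod(n, p)
        digits.append(str(d))
    return "".join(digits)  # least-significant digit first, i.e. already "reversed"

def p_adic_distance(a, b, p):
    """Standard p-adic distance: p**(-v), where p**v is the largest power of p dividing a - b."""
    diff = abs(a - b)
    if diff == 0:
        return 0.0
    v = 0
    while diff % p == 0:
        diff //= p
        v += 1
    return p ** (-v)

print(base_p_digits_reversed(20, 2))   # "00101", matching the article's 2-adic example
print(base_p_digits_reversed(20, 3))   # "202"
print(p_adic_distance(11, 36, 5))      # 0.04 -> 11 and 36 are close in the 5-adics
print(p_adic_distance(10, 11, 5))      # 1.0  -> 10 and 11 are comparatively far apart
```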

For decades after their invention in the 1890s, the p-adics were just a pretty mathematical toy: fun to play with, but of no practical use. But in 1920, the German mathematician Helmut Hasse came across the concept in a pamphlet in a second-hand bookshop, and became fascinated. He realised that the p-adics provided a way of harnessing the unfactorisability of the primes – the fact they can’t be divided by other numbers – that turned into a shortcut to solving complicated proofs.

Since then, p-adics have played a pivotal part in the branch of maths called number theory. When Andrew Wiles proved Fermat’s infamous last theorem (that the equation x^n + y^n = z^n has no solutions when x, y and z are positive integers and n is an integer greater than 2) in the early 1990s, practically every step in the proof involved p-adic numbers.

  • Answers: Zoe will be three times as old as she is now. The farmers should draw a line across the field that connects the centre points of the field and the crop.

This article appeared in print under the headline “The shape of numbers”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Gilead Amit*


Infinity War: The Ongoing Battle Over The World’s Hardest Maths Proof

Is there an error in there somewhere?

It’s the stuff of Hollywood. Somebody somewhere is surely selling the movie rights to what’s become the biggest spat in maths: a misunderstood genius, a 500-page proof almost nobody can understand and a supporting cast squabbling over what it all means. At stake: nothing less than the future of pure mathematics.

In 2012, Shinichi Mochizuki at Kyoto University in Japan produced a proof of a long-standing problem called the ABC conjecture. Six years later the jury is still out on whether it’s correct. But in a new twist, Peter Scholze at the University of Bonn – who was awarded the Fields Medal, the highest honour in maths, in August – and Jakob Stix at Goethe University Frankfurt – who is an expert in the type of maths used by Mochizuki – claim to have found an error at the heart of Mochizuki’s proof.

Roll credits? Not so fast. The pair’s reputation means that their claim is a serious blow for Mochizuki. And a handful of other mathematicians claim to have lost the thread of the proof at the same point where Scholze and Stix say there is an error. But there is still room for dispute.

a + b = c?

The ABC conjecture was first proposed in the 1980s and concerns a fundamental property of numbers, based around the simple equation a + b = c. For a long time, mathematicians believed that the conjecture was true but nobody had ever been able to prove it.
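
A little more precisely, the conjecture says that if a, b and c are coprime positive integers with a + b = c, then c is only rarely much bigger than the radical of a·b·c, the product of the distinct primes dividing it. The Python sketch below is our own toy bookkeeping for a few well-known triples; it has nothing to do with the machinery of Mochizuki’s proof.

```python
from math import gcd, log

def radical(n):
    """Product of the distinct prime factors of n."""
    rad, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            rad *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        rad *= n
    return rad

def abc_quality(a, b):
    """Quality log(c) / log(rad(a*b*c)) of the triple a + b = c, for coprime a and b."""
    assert gcd(a, b) == 1, "a and b must be coprime"
    c = a + b
    return log(c) / log(radical(a * b * c))

# The conjecture says, roughly, that qualities much above 1 are rare.
print(round(abc_quality(1, 8), 3))     # 1 + 8 = 9,     rad(72)    = 6  -> 1.226
print(round(abc_quality(3, 125), 3))   # 3 + 125 = 128, rad(48000) = 30 -> 1.427
print(round(abc_quality(5, 27), 3))    # 5 + 27 = 32,   rad(4320)  = 30 -> 1.019
```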

To tackle the problem, Mochizuki had to invent a fiendish type of maths called Inter-universal Teichmüller (IUT) theory. In an effort to understand IUT better, Scholze and Stix spent a week with Mochizuki in Tokyo in March. By the end of the week, they claim to have found an error.

The alleged flaw comes in Corollary 3.12, which many see as the crux of the proof. This section involves measuring an equivalence between different mathematical objects. In effect, Scholze and Stix claim that Mochizuki changes the length of the measuring stick in the middle of the process.

No proof

“We came to the conclusion that there is no proof,” they write in their report, which was posted online on 20 September.

But Ivan Fesenko at the University of Nottingham, UK, who says he is one of only 15 people around the world who actually understand Mochizuki’s theory, thinks Scholze and Stix are jumping the gun. “They spent much less time than all of us who have been studying this for many years,” says Fesenko.

Mochizuki has tried to help others understand his work, taking part in seminars and answering questions. Mochizuki was even the one who posted Scholze and Stix’s critical report. “We have this paradoxical situation in which the victim has published the report of the villain,” says Fesenko with a laugh. “This is an unprecedented event in mathematics.”

So is the proof wrong or just badly explained? Fesenko thinks that the six-year dispute exposes something rotten at the heart of pure mathematics. These days mathematicians work in very narrow niches, he says. “People just do not understand what the mathematician in the next office to you is doing.”

This means that mathematicians will increasingly have to accept others’ proofs without actually understanding them – something Fesenko describes as a fundamental problem for the future development of mathematics.

This suggests the story of Mochizuki’s proof may forever lack a satisfactory ending – becoming a war between mathematicians that is doomed to spiral into infinity. “My honest answer is that we will never have consensus about it,” says Fesenko.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Douglas Heaven*

 


Theoretical study offers proof that one parallel world cannot be extremely different from the other

String theory, a branch of theoretical physics, predicts the existence of parallel worlds (the mirror symmetry prediction). These two worlds (the A-side and B-side) are supposed to differ in terms of the six-dimensional spaces (A and B) hidden in each world.

However, as these spaces are extremely similar and invisible, we cannot, even in theory, distinguish them from the world that we live in. Considerable research has been conducted on the properties of space A, and extreme changes (i.e., blowing up) do not occur in it under certain conditions.

Recently, however, it has been discovered that spaces A and B can be transformed in certain ways so that their apparently different objects correspond to each other. The nature and extent of this transformation are not well understood, though, and research on the properties of space B has not progressed as far.

In this new study, published in the Asian Journal of Mathematics, researchers mathematically investigated whether the behaviour of space B also has the same properties as that of space A. They transferred a known phenomenon from the A-side to the B-side and proved that blowing up does not occur even in space B under certain conditions.

This achievement provides mathematical proof for one of the previously intuitively expected similarities between the A-side and B-side. Although the researchers made some assumptions to prove this theorem, in the future, they aim to clarify whether the theorem holds even without these assumptions.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of Tsukuba

 


Math professor’s side mirror that eliminates ‘blind spot’ receives US patent

A side-by-side comparison of a standard flat driver’s side mirror with the mirror designed by Dr. R. Andrew Hicks, mathematics professor at Drexel University. With minimal distortion, Hicks’s mirror shows a much wider field of view (the wide area to the left of the silver car seen in the distance, behind the tree, in this image). Hicks’s mirror has a field of view of about 45 degrees, compared to 15 to 17 degrees of view in a flat mirror. Hicks’s mirror received a US patent in May 2012.

A side mirror that eliminates the dangerous “blind spot” for drivers has now received a U.S. patent. The subtly curved mirror, invented by Drexel University mathematics professor Dr. R. Andrew Hicks, dramatically increases the field of view with minimal distortion.

Traditional flat mirrors on the driver’s side of a vehicle give drivers an accurate sense of the distance of cars behind them but have a very narrow field of view. As a result, there is a region of space behind the car, known as the blind spot, that drivers can’t see via either the side or rear-view mirror. It’s not hard to make a curved mirror that gives a wider field of view and eliminates the blind spot, but it comes at the cost of visual distortion, making objects appear smaller and farther away.

Hicks’s driver’s side mirror has a field of view of about 45 degrees, compared to 15 to 17 degrees of view in a flat driver’s side mirror. Unlike in simple curved mirrors that can squash the perceived shape of objects and make straight lines appear curved, in Hicks’s mirror the visual distortions of shapes and straight lines are barely detectable.

Hicks, a professor in Drexel’s College of Arts and Sciences, designed his mirror using a mathematical algorithm that precisely controls the angle of light bouncing off of the curving mirror.

“Imagine that the mirror’s surface is made of many smaller mirrors turned to different angles, like a disco ball,” Hicks said. “The algorithm is a set of calculations to manipulate the direction of each face of the metaphorical disco ball so that each ray of light bouncing off the mirror shows the driver a wide, but not-too-distorted, picture of the scene behind him.”

Hicks noted that, in reality, the mirror does not look like a disco ball up close. There are tens of thousands of such calculations to produce a mirror that has a smooth, nonuniform curve.
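
The disco-ball analogy can be made concrete with nothing more than the law of reflection. The sketch below is our own simplification, not Hicks’s actual design algorithm: it computes the normal a single facet would need so that light arriving from one direction leaves in a desired direction, the kind of constraint his method effectively stitches together tens of thousands of times into one smooth surface.

```python
import numpy as np

def facet_normal(incoming, desired_out):
    """Unit normal a tiny mirror facet needs so that a ray travelling along
    `incoming` is reflected along `desired_out` (both are direction vectors).

    Law of reflection: out = in - 2 (in . n) n, which is satisfied when n is
    the normalised difference between the incoming and outgoing directions.
    """
    d = np.asarray(incoming, dtype=float)
    r = np.asarray(desired_out, dtype=float)
    d /= np.linalg.norm(d)
    r /= np.linalg.norm(r)
    n = d - r
    return n / np.linalg.norm(n)

# Example: light arriving head-on, to be redirected 45 degrees to the side.
n = facet_normal(incoming=[0, 0, -1],
                 desired_out=[np.sin(np.pi / 4), 0, np.cos(np.pi / 4)])
d = np.array([0.0, 0.0, -1.0])
reflected = d - 2 * np.dot(d, n) * n
print(np.round(reflected, 3))   # ~[0.707, 0, 0.707]: the ray leaves in the desired direction
```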

Hicks first described the method used to develop this mirror in Optics Letters in 2008.

In the United States, regulations dictate that cars coming off of the assembly line must have a flat mirror on the driver’s side. Curved mirrors are allowed for cars’ passenger-side mirrors only if they include the phrase “Objects in mirror are closer than they appear.”

Because of these regulations, Hicks’s mirrors will not be installed on new cars sold in the U.S. any time soon. The mirror may be manufactured and sold as an aftermarket product that drivers and mechanics can install on cars after purchase. Some countries in Europe and Asia do allow slightly curved mirrors on new cars. Hicks has received interest from investors and manufacturers who may pursue opportunities to license and produce the mirror.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Drexel University


Mathematicians use network theory to model champion Spanish soccer team’s style

Passing networks for the Netherlands and Spain drawn before the final game, using the passing data and tactical formations of the semi-finals. Image from arXiv:1206.6904v1

For years, sports fanatics have turned to statistics to help them gauge the relative strengths or weaknesses of different teams, though some sports have been more amenable to the process than others. Baseball and American football, for example, seem to have a statistic for every action that occurs on the field of play, with different players ranked and rated by their numbers. International football, aka soccer, on the other hand, has generally defied such attempts, because there are far fewer discrete events to measure and play is continuous. That may change, however, as mathematicians Javier López Peña of University College London and Hugo Touchette of Queen Mary, University of London have applied network theory to the distinctive style of play of the European Championship 2012 winner, Spain. As they describe in a paper uploaded to the preprint server arXiv, the resulting graphic gives some clues as to why the team is considered one of the best of all time.

Anyone who has watched the Spanish team knows that their style of play is different from other teams. So much so it’s been given a name by fans: tiki-taka. It’s all about quick passes and exquisite teamwork. But trying to describe what the team does only leads to superlatives, which don’t really get to the heart of the matter. To help, Peña and Touchette turned to network theory, which makes sense, because soccer is played as a network of teammates working efficiently together.

Unfortunately, on paper, network theory tends to wind up looking like a bunch of hard-to-decipher equations, which wouldn’t help anyone except those who create them. To make their findings accessible to anyone, the two used a simple drawing that depicts players as nodes and the passing between them (how much, in what way, and to whom) as lines connecting the nodes.

What shows up in the drawing first is what everyone already knows: the team passes the ball among its players a lot. More than a lot, actually. In one match during the 2010 World Cup between Spain and the Netherlands, the Spanish players out-passed their opponents 417 to 266. The drawing also highlights the fact that two players on the Spanish team are “well connected”, i.e. easy for other players to reach with a pass, versus just one such player for the opponent.

The graphic also shows what is known as “betweenness centrality,” a way of measuring how much a network relies on a single node to operate at its optimum capacity. In soccer, it measures how much a team relies on an individual player. Here the opponent appears far more vulnerable to disruption if that key player is covered adequately than the Spanish team does. Also built into the graphic is the notion of PageRank, developed by Google, which ranks the most popular web pages by their linkages. Applied to soccer, it identifies the player who is passed the ball most often by teammates. With Spain, of course, that was the central midfielder, Xavi.
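
As a rough illustration of the kinds of quantities Peña and Touchette compute, a passing network can be built and scored in a few lines with a standard graph library. The player list and pass counts below are invented for the example and do not come from their paper.

```python
import networkx as nx

# Hypothetical pass counts between four players (edge weight = completed passes).
passes = [
    ("Xavi", "Iniesta", 40), ("Iniesta", "Xavi", 35),
    ("Busquets", "Xavi", 30), ("Xavi", "Busquets", 28),
    ("Xavi", "Alonso", 22), ("Alonso", "Busquets", 18),
]

G = nx.DiGraph()
G.add_weighted_edges_from(passes)

# PageRank: who ends up receiving the ball most, weighting by pass counts.
pagerank = nx.pagerank(G, weight="weight")

# Betweenness centrality: how much ball circulation depends on each player.
# Betweenness treats weights as distances, so convert "many passes" into "short distance".
for u, v, data in G.edges(data=True):
    data["distance"] = 1.0 / data["weight"]
betweenness = nx.betweenness_centrality(G, weight="distance")

print(sorted(pagerank.items(), key=lambda kv: -kv[1]))
print(sorted(betweenness.items(), key=lambda kv: -kv[1]))
```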

In many ways the graphic confirms what most suspect, that Spain wins more because it relies more on precise teamwork rather than the special skills of one or two superstars. In other ways though, it shows that even soccer can be made to offer up statistics if someone looks hard enough.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Bob Yirka, Phys.org


Making sports statistics more scientific

Whether it is the sprinter who finished first or the team that scored more points, it’s usually easy to determine who won a sporting event. But finding the statistics that explain why an athlete or team wins is more difficult — and major figures at the intersection of sports and numbers are determined to crack this problem.

Many statistics explain part of the picture, especially in team sports, such as the number of points scored by a point guard, a quarterback’s passing yards, or a slugger’s batting average. But many of these numbers — some of them sacred among sports fans — don’t directly address a player’s contribution to winning. This was a primary topic of discussion last weekend at the Sloan Sports Analytics Conference in Boston.

Organised by students from the MIT Sloan School of Management and sponsored by several sports-related companies, including media outlet ESPN, the conference brought together over 2,200 people to discuss player evaluation and other factors important to the business of sports.

Many of the research presentations and panel discussions described efforts to remove subjective judgments from sports statistics — and how to define new statistics that more directly explain a player’s value.

“We have huge piles of statistics now,” said Bill James, Boston Red Sox official and baseball statistics pioneer, at a panel discussion about adding modern statistics to box scores. “What you have to do is reduce it to significant but small concepts,” said James.

New technology and analysis are only now making it possible to learn more about many fundamental events in several sports, which are not often addressed by traditional sports statistics.

“We’re going to talk about stats that work and stats that don’t work,” said John Walsh, executive vice president of ESPN, who moderated the box score panel discussion.

The panel, which also included three other experts, cited several examples of statistics that didn’t work: a receiver might drop a pass for one of several reasons — but rarely are drops broken down into categories; an assist in basketball is a judgment call with room for different interpretations; and fielding percentage in baseball only generally describes a defensive player’s ability.

In another session, Greg Moore, the director of baseball products for the sports graphics and visualization company Sportvision, described recent data-collection advances in baseball. When all the company’s systems are fully deployed in Major League Baseball stadiums, they plan to track the trajectory of each pitch thrown, the movement of all the players on the field and the speed of every swing and hit ball. Their systems, already fully installed in some ballparks, will collect over a million data points at every game. Some of this data is publicly available.

The data will make it possible to say not just that a player hit a double or that he hit a hard line drive, but that the ball left the bat at a certain speed and launch angle and a certain number of degrees from the foul line. No scout or official scorer can contaminate those kinds of measures with subjectivity. On the other hand, a string of objective data is not inherently more useful than a flawed statistic, which may contain useful wisdom.

During the box-score panel discussion, Dean Oliver, ESPN’s sports analytics director, said that collecting information this way opens a new frontier.

“It’s an immense amount of data, but you have to know what to do with it,” said Oliver.

The winner of the conference’s research paper competition found one way to make new data useful. Using SportVU, a basketball tracking database collected by the company STATS, a team from the University of Southern California’s computer science department studied basketball rebounding from first principles. The data captures the movement of all the players and the ball, including rebounds, passes and other game events.

The research team showed empirically what was only previously accessible from inference and experience. They were able to show that by the time almost all rebounds travel 14 feet from the hoop they also drop below eight feet of elevation — easy reaching distance for a basketball player. The researchers were able to compare shot distance with rebound distance and to show where strategic changes might change offensive rebounding success.

Rajiv Maheswaran, the researcher who presented the paper, compared the effort to find new insights about sports to astronomy. Once you start looking at the stars, he said, you make discoveries, which lead to new hypotheses and more research.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Chris Gorski, Inside Science News Service


Is there a typical rate of cultural evolution?

Are cultural evolution rates similar across human societies? The emerging field of Cliodynamics uses mathematical models to study history.

Tobias Wand and colleagues used a data-driven approach to estimate the rates of cultural evolution in 23 geographic areas, using data from Seshat: Global History Databank, which records nine “complexity characteristics” for 370 polities over 10,000 years, ending in the nineteenth century. The work is published in the journal PNAS Nexus.

The complexity characteristics are polity population; extent of polity territory; the size of the largest urban center; hierarchical complexity; the presence of professional soldiers, officers, priests, bureaucrats, and judges; the variety of observable structures; characteristics of writing and recordkeeping; variety of types of texts; and monetary system.

The authors focused on 23 Natural Geographic Areas (NGAs) and restricted their analysis to the central part of each NGA’s timeline so that they could focus on periods of steady, uninterrupted growth rather than moments of invasion or civilization-ending crisis. A universal logistic growth model was fitted to a variable combining all nine complexity characteristics.
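
In outline, the fitting step looks like the sketch below, which fits a logistic growth curve to synthetic complexity scores with scipy. The functional form is the one named above; the data and parameter values are made up for illustration, not taken from Seshat.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: carrying capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic 'complexity' trajectory: centuries since first record vs. combined score.
t = np.linspace(0, 40, 25)                        # time in centuries
true = logistic(t, K=1.0, r=0.25, t0=20.0)
rng = np.random.default_rng(0)
y = true + rng.normal(scale=0.03, size=t.size)    # add observational noise

(K, r, t0), _ = curve_fit(logistic, t, y, p0=[1.0, 0.1, 15.0])
print(f"carrying capacity ~ {K:.2f}, growth rate ~ {r:.2f}/century, midpoint ~ {t0:.1f} centuries")
```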

On average, it takes societies that develop in isolation about 2,500 years to reach high levels of social complexity, according to the authors. Full development of complexity was spread out over multiple cultural or institutional phases in every NGA.

According to the authors, the data suggest that there are common pathways in human social evolution across cultures.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to PNAS Nexus

 


Researchers use simulations to tackle finite sphere-packing problem and ‘sausage catastrophe’

Have you ever wondered about the best way to pack a finite number of identical spheres into a shape-shifting flexible container, like a convex hull?

Researchers from the University of Twente’s Active Soft Matter Lab, led by Dr. Hanumantha Rao Vutukuri in the TNW Faculty, together with colleagues at Utrecht University, have investigated this fascinating mathematical sphere-packing problem by combining experiments and computer simulations. Their research has been published in Nature Communications.

An intuitively simple problem concerning the best way to pack a set of spheres has a long history of studies dating back to the 17th century. The English sailor Sir Walter Raleigh, for instance, contemplated this issue while trying to find an efficient method for stacking cannonballs on his ship.

Later, Kepler conjectured that the densest packing for an infinite number of spheres would be the face-centered cubic (FCC) crystal structure, akin to the hexagonal arrangement of oranges and apples seen in supermarkets. Remarkably, this hypothesis was only proven in the 21st century.

The ‘sausage catastrophe’

When you have a finite number of spheres, everything gets more complicated; surprisingly, packing the “finite” spheres in a compact cluster does not always yield the densest packing. Mathematicians conjectured decades ago that a linear, sausage-like arrangement provides the best packing, though not for all numbers of spheres.

There’s a peculiar phenomenon at play: The sausage-shaped arrangement is the most efficient packing, but only with up to 55 spheres. Beyond that number, a clustered arrangement becomes the best packing. This abrupt transition is known as the “sausage catastrophe.”

In three-dimensional space, packing up to 55 spheres linearly forms a “sausage” that is denser than any cluster arrangement. However, in four dimensions, this scenario changes dramatically. About 300,000 spheres are needed for the “sausage” to transform into a spherelike cluster.
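
The three-dimensional sausage numbers are easy to check by hand, because the convex hull of N spheres in a row is a spherocylinder with a closed-form volume. The short calculation below is our own sanity check; it compares the sausage density with the familiar face-centred cubic density rather than with exact finite-cluster optima, which are much harder to compute.

```python
from math import pi, sqrt

def sausage_density(n, r=1.0):
    """Density of n spheres of radius r packed in a straight line: their convex
    hull is a spherocylinder of radius r with cylindrical length 2*r*(n - 1)."""
    spheres = n * (4.0 / 3.0) * pi * r**3
    hull = pi * r**2 * (2.0 * r * (n - 1)) + (4.0 / 3.0) * pi * r**3
    return spheres / hull

FCC = pi / sqrt(18)   # ~0.7405, the Kepler-conjecture density for infinitely many spheres

for n in (2, 10, 56, 1000):
    print(n, round(sausage_density(n), 4))   # 0.8, 0.6897, 0.6707, 0.6669
# The sausage density falls towards 2/3 as n grows, while a large compact,
# roughly FCC-like cluster can approach ~0.74, which is why a cluster must
# eventually win; the simulations in this study put the crossover at 56 spheres.
print(round(FCC, 4))
```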

Rao was curious about whether this fundamentally intriguing problem could be observed and resolved in the lab using a model system. The system consists of micron-sized spherical particles (colloids) and giant unilamellar vesicles (GUVs), which serve as flexible containers; both are staple ingredients in Vutukuri’s lab.

“This curiosity led us to explore the finite sphere packing problem through experiments in 3D real space, specifically using colloids in GUVs. By varying the number of particles and the volume of the vesicles, we were able to examine the different particle arrangements inside these vesicles using a confocal microscope.

“We identified stable arrangements for specific combinations of vesicle volume and particle number: 1D (sausage), 2D (plate, with particles in one plane), and 3D (cluster). Notably, we also observed bistability; the configurations alternated between 1D and 2D arrangements or between 2D and 3D structures. However, our experiments were limited to observing a maximum of nine particles, as packing a larger number of particles resulted in the rupture of the vesicles.”

Vutukuri says that they then contacted Dijkstra’s lab at Utrecht University to delve deeper into this problem using simulations. Strikingly, the simulations predicted that packing spheres in a sausage configuration is most efficient for up to 55 spheres. However, when they attempted to pack 56 spheres into a vesicle, they discovered that a compact three-dimensional cluster was the more efficient arrangement.

Remarkably, for 57 spheres, the packing reverted to a sausage configuration. While mathematicians had previously determined that a sausage configuration is the most efficient for 58 and 64 spheres, their study contradicts this, demonstrating that compact clusters are more effective.

The findings show that the “sausage catastrophe,” a phenomenon previously described by mathematicians, is not just a theoretical scenario but can also be observed experimentally.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of Twente