How to make better decisions—using scoring systems

When faced with difficult choices, we often rank the alternatives to see how they stack up. This approach is ubiquitous, used for everything from major business and policy decisions to personal choices such as selecting a university course, a place to live, or how to vote.

Typically, criteria are identified and each one is “weighted” according to importance. The options are then scored against each criterion and the weightings applied. But this common approach is frequently flawed and not as rational as it first seems.

Decision-making usually involves choosing from a limited range of options. When there is only one criterion to consider, such as cost, the decision is straightforward. Normally, however, there are pros and cons to balance up for each option. A process known as multi-criteria decision analysis is often used to weigh alternatives in this way.

There are many variants, but the weighted-sum method is the most common. This technique appears superficially simple, logical and intuitive. Typically, the decision-maker starts by eliminating any options that fail to deliver one or more essential requirements, which are termed the “needs”.

The second stage involves sorting the remaining options into an order according to preference. This preference order is based on how well the options satisfy the other requirements, which are termed the “wants”. The needs must be satisfied, whereas the wants are attributes to maximize. Eliminating options that fail to meet the needs is straightforward, but combining the wants is more complicated.

To evaluate options against the wants, the decision-maker draws up a table in which column headings represent the different alternatives and row headings represent the wants. Scores are placed in the cells of the table to represent the performance of each option against each want. Some wants are more important than others, so each one is given an importance weighting.

Typically, the scores and weightings, or “weights”, are both chosen on a scale from zero to ten. Each score is then multiplied by its corresponding weight and additional columns can be inserted in the table for this purpose. The weighted scores are then added together for each option. Finally, the options are ranked according to their total weighted score.

A variant is the weighted-product method, where the weighted scores for each option are multiplied together, rather than added together. It requires all values to be at least one. This method favours good all-round performance, whereas the weighted-sum method is more forgiving of wide variations in performance.
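To make the mechanics concrete, here is a minimal Python sketch of the two methods just described. The wants, weights and scores are invented for illustration; only the arithmetic (multiply each score by its weight, then add or multiply the results) comes from the description above.

```python
from math import prod

# Invented wants and importance weights, both on a 0-10 scale.
weights = {"innovation": 9, "quality": 7, "timeliness": 4, "value": 6}

# Invented scores for two options against each want.
options = {
    "Option A": {"innovation": 8, "quality": 6, "timeliness": 9, "value": 5},
    "Option B": {"innovation": 3, "quality": 9, "timeliness": 7, "value": 8},
}

def weighted_sum(scores):
    """Conventional weighted-sum method: add up the weighted scores."""
    return sum(weights[w] * scores[w] for w in weights)

def weighted_product(scores):
    """Weighted-product variant: multiply the weighted scores together.
    As noted above, the method requires every value to be at least one."""
    return prod(weights[w] * scores[w] for w in weights)

for name, scores in options.items():
    print(name, weighted_sum(scores), weighted_product(scores))
```

Ranking the options by either total gives the preference order; the product version rewards consistent all-round performance, because a single weak weighted score drags the whole product down.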

Flaws in the conventional approach

A weakness of most of these multi-criteria decision analyses is their dependence on numerical values to express judgments that ultimately rest on human evaluation; in other words, the underlying assessments are “qualitative”.

Aside from those difficulties, the straightforward multiplication of a score by an importance weighting has inherent pitfalls if a simple scale like zero to ten is used. Where a want is considered important—has a high importance weighting—and a candidate performs well with respect to that want, the weighted score will be high, as expected.

Where a want is considered less important and an option performs poorly with respect to that want, the method will give that option a low-weighted score. However, since the want in question has a low importance weighting, the option should not be penalized harshly and should remain in contention.

Ideally, the lowest weighted scores ought to be those ascribed to options with a low score against a highly weighted want. They should not be given to options that score low against a want that has a low weighting and is therefore inconsequential. So, although the standard approach works for identifying the best options, it is poor at ranking the alternatives and it gives a false impression of the worst options.

Consider such an approach being used to allocate funding, based on applications for a grant. Rival bids might be scored against wants such as innovation, quality, timeliness and value-for-money, each of which is weighted according to importance. The smallest awards ought to be given to the bids with the lowest scores against the most important criteria. Instead, the bids ranked bottom are those with low scores against criteria that are least important.

A better approach

These problems can be addressed by using a scoring scale that includes negative values. The alternative inference mechanism (AIM) method makes the necessary adjustments while keeping an intuitive range of scores, such as zero to ten.

AIM recognizes that the worst candidates are those with low scores against wants that have high importance weightings. On the other hand, candidates that score poorly against an unimportant want are not severely disadvantaged.
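The published AIM formula is not reproduced here, but a hedged sketch can illustrate the behaviour just described: if the zero-to-ten scores are re-centred around the midpoint before weighting, a poor score against an important want becomes strongly negative, while scores against unimportant wants stay close to neutral. The numbers below are purely illustrative, and the re-centring is an assumption rather than necessarily the exact adjustment AIM makes.

```python
MIDPOINT = 5  # centre of the 0-10 scoring scale

def conventional(weight, score):
    return weight * score

def centred(weight, score):
    # Illustrative adjustment only: shift the score so that 5 is neutral,
    # then apply the importance weighting.
    return weight * (score - MIDPOINT)

# A low score (1) against a very important want (weight 9)
# versus the same low score against an unimportant want (weight 2).
print(conventional(9, 1), conventional(2, 1))  # 9 2    -> the trivial failure ranks worst
print(centred(9, 1), centred(2, 1))            # -36 -8 -> the important failure is punished,
                                               #           the unimportant one stays near neutral
```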

The chart below contrasts AIM with the conventional approach. The two methods agree on the best options, which are the green ones clustered at the top-right. However, the worst options in red or orange are at the bottom-left using the conventional approach, and at the bottom-right with AIM.

The conventional approach has a row of zeros across the bottom, as it ranks candidates equally if they score zero against a want, regardless of its importance. Similarly, it has a column of zeros up the left side, as it penalizes all candidates against unimportant criteria, regardless of their score. In contrast, AIM has a lot of yellow on the left side, indicating that unimportant scores are neither good nor bad—a much more logical and a rational basis for sound decision-making.

Multi-criteria decision analysis can be an excellent tool for choosing between alternative options, but understanding the weighting mechanism is vital.

AIM was previously described in a specialized engineering journal, but its wider appreciation and adoption could improve decision-making across society. The conventional scoring model has persisted only through an unquestioning acceptance of its validity.

AIM has shown the value of probing a little deeper into the meaning of the numbers. With greater awareness of rational models like AIM, we could make better decisions in all aspects of our lives.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Adrian Hopgood, The Conversation


Sharpening Occam’s Razor: A new perspective on structure and complexity

In science, the explanation with the fewest assumptions is most likely to be true. Called “Occam’s Razor,” this principle has guided theory and experiment for centuries. But how do you compare the simplicity of abstract concepts?

In a new paper, philosophers from UC Santa Barbara and UC Irvine discuss how to weigh the complexity of scientific theories by comparing their underlying mathematics. They aim to characterize the amount of structure a theory has using symmetry—or the aspects of an object that remain the same when other changes are made.

After much discussion, the authors ultimately doubt that symmetry will provide the framework they need. However, they do uncover why it’s such an excellent guide for understanding structure.

“Scientific theories don’t often wear their interpretation on their sleeves, so it can be hard to say exactly what they’re telling you about the world,” said lead author Thomas Barrett, an associate professor in UC Santa Barbara’s philosophy department. “Especially modern theories. They just get more mathematical by the century.” Understanding the amount of structure in different theories can help us make sense of what they’re saying, and even give us reasons to prefer one over another.

Structure can also help us recognize when two ideas are really the same theory, just in different clothes. For instance, in the early 20th century, Werner Heisenberg and Erwin Schrödinger formulated two separate theories of quantum mechanics. “And they hated each other’s theories,” Barrett said. Schrödinger argued that his colleague’s theory “lacked visualizability.” Meanwhile, Heisenberg found Schrödinger’s theory “repulsive” and claimed that “what Schrödinger writes about visualizability […] is crap.”

But while the two concepts appeared radically different, they actually made the same predictions. About a decade later, their colleague John von Neumann demonstrated that the formulations were mathematically equivalent.

Apples and oranges

A common way to examine a mathematical object is to look at its symmetries. The idea is that more symmetric objects have simpler structures. For instance, compare a circle—which has infinitely many rotational and reflective symmetries—to an arrow, which has only one. In this sense, the circle is simpler than the arrow, and requires less mathematics to describe.

The authors extend this rubric to more abstract mathematics using automorphisms. These functions compare various parts of an object that are, in some sense, “the same” as each other. Automorphisms give us a heuristic for measuring the structure of different theories: More complex theories have fewer automorphisms.

In 2012, two philosophers proposed a way to compare the structural complexity of different theories. A mathematical object X has at least as much structure as another, Y, if and only if the automorphisms of X are a subset of those of Y. Consider the circle again. Now compare it to a circle that is colored half red. The shaded circle now has only some of the symmetries it used to, on account of the extra structure that was added to the system.
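A toy computation can make this criterion concrete. The sketch below (illustrative, not taken from the paper) counts the symmetries of a square, then of the same square with one corner coloured, and checks that the coloured object's automorphisms form a subset of the plain one's, so it carries at least as much structure.

```python
from itertools import permutations

vertices = [0, 1, 2, 3]
# Edges of a square (a 4-cycle), stored as unordered pairs.
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}

def automorphisms(colours=None):
    """All vertex permutations preserving the edges (and the colouring, if given)."""
    found = []
    for perm in permutations(vertices):
        relabel = dict(zip(vertices, perm))
        keeps_edges = {frozenset(relabel[v] for v in e) for e in edges} == edges
        keeps_colours = colours is None or all(
            colours[v] == colours[relabel[v]] for v in vertices
        )
        if keeps_edges and keeps_colours:
            found.append(perm)
    return found

plain = automorphisms()                                     # 8 symmetries of the square
marked = automorphisms({0: "red", 1: "w", 2: "w", 3: "w"})  # only 2 survive the added structure

print(len(plain), len(marked))    # 8 2
print(set(marked) <= set(plain))  # True: the marked square has at least as much structure
```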

This was a good try, but it relied too much on the objects having the same type of symmetries. This works well for shapes, but falls apart for more complicated mathematics.

Isaac Wilhelm, at the National University of Singapore, attempted to fix this sensitivity. He proposed that we should be able to compare different types of symmetry groups as long as we can find a correspondence between them that preserves each one’s internal framework. For example, labeling a blueprint establishes a correspondence between a picture and a building that preserves the building’s internal layout.

The change allows us to compare the structures of very different mathematical theories, but it also spits out incorrect answers. “Unfortunately, Wilhelm went a step too far there,” Barrett said. “Not just any correspondence will do.”

A challenging endeavor

In their recent paper, Barrett and his co-authors, JB Manchak and James Weatherall, tried to salvage their colleague’s progress by restricting the type of symmetries, or automorphisms, they would consider. Perhaps only a correspondence that arises from the underlying objects (e.g. the circle and the arrow), not their symmetry groups, is kosher.

Unfortunately, this attempt fell short as well. In fact, it seems that using symmetries to compare mathematical structure may be doomed in principle. Consider an asymmetric shape. An ink blot, perhaps. There is more than one ink stain in the world, each completely asymmetric and completely different from the others. Yet they all have the same symmetry group—namely, none—so a symmetry-based account classifies every ink blot as having the same complexity, even though some are far messier than others.

This ink blot example reveals that we can’t tell everything about an object’s structural complexity just by looking at its symmetries. As Barrett explained, the number of symmetries an object admits bottoms out at zero. But there isn’t a corresponding ceiling to the amount of complexity an object can have. This mismatch creates the illusion of an upper limit for structural complexity.

And therein the authors expose the true issue. The concept of symmetry is powerful for describing structure. However, it doesn’t capture enough information about a mathematical object—and the scientific theory it represents—to allow for a thorough comparison of complexity. The search for a system that can do this will continue to keep scholars busy.

A glimmer of hope

While symmetry might not provide the solution the authors hoped for, they uncover a key insight: Symmetries touch on the concepts that an object naturally and organically comes equipped with. In this way, they can be used to compare the structures of different theories and systems. “This idea gives you an intuitive explanation for why symmetries are a good guide to structure,” Barrett said. The authors write that this idea is worth keeping, even if philosophers have to abandon using automorphisms to compare structure.

Fortunately, automorphisms aren’t the only kind of symmetry in mathematics. For instance, instead of looking only at global symmetries, we can look at symmetries of local regions and compare these as well. Barrett is currently investigating where this will lead and working to describe what it means to define one structure in terms of another.

Although clarity still eludes us, this paper gives philosophers a goal. We don’t know how far along we are in this challenging climb to the summit of understanding. The route ahead is shrouded in mist, and there may not even be a summit to reach. But symmetry provides a hold to anchor our ropes as we continue climbing.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of California – Santa Barbara


Has a mathematician solved the ‘invariant subspace problem’? And what does that even mean?

Two weeks ago, a modest-looking paper was uploaded to the arXiv preprint server with the unassuming title “On the invariant subspace problem in Hilbert spaces.” The paper is just 13 pages long and its list of references contains only a single entry.

The paper purports to contain the final piece of a jigsaw puzzle that mathematicians have been picking away at for more than half a century: the invariant subspace problem.

Famous open problems often attract ambitious attempts at solutions by interesting characters out to make their name. But such efforts are usually quickly shot down by experts.

However, the author of this short note, Swedish mathematician Per Enflo, is no ambitious up-and-comer. He is almost 80, has made a name for himself solving open problems, and has quite a history with the problem at hand.

Per Enflo: Mathematics, music, and a live goose

Born in 1944 and now an emeritus professor at Kent State University, Ohio, Enflo has had a remarkable career, not only in mathematics but also in music.

He is a renowned concert pianist who has performed and recorded numerous piano concertos, and has performed solo and with orchestras across the world.

Enflo is also one of the great problem-solvers in a field called functional analysis. Aside from his work on the invariant subspace problem, Enflo solved two other major problems—the basis problem and the approximation problem—both of which had remained open for more than 40 years.

By solving the approximation problem, Enflo cracked an equivalent puzzle called Mazur’s goose problem. Polish mathematician Stanisław Mazur had in 1936 promised a live goose to anyone who solved his problem—and in 1972 he kept his word, presenting the goose to Enflo.

What’s an invariant subspace?

Now we know the main character. But what about the invariant subspace problem itself?

If you’ve ever taken a first-year university course in linear algebra, you will have come across things called vectors, matrices and eigenvectors. If you haven’t, we can think of a vector as an arrow with a length and a direction, living in a particular vector space. (There are lots of different vector spaces with different numbers of dimensions and various rules.)

A matrix is something that can transform a vector, by changing the direction and/or length of the arrow. If a particular matrix only transforms the length of a particular vector (meaning the direction is either the same or flipped in the opposite direction), we call the vector an eigenvector of the matrix.

Another way to think about this is to say that the matrix transforms the eigenvectors (and any lines parallel to them) back onto themselves: these lines are invariant for this matrix. Taken together, we call these lines invariant subspaces of the matrix.
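For readers who want to see this concretely, here is a small NumPy check (an illustrative example, not part of Enflo's work): a matrix only rescales its eigenvector, so the line spanned by that eigenvector is mapped back into itself and forms an invariant subspace.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # a 2x2 matrix acting on the plane

eigenvalues, eigenvectors = np.linalg.eig(A)
lam, v = eigenvalues[0], eigenvectors[:, 0]  # one eigenvalue and its eigenvector

# A sends v to a multiple of itself, so the line through v is carried back
# onto itself: a one-dimensional invariant subspace of A.
print(np.allclose(A @ v, lam * v))  # True
```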

Eigenvectors and invariant subspaces are also of interest beyond just mathematics—to take one example, it has been said that Google owes its success to “the $25 billion eigenvector.”

What about spaces with an infinite number of dimensions?

So that’s an invariant subspace. The invariant subspace problem is a little more complicated: it is about spaces with an infinite number of dimensions, and it asks whether every linear operator (the equivalent of a matrix) in those spaces must have an invariant subspace.

More precisely (hold onto your hat): the invariant subspace problem asks whether every bounded linear operator T on a complex Banach space X admits a non-trivial invariant subspace M of X, in the sense that there is a closed subspace M of X with M ≠ {0} and M ≠ X such that T(M) is contained back in M.

Stated in this way, the invariant subspace problem was posed during the middle of last century, and eluded all attempts at a solution.

But as is often the case when mathematicians can’t solve a problem, we move the goalposts. Mathematicians working on this problem narrowed their focus by restricting the problem to particular classes of spaces and operators.

The first breakthrough was made by Enflo in the 1970s (although his result was not published until 1987). He answered the problem in the negative, by constructing an operator on a Banach space without a non-trivial invariant subspace.

What’s new about this new proposed solution?

So what is the current status of the invariant subspace problem? If Enflo solved it in 1987, why has he solved it again?

Well, Enflo settled the problem for Banach spaces in general. However, there is a particularly important kind of Banach space called a Hilbert space, which has a strong sense of geometry and is widely used in physics, economics and applied mathematics.

Resolving the invariant subspace problem for operators on Hilbert spaces has been stubbornly difficult, and it is this which Enflo claims to have achieved.

This time Enflo answers in the affirmative: his paper argues that every bounded linear operator on a Hilbert space does have an invariant subspace.

Expert review is still to come

I have not worked through Enflo’s preprint line by line. Enflo himself is reportedly cautious about the solution, as it has not yet been reviewed by experts.

Peer review of Enflo’s earlier proof, for Banach spaces in general, took several years. However, that paper ran to more than 100 pages, so a review of the 13 pages of the new paper should be much speedier.

If correct, it will be a remarkable achievement, especially for someone who has already produced so many remarkable achievements over such a large span of time. Enflo’s many contributions to mathematics, and his answers to many open problems, have made a big impact on the field, generating new techniques and ideas.

I’m looking forward to finding out whether Enflo’s work now closes the book on the invariant subspace problem, and to seeing the new mathematics that may emerge out of its conclusion.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Nathan Brownlowe, The Conversation


Decade-Long Struggle Over Maths Proof Could Be Decided By $1m Prize

Mathematician Shinichi Mochizuki’s Inter-universal Teichmüller theory has attracted controversy since it was published in 2012, with no one able to agree whether it is true. Now, a $1 million prize is being launched to settle the matter.

The Inter-Universal Geometry Center (IUGC) is overseeing the prize. (Image: Zen University)

A prize of $1 million is being offered to anyone who can either prove or disprove an impenetrable mathematical theory, the veracity of which has been debated for over a decade.

Inter-universal Teichmüller theory (IUT) was created by Shinichi Mochizuki at Kyoto University, Japan, in a bid to solve a long-standing problem called the ABC conjecture, which focuses on the simple equation a + b = c. It suggests that if a and b are made up of large powers of prime numbers, then c isn’t usually divisible by large powers of primes.
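The article's phrasing is informal; the conjecture is usually stated in terms of the radical of a*b*c, the product of its distinct prime factors: for coprime a + b = c, the quality log(c) / log(rad(abc)) exceeds any bound strictly greater than 1 only finitely often. The sketch below (illustrative, not from Mochizuki's papers) computes that quality for the best-known example.

```python
from math import gcd, log

def radical(n):
    """Product of the distinct prime factors of n."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        result *= n
    return result

def quality(a, b):
    """log(c) / log(rad(a*b*c)) for a coprime pair with a + b = c."""
    c = a + b
    assert gcd(a, b) == 1
    return log(c) / log(radical(a * b * c))

# The highest-quality triple known: 2 + 3^10 * 109 = 23^5.
print(quality(2, 3**10 * 109))  # about 1.63
```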

In 2012, Mochizuki published a series of papers, running to more than 500 pages, that appeared to be a serious attempt at tackling the problem, but his dense and unusual style baffled many experts.

His apparent proof struggled to find acceptance and attracted criticism from some of the world’s most prominent mathematicians, including two who claimed in 2018 to have found a “serious, unfixable gap” in the work. Despite this, the paper was formally published in 2020, in a journal edited by Mochizuki himself. It was reported by Nature that he had nothing to do with the journal’s decision.

Since then, the theory has remained in mathematical limbo, with some people believing it to be true, but others disagreeing. Many mathematicians contacted for this story, including Mochizuki, either didn’t respond or declined to comment on the matter.

Now, the founder of Japanese telecoms and media company Dwango, Nobuo Kawakami, hopes to settle the issue by launching a cash prize for a paper that can prove – or disprove – the theory.

Two prizes are on offer. The first will see between $20,000 and $100,000 awarded annually, for the next 10 years, to the author of the best paper on IUT and related fields. The second – worth $1 million – is reserved for the mathematician who can write a paper that “shows an inherent flaw in the theory”, according to a press release.

Dwango didn’t respond to a request for interview, but during a press conference Kawakami said he hoped that his “modest reward will help increase the number of mathematicians who decide to get involved in IUT theory”.

To be eligible for the prizes, papers will need to be published in a peer-reviewed journal selected from a list compiled by the prize organisers, according to a report in The Asahi Shimbun newspaper, and Kawakami will choose the winner.

The competition is being run by the Inter-Universal Geometry Center (IUGC), which has been founded by Kawakami specifically to promote IUT, says Fumiharu Kato, director of the IUGC.

Kato says that Kawakami isn’t a mathematician, but sees IUT as a momentous part of the history of mathematics and believes that the cash prize is a “good investment” if it can finally clear up the controversy one way or the other.

“For me, IUT theory is logically simple. Of course, I mean, technically very, very hard. But logically it’s simple,” says Kato, who estimates that fewer than 10 people in the world comprehend the concept.

Kato believes that the controversy stems from the fact that Mochizuki doesn’t want to promote his theory, talk to journalists or other mathematicians about it or present the idea in a more easily digestible format, believing his work speaks for itself. Kato says that his current and former students are also reticent to do the same because they see him “as a god” in mathematics and don’t want to go against his wishes.

Because of this, most mathematicians are “at a loss” for a way to understand IUT, says Kato, who concedes that, despite earlier optimism about the idea, it is possible that the theory will eventually be disproven.

Ivan Fesenko at the University of Nottingham, UK, who is also deputy director at the IUGC, has long been a supporter of Mochizuki. He told New Scientist that there is no doubt about the correctness of IUT and that it all hinges on a deep understanding of an existing field called anabelian geometry.

“All negative public statements about the validity of IUT have been made by people who do not have proven expertise in anabelian geometry and who have zero research track record in anabelian geometry,” he says. “The new $1m IUT Challenger Prize will challenge every mathematician who has ever publicly criticised IUT to produce a paper with full proofs and get it published in a good math journal.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Matthew Sparkes


UK hobbyist stuns math world with ‘amazing’ new shapes

David Smith, a retired print technician from the north of England, was pursuing his hobby of looking for interesting shapes when he stumbled onto one unlike any other in November.

When Smith shared his shape with the world in March, excited fans printed it onto T-shirts, sewed it into quilts, crafted cookie cutters or used it to replace the hexagons on a soccer ball—some even made plans for tattoos.

The 13-sided polygon, which 64-year-old Smith called “the hat”, is the first single shape ever found that can completely cover an infinitely large flat surface without ever repeating the same pattern.

That makes it the first “einstein”—named after the German for “one stone” (ein stein), not the famed physicist—and solves a problem posed 60 years ago that some mathematicians had thought impossible.

After stunning the mathematics world, Smith—a hobbyist with no training who told AFP that he wasn’t great at math at school—then did it again.

While all agreed “the hat” was the first einstein, roughly one tile in seven had to be its mirror image to ensure that the pattern never repeated.

But in a preprint study published online late last month, Smith and the three mathematicians who helped him confirm the discovery revealed a new shape—”the specter.”

It requires no mirror image, making it an even purer einstein.

‘It can be that easy’

Craig Kaplan, a computer scientist at Canada’s Waterloo University, told AFP that it was “an amusing and almost ridiculous story—but wonderful”.

He said that Smith, a retired print technician who lives in Yorkshire’s East Riding, emailed him “out of the blue” in November.

Smith had found something “which did not play by his normal expectations for how shapes behave”, Kaplan said.

If you slotted a bunch of these cardboard shapes together on a table, you could keep building outwards without them ever settling into a regular pattern.

Using computer programs, Kaplan and two other mathematicians showed that the shape continued to do this across an infinite plane, making it the first einstein, or “aperiodic monotile”.

When they published their first preprint in March, among those inspired was Yoshiaki Araki. The Japanese tiling enthusiast made art using the hat and another aperiodic shape created by the team called “the turtle”, sometimes using flipped versions.

Smith was inspired back, and started playing around with ways to avoid needing to flip his hat.

Less than a week after their first paper came out, Smith emailed Kaplan a new shape.

Kaplan refused to believe it at first. “There’s no way it can be that easy,” he said.

But analysis confirmed that Tile (1,1) was a “non-reflective einstein”, Kaplan said.

Something still bugged them—while this tile could go on forever without repeating a pattern, this required an “artificial prohibition” against using a flipped shape, he said.

So they added little notches or curves to the edges, ensuring that only the non-flipped version could be used, creating “the specter”.

‘Hatfest’

Kaplan said both their papers had been submitted to peer-reviewed journals. But the world of mathematics did not wait to express its astonishment.

Marjorie Senechal, a mathematician at Smith College in the United States, told AFP the discoveries were “exciting, surprising and amazing”.

She said she expects the specter and its relatives “will lead to a deeper understanding of order in nature and the nature of order.”

Doris Schattschneider, a mathematician at Moravian College in the US, said both shapes were “stunning”.

Even Nobel-winning mathematician Roger Penrose, whose previous best effort had narrowed the number of aperiodic tiles down to two in the 1970s, had not been sure such a thing was possible, Schattschneider said.

Penrose, 91, will be among those celebrating the new shapes during the two-day “Hatfest” event at Oxford University next month.

All involved expressed amazement that the breakthrough was achieved by someone without training in math.

“The answer fell out of the sky and into the hands of an amateur—and I mean that in the best possible way, a lover of the subject who explores it outside of professional practice,” Kaplan said.

“This is the kind of thing that ought not to happen, but very happily for the history of science does happen occasionally, where a flash brings us the answer all at once.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Daniel Lawler

 


Exploring how community-based social norms evolve

Cooperation is a guiding principle of everyday life, as simple as following the rules of the road when driving or holding a door open for a stranger. It is a fundamental aspect of human societies and has long intrigued evolutionary biologists. But how do people reach a consensus on whether, and with whom, to cooperate?

A new study by Taylor Kessinger, a postdoctoral researcher in the Department of Biology, published in the Proceedings of the National Academy of Sciences, provides valuable insights into community-based social norms. It develops evolutionary game theory and mathematical models to capture the dynamics of reputation-based cooperation in group-structured populations.

“Darwin famously asked: ‘Why would anyone ever be altruistic?’ And it turns out that question has tons of answers,” Kessinger says.

While previous theories have been proposed to explain cooperation (such as kin selection, wherein individuals are more likely to engage in prosocial behaviours with their relatives), in contemporary human societies, people cooperate with members of much larger communities. This leads to a complicated dynamic and interplay between community members, as cooperating with or avoiding certain individuals greatly affects one’s reputation.

“We wanted to understand the driving factors behind the convergence of social norms, especially in a heterogeneous society where different groups might have diverging views on reputations,” says senior author Joshua Plotkin, the Walter H. and Leonore C. Annenberg Professor of the Natural Sciences. “These norms are vital in facilitating cooperation, and yet how they are collectively accepted and evolve remains largely unexplored.”

Kessinger, the paper’s first author and a member of the Plotkin Research Group, explains that traditional models of cooperation have been based on homogeneous societies and straightforward information flow.

“We realized that this isn’t an accurate representation of reality, as societies are multifaceted; different groups disagree, not only about reputations, but also about which norms should govern behaviour,” Kessinger says.

To address this, the researchers developed a model that considers multiple coexisting social norms and studied how these norms might compete as individuals learn from one another and shift group affiliations, and whether this will lead to the convergence upon a shared norm. One of the key findings was the success of a particular social norm called “stern judging.” This norm assigns a bad reputation to individuals who cooperate with those of bad standing, as a punitive means.

“Stern judging came out on top among the norms we evaluated,” Kessinger says. “It was particularly effective in situations where individuals show a preference for interacting within their own group. This norm, emphasizing a sort of vigilant caution, appears to facilitate greater overall cooperation.”
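In the indirect-reciprocity literature, stern judging is usually written as a simple second-order assessment rule: an observer looks only at what a donor did and at the recipient's current standing. The sketch below follows that standard definition; it is not code from the paper, and the names are illustrative.

```python
GOOD, BAD = "good", "bad"
COOPERATE, DEFECT = "C", "D"

def stern_judging(action, recipient_reputation):
    """New reputation assigned to a donor under stern judging."""
    if recipient_reputation == GOOD:
        # Helping someone in good standing earns a good reputation;
        # refusing to help them earns a bad one.
        return GOOD if action == COOPERATE else BAD
    # Helping someone in bad standing is itself judged badly (the punitive
    # clause described above), while shunning them counts as justified.
    return BAD if action == COOPERATE else GOOD

print(stern_judging(COOPERATE, GOOD))  # good
print(stern_judging(COOPERATE, BAD))   # bad
print(stern_judging(DEFECT, BAD))      # good
```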

However, the team also found that separating reputation information into independent groups can destabilize cooperation. “It’s a delicate balance,” Kessinger explains. “The more fragmented the information about reputations, the harder it is for cooperation to take root. But we also observed that in-group social interactions can partly counteract this effect.”

Their research paints a complex picture of cooperation. While certain norms promote the behaviour, the social structures of each community significantly influence its success. Stern judging is a robust facilitator of cooperation; however, it does not work in a fractured society without the ability for individuals to share information between groups.

By revealing the factors that influence the emergence of shared social norms, the study offers valuable insights for diverse fields, from sociology and psychology to economics.

“Our findings have some implications for how to foster cooperation in diverse, multicultural societies,” Plotkin notes. “Whether it’s at a societal level or within smaller groups like neighbourhoods or workplaces, the ability to converge on a shared norm is crucial.”

The researchers also explore potential implications for the evolution of social norms and the number of independent judgment groups a well-functioning society can sustain. “The insights from our research open up new avenues for exploring the complexity of cooperation in society,” says Kessinger.

Looking ahead, the team wants to investigate how well real-world social norms humans use to judge behaviour map onto the abstract theoretical ones they have developed. With this, they believe that researchers will be able to gauge the extent to which individuals in a population adhere to specific social norms, with respect to demographic factors such as culture or age groups.

“There’s so much more to discover about how we come together to agree on norms and, ultimately, how we cooperate,” Kessinger says.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Nathi Magubane, University of Pennsylvania


Global symmetry found to be not completely necessary for the protection of topological boundary states

An international team led by researchers at Nankai University in China and the University of Zagreb in Croatia, together with a team at the Institut national de la recherche scientifique (INRS) in Canada led by Roberto Morandotti, has made an important breakthrough in the study of topological phases. Their findings were recently published in Nature Physics.

In the last decade, topological photonics has attracted increasing attention due to its unique prospect of manipulating light with high robustness and stability.

Discoveries in topological photonics have opened the way to the development of a novel generation of photonic devices, such as topological lasers and cavities, featuring topologically protected states that are immune to disorders and defects. The concept of topology in physics is inherited from mathematics, where topology is employed to study geometric properties of an object concerning quantities that are preserved under continuous deformation.

Two objects are topologically identical when the surface of one can be continuously deformed into that of the other and vice versa; for example, a coffee cup and a torus are equivalent from a topology viewpoint. In physics, the concept of topology is employed to describe energy band characteristics, leading to the prediction of novel topological states of matter and various topological materials.

Different topological phases (trivial and nontrivial) are distinguished by appropriately introducing quantized topological invariants, which establish a link between the bulk properties of these materials and the emergence of features at their boundaries, known as the bulk-boundary correspondence. In this regard, the most distinctive feature of a nontrivial topology is the existence of robust topological boundary states protected by specific spatial and/or intrinsic symmetries.

In general, in systems of symmetry-protected topological phase (SPT phase), it is believed that the close relationship between topological boundary states, topological invariants, and one or more overall symmetries is indispensable for maintaining topological protection against perturbations.

As a consequence, both topological invariants and topological boundary states are irretrievably affected by any distortion that breaks the underlying symmetry. In this work, the international research team has challenged this traditional belief and thereby broadened the understanding of SPT boundary states. They found that even if the system no longer has quantized topological invariants or certain kinds of global symmetry, topological boundary states can still exist in the corresponding subspaces, protected by so-called sub-symmetries.

“Our discovery challenges the common thinking of the symmetry-protected topological phase in topology and renews the correspondence of topological invariant and boundary states,” said Domenico Bongiovanni, one of the main investigators and a postdoctoral researcher at INRS-EMT. “Our idea has the potential to explain the topological origin of many unconventional states and can find application in different platforms and physical systems.”

The researchers, by introducing and exploring the concept of sub-symmetry, found that global symmetry in the traditional sense is not completely necessary for the protection of topological boundary states. In this regard, topological boundary states are preserved as long as the symmetries of specific subspaces are satisfied, even when the overall topological invariants no longer exist.

The research team cleverly designed and fabricated photonic lattice structures using a continuous-wave (cw) laser writing technique to meet the conditions of different subspace symmetries. The experiments demonstrated a proof of concept with two of the most typical topological lattices: the one-dimensional Su-Schrieffer-Heeger (SSH) lattice and the two-dimensional Kagome lattice.
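To give a feel for what a topological boundary state looks like, here is a standard numerical toy of the one-dimensional SSH model with open ends. It illustrates the textbook lattice only, not the authors' photonic structures or their sub-symmetry construction: in the nontrivial regime the spectrum contains two states at nearly zero energy whose weight concentrates at the two ends of the chain.

```python
import numpy as np

def ssh_hamiltonian(n_cells, t1, t2):
    """Tight-binding SSH chain with open boundaries.
    Hopping amplitudes alternate between t1 (intra-cell) and t2 (inter-cell)."""
    n = 2 * n_cells
    H = np.zeros((n, n))
    for i in range(n - 1):
        t = t1 if i % 2 == 0 else t2
        H[i, i + 1] = H[i + 1, i] = t
    return H

# Topologically nontrivial choice: |t2| > |t1|.
H = ssh_hamiltonian(n_cells=20, t1=0.5, t2=1.0)
energies, states = np.linalg.eigh(H)

# Two eigenvalues sit in the middle of the band gap, at nearly zero energy...
midgap = np.argsort(np.abs(energies))[:2]
print(energies[midgap])

# ...and those states live almost entirely on the first and last few sites.
edge_sites = list(range(4)) + list(range(-4, 0))
print(np.sum(states[edge_sites][:, midgap] ** 2))  # close to 2: both modes sit at the edges
```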

In addition, the team innovatively introduced the concept of long-range coupling symmetry into the Kagome lattice model, which resolves the current controversies about the existence and topological protection of higher-order topological states in the Kagome lattice.

This study not only challenges the traditional understanding of symmetry-protected topological states but also provides new ideas for the research and application of topological states in different physical settings. The impact of this work is expected to further promote the development of topological photonics and its cutting-edge interdisciplinary fields, as well as the research and development of a new generation of topological photonic devices based on sub-symmetry-protected boundary states.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Institut national de la recherche scientifique – INRS


Researchers develop online hate speech ‘shockwave’ formula

A George Washington University research team has created a novel formula that demonstrates how, why, and when hate speech spreads throughout social media. The researchers put forth a first-principles dynamical theory that explores a new realm of physics in order to represent the shockwave effect created by bigoted content across online communities.

This effect is evident in lightly moderated websites, such as 4Chan, and highly regulated social platforms like Facebook. Furthermore, hate speech ripples through online communities in a pattern that non-hateful content typically does not follow.

The new theory considers recently gained knowledge on the pivotal role of in-built communities in the growth of online extremism. The formula weighs the competing forces of fusion and fission, accounting for the spontaneous emergence of in-built communities through the absorption of other communities and interested individuals (fusion) and the disciplinary measures moderators take against users and groups that violate a given platform’s rules (fission).

Researchers hope the formula can serve as a tool for moderators to project the shockwave-like spread of hateful content and develop methods to delay, divert, and prevent it from spiraling out of control. The novel theory could also be applied beyond social media platforms and online message boards, potentially powering moderation strategies on blockchain platforms, generative AI, and the metaverse.

“This study presents the missing science of how harms thrive online and, hence, how they can be overcome,” Neil Johnson, professor of physics at the George Washington University and co-author of the study, said. “This missing science is a new form of shockwave physics.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to George Washington University


‘Butterfly chaos effect’ discovered in swarms and herds of animals

Researchers at the Universidad Carlos III de Madrid (UC3M) and the Universidad Complutense de Madrid (UCM) have discovered a phase shift between chaotic states that can appear in herds of animals and, in particular, in swarms of insects. This advance may help to better understand their behaviour or be applied to the study of the movement of cells or tumors.

A phase shift occurs when the conditions of a system change drastically, for example, when water changes from a liquid to a solid state when it freezes. In this research, recently published in the journal Physical Review E, this group of mathematicians has found such a phenomenon in swarms. Related research is also available on the arXiv preprint server.

“The insects in the swarm stay in a limited volume, even if they’re in a park or an open space. To explain this, we assume that there is a harmonic potential, a kind of recuperative force that confines them (like that of a spring that tries to return to its resting position when we stretch or contract it),” explains one of the study’s authors, Luis L. Bonilla, director of UC3M’s Gregorio Millán Barbany Institute.

This confinement is characterized by a constant of proportionality between the restoring force and the displacement. The researchers have found that for low confinement values, the movement of the insects in the swarm is chaotic (their movements change a great deal if the initial conditions are changed). In this context, the phase shift occurs when the swarm splits into several swarms that nevertheless remain closely related to each other, because there are insects moving from one to another.
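As a rough illustration of what harmonic confinement means here, the toy simulation below (a hedged sketch with invented parameters, not the authors' model) moves self-propelled particles with rotational noise while a spring-like force -k*x pulls each one back toward the centre; the group jiggles erratically but stays in a bounded region.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: number of insects, spring constant, self-propulsion
# speed, angular noise strength, time step and number of steps.
n, k, v0, noise, dt, steps = 100, 0.05, 1.0, 0.3, 0.1, 2000

pos = rng.normal(size=(n, 2))
angles = rng.uniform(0.0, 2.0 * np.pi, size=n)

for _ in range(steps):
    heading = np.c_[np.cos(angles), np.sin(angles)]
    pos += (v0 * heading - k * pos) * dt                # self-propulsion plus harmonic pull
    angles += noise * np.sqrt(dt) * rng.normal(size=n)  # rotational diffusion

radii = np.sqrt((pos ** 2).sum(axis=1))
print(radii.mean(), radii.max())  # finite values: the spring keeps the swarm confined
```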

At the critical line between phases of this shift, the distance between two insects in the swarm that are influenced by each other is proportional to the size of the swarm, even if the number of insects in the swarm grows indefinitely. This is called “scale-free chaos” and had not been observed until now, according to the researchers.

“As the number of insects increases, the critical line moves towards zero confinement. What happens is that the maximum distance between two insects that still feel each other’s influence is proportional to the size of the swarm. It doesn’t matter how many insects we put in it. And that represents an absolute novelty that we have discovered,” explains Bonilla.

Specifically, what these mathematicians predict through numerical simulations is that certain swarms of insects (specifically a class of small flies) have scale-free chaotic behaviour, which translates into certain power laws with exponents similar to those measured in nature. They have also found a simplified mean-field theory that corroborates the scale-free chaos phase shift. “It would be good to look for and find the phase shift between chaotic phases that we predict, either in observations in nature or in controlled laboratory studies,” says another of the authors of the research, UCM mathematician Rafael González Albaladejo, who is also linked to UC3M’s Gregorio Millán Barbany Institute.

The formation of herds is one of the manifestations of so-called “active matter,” made up of something like self-propelled individuals that form a whole, the researchers explain. It can be a swarm of insects, a flock of sheep, a flock of birds, a school of fish, but also bacteria in motion, melanocytes (the cells that distribute pigments in the skin) or artificial systems such as periodically shaken irregular grains or seeds. “Herd formation mechanisms play a role in some of these systems, so the results we have obtained can be linked to biology, to the study of cells, and beyond that, to the study of tumors and other diseases,” adds Albaladejo.

How do so many animals move in unison? These researchers explain that each individual only senses its neighbours and moves accordingly, even though it has no perspective on the movement of the whole herd. And depending on whether they use sight, hearing or the vibrations of the fluid in which they are immersed, the concept of neighbour can change quite a bit.

Sheep moving together see and sense those around them, while birds in a flock see their nearest neighbours, even if they are quite far apart. “Moving accordingly may mean that they move in the same direction as their neighbours (the norm) or they may adopt different strategies depending on the situation. For example, if a crowd is trying to get out of a crowded pen with more than one gate, there are times when not following neighbours is advantageous,” they explain.

It has taken the mathematicians about two years to carry out this research work. Initially, they set out to explain experiments by studying the conventional phase shift between a crowd of insects that fill a space with constant density and become ordered when passing a critical value of the control parameter (e.g., by decreasing the noise). But then they decided to add a harmonic potential to confine the swarm and explore what happens when the attractive force between individuals decreases.

“We discovered many periodic, quasi-periodic and finally chaotic states for a fixed number of insects that we increased. The surprising thing is the transition between chaotic states that we didn’t know or assume existed, and we were able to find the correct arguments and tests to support their existence,” says another of the study’s authors, Ana Carpio, from UCM’s Department of Mathematical Analysis and Applied Mathematics, who points out that there is still a lot to be done based on this work.

“From experimentally seeking confirmation of our predictions and better adapting the model to experimental observations, to carrying out theoretical and mathematical research that goes beyond our numerical simulations,” she concludes.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Carlos III University of Madrid


Implications of no-free-lunch theorems

In the 18th century, the philosopher David Hume observed that induction—inferring the future based on what’s happened in the past—can never be reliable. In 1997, SFI Professor David Wolpert with his colleague Bill Macready made Hume’s observation mathematically precise, showing that it’s impossible for any inference algorithm (such as machine learning or genetic algorithms) to be consistently better than any other for every possible real-world situation.

Over the next decade, the pair proved a series of theorems about this that were dubbed the “no-free-lunch” theorems. These proved that one algorithm could, in fact, be a bit better than another in most circumstances—but only at the cost of being far worse in the remaining circumstances.
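A tiny, hedged illustration of the idea (not Wolpert and Macready's formal setup): enumerate every possible labelling of a five-point domain, show each learner the labels of three training points, and measure its error on the remaining two. Averaged uniformly over all targets, a seemingly sensible learner and a trivial one score exactly the same.

```python
from itertools import product

domain = range(5)
train_points = [0, 1, 2]  # points whose labels the learner is shown
test_points = [3, 4]      # off-training-set points it must predict

def majority_learner(train_labels, x):
    """Predict whichever label was more common in training."""
    return int(sum(train_labels) * 2 >= len(train_labels))

def always_zero_learner(train_labels, x):
    return 0

def average_test_error(learner):
    total = count = 0
    for labels in product([0, 1], repeat=len(domain)):  # every possible target function
        train_labels = [labels[i] for i in train_points]
        for x in test_points:
            total += int(learner(train_labels, x) != labels[x])
            count += 1
    return total / count

print(average_test_error(majority_learner))     # 0.5
print(average_test_error(always_zero_learner))  # 0.5
```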

These theorems have been extremely controversial since their inception because they punctured the claims of many researchers that the algorithms they had developed were superior to other algorithms. As part of the controversy, in 2019, the philosopher Gerhard Schurz wrote a book wrestling with the implications of Hume’s and Wolpert’s work.

A special issue of the Journal for General Philosophy of Science published in March 2023 is devoted to Schurz’s book, and includes an article by Wolpert himself, in which he reviews the “no-free-lunch” theorems, pointing out that there are also many “free-lunch” theorems.

He states that the meta-induction algorithms that Schurz advocates as a “solution to Hume’s problem” are simply examples of such a free lunch, based on correlations among the generalization errors of induction algorithms. Wolpert concludes that the prior Schurz advocates, which is uniform over bit frequencies rather than bit patterns, is contradicted by thousands of experiments in statistical physics and by the great success of the maximum entropy procedure in inductive inference.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Santa Fe Institute