Students’ Awareness of Their Cognitive Processes Facilitates The Learning of Math, Finds Study

The purpose of education is to ensure that students acquire the skills necessary for succeeding in a world that is constantly changing. Self-assessment, or teaching students how to examine and evaluate their own learning and cognitive processes, has proven to be an effective method, and this competence is partly based on metacognitive knowledge.

A new study conducted at the University of Eastern Finland shows that metacognitive knowledge, i.e., awareness of one’s cognitive processes, is also a key factor in the learning of mathematics. The work is published in the journal Cogent Education.

The study explored thinking skills and possible grade-level differences in children attending comprehensive school in Finland. The researchers investigated 6th, 7th and 9th graders’ metacognitive knowledge in the context of mathematics.

“The study showed that 9th graders excelled at explaining their use of learning strategies, while 7th graders demonstrated proficiency in understanding when and why certain strategies should be used. No other differences between grade levels were observed, which highlights the need for continuous support throughout the learning path,” says Susanna Toikka of the University of Eastern Finland, the first author of the article.

The findings emphasize the need to incorporate elements that support metacognitive knowledge into mathematics learning materials, as well as into teachers’ pedagogical practices.

Self-assessment and understanding of one’s own learning help to face new challenges

Metacognitive knowledge helps students not only to learn mathematics, but also more broadly in self-assessment and lifelong learning. Students who can assess their own learning and understanding are better equipped to face new challenges and adapt to changing environments. Such skills are crucial for lifelong learning, as they enable continuous development and learning throughout life.

“Metacognitive knowledge is a key factor in learning mathematics and problem-solving, but its significance also extends to self-assessment and lifelong learning,” says Toikka.

In schools, metacognitive knowledge can be effectively developed as part of education. Based on earlier studies, Toikka and colleagues have developed a combination of frameworks for metacognitive knowledge, which helps to identify students’ needs for development regarding metacognitive knowledge by offering an alternative perspective to that of traditional developmental psychology.

“This also supports teachers in promoting students’ metacognitive knowledge. Teachers can use the combination of frameworks to design and implement targeted interventions that support students’ skills in lifelong learning.”

According to Toikka, the combination of frameworks enhances understanding of metacognitive knowledge and helps to identify areas where individual support is needed: “This type of understanding is crucial for the development of metacognitive knowledge among diverse learners.”

For more insights like this, visit our website at www.international-maths-challenge.com.

Credit of the article given to University of Eastern Finland


How To Perfectly Wrap Gifts Of All Shapes And Sizes Using Maths

Reduce wastage and enjoy deeply satisfying neat folds by applying a little geometry to your gift-wrapping, says Katie Steckles.

Wrapping gifts in paper involves converting a 2D shape into a 3D one, which presents plenty of geometrical challenges. Mathematics can help with this, in particular by making sure that you are using just the right amount of paper, with no wastage.

When you are dealing with a box-shaped gift, you might already wrap the paper around it to make a rectangular tube, then fold in the ends. With a little measuring, though, you can figure out precisely how much paper you will need to wrap a gift using this method, keeping the ends nice and neat.

For example, if your gift is a box with a square cross-section, you will need to measure the length of the long side, L, and the thickness, T, which is the length of one side of the square. Then, you will need a piece of paper measuring 4 × T (to wrap around the four sides with a small overlap) by L + T. Once wrapped around the shape, a bit of paper half the height of the square will stick out at each end, and if you push the four sides in carefully, you can create diagonal folds to make four points that meet neatly in the middle. The square ends of the gift make this possible (and deeply satisfying).

Similarly, if you are wrapping a cylindrical gift with diameter D (such as a candle), mathematics tells us you need your paper to be just more than π × D wide, and L + D long. This means the ends can be folded in – possibly less neatly – to also meet exactly in the middle (sticky bows are your friend here).

How about if your gift is an equilateral triangular prism? Here, the length of one side of the triangle gives the thickness T, and your paper should be a little over 3 × T wide and L + (2 × T) long. The extra length is needed because it is harder to fold the excess end bits to make the points meet in the middle. Instead, you can fold the paper to cover the end triangle exactly, by pushing it in from one side at a time and creating a three-layered triangle of paper that sits exactly over the end.

It is also possible to wrap large, flat, square-ish gifts using a diagonal method. If the diagonal of the top surface of your box is D, and the height is H, you can wrap it using a square piece of paper that measures a little over D + (√2 × H) along each side.

Place your gift in the centre of the paper, oriented diagonally, and bring the four corners to meet in the middle of your gift, securing it with one piece of tape or a sticky bow. This will cover all the faces exactly, and look pretty smart too.
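
If you want to check your measurements before cutting, the four rules above are easy to bundle into a few lines of code. Here is a minimal Python sketch (the function names and the example measurements are my own, and every result should be read as “a little over” this size, as the article notes):

```python
import math

def box_square_cross_section(L, T):
    # Wrap the four sides (width 4*T, plus a small overlap), fold the square ends
    return 4 * T, L + T

def cylinder(L, D):
    # Just over the circumference pi*D around, with D extra to fold in the ends
    return math.pi * D, L + D

def triangular_prism(L, T):
    # Three sides of width T around, and 2*T of extra length for the end folds
    return 3 * T, L + 2 * T

def diagonal_square(D, H):
    # Square sheet for the diagonal method: side a little over D + sqrt(2)*H
    side = D + math.sqrt(2) * H
    return side, side

# Example: a box 20 cm long with a 6 cm square cross-section
w, l = box_square_cross_section(L=20, T=6)
print(f"cut a sheet a little over {w} cm x {l} cm")  # 24 cm x 26 cm
```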

For maximum mathematical satisfaction, what you want is to get the pattern on the paper to line up exactly. This is easier for a soft gift, where you can squash it to line up the pattern, but will only work with a box if the distance around it is exactly a multiple of the width of the repeat on the pattern. Otherwise, follow my example and get your own custom wrapping paper printed!

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Katie Steckles*


Why are algorithms called algorithms? A brief history of the Persian polymath you’ve likely never heard of

Algorithms have become integral to our lives. From social media apps to Netflix, algorithms learn your preferences and prioritise the content you are shown. Google Maps and artificial intelligence are nothing without algorithms.

So, we’ve all heard of them, but where does the word “algorithm” even come from?

Over 1,000 years before the internet and smartphone apps, Persian scientist and polymath Muhammad ibn Mūsā al-Khwārizmī invented the concept of algorithms.

In fact, the word itself comes from the Latinised version of his name, “algorithmi”. And, as you might suspect, it’s also related to algebra.

Largely lost to time

Al-Khwārizmī lived from 780 to 850 CE, during the Islamic Golden Age. He is considered the “father of algebra”, and for some, the “grandfather of computer science”.

Yet, few details are known about his life. Many of his original works in Arabic have been lost to time.

It is believed al-Khwārizmī was born in the Khwarazm region, south of the Aral Sea in present-day Uzbekistan. He lived during the Abbasid Caliphate, which was a time of remarkable scientific progress in the Islamic Empire.

Al-Khwārizmī made important contributions to mathematics, geography, astronomy and trigonometry. To help provide a more accurate world map, he corrected Alexandrian polymath Ptolemy’s classic cartography book, Geographia.

He produced calculations for tracking the movement of the Sun, Moon and planets. He also wrote about trigonometric functions and produced the first table of tangents.

Al-Khwārizmī was a scholar in the House of Wisdom (Bayt al-Hikmah) in Baghdad. At this intellectual hub, scholars were translating knowledge from around the world into Arabic, synthesising it to make meaningful progress in a range of disciplines. This included mathematics, a field deeply connected to Islam.

There are no images of what al-Khwārizmī looked like, but in 1983 the Soviet Union issued a stamp in honour of his 1,200th birthday. Wikimedia Commons

The ‘father of algebra’

Al-Khwārizmī was a polymath and a religious man. His scientific writings started with dedications to Allah and the Prophet Muhammad. And one of the major projects Islamic mathematicians undertook at the House of Wisdom was to develop algebra.

Around 830 CE, Caliph al-Ma’mun encouraged al-Khwārizmī to write a treatise on algebra, Al-Jabr (or The Compendious Book on Calculation by Completion and Balancing). This became his most important work.

A page from The Compendious Book on Calculation by Completion and Balancing. World Digital Library

At this point, “algebra” had been around for hundreds of years, but al-Khwārizmī was the first to write a definitive book on it. His work was meant to be a practical teaching tool. Its Latin translation was the basis for algebra textbooks in European universities until the 16th century.

In the first part, he introduced the concepts and rules of algebra, and methods for calculating the volumes and areas of shapes. In the second part he provided real-life problems and worked out solutions, such as inheritance cases, the partition of land and calculations for trade.

Al-Khwārizmī didn’t use modern-day mathematical notation with numbers and symbols. Instead, he wrote in simple prose and employed geometric diagrams:

Four roots are equal to twenty, then one root is equal to five, and the square to be formed of it is twenty-five.

In modern-day notation we’d write that like so:

4x = 20, x = 5, x² = 25

Grandfather of computer science

Al-Khwārizmī’s mathematical writings introduced the Hindu-Arabic numerals to Western mathematicians. These are the ten symbols we all use today: 1, 2, 3, 4, 5, 6, 7, 8, 9, 0.

The Hindu-Arabic numerals are important to the history of computing because they use the number zero and a base-ten decimal system. Importantly, this is the numeral system that underpins modern computing technology.

Al-Khwārizmī’s art of calculating mathematical problems laid the foundation for the concept of algorithms. He provided the first detailed explanations for using decimal notation to perform the four basic operations (addition, subtraction, multiplication, division) and computing fractions.

The contrast between algorithmic computations and abacus computations, as shown in Margarita Philosophica (1517). The Bavarian State Library

This was a more efficient computation method than using the abacus. To solve a mathematical equation, al-Khwārizmī systematically moved through a sequence of steps to find the answer. This is the underlying concept of an algorithm.
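
To make that concrete, here is a small modern analogue in Python (my own illustration, not anything from al-Khwārizmī’s texts): adding two Hindu-Arabic numerals column by column, as a fixed sequence of steps with a carry, precisely the kind of procedure the word “algorithm” came to describe.

```python
from itertools import zip_longest

def add_decimal(a_digits, b_digits):
    """Column-by-column addition of two base-ten numbers, digits given most
    significant first: a fixed, repeatable sequence of steps (an algorithm)."""
    result, carry = [], 0
    # Work from the least significant column to the most significant one
    for a, b in zip_longest(reversed(a_digits), reversed(b_digits), fillvalue=0):
        carry, digit = divmod(a + b + carry, 10)
        result.append(digit)
    if carry:
        result.append(carry)
    return list(reversed(result))

print(add_decimal([4, 7, 6], [5, 8]))  # 476 + 58 -> [5, 3, 4]
```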

Algorism, a Medieval Latin term named after al-Khwārizmī, refers to the rules for performing arithmetic using the Hindu-Arabic numeral system. Translated to Latin, al-Khwārizmī’s book on Hindu numerals was titled Algorithmi de Numero Indorum.

In the early 20th century, the word algorithm came into its current definition and usage: “a procedure for solving a mathematical problem in a finite number of steps; a step-by-step procedure for solving a problem”.

Muhammad ibn Mūsā al-Khwārizmī played a central role in the development of mathematics and computer science as we know them today.

The next time you use any digital technology – from your social media feed to your online bank account to your Spotify app – remember that none of it would be possible without the pioneering work of an ancient Persian polymath.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to The Conversation

 


Millennium Prize: the Birch and Swinnerton-Dyer Conjecture

Elliptic curves have a long and distinguished history that can be traced back to antiquity. They are prevalent in many branches of modern mathematics, foremost of which is number theory.

In simplest terms, one can describe these curves by using a cubic equation of the form

y² = x³ + Ax + B

where A and B are fixed rational numbers (to ensure the curve E is nice and smooth everywhere, one also needs to assume that its discriminant 4A³ + 27B² is non-zero).

To illustrate, let’s consider an example: choosing A = -1 and B = 0, we obtain the curve y² = x³ - x, whose graph splits into two pieces: a closed oval and an unbounded branch.

At this point it becomes clear that, despite their name, elliptic curves have nothing whatsoever to do with ellipses! The reason for this historical confusion is that these curves have a strong connection to elliptic integrals, which arise when describing the motion of planetary bodies in space.

The ancient Greek mathematician Diophantus is considered by many to be the father of algebra. His major mathematical work was written up in the tome Arithmetica, which was essentially a school textbook for geniuses. Within it, he outlined many tools for studying solutions to polynomial equations with several variables, termed Diophantine Equations in his honour.

One of the main problems Diophantus considered was to find all solutions to a particular polynomial equation that lie in the field of rational numbers Q. For equations of “degree two” (circles, ellipses, parabolas, hyperbolas) we now have a complete answer to this problem. This answer is thanks to the late German mathematician Helmut Hasse, and allows one to find all such points, should they exist at all.

Returning to our elliptic curve E, the analogous problem is to find all the rational solutions (x,y) which satisfy the equation defining E. If we call this set of points E(Q), then we are asking if there exists an algorithm that allows us to obtain all points (x,y) belonging to E(Q).

At this juncture we need to introduce a group law on E, which gives an eccentric way of fusing together two points (p₁ and p₂) on the curve to obtain a brand new point (p₄). The line through p₁ and p₂ meets the curve in a third point, p₃, and reflecting p₃ in the x-axis gives p₄. This mimics the addition law for numbers we learn from childhood (i.e. the sum or difference of any two numbers is still a number).

Under this geometric model, the point p₄ is defined to be the sum of p₁ and p₂ (it’s easy to see that the addition law does not depend on the order of the points p₁ and p₂). Moreover, the set of rational points is preserved by this notion of addition; in other words, the sum of two rational points is again a rational point.
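
For readers who want to experiment, here is a minimal Python sketch of this chord rule (my own code, not from the article): it uses exact rational arithmetic and handles only two distinct points with different x-coordinates, ignoring point doubling and the point at infinity.

```python
from fractions import Fraction

def add_points(p1, p2):
    """Chord rule on y^2 = x^3 + A*x + B: the line through p1 and p2 meets the
    curve in a third point p3; reflecting p3 in the x-axis gives the sum p4.
    (Substituting the line y = s*(x - x1) + y1 into the cubic shows the three
    intersection x-coordinates sum to s^2, which gives the formula for x4.)"""
    (x1, y1), (x2, y2) = p1, p2
    s = (y2 - y1) / (x2 - x1)   # slope of the chord
    x4 = s * s - x1 - x2        # third intersection, by the sum-of-roots trick
    y4 = s * (x1 - x4) - y1     # reflect in the x-axis
    return (x4, y4)

# On the example curve y^2 = x^3 - x (A = -1, B = 0):
P = (Fraction(0), Fraction(0))
Q = (Fraction(1), Fraction(0))
print(add_points(P, Q))  # (-1, 0), another rational point on the curve
```

Because the slope and the resulting coordinates involve only additions, multiplications and divisions of rational numbers, rational inputs always produce rational outputs, which is exactly the closure property described above.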

Louis Mordell, who was Sadleirian Professor of Pure Mathematics at Cambridge University from 1945 to 1953, was the first to determine the structure of this group of rational points. In 1922 he proved

E(Q) ≅ Z × … × Z × ΤE(Q)

where the number of copies of the integers Z above is called the “rank r(E) of the elliptic curve E”. The finite group ΤE(Q) on the end is uninteresting, as it never has more than 16 elements.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Daniel Delbourgo*


Maths tells you when to be more alert on the roads

This shows a stretch of the M30. In the bottom left-hand corner, you can see the square frames under which the detectors are placed.

Technicians from Madrid City Council and a team of Polish and Spanish researchers have analysed the density and intensity of traffic on Madrid’s M30 motorway (Spain) throughout the day. By applying mathematical algorithms, they have verified that drivers should pay more attention to the road between 6pm and 8pm to avoid accidents.

Detection devices installed by the Department of Traffic Technologies of Madrid City Council on the M30 motorway and its access roads were used to conduct a scientific study. Researchers from SICE, the traffic management company in charge of this thoroughfare, used past records to develop a new device that determines the time during which more attention should be paid to the road.

This period is the same as the shortest lifetime of spatio-temporal correlations of traffic intensity. In the case of the M30, it has proven to be between 6pm and 8pm, according to the study published in the Central European Journal of Physics.

“Between 6pm and 8pm, the most ‘stop and go’ phenomena occur. In other words, some vehicles brake and others set off or accelerate at different speeds,” as explained to SINC by Cristina Beltrán, a SICE engineer, who goes on to say that “vehicle speeds at consecutive stretches of the motorway are less correlated during these periods.”

The researcher clarifies that when traffic conditions vary quickly in space and time, “drivers should pay more attention on the roads to whether they should reduce or increase their speed, and be aware of road sign recommendations.”

Reference data were taken during a ‘typical week’ on the 13-kilometre stretch of the M30 using detectors at intervals of approximately 500 metres. These devices record the passing speed of vehicles and also how busy the road is (the time that vehicles remain stationary in a given place). Then, using algorithms and models developed by AGH University of Science and Technology (Poland), the correlations were analysed.
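
As a rough illustration of the underlying idea (with entirely synthetic data and my own simplifications, not the team’s algorithms), one can measure in Python how strongly speeds at consecutive detectors track each other:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical records: 26 detectors (roughly one per 500 m over 13 km),
# one speed reading per minute for two hours. Real input would be the
# loop-detector data described in the article.
n_detectors, n_minutes = 26, 120
trend = 80 + 10 * np.sin(np.linspace(0, 2 * np.pi, n_minutes))  # shared traffic wave
speeds = trend + rng.normal(0, 5, size=(n_detectors, n_minutes))

# Correlation of speeds on consecutive stretches: when these correlations
# decay quickly, conditions change fast in space and time, which is the
# article's signal that drivers should be extra alert.
corrs = [np.corrcoef(speeds[i], speeds[i + 1])[0, 1] for i in range(n_detectors - 1)]
print(f"mean speed correlation between consecutive detectors: {np.mean(corrs):.2f}")
```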

Free flow, Passing and Congested Traffic

The team focused mainly on the intensity of traffic (vehicles/hour) and density (vehicles/km) during the three phases of traffic: free flow, congested and an intermediate phase known as ‘passing’ or synchronised traffic. The easiest to categorise is the first, where intensity and density grow together with hardly any variation, but the other two also show correlations.

“This information helps us to take traffic control measures during rush hours, provide speed recommendations that can alter traffic characteristics and offer alternative routes via less congested areas,” outlines Beltrán. “This is all part of Madrid City Council’s objective to actively research new systems for improving traffic flow on the M30.”

The study enjoyed the support of the European Union’s 7th Framework Programme through the SOCIONICAL Project (www.socionical.eu) and the results were cross-referenced with data from the USA’s Insurance Institute for Highway Safety. The work of this scientific and educational organisation is geared towards reducing human and material loss as a result of road accidents.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Spanish Foundation for Science and Technology (FECYT)


Technique could efficiently solve partial differential equations for numerous applications

In fields such as physics and engineering, partial differential equations (PDEs) are used to model complex physical processes to generate insight into how some of the most complicated physical and natural systems in the world function.

To solve these difficult equations, researchers use high-fidelity numerical solvers, which can be very time-consuming and computationally expensive to run. The current simplified alternative is data-driven surrogate models, which compute the goal property of a solution to PDEs rather than the whole solution. These models are trained on data generated by the high-fidelity solver and learn to predict the output of the PDEs for new inputs. This is still data-intensive and expensive, because complex physical systems require a large number of simulations to generate enough training data.

In a new paper, “Physics-enhanced deep surrogates for partial differential equations,” published in December in Nature Machine Intelligence, a new method is proposed for developing data-driven surrogate models for complex physical systems in such fields as mechanics, optics, thermal transport, fluid dynamics, physical chemistry, and climate models.

The paper was authored by MIT’s professor of applied mathematics Steven G. Johnson along with Payel Das and Youssef Mroueh of the MIT-IBM Watson AI Lab and IBM Research; Chris Rackauckas of Julia Lab; and Raphaël Pestourie, a former MIT postdoc who is now at Georgia Tech. The authors call their method “physics-enhanced deep surrogate” (PEDS), which combines a low-fidelity, explainable physics simulator with a neural network generator. The neural network generator is trained end-to-end to match the output of the high-fidelity numerical solver.
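
The paper should be consulted for the real architecture, but the core PEDS idea can be caricatured in a few lines of PyTorch. In this deliberately tiny sketch (both toy “solvers” and all names are mine), a small network learns to feed an effective input into a cheap, differentiable low-fidelity model so that the pair reproduces high-fidelity data end-to-end:

```python
import torch
import torch.nn as nn

def high_fidelity(x):
    # Stand-in for the expensive solver, whose outputs we can only sample offline
    return torch.sin(3 * x) + 0.5 * x

def low_fidelity(z):
    # Stand-in for a cheap, explainable, differentiable physics model
    return 0.5 * z

# Generator network: maps the design parameter to an effective low-fidelity input
generator = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

x_train = torch.linspace(-2, 2, 64).unsqueeze(1)
y_train = high_fidelity(x_train)   # training set sampled from the expensive solver

opt = torch.optim.Adam(generator.parameters(), lr=1e-2)
for step in range(2000):
    pred = low_fidelity(generator(x_train))  # physics model kept in the loop
    loss = ((pred - y_train) ** 2).mean()    # match the high-fidelity output
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final training MSE: {loss.item():.4f}")
```

In the actual method the low-fidelity component is a genuine coarse physics solver, which is what injects field knowledge and cuts the amount of training data needed.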

“My aspiration is to replace the inefficient process of trial and error with systematic, computer-aided simulation and optimization,” says Pestourie. “Recent breakthroughs in AI like the large language model of ChatGPT rely on hundreds of billions of parameters and require vast amounts of resources to train and evaluate. In contrast, PEDS is affordable to all because it is incredibly efficient in computing resources and has a very low barrier in terms of infrastructure needed to use it.”

In the article, they show that PEDS surrogates can be up to three times more accurate than an ensemble of feedforward neural networks with limited data (approximately 1,000 training points), and reduce the training data needed by at least a factor of 100 to achieve a target error of 5%. Developed using the MIT-designed Julia programming language, this scientific machine-learning method is thus efficient in both computing and data.

The authors also report that PEDS provides a general, data-driven strategy to bridge the gap between a vast array of simplified physical models and the corresponding brute-force numerical solvers modeling complex systems. This technique offers accuracy, speed, data efficiency, and physical insights into the process.

Says Pestourie, “Since the 2000s, as computing capabilities improved, the trend of scientific models has been to increase the number of parameters to fit the data better, sometimes at the cost of a lower predictive accuracy. PEDS does the opposite by choosing its parameters smartly. It leverages the technology of automatic differentiation to train a neural network that makes a model with few parameters accurate.”

“The main challenge that prevents surrogate models from being used more widely in engineering is the curse of dimensionality—the fact that the needed data to train a model increases exponentially with the number of model variables,” says Pestourie. “PEDS reduces this curse by incorporating information from the data and from the field knowledge in the form of a low-fidelity model solver.”

The researchers say that PEDS has the potential to revive a whole body of the pre-2000 literature dedicated to minimal models—intuitive models that PEDS could make more accurate while also being predictive for surrogate model applications.

“The application of the PEDS framework is beyond what we showed in this study,” says Das. “Complex physical systems governed by PDEs are ubiquitous, from climate modeling to seismic modeling and beyond. Our physics-inspired fast and explainable surrogate models will be of great use in those applications, and play a complementary role to other emerging techniques, like foundation models.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Sandi Miller, Massachusetts Institute of Technology

 


Global symmetry found to be not completely necessary for the protection of topological boundary states

An international team led by researchers at Nankai University in China and the University of Zagreb in Croatia, along with a team at the Institut national de la recherche scientifique (INRS) in Canada led by Roberto Morandotti, has made an important breakthrough in the study of topological phases. Their findings were recently published in Nature Physics.

In the last decade, topological photonics has attracted increasing attention due to its unique prospects for achieving light manipulation with high performance in terms of robustness and stability.

Discoveries in topological photonics have opened the way to the development of a novel generation of photonic devices, such as topological lasers and cavities, featuring topologically protected states that are immune to disorders and defects. The concept of topology in physics is inherited from mathematics, where topology is employed to study geometric properties of an object concerning quantities that are preserved under continuous deformation.

Two objects are topologically identical when the surface of one can be continuously deformed into that of the other and vice versa; e.g., a coffee cup and a torus are equivalent from a topology viewpoint. In physics, the concept of topology is employed to describe the energy band characteristics, leading to the prediction of novel topological states of matter and various topological materials.

Different topological phases (trivial and nontrivial) are distinguished by appropriately introducing quantized topological invariants, which establish a link between the bulk properties and the emergence of features at the boundary of these materials, known as the bulk-boundary correspondence. In this regard, the most distinctive feature of a nontrivial topology is the existence of robust topological boundary states protected by specific spatial and/or intrinsic symmetries.

In general, in systems with a symmetry-protected topological (SPT) phase, it is believed that the close relationship between topological boundary states, topological invariants, and one or more overall symmetries is indispensable for maintaining topological protection against perturbations.

As a consequence, both topological invariants and topological boundary states are irretrievably affected by any distortion that breaks the underlying symmetry. In this work, the international research team has challenged this traditional belief, thus broadening the understanding of SPT boundary states. They found that even if the system no longer has quantized topological invariants or certain kinds of global symmetry, topological boundary states can still exist in the corresponding subspaces, protected by so-called sub-symmetries.

“Our discovery challenges the common thinking about the symmetry-protected topological phase in topology and renews the correspondence between topological invariants and boundary states,” said Domenico Bongiovanni, one of the main investigators and a postdoctoral researcher at INRS-EMT. “Our idea has the potential to explain the topological origin of many unconventional states and can find application in different platforms and physical systems.”

The researchers, by introducing and exploring the concept of sub-symmetry, found that global symmetry in the traditional sense is not completely necessary for the protection of topological boundary states. In this regard, topological boundary states are preserved as long as the symmetries of specific subspaces are satisfied, even when the overall topological invariants no longer exist.

The research team cleverly designed and fabricated photonic lattice structures using a continuous-wave (cw) laser writing technique to meet the conditions of different subspace symmetries. The experiments demonstrated a proof of concept with the two most typical topological lattices: the one-dimensional Su-Schrieffer-Heeger (SSH) lattice and the two-dimensional kagome lattice.
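
The experiments use laser-written photonic waveguide arrays, but the simplest member of this family, the SSH chain, is easy to explore numerically. The sketch below (my own toy calculation, not the paper’s model) diagonalises a finite SSH chain with open ends and exposes the pair of near-zero-energy boundary states that appear in the topological phase:

```python
import numpy as np

def ssh_hamiltonian(n_cells, v, w):
    """Tight-binding SSH chain with open ends: hopping alternates between
    v (inside a unit cell) and w (between cells); v < w is the topological phase."""
    n = 2 * n_cells
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = v if i % 2 == 0 else w
    return H

H = ssh_hamiltonian(n_cells=40, v=0.5, w=1.0)  # nontrivial phase (v < w)
energies, states = np.linalg.eigh(H)

# Two mid-gap modes sit exponentially close to zero energy...
print("energies closest to zero:", np.sort(np.abs(energies))[:2])

# ...and their weight is concentrated at the ends of the chain
mode = states[:, np.argmin(np.abs(energies))] ** 2
print("weight on the outer two sites of each end:",
      round(mode[:2].sum() + mode[-2:].sum(), 2))
```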

In addition, the team innovatively introduced the concept of long-range coupling symmetry into the Kagome lattice model, which resolves the current controversies about the existence and topological protection of higher-order topological states in the Kagome lattice.

This study not only challenges the traditional comprehension of topological states protected by symmetry but also provides new ideas for the research and application of topological states in different physical settings. The impact of this work is expected to further promote the development of topological photonics and its cutting-edge interdisciplinary fields, as well as the research and development of a new generation of topological photonic devices based on sub-symmetry-protected boundary states.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Institut national de la recherche scientifique – INRS


Super Models – Using Maths to Mitigate Natural Disasters

We can’t tame the oceans, but modelling can help us better understand them.

Last year will go on record as a year of significant natural disasters both in Australia and overseas. Indeed, the flooding of the Brisbane River in January is still making news as the Queensland floods inquiry investigates whether water released from Wivenhoe Dam was responsible. Water modelling is being used to answer the question: could modelling have avoided the problem in the first place?

This natural disaster – as well as the Japanese tsunami in March and the flooding in Bangkok in October – involved the movement of fluids: water, mud or both. And all had a human cost – displaced persons, the spread of disease, disrupted transport, disrupted businesses, broken infrastructure and damaged or destroyed homes. With the planet now housing 7 billion people, the potential for adverse humanitarian effects from natural disasters is greater than ever.

Here in CSIRO’s division of Mathematical and Information Sciences, we’ve been working with various government agencies (in Australia and China) to model the flow of flood waters and the debris they carry. Governments are starting to realise just how powerful computational modelling is for understanding and analysing natural disasters and how to plan for them.

This power is based on two things – the power of computers and the power of the algorithms (computer processing steps) that run on the computers.

In recent years, the huge increase in computer power and speed coupled with advances in algorithm development has allowed mathematical modellers like us to make large strides in our research.

These advances have enabled us to model millions, even billions of water particles, allowing us to more accurately predict the effects of natural and man-made fluid flows, such as tsunamis, dam breaks, floods, mudslides, coastal inundation and storm surges.

So how does it work?

Well, fluids such as sea water can be represented as billions of particles moving around, filling spaces, flowing downwards, interacting with objects and in turn being interacted upon. Or they can be visualised as a mesh of the fluids’ shape.
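
Smoothed particle hydrodynamics, discussed below, builds directly on this particle picture: every field quantity is estimated as a kernel-weighted sum over nearby particles. Here is a one-dimensional toy version of the basic SPH density estimate (my own minimal illustration, a long way from a production SPH code):

```python
import numpy as np

def w_gaussian(r, h):
    # A simple Gaussian smoothing kernel of width h, normalised in 1D
    return np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))

# A 1D "column" of equal-mass fluid particles next to empty space
x = np.linspace(0.0, 1.0, 100)   # particle positions
m = 0.01                         # mass per particle
h = 0.05                         # smoothing length

# SPH density estimate at each particle: a kernel-weighted sum over neighbours
density = np.array([np.sum(m * w_gaussian(xi - x, h)) for xi in x])
print(f"density in the bulk ~{density[50]:.2f}, at the free edge ~{density[-1]:.2f}")
```

In the bulk the estimate recovers the expected density, while at the free edge it drops to roughly half because half of the neighbours are missing; real SPH codes correct for this and add the momentum and pressure terms that drive the flow.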

Let’s consider a tsunami such as the one that struck the Japanese coast in March of last year. When a tsunami first emerges as a result of an earthquake, shallow water modelling techniques give us the most accurate view of the wave’s formation and early movement.

Mesh modelling of water being poured into a glass.

Once the wave is closer to the coast however, techniques known collectively as smoothed particle hydrodynamics (SPH) are better at predicting how the wave interacts with local geography. We’ve created models of a hypothetical tsunami off the northern Californian coastline to test this.

A dam break can also be modelled using SPH. The modelling shows how fast the water moves at certain times and in certain places, where water “overtops” hills and how quickly it reaches towns or infrastructure such as power stations.

This can help town planners to build mitigating structures and emergency services to co-ordinate an efficient response. Our models have been validated using historical data from a real dam that broke in California in 1928 – the St. Francis Dam.

Having established that our modelling techniques work better than others, we can apply them to a range of what-if situations.

In collaboration with the Satellite Surveying and Mapping Application Centre in China we tested scenarios such as the hypothetical collapse of the massive Geheyan Dam in China.

We combined our modelling techniques with digital terrain models to get a realistic picture of how such a disaster would unfold and, therefore, what actions could mitigate it.

Our experience in developing and using these techniques over several decades allows us to combine them in unique ways for each situation.

We’ve modelled fluids not just for natural disaster planning but also movie special effects, hot metal production, water sports and even something as everyday as insurance.

Insurance companies have been looking to us for help to understand how natural disasters unfold. They cop a lot of media flak after disasters for not covering people affected. People living in low-lying areas have traditionally had difficulty accessing flood insurance and find themselves unprotected in flood situations.

Insurers are starting to realise that the modelling of geophysical flows can provide a basis for predicting localised risk of damage due to flooding and make flood coverage a viable business proposition. One Australian insurance company has been working with us to quantify risk of inundation in particular areas.

Using data from the 1974 Brisbane floods, the floods of last year and fluid modelling data, an insurance company can reliably assess residents’ exposure to particular risks and thereby determine suitable premiums.

With evidence-based tools such as fluid modelling in their arsenal, decision-makers are better prepared for the future. That may be a future of more frequent natural disasters, a future with a more-densely-populated planet, or, more likely, a combination of both.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Mahesh Prakash*


Rubik’s Cube Solution Unlocked By Memorising 3915 Final Move Sequences

For the first time, a speedcuber has demonstrated a solution to the Rubik’s cube that combines the two final steps of the puzzle’s solution into one.

A Rubik’s cube solver has become the first person to show proof of successfully combining the final two steps of solving the mechanical puzzle into one move. The feat required the memorisation of thousands of possible sequences for the final step.

Most skilled speedcubers – people who compete to solve Rubik’s cubes with the most speed and efficiency – choose to solve the final layer of the cube with two separate moves that involve 57 possible sequences for the penultimate step and 21 possible sequences for the final move.

Combining those two separate actions into a single move requires a person to memorise 3915 possible sequences. These sequences were previously known to be possible, but nobody is reported to have successfully achieved this so-called “Full 1 Look Last Layer” (Full 1LLL) move until a speedcuber going by the online username “edmarter” shared a YouTube video demonstrating that accomplishment.

Edmarter says he decided to take up the challenge after seeing notable speedcubers try and fail. Over the course of about a year, he spent 10 hours each weekend and any free time during the week practising and memorising the necessary sequences, he told New Scientist. That often involved memorising 144 movement sequences in a single day.

All that effort paid off on 4 August 2022 when edmarter uploaded a video demonstrating the Full 1LLL over the course of 100 separate puzzle solves. He also posted his accomplishment to Reddit’s r/Cubers community.

His average solve time for each Rubik’s cube over the course of that video demonstration run was 14.79 seconds. He says he had an average solve time as low as 12.50 seconds during two practice runs before recording the video.

The Rubik’s cube community has reacted with overwhelming enthusiasm and awe. The top-voted comment on his Reddit post detailing the achievement simply reads: “This is absolutely insane.”

But he is not resting on his laurels. Next up, he plans to try practising some other methods for finishing the Rubik’s cube that have only previously been mastered by a handful of people.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jeremy Hsu*


Mathematicians model a puzzling breakdown in cooperative behaviour

A model developed by evolutionary mathematicians in Canada and Europe shows that as cooperation becomes easier, it can unexpectedly break down.

Darwin was puzzled by cooperation in nature—it ran directly against natural selection and the notion of survival of the fittest. But over the past decades, evolutionary mathematicians have used game theory to better understand why mutual cooperation persists when evolution should favour self-serving cheaters.

At a basic level, cooperation flourishes when the costs to cooperation are low or the benefits large. When cooperation becomes too costly, it disappears—at least in the realm of pure mathematics. Symbiotic relationships between species—like those between pollinators and plants—are more complex but follow similar patterns.

But new modeling published today in PNAS Nexus adds a wrinkle to that theory, indicating that cooperative behaviour between species may break down in situations where, theoretically at least, it should flourish.

“As we began to improve the conditions for cooperation in our model, the frequency of mutually beneficial behaviour in both species increases, as expected,” says Dr. Christoph Hauert, a mathematician at the University of British Columbia who studies evolutionary dynamics.

“But as the frequency of cooperation in our simulation gets higher—closer to 50%—suddenly there’s a split. More cooperators pool in one species and fewer in the other—and this asymmetry continues to get stronger as the conditions for cooperation get more benign.”

While this “symmetry breaking of cooperation” between two populations has been modeled by mathematicians before, this is the first model that enables individuals in each group to interact and join forces in a more natural way.

Dr. Hauert and colleague Dr. György Szabó from the Hungarian Research Network used computational spatial models to arrange individuals from the two species on separate lattices facing one another. This enables cooperators to form clusters and reduce their exposure to (and exploitation by) cheaters by more frequently interacting with other cooperators.

“Because we chose symmetric interactions, the level of cooperation is the same in both populations,” says Dr. Hauert. “Clusters can still form and protect cooperators but now they need to be synchronized across lattices because that’s where the interactions occur.”
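
The published model should be consulted for the actual update rules and parameters; the sketch below is only a toy in its spirit (all parameter values and modelling choices are mine). Two facing lattices play a cross-lattice donation game, and each site occasionally imitates a better-scoring neighbour from its own lattice via a standard Fermi rule:

```python
import numpy as np

rng = np.random.default_rng(1)
L, b, c, K = 50, 1.0, 0.4, 0.1   # lattice size, benefit, cost, imitation noise
A = rng.integers(0, 2, (L, L))   # species A: 1 = cooperator, 0 = defector
B = rng.integers(0, 2, (L, L))   # species B, on the facing lattice

SHIFTS = [(1, 0), (-1, 0), (1, 1), (-1, 1)]  # 4 lattice neighbours (periodic)

def payoffs(own, other):
    """Cross-lattice donation game: a cooperator pays cost c, and each site
    receives benefit b/5 from every cooperator among the 5 facing sites
    (the opposite site plus its 4 nearest neighbours on the other lattice)."""
    facing = other + sum(np.roll(other, s, axis=ax) for s, ax in SHIFTS)
    return (b / 5) * facing - c * own

for step in range(200):
    for grid, rival in ((A, B), (B, A)):
        pay = payoffs(grid, rival)
        s, ax = SHIFTS[rng.integers(len(SHIFTS))]   # pick a neighbour direction
        model_pay = np.roll(pay, s, axis=ax)
        model_strat = np.roll(grid, s, axis=ax)
        # Fermi rule: likely to copy the neighbour when it earns more
        p = 1.0 / (1.0 + np.exp((pay - model_pay) / K))
        adopt = rng.random((L, L)) < p
        grid[adopt] = model_strat[adopt]

print(f"cooperator frequency -- species A: {A.mean():.2f}, species B: {B.mean():.2f}")
```

Sweeping the benefit-to-cost ratio and comparing the two printed frequencies is one way to look for the kind of asymmetry between species that the study describes.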

“The odd symmetry breaking in cooperation shows parallels to phase transitions in magnetic materials and highlights the success of approaches developed in statistical and solid state physics,” says Dr. Szabó.

“At the same time, the model sheds light on sudden, dramatic changes in behaviour that can significantly affect the interactions in complex living systems.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of British Columbia