Octonions: The Strange Maths That Could Unite The Laws Of Nature

Could a system of eight-dimensional numbers help physicists find a single mathematical framework that describes the entire universe?

Words can be slippery. That is perhaps even more true in physics than it is in the rest of life. Think of a “particle”, for instance, and we might conjure an image of a tiny sphere. In truth, “particle” is just a poetic term for something far removed from our everyday experience – which is why our best descriptions of reality make use of the cold precision of mathematics.

But just as there are many human languages, so there is more than one type of number system. Most of us deal with only the familiar number line that begins 1, 2, 3. But other, more exotic systems are available. Recently, physicists have been asking a profound question: what if we are trying to describe reality with the wrong type of numbers?

Each mathematical system has its own special disposition, just like languages. Love poems sound better in French. German has that knack of expressing sophisticated concepts – like schadenfreude – in a few syllables. Now, in the wake of a fresh breakthrough revealing tantalising connections between models of how matter works at different energy scales, it seems increasingly likely that an exotic set of numbers known as the octonions might have what it takes to capture the truth about reality.

Mathematicians are excited because they reckon that translating our theories of reality into the language of the octonions could tidy up some of the deepest problems in physics and clear a path to a “grand unified theory” that can describe the universe in one statement. “This feels like a very promising direction,” says Latham Boyle at the Perimeter Institute in Waterloo, Canada. “I find it irresistible to think about.”

Many physicists dream of finding a grand unified theory, a single mathematical framework that tells us where the forces of nature come from and how they act on matter. Critically, such a theory would also capture how and why these properties changed over the life of the universe, as we know they have.

So far, the closest we have come is the standard model of particle physics, which details the universe’s fundamental particles and forces: electrons, quarks, photons and the rest. The trouble is, the standard model has its shortcomings. To make it work, we must feed in around 20 measured numbers, such as the masses of particles. We don’t know why these numbers are what they are. Worse, the standard model has little to say about space-time, the canvas in which particles live. We seem to live in a four-dimensional space-time, but the standard model doesn’t specify that this must be so. “Why not, say, seven-dimensional space-time?” Boyle wonders.

Real and imaginary numbers

Many think the solution to these woes will come when experiments uncover a missing piece of the standard model. But after years of effort, this hasn’t happened, and some are wondering if the problem is the maths itself.

Mathematicians have known for centuries that there are numbers other than the ones we can count on our fingers. Take the square root of -1, known as i. No real number multiplied by itself gives -1, since both 1 × 1 and -1 × -1 are equal to 1, so i is an “imaginary number”. Mathematicians found that by combining i with real numbers – which include all the numbers you could place on a number line, including negative numbers and decimals – they could fashion a new system called the complex numbers.

Think of complex numbers as being two-dimensional; the two parts of each number can record unrelated properties of the same object. This turns out to be extremely handy. All our electronic infrastructure relies on complex numbers. And quantum theory, our hugely successful description of the small-scale world, doesn’t work without them.

In 1843, Irish mathematician William Rowan Hamilton took things a step further. Supplementing the real and the imaginary numbers with two more sets of imaginary numbers called j and k, he gave us the quaternions, a set of four-dimensional numbers. Within a few months, Hamilton’s friend John Graves had found another system with eight dimensions called the octonions.

Real numbers, complex numbers, quaternions and octonions are collectively known as the normed division algebras. They are the only sets of numbers with which you can perform addition, subtraction, multiplication and division. Wilder systems are possible – the 16-dimensional sedenions, for example – but here the normal rules break down.
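To get a concrete feel for how these systems behave, here is a minimal Python sketch (our illustration, not from the article) that builds the quaternions and octonions from nested pairs of real numbers via the Cayley-Dickson construction; the helper names are our own. It shows the quaternions giving up commutativity and the octonions giving up associativity, a first hint of why the 16-dimensional sedenions lose even more structure.

    # A minimal Cayley-Dickson sketch: a number is either a float or a pair.
    # Doubling the reals gives complex numbers, then quaternions, then octonions.
    def neg(x):
        return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

    def conj(x):
        return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

    def add(x, y):
        return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

    def mul(x, y):
        if not isinstance(x, tuple):
            return x * y
        (a, b), (c, d) = x, y
        # One standard doubling formula: (a,b)(c,d) = (ac - conj(d)b, da + b conj(c))
        return (add(mul(a, c), neg(mul(conj(d), b))),
                add(mul(d, a), mul(b, conj(c))))

    def basis(i, dim):
        """The i-th unit of a dim-dimensional algebra, as nested pairs."""
        def pack(cs):
            return cs[0] if len(cs) == 1 else (pack(cs[:len(cs) // 2]), pack(cs[len(cs) // 2:]))
        return pack([1.0 if j == i else 0.0 for j in range(dim)])

    def flat(x):
        return flat(x[0]) + flat(x[1]) if isinstance(x, tuple) else [x]

    # Quaternions (dimension 4): multiplication stops being commutative.
    i, j = basis(1, 4), basis(2, 4)
    print(flat(mul(i, j)), flat(mul(j, i)))   # i*j = +k, but j*i = -k

    # Octonions (dimension 8): multiplication stops being associative too.
    e1, e2, e4 = basis(1, 8), basis(2, 8), basis(4, 8)
    print(flat(mul(mul(e1, e2), e4)))         # differs in sign from the next line
    print(flat(mul(e1, mul(e2, e4))))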

Today, physics makes prolific use of three of these systems. The real numbers are ubiquitous. Complex numbers are essential in particle physics as well as quantum physics. The mathematical structure of general relativity, Albert Einstein’s theory of gravity, can be expressed elegantly by the quaternions.

The octonions stand oddly apart as the only system not to tie in with a central physical law. But why would nature map onto only three of these four number systems? “This makes one suspect that the octonions – the grandest and least understood of the four – should turn out to be important too,” says Boyle.

In truth, physicists have been thinking such thoughts since the 1970s, but the octonions have yet to fulfil their promise. Michael Duff at Imperial College London was, and still is, drawn to the octonions, but he knows many have tried and failed to decipher their role in describing reality. “The octonions became known as the graveyard of theoretical physics,” he says.

That hasn’t put off a new generation of octonion wranglers, including Nichol Furey at Humboldt University of Berlin. She likes to look at questions in physics without making any assumptions. “I try to solve problems right from scratch,” she says. “In doing so, you can often find alternate paths that earlier authors may have missed.” Now, it seems she and others might be making the beginnings of an octonion breakthrough.

Internal symmetries in quantum mechanics

To get to grips with Furey’s work, it helps to understand a concept in physics called internal symmetry. This isn’t the same as the rotational or reflectional symmetry of a snowflake. Instead, it refers to a number of more abstract properties, such as the character of certain forces and the relationships between fundamental particles. All these particles are defined by a series of quantum numbers – their mass, charge and a quantum property called spin, for instance. If a particle transforms into another particle – an electron becoming a neutrino, say – some of those numbers will change while others won’t. These symmetries define the structure of the standard model.

Internal symmetries are central to the quest for a grand unified theory. Physicists have already found various mathematical models that might explain how reality worked back at the time when the universe had much more energy. At these higher energies, it is thought there would have been more symmetries, meaning that some forces we now experience as distinct would have been one and the same. None of these models have managed to rope gravity into the fold: that would require an even grander “theory of everything”. But they do show, for instance, that the electromagnetic force and weak nuclear force would have been one “electroweak” force until a fraction of a second after the big bang. As the universe cooled, some of the symmetries broke, meaning this particular model would no longer apply.

Each different epoch requires a different mathematical model with a gradually reducing number of symmetries. In a sense, these models all contain each other, like a set of Russian dolls.

One of the most popular candidates for the outermost doll – the grand unified theory that contains all the others – is known as the Spin(10) model. It has a whopping 45 symmetries. In one formulation, inside this sits the Pati-Salam model, with 21 symmetries. Then comes the left-right symmetric model, with 15 symmetries, including one known as parity, the kind of left-right symmetry that we encounter when we look in a mirror. Finally, we reach the standard model, with 12 symmetries. The reason we study each of these models is that they work; their symmetries are consistent with experimental evidence. But we have never understood what determines which symmetries fall away at each stage.

In August 2022, Furey, together with Mia Hughes at Imperial College London, showed for the first time that the division algebras, including the octonions, could provide this link. To do so, they drew on ideas Furey had years ago to translate all the mathematical symmetries and particle descriptions of various models into the language of division algebras. “It took a long time,” says Furey. The task required using the Dixon algebra, a set of numbers that allow you to combine real, complex, quaternion and octonion maths. The result was a system that describes a set of octonions specified by quaternions, which are in turn specified by complex numbers that are specified by a set of real numbers. “It’s a fairly crazy beast,” says Hughes.

It is a powerful beast, too. The new formulation exposed an intriguing characteristic of the Russian doll layers. When some numbers involved in the complex, quaternion and octonion formulations are swapped from positive to negative, or vice versa, some of the symmetries change and some don’t. Only the ones that don’t are found in the next layer down. “It allowed us to see connections between these well-studied particle models that had not been picked up on before,” says Furey. This “division algebraic reflection”, as Furey calls it, could be dictating what we encounter in the real physical universe, and – perhaps – showing us the symmetry-breaking road up to the long-sought grand unified theory.

The result is new, and Furey and Hughes haven’t yet been able to see where it may lead. “It hints that there might be some physical symmetry-breaking process that somehow depends upon these division algebraic reflections, but so far the nature of that process is fairly mysterious,” says Hughes.

Furey says the result might have implications for experiments. “We are currently investigating whether the division algebras are telling us what can and cannot be directly measured at different energy scales,” she says. It is a work in progress, but analysis of the reflections seems to suggest that there are certain sets of measurements that physicists should be able to make on particles at low energies – such as the measurement of an electron’s spin – and certain things that won’t be measurable, such as the colour charge of quarks.

Among those who work on octonions, the research is making waves. Duff says that trying to fit the standard model into octonionic language is a relatively new approach: “If it paid off, it would be very significant, so it’s worth trying.” Corinne Manogue at Oregon State University has worked with octonions for decades and has seen interest ebb and flow. “This moment does seem to be a relative high,” she says, “primarily, I think, because of Furey’s strong reputation and advocacy.”

The insights from the octonions don’t stop there. Boyle has been toying with another bit of exotic maths called the “exceptional Jordan algebra”, which was invented by German physicist Pascual Jordan in the 1930s. Working with two other luminaries of quantum theory, Eugene Wigner and John von Neumann, Jordan found a set of mathematical properties of quantum theory that resisted classification and were closely related to the octonions.

Probe this exceptional Jordan algebra deeply enough and you will find it contains the mathematical structure that we use to describe Einstein’s four-dimensional space-time. What’s more, we have known for decades that within the exceptional Jordan algebra, you will find a peculiar mathematical structure that we derived through an entirely separate route and process in the early 1970s to describe the standard model’s particles and forces. In other words, this is an octonionic link between our theories of space, time, gravity and quantum theory. “I think this is a very striking, intriguing and suggestive observation,” says Boyle.

Responding to this, Boyle has dug deeper and discovered something intriguing about the way a class of particles called fermions, which includes common particles like electrons and quarks, fits into the octonion-based language. Fermions are “chiral”, meaning their mirror-image reflections – the symmetry physicists call parity – look different. This had created a problem when incorporating fermions into the octonion-based versions of the standard model. But Boyle has now found a way to fix that – and it has a fascinating spin-off. Restoring the mirror symmetry that is broken in the standard model also enables octonionic fermions to sit comfortably in the left-right symmetric model, one level further up towards the grand unified theory.

Beyond the big bang

This line of thinking might even take us beyond the grand unified theory, towards an explanation of where the universe came from. Boyle has been working with Neil Turok, his colleague at the Perimeter Institute, on what they call a “two-sheeted universe” that involves a set of symmetries known as charge, parity and time (CPT). “In this hypothesis, the big bang is a kind of mirror separating our half of the universe from its CPT mirror image on the other side of the bang,” says Boyle. The octonionic properties of fermions that sit in the left-right symmetric model are relevant in developing a coherent theory for this universe, it turns out. “I suspect that combining the octonionic picture with the two-sheeted picture of the cosmos is a further step in the direction of finding the right mathematical framework for describing nature,” says Boyle.

As with all the discoveries linking the octonions to our theories of physics so far, Boyle’s work is only suggestive. No one has yet created a fully fledged theory of physics based on octonions that makes new predictions we can test by using particle colliders, say. “There’s still nothing concrete yet: there’s nothing we can tell the experimentalists to go and look for,” says Duff. Furey agrees: “It is important to say that we are nowhere near being finished.”

But Boyle, Furey, Hughes and many others are increasingly absorbed by the possibility that this strange maths really could be our best route to understanding where the laws of nature come from. In fact, Boyle thinks that the octonion-based approach could be just as fruitful as doing new experiments to find new particles. “Most people are imagining that the next bit of progress will be from some new pieces being dropped onto the table,” he says. “That would be great, but maybe we have not yet finished the process of fitting the current pieces together.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Michael Brooks*


Crowds Beat Computers in Answer to Wikipedia-Sized Maths Problem

A maths problem previously tackled with the help of a computer, which produced a proof the size of Wikipedia, has now been cut down to size by a human. Although it is unlikely to have practical applications, the result highlights the differences between two modern approaches to mathematics: crowdsourcing and computers.

Terence Tao of the University of California, Los Angeles, has published a proof of the Erdős discrepancy problem, a puzzle about the properties of infinite sequences of +1s and -1s. In the 1930s, Hungarian mathematician Paul Erdős wondered whether any such sequence would always contain patterns and structure within its apparent randomness.

One way to measure this is by calculating a value known as the discrepancy. This involves adding up the +1s and -1s within sub-sequences formed by taking every entry, every second entry, every third entry, and so on. You might think the pluses and minuses would cancel out to make zero, but Erdős said that as your sub-sequences got longer, this sum would have to go up, revealing an unavoidable structure. In fact, he said the discrepancy would be infinite, meaning you would have to add forever, so mathematicians started by looking at smaller cases in the hopes of finding clues to attack the problem in a different way.

Last year, Alexei Lisitsa and Boris Konev of the University of Liverpool, UK, used a computer to prove that the discrepancy will always be larger than two. The resulting proof was a 13-gigabyte file – around the size of the entire text of Wikipedia – that no human could ever hope to check.
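To make “discrepancy” concrete, here is a short Python sketch (our own illustration, not the researchers’ code) that computes it for a finite ±1 sequence by checking every step size. It shows why a simple alternating sequence is no escape: taking every second entry gives a running total that just keeps growing.

    def discrepancy(seq):
        """Largest |x_d + x_2d + ... + x_nd| over all step sizes d and cut-offs n."""
        best = 0
        for d in range(1, len(seq) + 1):
            total = 0
            for pos in range(d, len(seq) + 1, d):   # positions d, 2d, 3d, ...
                total += seq[pos - 1]
                best = max(best, abs(total))
        return best

    print(discrepancy([1, -1] * 10))   # 10: the every-second-entry sums pile up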

Helping hands

Tao has used more traditional mathematics to prove that Erdős was right, and the discrepancy is infinite no matter the sequence you choose. He did it by combining recent results in number theory with some earlier, crowdsourced work.

In 2010, a group of mathematicians, including Tao, decided to work on the problem as the fifth Polymath project, an initiative that allows professionals and amateurs alike to contribute ideas through blogs and wikis as part of a mathematical super-brain. They made some progress, but ultimately had to give up.

“We had figured out an interesting reduction of the Erdős discrepancy problem to a seemingly simpler problem involving a special type of sequence called a completely multiplicative function,” says Tao.

Then, in January this year, a new development in the study of these functions made Tao look again at the Erdős discrepancy problem, after a commenter on his blog pointed out a possible link to the Polymath project and another problem called the Elliott conjecture.

Not just conjecture

“At first I thought the similarity was only superficial, but after thinking about it more carefully, and revisiting some of the previous partial results from Polymath5, I realised there was a link: if one could prove the Elliott conjecture completely, then one could also resolve the Erdős discrepancy problem,” says Tao.

“I have always felt that that project, despite not solving the problem, was a distinct success,” writes University of Cambridge mathematician Tim Gowers, who started the Polymath project and hopes that others will be encouraged to participate in future. “We now know that Polymath5 has accelerated the solution of a famous open problem.”

Lisitsa praises Tao for doing what his algorithm couldn’t. “It is a typical example of high-class human mathematics,” he says. But mathematicians are increasingly turning to machines for help, a trend that seems likely to continue. “Computers are not needed for this problem to be solved, but I believe they may be useful in other problems,” Lisitsa says.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jacob Aron*

 


The Mathematically Perfect Table Size For Solving Any Jigsaw Puzzle

When choosing the perfectly sized table to do your jigsaw puzzle on, work out the area of the completed puzzle and multiply it by 1.73.

People may require a larger table if they like to lay all the pieces out at the start, rather than keeping them in the box or in piles

How large does your table need to be when doing a jigsaw puzzle? The answer is the area of the puzzle when assembled multiplied by 1.73. This creates just enough space for all the pieces to be laid flat without any overlap.

“My husband and I were doing a jigsaw puzzle one day and I just wondered if you could estimate the area that the pieces take up before you put the puzzle together,” says Madeleine Bonsma-Fisher at the University of Toronto in Canada.

To uncover this, Bonsma-Fisher and her husband Kent Bonsma-Fisher, at the National Research Council Canada, turned to mathematics.

Puzzle pieces take on a range of “funky shapes” that are often a bit rectangular or square, says Madeleine Bonsma-Fisher. To get around the variation in shapes, the pair worked on the basis that all the pieces took up the surface area of a square. They then imagined each square sitting inside a circle that touches its corners.

By considering the area around each puzzle piece as a circle, a shape that can be packed in multiple ways, they found that a hexagonal lattice, similar to honeycomb, would mean the pieces could interlock with no overlap. Within each hexagon is one full circle and parts of six circles.

They then found that the area taken up by the unassembled puzzle pieces arranged in the hexagonal pattern would always be the total area of the completed puzzle – calculated by multiplying its length by its width – multiplied by the square root of 3, which is approximately 1.73.

This also applies to jigsaw puzzle pieces with rectangular shapes, seeing as these would similarly fit within a circle.

While doing a puzzle, some people keep pieces that haven’t yet been used in the box, while others arrange them in piles or lay them on a surface, the latter being Madeleine Bonsma-Fisher’s preferred method. “If you really want to lay all your pieces out flat and be comfortable, your table should be a little over twice as big as your sample puzzle,” she says.
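For a quick back-of-envelope check, here is a small Python sketch of the rule (our own, with a made-up example puzzle size):

    import math

    def layout_area(width_m, height_m):
        """Area needed to lay every piece out flat: sqrt(3) times the puzzle area."""
        return math.sqrt(3) * width_m * height_m

    # Hypothetical 0.5 m x 0.7 m puzzle, i.e. 0.35 m^2 when assembled
    pieces = layout_area(0.5, 0.7)
    print(round(pieces, 2))               # ~0.61 m^2 for the loose pieces
    print(round(0.5 * 0.7 + pieces, 2))   # ~0.96 m^2 to assemble it alongside them

The second figure is the “little over twice as big” table: 1 + √3 ≈ 2.73 times the puzzle’s own area.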

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Chen Ly*


Digit Patterns in Square Numbers

If you take a look at the square numbers (n^2, n a positive integer), you’ll notice plenty of patterns in the digits. For example, if you look at just the last digit of each square, you’ll observe the repeating pattern 1, 4, 9, 6, 5, 6, 9, 4, 1, 0, … If you construct a graph of “last digit” vs n (like the one below, built with Fathom), the symmetry and period of this digit pattern is apparent.

Why does this happen? The periodic nature of the pattern is easy to understand – when you square a number, only the digit in the ones place contributes to the ones place of the product. For example, 22*22 and 32*32 are both going to have a 4 as their last digit – the values in the tens place (or any other place other than the ones) do not affect what ends up as the last digit.

The reason for the symmetry about n=5 is a little less obvious. To see what is going on, it is helpful to use modular arithmetic and to realize that “last digit of n” is the same as “n mod 10”. Considering what 10-n looks like mod 10 after it is squared, we have the equation below.

(10 - n)^2 = 100 - 20n + n^2 ≡ n^2 (mod 10)

This tells us that the last digit of (10-n)^2 is the same as the last digit of n^2, because everything else that is different about these two numbers is divisible by 10.

If you look at the last two digits of the square numbers, you see another repeating pattern that has similar symmetries.

This is a nice looking graph – the period is 50 with a line of symmetry at n=25. You can think about it in the same way as the one-digit case; this time the symmetry is understood by looking at (50-n)^2 mod 100. (Looking at numbers mod 100 tells us their last two digits.)

If you decide to investigate patterns in cubes or higher powers, you’ll see somewhat similar results. Using the binomial theorem and modular arithmetic, you can see why even powers give symmetry similar to the n^2 case, while odd powers do not (although all are periodic).

This graph shows the pattern in the last digit of n^3.

This last graph shows the pattern for the last two digits of n^4.
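These patterns are easy to explore for yourself. Here is a short Python sketch (ours, not from the original post) that prints the last d digits of n^p, using the fact that taking the last d digits is the same as working mod 10^d:

    def last_digits(p, d=1, terms=20):
        """Last d digits of n^p for n = 1..terms, i.e. n^p mod 10^d."""
        return [pow(n, p, 10 ** d) for n in range(1, terms + 1)]

    print(last_digits(2))                  # 1, 4, 9, 6, 5, 6, 9, 4, 1, 0, ... period 10
    print(last_digits(2, d=2, terms=50))   # period 50, symmetric about n = 25
    print(last_digits(3))                  # cubes: periodic, but no mirror symmetry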

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*


Mathematician Cracks Centuries-Old Problem About The Number 33

The number 33 has surprising depth

Add three cubed numbers, and what do you get? It is a question that has puzzled mathematicians for centuries.

In 1825, a mathematician known as S. Ryley proved that any fraction could be represented as the sum of three cubes of fractions. In the 1950s, mathematician Louis Mordell asked whether the same could be done for integers, or whole numbers. In other words, are there integers k, x, y and z such that k = x^3 + y^3 + z^3 for each possible value of k?

We still don’t know. “It’s long been clear that there are maths problems that are easy to state, but fiendishly hard to solve,” says Andrew Booker at the University of Bristol, UK – Fermat’s last theorem is a famous example.

Booker has now made another dent in the cube problem by finding a sum for the number 33, previously the lowest unsolved example. He used a computer algorithm to search for a solution:

33 = 8,866,128,975,287,528^3 + (-8,778,405,442,862,239)^3 + (-2,736,111,468,807,040)^3

To cut down calculation time, the program eliminated certain combinations of numbers. “For instance, if x, y and z are all positive and large, then there’s no way that x3 + y3 + z3 is going to be a small number,” says Booker. Even so, it took 15 years of computer-processing time and three weeks of real time to come up with the result.
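For small values of k a naive search is enough, and it illustrates the basic trick of trading computation for memory. The sketch below (a minimal illustration, nothing like Booker’s far more sophisticated algorithm) caches all cubes up to a bound in a dictionary so that each pair (x, y) needs only a single lookup for z:

    def three_cubes(k, bound=100):
        """Search |x|, |y|, |z| <= bound for a solution of x^3 + y^3 + z^3 == k."""
        cubes = {n ** 3: n for n in range(-bound, bound + 1)}
        for x in range(-bound, bound + 1):
            for y in range(x, bound + 1):
                z = cubes.get(k - x ** 3 - y ** 3)
                if z is not None:
                    return x, y, z
        return None

    print(three_cubes(29))   # finds a solution quickly, e.g. 1^3 + 1^3 + 3^3 = 29
    print(three_cubes(30))   # None at this bound: the smallest solution has ten-digit terms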

For some numbers, finding a solution to the equation k = x3 + y3 + z3 is simple, but others involve huge strings of digits. “It’s really easy to find solutions for 29, and we know a solution for 30, but that wasn’t found until 1999, and the numbers were in the millions,” says Booker.

Another example is for the number 3, which has two simple solutions: 1^3 + 1^3 + 1^3 and 4^3 + 4^3 + (-5)^3. “But to this day, we still don’t know whether there are more,” he says.

There are certain numbers that we know definitely can’t be the sum of three cubes, including 4, 5, 13, 14 and infinitely many more: every number that leaves a remainder of 4 or 5 when divided by 9. That is because cubes leave a remainder of 0, 1 or 8 when divided by 9, and no three of those remainders can combine to give 4 or 5.

The solution for 74 was only found in 2016, which leaves 42 as the only number less than 100 without a known solution. There are still 12 unsolved numbers less than 1000.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Donna Lu*


Magic Numbers: The Beauty Of Decimal Notation

While adding up your grocery bill in the supermarket, you’re probably not thinking how important or sophisticated our number system is.

But the discovery of the present system, by unknown mathematicians in India roughly 2,000 years ago – and shared with Europe from the 13th century onwards – was pivotal to the development of our modern world.

Now, what if our “decimal” arithmetic, often called the Indo-Arabic system, had been discovered earlier? Or what if it had been shared with the Western world earlier than the 13th century?

First, let’s define “decimal” arithmetic: we’re talking about the combination of zero, the digits one through nine, positional notation, and efficient rules for arithmetic.

“Positional notation” means that the value represented by a digit depends both on the digit itself and on its position in a string of digits.

Thus 7,654 means:

(7 × 1000) + (6 × 100) + (5 × 10) + 4 = 7,654

The benefit of this positional notation system is that we need no new symbols or calculation schemes for tens, hundreds or thousands, as was needed when manipulating Roman numerals.

While numerals for the counting numbers one, two and three were seen in all ancient civilisations – and some form of zero appeared in two or three of those civilisations (including India) – the crucial combination of zero and positional notation arose only in India and Central America.

Importantly, only the Indian system was suitable for efficient calculation.

Positional arithmetic can be in base-ten (or decimal) for humans, or in base-two (binary) for computers.

In binary, 10101 means:

(1 × 16) + (0 × 8) + (1 × 4) + (0 × 2) + 1

Which, in the more-familiar decimal notation, is 21.

The rules we learned in primary school for addition, subtraction, multiplication and division can be easily extended to binary.

The binary system has been implemented in electronic circuits on computers, mostly because the multiplication table for binary arithmetic is much simpler than its decimal counterpart.

Of course, computers can readily convert binary results to decimal notation for us humans.
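The positional rule itself fits in a few lines of code. Here is a small Python sketch (our illustration) converting a value to and from its digits in any base, using exactly the multiply-and-add recipe described above:

    def to_digits(n, base):
        """Digits of n in the given base, most significant first."""
        digits = []
        while n:
            n, d = divmod(n, base)
            digits.append(d)
        return digits[::-1] or [0]

    def from_digits(digits, base):
        """Value of a digit string: multiply by the base and add, place by place."""
        value = 0
        for d in digits:
            value = value * base + d
        return value

    print(to_digits(21, 2))               # [1, 0, 1, 0, 1] -- the binary 10101 above
    print(from_digits([7, 6, 5, 4], 10))  # 7654 = (7 x 1000) + (6 x 100) + (5 x 10) + 4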

As easy as counting from one to ten

Perhaps because we learn decimal arithmetic so early, we consider it “trivial”.

Indeed the discovery of decimal arithmetic is given disappointingly brief mention in most western histories of mathematics.

In reality, decimal arithmetic is anything but “trivial”, since it eluded the best minds of the ancient world, including Greek mathematical super-genius Archimedes of Syracuse.

Archimedes – who lived in the 3rd century BCE – saw far beyond the mathematics of his time, even anticipating numerous key ideas of modern calculus. He also used mathematics in engineering applications.

Nonetheless, he used a cumbersome Greek numeral system that hobbled his calculations.

Imagine trying to multiply the Roman numerals XXXI (31) and XIV (14).

First, one must rewrite the second argument as XIIII, then multiply the second by each letter of the first to obtain CXXXX CXXXX CXXXX XIIII.

These numerals can then be sorted by magnitude to arrive at CCCXXXXXXXXXXXXXIIII.

This can then be rewritten to yield CDXXXIV (434).

(For a bit of fun, try adding MCMLXXXIV and MMXI. First person to comment with the correct answer and their method gets a jelly bean.)

Thus, while possible, calculation with Roman numerals is significantly more time-consuming and error-prone than our decimal system (although it is harder to alter the amount payable on a Roman cheque).
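Today we would sidestep the whole procedure by converting to positional notation first. Here is a minimal Python sketch (ours, assuming standard subtractive notation such as IV and CM) that does the XXXI × XIV product the modern way:

    VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}
    PAIRS = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'), (100, 'C'),
             (90, 'XC'), (50, 'L'), (40, 'XL'), (10, 'X'), (9, 'IX'),
             (5, 'V'), (4, 'IV'), (1, 'I')]

    def roman_to_int(s):
        total = 0
        for a, b in zip(s, s[1:] + 'I'):   # a letter counts as negative before a bigger one
            total += VALUES[a] if VALUES[a] >= VALUES[b] else -VALUES[a]
        return total

    def int_to_roman(n):
        out = []
        for value, symbol in PAIRS:        # greedily peel off the largest chunks
            count, n = divmod(n, value)
            out.append(symbol * count)
        return ''.join(out)

    print(roman_to_int('XXXI') * roman_to_int('XIV'))   # 434
    print(int_to_roman(434))                            # CDXXXIV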

History lesson

Although decimal arithmetic was known in the Arab world by the 9th century, it took many centuries to make its way to Europe.

Italian mathematician Leonardo Fibonacci travelled the Mediterranean world in the 13th century, learning from the best Arab mathematicians of the time. Even then, it was several more centuries until decimal arithmetic was fully established in Europe.

Johannes Kepler and Isaac Newton – both giants in the world of physics – relied heavily on extensive decimal calculations (by hand) to devise their theories of planetary motion.

In a similar way, present-day scientists rely on massive computer calculations to test hypotheses and design products. Even our mobile phones do surprisingly sophisticated calculations to process voice and video.

But let us indulge in some alternate history of mathematics. What if decimal arithmetic had been discovered in India even earlier, say 300 BCE? (There are indications it was known by this date, just not well documented.)

And what if a cultural connection along the silk-road had been made between Indian mathematicians and Greek mathematicians at the time?

Such an exchange would have greatly enhanced both worlds, resulting in advances beyond the reach of each system on its own.

For example, a fusion of Indian arithmetic and Greek geometry might well have led to full-fledged trigonometry and calculus, thus enabling ancient astronomers to deduce the laws of motion and gravitation nearly two millennia before Newton.

In fact, the combination of mathematics, efficient arithmetic and physics might have accelerated the development of modern technology by more than two millennia.

It is clear from history that without mathematics, real progress in science and technology is not possible (try building a mobile phone without mathematics). But it’s also clear that mathematics alone is not sufficient.

The prodigious computational skills of ancient Indian mathematicians never flowered into advanced technology, nor did the great mathematical achievements of the Greeks, or many developments in China.

On the other hand, the Romans, who were not known for their mathematics, still managed to develop some impressive technology.

But a combination of advanced mathematics, computation, and technology makes a huge difference.

Our bodies and our brains today are virtually indistinguishable from those of ancient times.

With the earlier adoption of Indo-Arabic decimal arithmetic, the modern technological world of today might – for better or worse – have been achieved centuries ago.

And that’s something worth thinking about next time you’re out grocery shopping.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jonathan Borwein (Jon)*


Students’ Awareness of Their Cognitive Processes Facilitates The Learning of Math, Finds Study

The purpose of education is to ensure that students acquire the skills necessary for succeeding in a world that is constantly changing. Self-assessment, or teaching students how to examine and evaluate their own learning and cognitive processes, has proven to be an effective method, and this competence is partly based on metacognitive knowledge.

A new study conducted at the University of Eastern Finland shows that metacognitive knowledge, i.e., awareness of one’s cognitive processes, is also a key factor in the learning of mathematics. The work is published in the journal Cogent Education.

The study explored thinking skills and possible grade-level differences in children attending comprehensive school in Finland. The researchers investigated 6th, 7th and 9th graders’ metacognitive knowledge in the context of mathematics.

“The study showed that ninth graders excelled at explaining their use of learning strategies, while 7th graders demonstrated proficiency in understanding when and why certain strategies should be used. No other differences between grade levels were observed, which highlights the need for continuous support throughout the learning path,” says Susanna Toikka of the University of Eastern Finland, the first author of the article.

The findings emphasize the need to incorporate elements that support metacognitive knowledge into mathematics learning materials, as well as into teachers’ pedagogical practices.

Self-assessment and understanding of one’s own learning help to face new challenges

Metacognitive knowledge helps students not only to learn mathematics, but also more broadly in self-assessment and lifelong learning. Students who can assess their own learning and understanding are better equipped to face new challenges and adapt to changing environments. Such skills are crucial for lifelong learning, as they enable continuous development and learning throughout life.

“Metacognitive knowledge is a key factor in learning mathematics and problem-solving, but its significance also extends to self-assessment and lifelong learning,” says Toikka.

In schools, metacognitive knowledge can be effectively developed as part of education. Based on earlier studies, Toikka and colleagues have developed a combination of frameworks for metacognitive knowledge, which helps to identify students’ needs for development regarding metacognitive knowledge by offering an alternative perspective to that of traditional developmental psychology.

“This also supports teachers in promoting students’ metacognitive knowledge. Teachers can use the combination of frameworks to design and implement targeted interventions that support students’ skills in lifelong learning.”

According to Toikka, the combination of frameworks enhances understanding of metacognitive knowledge and helps to identify areas where individual support is needed: “This type of understanding is crucial for the development of metacognitive knowledge among diverse learners.”

For more insights like this, visit our website at www.international-maths-challenge.com.

*Credit for article given to University of Eastern Finland*


Millennium Prize: the Birch and Swinnerton-Dyer Conjecture

Elliptic curves have a long and distinguished history that can be traced back to antiquity. They are prevalent in many branches of modern mathematics, foremost of which is number theory.

In simplest terms, one can describe these curves by using a cubic equation of the form

y^2 = x^3 + Ax + B

where A and B are fixed rational numbers (to ensure the curve E is nice and smooth everywhere, one also needs to assume that its discriminant 4A^3 + 27B^2 is non-zero).

To illustrate, let’s consider an example: choosing A=-1 and B=0, we obtain the curve y^2 = x^3 - x.

At this point it becomes clear that, despite their name, elliptic curves have nothing whatsoever to do with ellipses! The reason for this historical confusion is that these curves have a strong connection to elliptic integrals, which arise when describing the motion of planetary bodies in space.

The ancient Greek mathematician Diophantus is considered by many to be the father of algebra. His major mathematical work was written up in the tome Arithmetica, which was essentially a school textbook for geniuses. Within it, he outlined many tools for studying solutions to polynomial equations with several variables, termed Diophantine Equations in his honour.

One of the main problems Diophantus considered was to find all solutions to a particular polynomial equation that lie in the field of rational numbers Q. For equations of “degree two” (circles, ellipses, parabolas, hyperbolas) we now have a complete answer to this problem. This answer is thanks to the late German mathematician Helmut Hasse, and allows one to find all such points, should they exist at all.

Returning to our elliptic curve E, the analogous problem is to find all the rational solutions (x,y) which satisfy the equation defining E. If we call this set of points E(Q), then we are asking if there exists an algorithm that allows us to obtain all points (x,y) belonging to E(Q).

At this juncture we need to introduce a group law on E, which gives an eccentric way of fusing together two points (p₁ and p₂) on the curve to obtain a brand new point (p₄). This mimics the addition law for numbers we learn from childhood (i.e. the sum or difference of any two numbers is still a number). The rule works as follows: the line through p₁ and p₂ meets the curve at a third point, p₃, and reflecting p₃ in the x-axis gives the new point p₄.

Under this geometric model, the point p₄ is defined to be the sum of p₁ and p₂ (it’s easy to see that the addition law does not depend on the order of the points p₁, p₂). Moreover the set of rational points is preserved by this notion of addition; in other words, the sum of two rational points is again a rational point.
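The chord-and-tangent rule is concrete enough to compute with. Here is a minimal Python sketch (our illustration, not from the article) of the addition law on y^2 = x^3 + Ax + B, using exact rational arithmetic, with None standing for the extra “point at infinity” that acts as the identity:

    from fractions import Fraction

    A, B = Fraction(-1), Fraction(0)   # the example curve y^2 = x^3 - x

    def add_points(P, Q):
        """Add two rational points on y^2 = x^3 + A*x + B (None = identity)."""
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and y1 == -y2:
            return None                       # vertical line: the sum is the identity
        if P == Q:
            s = (3 * x1 ** 2 + A) / (2 * y1)  # tangent slope
        else:
            s = (y2 - y1) / (x2 - x1)         # chord slope
        x3 = s ** 2 - x1 - x2                 # third intersection, then reflect
        return (x3, s * (x1 - x3) - y1)

    P, Q = (Fraction(0), Fraction(0)), (Fraction(-1), Fraction(0))
    print(add_points(P, Q))   # (1, 0): the sum of two rational points is rational
    print(add_points(P, P))   # None: P is a torsion point of order 2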

Louis Mordell, who was Sadleirian Professor of Pure Mathematics at Cambridge University from 1945 to 1953, was the first to determine the structure of this group of rational points. In 1922 he proved

E(Q) ≅ Z × Z × … × Z × ΤE(Q)

where the number of copies of the integers Z above is called the “rank r(E) of the elliptic curve E”. The finite group ΤE(Q) on the end is uninteresting, as it never has more than 16 elements.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Daniel Delbourgo*


Higher Polygonal Numbers and Pascal’s Triangle

The third diagonal column in Pascal’s Triangle (r = 2 in the usual way of labeling and numbering) consists of the triangular numbers (1, 3, 6, 10, …) – numbers that can be arranged in 2-dimensional triangular patterns. The fourth column of Pascal’s triangle gives us triangular-based pyramidal numbers (1, 4, 10, 20, …), built by stacking the triangular numbers. The columns further out give “higher dimensional” triangular numbers that arise from stacking the triangular numbers from the previous dimension.

It is not by coincidence that the triangular and higher-dimensional triangular numbers appear in Pascal’s Triangle. If you think about layering of polygonal numbers in terms of equations, you get

p^d_(k,n) = p^(d-1)_(k,1) + p^(d-1)_(k,2) + … + p^(d-1)_(k,n)

In the above equation p^d_(k,n) is the nth k-polygonal number of dimension d. Triangular numbers are the 3-polygonal numbers of dimension 2, square numbers are the 4-polygonal numbers of dimension 2, “square based pyramidal numbers” would be denoted as p^3_(4,n).
From the sum above, you can obtain this equation:

p^d_(k,n) = p^d_(k,n-1) + p^(d-1)_(k,n)

This looks very much like the Pascal Identity C(n,r) = C(n-1,r-1) + C(n-1,r), except for some translation of the variables. To be precise, if we consider the case where k=3 and use r = d and n' = n+d-1, we can translate the triangular numbers into the appropriate positions in Pascal’s Triangle.

Along with the definitions for the end columns, the Pascal Identity allows us to generate the whole triangle. This suggests the following strategy for calculating the higher k-Polygonal numbers: create a modified Pascal’s Triangle whose first column is equal to k-2 (instead of 1), and whose last column is equal to 1 (as usual). This modified Pascal’s Triangle is generated using these initial values and the usual Pascal Identity.

Here is an example with k=5, which sets the first column values equal to 3 (except for the top value, which we keep as 1) and yields the pentagonal numbers (column 3) and the higher pentagonal numbers.

The formula for the entries of these modified Pascal’s Triangles is given by this equation:

entry(n,r) = C(n,r) + (k-3)·C(n-1,r)

If we apply the change of variables mentioned above, we can obtain this general formula for the higher polygonal numbers in terms of combinations:

p^d_(k,n) = C(n+d-1, d) + (k-3)·C(n+d-2, d)

This formula illustrates how polygonal numbers are built out of triangular numbers. It says that the nth d-dimensional k-polygonal number is equal to the nth d-dimensional triangular number, plus (k-3) copies of the (n-1)th d-dimensional triangular number. This is a little easier to understand when you forget about the higher dimensions and look at the regular 2-dimensional polygonal numbers.
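As a quick sanity check of this formula, here is a small Python sketch (ours) that evaluates it with math.comb and recovers some familiar sequences:

    from math import comb

    def polygonal(k, d, n):
        """The nth d-dimensional k-polygonal number: one d-dimensional triangular
        number plus (k-3) copies of the previous one."""
        return comb(n + d - 1, d) + (k - 3) * comb(n + d - 2, d)

    print([polygonal(3, 2, n) for n in range(1, 6)])  # [1, 3, 6, 10, 15]  triangular
    print([polygonal(5, 2, n) for n in range(1, 6)])  # [1, 5, 12, 22, 35]  pentagonal
    print([polygonal(4, 3, n) for n in range(1, 6)])  # [1, 5, 14, 30, 55]  square pyramidal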

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*

 


Mathematicians Are Bitterly Divided Over A Controversial Proof

An attempt to settle a decade-long argument over a controversial proof by mathematician Shinichi Mochizuki has seen a war of words on both sides, with Mochizuki dubbing the latest effort akin to a “hallucination” produced by ChatGPT.

An attempt to fix problems with a controversial mathematical proof has itself become mired in controversy, in the latest twist in a saga that has been running for over a decade and has seen mathematicians trading unusually pointed barbs.

The story began in 2012, when Shinichi Mochizuki at Kyoto University, Japan, published a 500-page proof of a problem called the ABC conjecture. The conjecture concerns prime numbers involved in solutions to the equation a + b = c, and despite its seemingly simple form, it provides deep insights into the nature of numbers. Mochizuki published a series of papers claiming to have proved ABC using new mathematical tools he collectively called Inter-universal Teichmüller (IUT) theory, but many mathematicians found the initial proof baffling and incomprehensible.

While a small number of mathematicians have since accepted that Mochizuki’s papers prove the conjecture, other researchers say there are holes in his argument and it needs further work, dividing the mathematical community in two and prompting a prize of up to $1 million for a resolution to the quandary.

Now, Kirti Joshi at the University of Arizona has published a proposed proof that he says fixes the problems with IUT and proves the ABC conjecture. But Mochizuki and his supporters, as well as mathematicians who critiqued Mochizuki’s original papers, remain unconvinced, with Mochizuki declaring that Joshi’s proposal doesn’t contain “any meaningful mathematical content whatsoever”.

Central to Joshi’s work is an apparent problem, previously identified by Peter Scholze at the University of Bonn, Germany, and Jakob Stix at Goethe University Frankfurt, Germany, with a part of Mochizuki’s proof called Conjecture 3.12. The conjecture involves comparing two mathematical objects, which Scholze and Stix say Mochizuki did incorrectly. Joshi claims to have found a more satisfactory way to make the comparison.

Joshi also says that his theory goes beyond Mochizuki’s and establishes a “new and radical way of thinking about arithmetic of number fields”. The paper, which hasn’t been peer-reviewed, is the culmination of several smaller papers on ABC that Joshi has published over several years and which he describes as a “Rosetta Stone” for understanding Mochizuki’s impenetrable maths.

Neither Joshi nor Mochizuki responded to a request for comment on this article, and, indeed, the two seem reluctant to communicate directly with each other. In his paper, Joshi says Mochizuki hasn’t responded to his emails, calling the situation “truly unfortunate”. And yet, several days after the paper was posted online, Mochizuki published a 10-page response, saying that Joshi’s work was “mathematically meaningless” and that it reminded him of “hallucinations produced by artificial intelligence algorithms, such as ChatGPT”.

Mathematicians who support Mochizuki’s original proof express a similar sentiment. “There is nothing to talk about, since his [Joshi’s] proof is totally flawed,” says Ivan Fesenko at Westlake University in China. “He has no expertise in IUT whatsoever. No experts in IUT, and the number is in two digits, takes his preprints seriously,” he says. “It won’t pass peer review.”

And Mochizuki’s critics also disagree with Joshi. “Unfortunately, this paper and its predecessors does not introduce any powerful mathematical technology, and falls far short of giving a proof of ABC,” says Scholze, who has emailed Joshi to discuss the work further. For now, the saga continues.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Alex Wilkins*