Factor Lattices

The objects pictured above are interesting structures – they are derived from the prime factorization of a given number n. They can be described in a number of ways – for example, as directed graphs. Because they are nicely structured, they actually form something more special – a lattice. Accordingly, these structures are called factor lattices.
It’s easy to start drawing these by hand following the instructions below.

1. The first node is 1
2. Draw arrows out of this node for each of the prime factors of n.
3. The arrows that you just drew should connect to nodes labelled with the prime factors of n.

Now, for each of the new nodes that you drew do the following:

4. Start from a node x that is not equal to n.
5. Draw arrows out of this node for each of the prime factors of n/x.
6. The arrows that you just drew (one for each prime factor p of n/x) should connect to nodes labelled with the numbers p*x.

7. Now repeat 4,5, and 6 for each new node that you have drawn that is not equal to n.

This process is recursive, and ends when you have the complete lattice. The process is well suited for implementation as a computer program – the images above were created in SAGE from the output of a Java program based on the algorithm above.
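The steps above can be sketched in a few lines of Python. This is my own minimal sketch of the algorithm, not the original Java program:

```python
def prime_factors(n):
    """Return the set of distinct prime factors of n by trial division."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def factor_lattice(n):
    """Build the factor lattice of n as (nodes, edges).

    Each edge (x, p * x) corresponds to an arrow labelled with the prime p.
    """
    nodes, edges = {1}, set()
    frontier = [1]  # steps 1-3: start from the node 1
    while frontier:
        x = frontier.pop()
        if x == n:
            continue  # step 7: stop expanding once we reach n
        for p in prime_factors(n // x):  # steps 4-6
            y = p * x
            edges.add((x, y))
            if y not in nodes:
                nodes.add(y)
                frontier.append(y)
    return nodes, edges

nodes, edges = factor_lattice(24)
# nodes are exactly the divisors of 24: {1, 2, 3, 4, 6, 8, 12, 24}
```

Running it for n = 24 reproduces the hand-drawn example worked through below.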

Manually trying out the steps for a number like n = 24 goes something like this: first write out the prime factorization of 24, 24 = (2*2*2)*3 = (2^3)*3. Starting with 1, draw arrows out to 2 and 3. Now looking at each node and following the algorithm, from the 2 you will get arrows out to 4 and 6. From the 3 you will get an arrow out to 6 as well. From 4 you will get arrows out to 8 and 12. From 6 you will get an arrow out to 12 as well. From 8 and from 12 you get arrows out to 24, and you are done.

In general, the algorithm produces a lattice that can be described as follows. Each node is a factor of the given number n. Two nodes are connected by an edge if their prime factorizations differ by a single prime factor. In other words, if a and b are nodes and p = b/a is prime, then there is an arrow p: a –> b.

It’s a good exercise to make the connections between the lattice structure and the prime factorization of a number n.

1. What does the factor lattice of a prime number look like?
2. If a number is just a power of a prime, what does its lattice look like?
3. If you know the factorization, can you find the number of nodes without drawing the lattice?

The answer to the last question (3) can be expressed as: if n = (p1^e1)(p2^e2)…(pk^ek), then the number of nodes is (e1 + 1)(e2 + 1)…(ek + 1).

For example, if n = 24 = (2^3)*3, then the number of nodes will be (3+1)(1+1) = 8.
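A minimal sketch of that count in Python (the helper name is mine), multiplying together (exponent + 1) for each prime in the factorization:

```python
def node_count(n):
    """Number of nodes in the factor lattice of n: product of (exponent + 1)."""
    count, d = 1, 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            e += 1
            n //= d
        count *= e + 1
        d += 1
    if n > 1:
        count *= 2  # one leftover prime with exponent 1
    return count

print(node_count(24))  # (3+1) * (1+1), prints 8
```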

That these structures can be thought of as “lattices” comes from the fact that you can think of the arrows as defining an ordering on the nodes: a ≤ b whenever a divides b. The number 1 is always the least node in the factor lattice for n, while n itself is the greatest node. The property that actually makes these structures a “lattice” is that any two nodes always have a greatest lower bound and a least upper bound (these are often referred to as meets and joins).
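For factor lattices the meet and join have a concrete arithmetic meaning: the meet of two divisors is their greatest common divisor, and the join is their least common multiple. A small Python sketch (the function names are my own):

```python
from math import gcd

def meet(a, b):
    """Greatest lower bound of two nodes in a factor lattice: the gcd."""
    return gcd(a, b)

def join(a, b):
    """Least upper bound of two nodes: the lcm."""
    return a * b // gcd(a, b)

# In the lattice for n = 24, the nodes 8 and 12 meet at 4 and join at 24.
print(meet(8, 12), join(8, 12))  # prints "4 24"
```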

The Wolfram Demonstrations Project has a nice factor lattice demo that will draw factor lattices for a large number of integers for you. There is also a good Wikipedia entry for lattices in general.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*

 


Magic numbers: the beauty of decimal notation

While adding up your grocery bill in the supermarket, you’re probably not thinking how important or sophisticated our number system is.

But the discovery of the present system, by unknown mathematicians in India roughly 2,000 years ago – and shared with Europe from the 13th century onwards – was pivotal to the development of our modern world.

Now, what if our “decimal” arithmetic, often called the Indo-Arabic system, had been discovered earlier? Or what if it had been shared with the Western world earlier than the 13th century?

First, let’s define “decimal” arithmetic: we’re talking about the combination of zero, the digits one through nine, positional notation, and efficient rules for arithmetic.

“Positional notation” means that the value represented by a digit depends both on the digit itself and on its position in a string of digits.

Thus 7,654 means:

(7 × 1000) + (6 × 100) + (5 × 10) + 4 = 7,654

The benefit of this positional notation system is that we need no new symbols or calculation schemes for tens, hundreds or thousands, as was needed when manipulating Roman numerals.

While numerals for the counting numbers one, two and three were seen in all ancient civilisations – and some form of zero appeared in two or three of those civilisations (including India) – the crucial combination of zero and positional notation arose only in India and Central America.

Importantly, only the Indian system was suitable for efficient calculation.

Positional arithmetic can be in base-ten (or decimal) for humans, or in base-two (binary) for computers.

In binary, 10101 means:

(1 × 16) + (0 × 8) + (1 × 4) + (0 × 2) + 1

Which, in the more-familiar decimal notation, is 21.

The rules we learned in primary school for addition, subtraction, multiplication and division can be easily extended to binary.
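The positional rule is the same in every base, which a short Python sketch makes concrete (the helper is my own illustration):

```python
def positional_value(digits, base):
    """Evaluate a list of digits under positional notation in the given base."""
    value = 0
    for d in digits:
        value = value * base + d  # shift everything left one position, add the digit
    return value

print(positional_value([7, 6, 5, 4], 10))    # prints 7654
print(positional_value([1, 0, 1, 0, 1], 2))  # prints 21
```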

The binary system has been implemented in electronic circuits on computers, mostly because the multiplication table for binary arithmetic is much simpler than that of the decimal system.

Of course, computers can readily convert binary results to decimal notation for us humans.

As easy as counting from one to ten

Perhaps because we learn decimal arithmetic so early, we consider it “trivial”.

Indeed, the discovery of decimal arithmetic is given disappointingly brief mention in most western histories of mathematics.

In reality, decimal arithmetic is anything but “trivial” since it eluded the best minds of the ancient world including Greek mathematical super-genius Archimedes of Syracuse.

Archimedes – who lived in the 3rd century BCE – saw far beyond the mathematics of his time, even anticipating numerous key ideas of modern calculus. He also used mathematics in engineering applications.

Nonetheless, he used a cumbersome Greek numeral system that hobbled his calculations.

Imagine trying to multiply the Roman numerals XXXI (31) and XIV (14).

First, one must rewrite the second argument as XIIII, then multiply the second by each letter of the first to obtain CXXXX CXXXX CXXXX XIIII.

These numerals can then be sorted by magnitude to arrive at CCCXXXXXXXXXXXXXIIII.

This can then be rewritten to yield CDXXXIV (434).
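The bookkeeping above is easy to check mechanically. Here is a hedged Python sketch of a Roman numeral parser, using the standard subtractive rule (a smaller numeral before a larger one is subtracted, as in IV or CD):

```python
def roman_to_int(s):
    """Convert a Roman numeral string to an integer."""
    values = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}
    total = 0
    for i, ch in enumerate(s):
        v = values[ch]
        # Subtractive rule: a smaller value before a larger one is subtracted.
        if i + 1 < len(s) and values[s[i + 1]] > v:
            total -= v
        else:
            total += v
    return total

print(roman_to_int('XXXI') * roman_to_int('XIV'))  # prints 434
print(roman_to_int('CDXXXIV'))                     # prints 434
```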

(For a bit of fun, try adding MCMLXXXIV and MMXI. First person to comment with the correct answer and their method gets a jelly bean.)

Thus, while possible, calculation with Roman numerals is significantly more time-consuming and error-prone than our decimal system (although it is harder to alter the amount payable on a Roman cheque).

History lesson

Although decimal arithmetic was known in the Arab world by the 9th century, it took many centuries to make its way to Europe.

Italian mathematician Leonardo Fibonacci travelled the Mediterranean world in the 13th century, learning from the best Arab mathematicians of the time. Even then, it was several more centuries until decimal arithmetic was fully established in Europe.

Johannes Kepler and Isaac Newton – both giants in the world of physics – relied heavily on extensive decimal calculations (by hand) to devise their theories of planetary motion.

In a similar way, present-day scientists rely on massive computer calculations to test hypotheses and design products. Even our mobile phones do surprisingly sophisticated calculations to process voice and video.

But let us indulge in some alternate history of mathematics. What if decimal arithmetic had been discovered in India even earlier, say 300 BCE? (There are indications it was known by this date, just not well documented.)

And what if a cultural connection along the silk-road had been made between Indian mathematicians and Greek mathematicians at the time?

Such an exchange would have greatly enhanced both worlds, resulting in advances beyond the reach of each system on its own.

For example, a fusion of Indian arithmetic and Greek geometry might well have led to full-fledged trigonometry and calculus, thus enabling ancient astronomers to deduce the laws of motion and gravitation nearly two millennia before Newton.

In fact, the combination of mathematics, efficient arithmetic and physics might have accelerated the development of modern technology by more than two millennia.

It is clear from history that without mathematics, real progress in science and technology is not possible (try building a mobile phone without mathematics). But it’s also clear that mathematics alone is not sufficient.

The prodigious computational skills of ancient Indian mathematicians never flowered into advanced technology, nor did the great mathematical achievements of the Greeks, or many developments in China.

On the other hand, the Romans, who were not known for their mathematics, still managed to develop some impressive technology.

But a combination of advanced mathematics, computation, and technology makes a huge difference.

Our bodies and our brains today are virtually indistinguishable from those of ancient times.

With the earlier adoption of Indo-Arabic decimal arithmetic, the modern technological world of today might – for better or worse – have been achieved centuries ago.

And that’s something worth thinking about next time you’re out grocery shopping.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Jonathan Borwein (Jon), University of Newcastle and David H. Bailey, University of California, Davis



Getting Projections Right: Predicting Future Climate

Region by region projections of how climate is likely to change over the coming decades help to make the prospect of global warming more tangible and relevant.

Picturing the climate we are likely to have with unabated increases in greenhouse gas concentrations in, say, Melbourne, Sydney, or the Murray Darling, lets us weigh up the costs and benefits of actions to reduce greenhouse gas emissions.

Regional projections also let us plan how to adapt to any unavoidable changes in our climate. Planning changes to farming practices, water supply or natural ecosystem management, for example, requires some idea of what our future regional climate is likely to be.

Here in Australia we have had a long history of national climate change projections. Since 1990, CSIRO has released five updates of projected changes in temperature, rainfall, extreme events and many other key aspects of our climate system.

CSIRO’s last release was done with the Bureau of Meteorology in 2007. It provided the most detailed product available up to that time.

This release included the innovation (a world first amongst national projections at the time) of providing probabilities for the projected changes.

Why modelling?

The complexity of the climate system means that we cannot simply extrapolate past trends to forecast future conditions. Instead, we use climate models developed and utilised extensively over recent decades.

These are mathematical representations of the climate systems based on the laws of physics.

Results from all of the climate modelling centres around the world are considered in preparing Australian projections. We place greatest weight on the models that are best at representing our historical climate.

Global climate modelling has continued to develop over recent years. Most of the modelling centres are now running improved versions of their models compared to what was available in 2007.

As part of an international coordinated effort, a new database of the latest climate model output is being assembled for researchers to use ahead of the next report of the Intergovernmental Panel on Climate Change (IPCC). It is many times richer than any previously available.

Analysing this massive resource will be a focus of research for a large number of scientists at CSIRO, BoM and the universities over the next few years.

Putting the models to good use

While the science has been developing, so have the demands of users of this projection information. Policymakers at all levels of government, natural resource planners, industry, non-government organisations and individuals all are placing demands on climate projection science. These are growing in volume and complexity.

For example, researchers want regionally specific scenarios for changes in the frequency of hot days, extreme rainfall, fire, drought, cyclones, hail, evaporation, sunshine, coral bleaching temperatures, ocean acidification and sea level rise.

This type of information is particularly useful for risk assessments that can inform policy development and implementation.

For example, assessing future climate risks to infrastructure can place quite different demands on climate projection science compared to, say, assessing risks to agricultural enterprises.

Given these developments, the time is coming for the Australian climate research community to update and expand their projections. Planning has begun for a release in 2014. This will be just after the completion of the next IPCC assessment.

At that time, Australians will have the latest climate projections for the 21st century for a range of factors, including sea levels, seasonal-average temperatures and rainfall, as well as extreme weather events.

Resources permitting, these new projections will also include online services which will enable users to generate climate scenarios to suit the specific needs of many risk assessments.

Finding out more about summer rainfall

As climate scientists start to analyse these new model data, a major focus of attention will be simulated changes to summer rainfall over Australia.

Models have consistently indicated a drying trend for the winter rainfall regions in southern Australia, a result that also aligns with other evidence, such as observed trends.

On the other hand, models give inconsistent projections for summer rainfall change, ranging from large increase to large decrease. Researchers will be hoping to reduce this key uncertainty as they begin to analyse the results.

However, when it comes to projecting our future climate, there will always be some uncertainty to deal with.

Dealing with uncertainty

Climate projection scientists have to clearly convey the uncertainties while not letting these overwhelm the robust findings about regional climate change that the science provides.

Climate projection uncertainties can be presented in many different ways, such as through ranges of plausible change, as probabilistic estimates, or as alternative scenarios.

We shouldn’t necessarily be most interested in the most likely future. In some cases, it may be more prudent to plan for less likely, but higher risk, future climates.

It can be difficult to make a complex message as relevant as possible to a wide range of decision-makers. CSIRO climate scientists are tackling this by working with social scientists to help develop new and more effective communication methods. These should be ready in time for the next projections release.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Penny Whetton*


Octonions: The Strange Maths That Could Unite The Laws Of Nature

Could a system of eight-dimensional numbers help physicists find a single mathematical framework that describes the entire universe?

Words can be slippery. That is perhaps even more true in physics than it is in the rest of life. Think of a “particle”, for instance, and we might conjure an image of a tiny sphere. In truth, “particle” is just a poetic term for something far removed from our everyday experience – which is why our best descriptions of reality make use of the cold precision of mathematics.

But just as there are many human languages, so there is more than one type of number system. Most of us deal with only the familiar number line that begins 1, 2, 3. But other, more exotic systems are available. Recently, physicists have been asking a profound question: what if we are trying to describe reality with the wrong type of numbers?

Each mathematical system has its own special disposition, just like languages. Love poems sound better in French. German has that knack of expressing sophisticated concepts – like schadenfreude – in a few syllables. Now, in the wake of a fresh breakthrough revealing tantalising connections between models of how matter works at different energy scales, it seems increasingly likely that an exotic set of numbers known as the octonions might have what it takes to capture the truth about reality.

Mathematicians are excited because they reckon that by translating our theories of reality into the language of the octonions, it could tidy up some of the deepest problems in physics and clear a path to a “grand unified theory” that can describe the universe in one statement. “This feels like a very promising direction,” says Latham Boyle at the Perimeter Institute in Waterloo, Canada. “I find it irresistible to think about.”

Many physicists dream of finding a grand unified theory, a single mathematical framework that tells us where the forces of nature come from and how they act on matter. Critically, such a theory would also capture how and why these properties changed over the life of the universe, as we know they have.

So far, the closest we have come is the standard model of particle physics, which details the universe’s fundamental particles and forces: electrons, quarks, photons and the rest. The trouble is, the standard model has its shortcomings. To make it work, we must feed in around 20 measured numbers, such as the masses of particles. We don’t know why these numbers are what they are. Worse, the standard model has little to say about space-time, the canvas in which particles live. We seem to live in a four-dimensional space-time, but the standard model doesn’t specify that this must be so. “Why not, say, seven-dimensional space-time?” Boyle wonders.

Real and imaginary numbers

Many think the solution to these woes will come when experiments uncover a missing piece of the standard model. But after years of effort, this hasn’t happened, and some are wondering if the problem is the maths itself.

Mathematicians have known for centuries that there are numbers other than the ones we can count on our fingers. Take the square root of -1, known as i. No real number squared gives -1, since both 1 × 1 and -1 × -1 are equal to 1, so i is an “imaginary number”. Mathematicians found that by combining i with real numbers – which include all the numbers you could place on a number line, including negative numbers and decimals – they could fashion a new system called the complex numbers.

Think of complex numbers as being two-dimensional; the two parts of each number can record unrelated properties of the same object. This turns out to be extremely handy. All our electronic infrastructure relies on complex numbers. And quantum theory, our hugely successful description of the small-scale world, doesn’t work without them.

In 1843, Irish mathematician William Rowan Hamilton took things a step further. Supplementing the real and the imaginary numbers with two more sets of imaginary numbers called j and k, he gave us the quaternions, a set of four-dimensional numbers. Within a few months, Hamilton’s friend John Graves had found another system with eight dimensions called the octonions.

Real numbers, complex numbers, quaternions and octonions are collectively known as the normed division algebras. They are the only sets of numbers with which you can perform addition, subtraction, multiplication and division. Wilder systems are possible – the 16-dimensional sedenions, for example – but here the normal rules break down.
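The ladder from the reals up to the octonions can be sketched with the Cayley-Dickson construction, which doubles each system in turn: a pair of reals makes a complex number, a pair of complex numbers a quaternion, and a pair of quaternions an octonion. The Python sketch below is my own (using one of several equivalent sign conventions); it shows quaternion multiplication failing to commute and octonion multiplication failing to associate:

```python
def cd_add(x, y):
    """Add two Cayley-Dickson numbers (nested pairs with numbers at the leaves)."""
    if isinstance(x, tuple):
        return (cd_add(x[0], y[0]), cd_add(x[1], y[1]))
    return x + y

def cd_neg(x):
    if isinstance(x, tuple):
        return (cd_neg(x[0]), cd_neg(x[1]))
    return -x

def cd_conj(x):
    """Conjugation: negate the 'imaginary' half at every level."""
    if isinstance(x, tuple):
        return (cd_conj(x[0]), cd_neg(x[1]))
    return x

def cd_mul(x, y):
    """Cayley-Dickson product: (a, b)(c, d) = (ac - d*b, da + bc*)."""
    if not isinstance(x, tuple):
        return x * y
    (a, b), (c, d) = x, y
    return (cd_add(cd_mul(a, c), cd_neg(cd_mul(cd_conj(d), b))),
            cd_add(cd_mul(d, a), cd_mul(b, cd_conj(c))))

def basis(level, k):
    """The k-th basis element of the 2**level-dimensional algebra."""
    coords = [1 if n == k else 0 for n in range(2 ** level)]
    def nest(v):
        return v[0] if len(v) == 1 else (nest(v[:len(v) // 2]), nest(v[len(v) // 2:]))
    return nest(coords)

# Quaternions (level 2): i * j = k but j * i = -k, so multiplication
# does not commute -- although it is still associative.
i, j, k = basis(2, 1), basis(2, 2), basis(2, 3)
assert cd_mul(i, j) == k and cd_mul(j, i) == cd_neg(k)

# Octonions (level 3): even associativity is lost.
e1, e2, e4 = basis(3, 1), basis(3, 2), basis(3, 4)
assert cd_mul(cd_mul(e1, e2), e4) != cd_mul(e1, cd_mul(e2, e4))
```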

Today, physics makes prolific use of three of these systems. The real numbers are ubiquitous. Complex numbers are essential in particle physics as well as quantum physics. The mathematical structure of general relativity, Albert Einstein’s theory of gravity, can be expressed elegantly by the quaternions.

The octonions stand oddly apart as the only system not to tie in with a central physical law. But why would nature map onto only three of these four number systems? “This makes one suspect that the octonions – the grandest and least understood of the four – should turn out to be important too,” says Boyle.

In truth, physicists have been thinking such thoughts since the 1970s, but the octonions have yet to fulfil their promise. Michael Duff at Imperial College London was, and still is, drawn to the octonions, but he knows many have tried and failed to decipher their role in describing reality. “The octonions became known as the graveyard of theoretical physics,” he says.

That hasn’t put off a new generation of octonion wranglers, including Nichol Furey at Humboldt University of Berlin. She likes to look at questions in physics without making any assumptions. “I try to solve problems right from scratch,” she says. “In doing so, you can often find alternate paths that earlier authors may have missed.” Now, it seems she and others might be making the beginnings of an octonion breakthrough.

Internal symmetries in quantum mechanics

To get to grips with Furey’s work, it helps to understand a concept in physics called internal symmetry. This isn’t the same as the rotational or reflectional symmetry of a snowflake. Instead, it refers to a number of more abstract properties, such as the character of certain forces and the relationships between fundamental particles. All these particles are defined by a series of quantum numbers – their mass, charge and a quantum property called spin, for instance. If a particle transforms into another particle – an electron becoming a neutrino, say – some of those numbers will change while others won’t. These symmetries define the structure of the standard model.

Internal symmetries are central to the quest for a grand unified theory. Physicists have already found various mathematical models that might explain how reality worked back at the time when the universe had much more energy. At these higher energies, it is thought there would have been more symmetries, meaning that some forces we now experience as distinct would have been one and the same. None of these models have managed to rope gravity into the fold: that would require an even grander “theory of everything”. But they do show, for instance, that the electromagnetic force and weak nuclear force would have been one “electroweak” force until a fraction of a second after the big bang. As the universe cooled, some of the symmetries broke, meaning this particular model would no longer apply.

Each different epoch requires a different mathematical model with a gradually reducing number of symmetries. In a sense, these models all contain each other, like a set of Russian dolls.

One of the most popular candidates for the outermost doll – the grand unified theory that contains all the others – is known as the Spin(10) model. It has a whopping 45 symmetries. In one formulation, inside this sits the Pati-Salam model, with 21 symmetries. Then comes the left-right symmetric model, with 15 symmetries, including one known as parity, the kind of left-right symmetry that we encounter when we look in a mirror. Finally, we reach the standard model, with 12 symmetries. The reason we study each of these models is that they work; their symmetries are consistent with experimental evidence. But we have never understood what determines which symmetries fall away at each stage.

In August 2022, Furey, together with Mia Hughes at Imperial College London, showed for the first time that the division algebras, including the octonions, could provide this link. To do so, they drew on ideas Furey had years ago to translate all the mathematical symmetries and particle descriptions of various models into the language of division algebras. “It took a long time,” says Furey. The task required using the Dixon algebra, a set of numbers that allow you to combine real, complex, quaternion and octonion maths. The result was a system that describes a set of octonions specified by quaternions, which are in turn specified by complex numbers that are specified by a set of real numbers. “It’s a fairly crazy beast,” says Hughes.

It is a powerful beast, too. The new formulation exposed an intriguing characteristic of the Russian doll layers. When some numbers involved in the complex, quaternion and octonion formulations are swapped from positive to negative, or vice versa, some of the symmetries change and some don’t. Only the ones that don’t are found in the next layer down. “It allowed us to see connections between these well-studied particle models that had not been picked up on before,” says Furey. This “division algebraic reflection”, as Furey calls it, could be dictating what we encounter in the real physical universe, and – perhaps – showing us the symmetry-breaking road up to the long-sought grand unified theory.

The result is new, and Furey and Hughes haven’t yet been able to see where it may lead. “It hints that there might be some physical symmetry-breaking process that somehow depends upon these division algebraic reflections, but so far the nature of that process is fairly mysterious,” says Hughes.

Furey says the result might have implications for experiments. “We are currently investigating whether the division algebras are telling us what can and cannot be directly measured at different energy scales,” she says. It is a work in progress, but analysis of the reflections seems to suggest that there are certain sets of measurements that physicists should be able to make on particles at low energies – such as the measurement of an electron’s spin – and certain things that won’t be measurable, such as the colour charge of quarks.

Among those who work on octonions, the research is making waves. Duff says that trying to fit the standard model into octonionic language is a relatively new approach: “If it paid off, it would be very significant, so it’s worth trying.” Corinne Manogue at Oregon State University has worked with octonions for decades and has seen interest ebb and flow. “This moment does seem to be a relative high,” she says, “primarily, I think, because of Furey’s strong reputation and advocacy.”

The insights from the octonions don’t stop there. Boyle has been toying with another bit of exotic maths called the “exceptional Jordan algebra”, which was invented by German physicist Pascual Jordan in the 1930s. Working with two other luminaries of quantum theory, Eugene Wigner and John von Neumann, Jordan found a set of mathematical properties of quantum theory that resisted classification and were closely related to the octonions.

Probe this exceptional Jordan algebra deeply enough and you will find it contains the mathematical structure that we use to describe Einstein’s four-dimensional space-time. What’s more, we have known for decades that within the exceptional Jordan algebra, you will find a peculiar mathematical structure that physicists derived through an entirely separate route in the early 1970s to describe the standard model’s particles and forces. In other words, this is an octonionic link between our theories of space, time, gravity and quantum theory. “I think this is a very striking, intriguing and suggestive observation,” says Boyle.

Responding to this, Boyle has dug deeper and discovered something intriguing about the way a class of particles called fermions, which includes common particles like electrons and quarks, fits into the octonion-based language. Fermions are “chiral”, meaning their mirror-image reflections – the symmetry physicists call parity – look different. This had created a problem when incorporating fermions into the octonion-based versions of the standard model. But Boyle has now found a way to fix that – and it has a fascinating spin-off. Restoring the mirror symmetry that is broken in the standard model also enables octonionic fermions to sit comfortably in the left-right symmetric model, one level further up towards the grand unified theory.

Beyond the big bang

This line of thinking might even take us beyond the grand unified theory, towards an explanation of where the universe came from. Boyle has been working with Neil Turok, his colleague at the Perimeter Institute, on what they call a “two-sheeted universe” that involves a set of symmetries known as charge, parity and time (CPT). “In this hypothesis, the big bang is a kind of mirror separating our half of the universe from its CPT mirror image on the other side of the bang,” says Boyle. The octonionic properties of fermions that sit in the left-right symmetric model are relevant in developing a coherent theory for this universe, it turns out. “I suspect that combining the octonionic picture with the two-sheeted picture of the cosmos is a further step in the direction of finding the right mathematical framework for describing nature,” says Boyle.

As with all the discoveries linking the octonions to our theories of physics so far, Boyle’s work is only suggestive. No one has yet created a fully fledged theory of physics based on octonions that makes new predictions we can test by using particle colliders, say. “There’s still nothing concrete yet: there’s nothing we can tell the experimentalists to go and look for,” says Duff. Furey agrees: “It is important to say that we are nowhere near being finished.”

But Boyle, Furey, Hughes and many others are increasingly absorbed by the possibility that this strange maths really could be our best route to understanding where the laws of nature come from. In fact, Boyle thinks that the octonion-based approach could be just as fruitful as doing new experiments to find new particles. “Most people are imagining that the next bit of progress will be from some new pieces being dropped onto the table,” he says. “That would be great, but maybe we have not yet finished the process of fitting the current pieces together.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Michael Brooks*


How linguists are unlocking the meanings of Shakespeare’s words using numbers

Today it would seem odd to describe a flower with the word “bastard”—why apply a term of personal abuse to a flower? But in Shakespeare’s time, “bastard” was a technical term describing certain plants.

Similarly, associating the word “bad” with success and talking of a “bad success” would be decidedly odd today. But it was not unusual then, when success meant outcome, which could be good or bad.

Corpus linguistics is a branch of linguistics which uses computers to explore the use of words in huge collections of language. It can spot nuances that might be overlooked by linguists working manually, or large patterns that a lifetime of study might not reveal. And numbers are key: counts of words, and records of where those words occur.

In my experience at conferences and the like, talk of numbers is not unanimously well received in the world of literary studies. Numbers are sometimes perceived as being reductive, or inappropriate when discussing creative works, or only accessible to specialists.

Yet, describing any pattern involves numbers. In the opening paragraphs above, I used the words “odd” and “unusual” as soft ways of describing frequencies—the numbers of occurrences (think also of, for example, “unique,” “rare,” “common”).

Even talking about “associations” involves numbers. Associations often arise from an unusually high number of encounters between two or more things. And numbers help us to see things.

Changing meanings

Along with my team at Lancaster University, I have used computers to examine some 20,000 words gleaned from a million-word corpus (a collection of written texts) of Shakespeare’s plays, resulting in a new kind of dictionary.

People have created Shakespeare dictionaries before, but this is the first to use the full armory of corpus techniques and the first to be comparative. It not only looks at words inside Shakespeare’s plays, but also compares them with a matching million-word corpus of contemporary early modern plays, along with a huge corpus of 320 million words of various writings of the period.

Of course, words in early modern England had lives outside Shakespeare. “Bastard” was generally a term for a hybrid plant, occurring in technical texts on horticulture.

It could be, and very occasionally was, used for personal abuse, as in King Lear, where Edmund is referred to as a “bastard.” But this is no general term of abuse, let alone banter, as you might see it used today. It is a pointed attack on him being of illegitimate parentage, genetically hybrid, suspect at his core.

The word “bad” is not now associated with the word “success,” yet 400 years ago it was, as were other negative words, including “disastrous,” “unfortunate,” “ill,” “unhappy” and “unlucky.”

We can tap into a word’s associations by examining its collocates, that is, words with which it tends to occur (rather like we make judgements about people partly on the basis of the company they keep). In this way we can see that the meaning of “success” was “outcome” and that outcome, given its collocates, could be good or bad.
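The collocate technique can be sketched in a few lines of Python: for each occurrence of a target word, count the words falling within a fixed window around it. The sample sentence below is invented for illustration, not drawn from the corpus:

```python
# Toy sketch of collocate counting: tally the words that appear within
# a +/- 4-word window of a target word. The sample text is invented.
from collections import Counter

def collocates(tokens, target, window=4):
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    counts[tokens[j]] += 1
    return counts

tokens = "the bad success of the siege was a bad and unlucky success".split()
counts = collocates(tokens, "success")
print(counts["bad"], counts["unlucky"])  # 2 1
```

Real corpus tools go further, weighting raw counts with statistical measures so that a collocate is flagged only when it co-occurs more often than chance would predict.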

Highly frequent words

We can use intuition to guess some word patterns. It’s no surprise that in early modern English, the word “wicked” occurred very frequently in religious texts of the time. But less intuitively, so did “ourselves,” a word associated with sermons and plays, both of which have in common a habit of making statements about people on earth.

Highly frequent words, so often excluded by historical dictionaries and reference works, are often short words that seem insignificant. They have a wood-for-trees problem.

Yet corpus techniques highlight the interesting patterns. It turns out that a frequent sense of the humble preposition “by” is religious: to reinforce the sincerity of a statement by invoking the divine (for example, “by God”).

Numbers can also reveal what is happening inside Shakespeare’s works. Frequent words such as “alas” or “ah” are revealed to be heavily used by Shakespeare’s female characters, showing that they do the emotional work of lamentation in the plays, especially his histories.

Infrequent words

What of the infrequent? Words that occur only once in Shakespeare—so-called hapax legomena—are nuggets of interest. The single case of “bone-ache” in Troilus and Cressida evokes the horrifying torture that syphilis, the disease it refers to, would have been. In contrast, “ear-kissing” in King Lear is Shakespeare’s rather more pleasant and creative metaphor for whispering (interestingly, other writers used it for the notion of flattering).
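Once a text is tokenised, finding hapax legomena is mechanically simple: they are the words whose frequency count is exactly one. A sketch with a toy line (the text and the whitespace tokeniser are illustrative):

```python
# Sketch: hapax legomena are the words occurring exactly once.
from collections import Counter

tokens = "to be or not to be that is the question".split()
hapaxes = sorted(w for w, n in Counter(tokens).items() if n == 1)
print(hapaxes)  # ['is', 'not', 'or', 'question', 'that', 'the']
```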

Another group of interesting infrequent words concerns words that seem to have their earliest occurrence in Shakespeare. Corpus techniques allowed us to navigate the troubled waters of spelling variation. Before spelling standardization, searching for the word “sweet,” for instance, would miss cases spelt “sweete,” “swete” or “svveet.”
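A crude way to see the matching problem is a hand-written regular expression that tolerates the variants cited above: “vv” for “w”, one or two e’s, and an optional final “e”. This is a toy pattern for one word; real normalisation tools such as VARD handle spelling variation far more systematically:

```python
import re

# Hypothetical pattern for the cited variants of "sweet":
# "w" may appear as "vv", "ee" may be single, a final "e" is optional.
pattern = re.compile(r"s(?:w|vv)ee?te?", re.IGNORECASE)

for form in ["sweet", "sweete", "swete", "svveet", "sword"]:
    print(form, bool(pattern.fullmatch(form)))
# only "sword" fails to match
```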

In this way, we can better establish whether a word written by a writer really is the earliest instance. Shakespearean firsts include the rather boring “branchless” (Antony and Cleopatra), a word probably not coined by Shakespeare but merely first recorded in his text. But there is also the more creative “ear-piercing” (Othello) and the distinctly modern-sounding “self-harming” (The Comedy of Errors and Richard II).

Why are these advances in historical corpus linguistics happening now? Much of the technology to produce these findings was not in place until relatively recently.

Programs to deal with spelling variation (such as Vard) or to analyse vast collections of electronic texts in sophisticated ways (such as CQPweb), to say nothing of the vast quantities of computer-readable early modern language data (such as EEBO-TCP), have only been widely used in the last 10 or so years. We are therefore on the cusp of a significant increase in our understanding and appreciation of major writers such as Shakespeare.


Credit of the article given to Jonathan Culpeper, The Conversation