Where is everybody? Doing the maths on extraterrestrial life

During a lunch in the summer of 1950, physicists Enrico Fermi, Edward Teller and Herbert York were chatting about a recent New Yorker cartoon depicting aliens abducting trash cans in flying saucers. Suddenly, Fermi blurted out, “Where is everybody?”

He reasoned: “Since there are likely many other technological civilisations in the Milky Way galaxy, and since in a few tens of thousands of years at most they could have explored or even colonised many distant planets, why don’t we see any evidence of even a single extraterrestrial civilisation?”

This has come to be known as Fermi’s Paradox.

Clearly the question of whether other civilisations exist is one of the most important questions of modern science. Any discovery of a distant civilisation – say by analysis of microwave data – would rank among the most far-reaching of all scientific discoveries.

Drake equation

At a 1960 conference regarding extraterrestrial intelligence, Frank Drake (b. 1930) sketched out what is now the Drake equation, estimating the number of civilisations in the Milky Way with which we could potentially communicate:

N = R* × fp × ne × fl × fi × fc × L

where

N = number of civilisations in our galaxy that can communicate.

R* = average rate of star formation per year in galaxy.

fp = fraction of those stars that have planets.

ne = average number of planets that can support life, per star that has planets.

fl = fraction of the above that eventually develop life.

fi = fraction of the above that eventually develop intelligent life.

fc = fraction of civilisations that develop technology that signals their existence into space.

L = length of time for which such civilisations release detectable signals into space.

The result? Drake estimated ten such civilisations were out there somewhere in the Milky Way.
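Drake's back-of-the-envelope arithmetic is easy to reproduce. The parameter values below are illustrative assumptions chosen to land near his figure of ten, not the exact numbers used at the 1960 meeting:

```python
# Back-of-the-envelope Drake equation: N = R* x fp x ne x fl x fi x fc x L.
# All parameter values below are illustrative assumptions.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of communicating civilisations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(R_star=1,     # stars formed per year in the Milky Way
          f_p=0.5,      # fraction of stars with planets
          n_e=2,        # habitable planets per planet-bearing star
          f_l=1,        # fraction of those that develop life
          f_i=0.1,      # fraction of those that develop intelligence
          f_c=0.01,     # fraction that signal their existence
          L=10_000)     # years a civilisation remains detectable
print(N)  # roughly 10
```

Notice that N is simply proportional to every term, so halving any single guess halves the answer – which is why the equation is best read as a way of organising our ignorance rather than a precise prediction.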

This analysis led to the Search for Extraterrestrial Intelligence (SETI) project, which looks for radio transmissions in a region of the electromagnetic spectrum thought best suited to interstellar communication.

But after 50 years of searching, using increasingly powerful equipment, nothing has been found.

So where is everybody?

Proposed solutions to Fermi’s paradox

Numerous scientists have examined Fermi’s paradox and proposed solutions. The following is a list of some of the proposed solutions, and common rejoinders:

  • Such civilisations are here, or are observing us, but are under orders not to disclose their existence.

Common rejoinder: This explanation (known as the “zookeeper’s theory”) was preferred by some scientists, including the late Carl Sagan. But it founders on the fact that it would take just one member of an extraterrestrial society to break the pact of silence – and this would seem inevitable.

  • Such civilisations have been here and planted seeds of life, or perhaps left messages in DNA.

Common rejoinder: The notion that life began on Earth from bacterial spores or the like that originated elsewhere, known as the “panspermia theory”, only pushes the origin-of-life problem back to some other star system – and scientists see no evidence in DNA sequences of anything artificial.

  • Such civilisations exist but are too far away.

Common rejoinder: A sufficiently advanced civilisation could send probes to distant stars, which could scout out suitable planets, land and construct copies of themselves, using the latest software beamed from home.

So, the entire Milky Way galaxy could be explored within, at most, a few million years.
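That timescale can be sanity-checked with round numbers; the probe speed and galaxy size below are illustrative assumptions, not figures from the article:

```python
# Rough timescale for probes to cross the galaxy, ignoring stops
# for replication. Speeds and sizes are illustrative round numbers.
GALAXY_DIAMETER_LY = 100_000   # light years across the Milky Way
PROBE_SPEED_C = 0.1            # probe speed, as a fraction of light speed

crossing_time_years = GALAXY_DIAMETER_LY / PROBE_SPEED_C
print(f"{crossing_time_years:,.0f} years")  # → 1,000,000 years
```

Even with slower probes and pauses to build copies, the total stays in the low millions of years, a blink compared with the galaxy's age of over ten billion years.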

  • Such civilisations exist but have lost interest in interstellar engagement.

Common rejoinder: As with the zookeeper theory, this would require each civilisation to forever lack interest in communication and transportation – and someone would most likely break the pact of silence.

  • Such civilisations are calling, but we don’t recognise the signal.

Common rejoinder: This explanation doesn’t apply to signals sent with the direct purpose of communicating to nascent technological societies. Again, it is hard to see how a galactic society could enforce a global ban.

  • Civilisations invariably self-destruct.

Common rejoinder: This contingency is already figured into the Drake equation (the L term, above). In any event, we have survived at least 100 years of technological adolescence, and have managed (so far) not to destroy ourselves in a nuclear or biological apocalypse.

Relatively soon we will colonise the moon and Mars, and our long-term survival will no longer rely on Earth.

  • Earth is a unique planet in fostering long-lived ecosystems resulting in intelligent life.

Common rejoinder: Perhaps, but the latest studies – in particular the detections of extrasolar planets – point in the opposite direction. Environments like ours appear quite common.

  • We are alone in the Milky Way galaxy. Some scientists further conclude we are alone in the entire observable universe.

Common rejoinder: This conclusion flies in the face of the “principle of mediocrity,” namely the presumption, popular since the time of Copernicus, that there’s nothing special about human society or environment.

Stephen Webb’s 2002 book, If the Universe Is Teeming with Aliens … Where Is Everybody?, provides numerous other proposed solutions and rejoinders.

Two of Drake’s key terms – fp (the fraction of stars that have planets) and ne (the average number of planets that can support life, per star that has planets) – are subject to measurement.

Scientists once thought stable planetary systems and Earth-like planets were a rarity. But recent evidence suggests otherwise.

Thanks to Kepler and other projects, these two terms have been found to have reasonable values, although not quite as optimistic as Drake and his colleagues first estimated.

With every new research finding in the area of extrasolar planets and possible extraterrestrial living organisms, the mystery of Fermi’s paradox deepens.

“Where is everybody?” is a question that now carries even greater resonance.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Jonathan Borwein (Jon), University of Newcastle and David H. Bailey, University of California, Davis.


Peer Review: The Fallacy of Fine-Tuning

We are a product of evolution, and are not surprised that our bodies seem to be well-suited to the environment.

Our leg bones are strong enough to withstand Earth’s gravitational pull – not so weak that they shatter, not so massively over-engineered as to be wasteful.

But it could also be claimed we are special and the environment was formed and shaped for us.

This, as we know, is the basis of many religious ideas.

In recent years, such ideas have been expanded beyond Earth to look at the entire universe and our place within it.

The so-called Fine-Tuning Argument – that the laws of physics have been specially tuned, potentially by some Supreme Being, to allow human life to arise – is the focus of Victor J. Stenger’s book.

Stenger presents the mathematics underpinning cosmic evolution, the lifetime of stars, the quantum nature of atoms and so on. His central thesis is that “fine-tuning” claims are fatally flawed.

He points out that some key areas of physics – such as the equality of the charges on the electron and proton – are set by conservation laws arising from symmetries of the universe, and so are not free parameters to play with.

Some flaws in the theory, he argues, run deeper.

A key component of the fine-tuning argument is that there are many parameters governing our universe, and that changing any one of these would likely produce a sterile universe unlike our own.

But think of baking a cake. Arbitrarily doubling only the flour, or sugar or vanilla essence may end in a cooking disaster, but doubling all the ingredients results in a perfectly tasty cake.

The interrelationships between the laws of physics are somewhat more complicated, but the idea is the same.

A hypothetical universe in which gravity was stronger, the masses of the fundamental particles smaller and the electromagnetic force weaker may well result in a universe that appears a little different to our own, but is still capable of producing long-lived stars and heavy chemical elements – the basic requirements for complex life.

Stenger backs up such points with his own research, and provides access to a web-based program he wrote called MonkeyGod.

The program allows you to conjure up universes with differing underlying physics. And, as Stenger shows, randomly plucking universe parameters from thin air can still produce universes quite capable of harbouring life.
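The sampling idea behind such a program can be sketched as a toy Monte Carlo. To be clear, this is not Stenger’s MonkeyGod code, and the “habitability” criterion below is entirely made up for illustration:

```python
# Toy Monte Carlo in the spirit of MonkeyGod (not the actual program):
# draw universe "parameters" at random and apply a deliberately crude,
# made-up habitability test, purely to illustrate the sampling idea.
import random

def habitable(gravity, particle_mass, em_force):
    # Invented criterion: forces within a few orders of magnitude of
    # each other, and not-too-heavy particles (illustrative only).
    return 0.01 < gravity / em_force < 100 and particle_mass < 10

random.seed(1)  # reproducible run
hits = sum(habitable(random.lognormvariate(0, 2),
                     random.lognormvariate(0, 2),
                     random.lognormvariate(0, 2))
           for _ in range(10_000))
print(f"{hits / 10_000:.0%} of random toy universes pass the test")
```

The point is qualitative: when parameters vary together over wide ranges, a non-negligible fraction of sampled universes can still satisfy even a rough life-friendliness criterion.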

This book is a good read for those wanting to understand the fine-tuning issues in cosmology, and it’s clear Stenger really understands the science.

But while many of the discussions are robust, I felt that in places some elements of the fine-tuning argument were brushed aside with little real justification.

As a case in point, Stenger falls back on multiverse theory and the anthropic principle, whereby we occupy but one of an almost infinite sea of different universes, each with different laws of physics.

In multiverse theory, most universes would be sterile (though we should not be surprised to find ourselves in a habitable universe).

While such a multiverse – the staple of superstring and brane ideas of the cosmos – is often sold as science fact, it actually lies much closer to the world of science speculation (or, to many, fiction).

We are not out of the fine-tuning waters yet, but Stenger’s book is a good place to start getting to grips with the issues.


Credit of the article given to Geraint Lewis.


Cutting cake (and eating it too) – the sticky maths of fair division

I work on the mathematics of sharing resources, which has led me to consider emotions such as envy, behaviour such as risk-taking and the best way to cut a cake.

Like, I suspect, many women, my wife enjoys eating dessert but not ordering it. I therefore dutifully order what I think she’ll like, cut it in half and invite her to choose a piece.

This is a sure-fire recipe for marital accord. Indeed, many mathematicians, economists, political scientists and others have studied this protocol and would agree. The protocol is known as the “cut-and-choose” procedure. I cut. You choose.

Cut-and-choose

Cut-and-choose is not limited to the dining table – it dates back to antiquity. It appears nearly 3,000 years ago in Hesiod’s poem Theogony, where Prometheus divides a cow and Zeus selects the part he prefers.

In more recent times, cut-and-choose has been enshrined in the UN’s 1982 Convention on the Law of the Sea, where it was proposed as a mechanism to resolve disputes when dividing the seabed for mining.

To study the division of cake, cows and the seabed in a more formal way, various mathematical models have been developed. As with all models, these need to make a number of simplifying assumptions.

One typical assumption is that the people employing the cut-and-choose method are risk-averse. They won’t adopt a risky strategy that may give them less cake than a more conservative strategy.

With such assumptions in place, we can then prove what properties cake-cutting procedures have and don’t have. For instance, cut-and-choose is envy-free.

You won’t envy the cake I have, otherwise you would have taken this piece. And I won’t envy the piece you have, as the only risk-averse strategy is for me to cut the cake into two parts that I value equally.

On the other hand, the cutting of the cake is not totally equitable since the player who chooses can get cake that has more than half the total value for them.
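The two-player protocol is easy to simulate. In this sketch the cake is modelled as a row of small slices, and the valuation numbers are hypothetical:

```python
# A minimal sketch of cut-and-choose over a cake modelled as small
# slices, each with a (possibly different) value per player.
# The valuations below are made-up numbers for illustration.

def cut_and_choose(cutter_vals, chooser_vals):
    """Cutter picks the cut that splits the cake as evenly as possible
    in their own eyes; chooser then takes their preferred piece."""
    total = sum(cutter_vals)
    # Find the cut index minimising the cutter's value imbalance.
    best_cut = min(range(1, len(cutter_vals)),
                   key=lambda i: abs(sum(cutter_vals[:i]) - total / 2))
    left = sum(chooser_vals[:best_cut])
    right = sum(chooser_vals[best_cut:])
    chooser_piece = "left" if left >= right else "right"
    cutter_piece = "right" if chooser_piece == "left" else "left"
    return cutter_piece, chooser_piece

# The chooser loves the icing at one end; the cutter values evenly.
print(cut_and_choose([1, 1, 1, 1], [4, 1, 1, 1]))  # → ('right', 'left')
```

Note the asymmetry the text describes: the cutter ends up with exactly half of the cake by their own valuation, while the chooser here walks away with a piece worth five-sevenths of the cake to them.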

With two players, it’s hard to do better than cut-and-choose. But I should record that my wife argues with me about this.

She believes it favours the second player since the first player inevitably can’t divide the cake perfectly and the second player can capitalise on this. This is the sort of assumption ignored in our mathematical models.

My wife might prefer the moving-knife procedure which doesn’t favour either player. A knife is moved over the cake, and either player calls “cut” when they are happy with the slice.

Again, this will divide the cake in such a way that neither player will envy the other (else they would have called “cut” themselves).

Three’s a crowd

Unfortunately, moving beyond two players increases the complexity of cutting cake significantly.

With two players, we needed just one cut to get to an envy-free state. With three players, a complex series of five cuts of the cake might be needed. Of course, only two cuts are needed to get three slices.

The other three cuts are needed to remove any envy. And with four players, the problem explodes in our face.

An infinite number of cuts may be required to get to a situation where no one envies another’s cake. I’m sure there’s some moral here about too many cake cutters spoiling the dessert.

There are many interesting extensions of the problem. One such extension is to indivisible goods.

Suppose you have a bag of toys to divide between two children. How do you divide them fairly? As a twin myself, I know that the best solution is to ensure you buy two of everything.

It’s much more difficult when your great aunt gives you one Zhu Zhu pet, one Bratz doll and three Silly Bandz bracelets to share.

Online

More recently, I have been studying a version of the problem applicable to online settings. In such problems, not all players may be available all of the time. Consider, for instance, allocating time on a large telescope.

Astronomers will have different preferences for when to use the telescope, depending on what objects are visible, the position of the sun, etcetera. How do we design a web-based reservation system through which astronomers can choose observation times in a way that is fair to all?

We don’t want to insist all astronomers log in at the same time to decide an allocation. And we might have to start allocating time on the telescope now, before everyone has expressed their preferences. We can view this as a cake-cutting problem where the cake is made up of the time slots for observations.

The online nature of such cake-cutting problems poses some interesting new challenges.

How can we ensure that late-arriving players don’t envy cake already given to earlier players? The bad news is that we cannot now achieve even a simple property like envy-freeness.

No procedure can guarantee situations where players don’t envy one another. But more relaxed properties are possible, such as not envying cake allocated whilst you are participating in the cutting of the cake.

Ham sandwich

There’s a brilliantly named piece of mathematics due to Arthur H. Stone and John Tukey: the Ham Sandwich Theorem, which proves we can always cut a three-layered cake perfectly with a single cut.

Suppose we have three objects. Let’s call them “the top slice of bread”, “the ham filling” and “the bottom slice of bread”. Or if you prefer “the top layer” of the cake, “the middle layer” and “the bottom layer”.

The ham sandwich theorem proves a single slice can always perfectly bisect the three objects. Actually, the ham sandwich theorem works in any number of dimensions: any n objects in n-dimensional space can be simultaneously bisected by a single (n − 1) dimensional hyperplane.

So, in the case of the three-layered cake, n = 3, and the three-layered cake can be bisected (or cut) using a single, two-dimensional “hyperplane”. Such as, say, a knife.

Who would have thought that cutting cake would lead to higher dimensions of mathematics by way of a ham sandwich?


Credit of the article given to Toby Walsh


Magic numbers: the beauty of decimal notation

While adding up your grocery bill in the supermarket, you’re probably not thinking how important or sophisticated our number system is.

But the present system – discovered by unknown mathematicians in India roughly 2,000 years ago, and shared with Europe from the 13th century onwards – was pivotal to the development of our modern world.

Now, what if our “decimal” arithmetic, often called the Indo-Arabic system, had been discovered earlier? Or what if it had been shared with the Western world earlier than the 13th century?

First, let’s define “decimal” arithmetic: we’re talking about the combination of zero, the digits one through nine, positional notation, and efficient rules for arithmetic.

“Positional notation” means that the value represented by a digit depends both on the digit itself and on its position in a string of digits.

Thus 7,654 means:

(7 × 1000) + (6 × 100) + (5 × 10) + 4 = 7,654

The benefit of this positional notation system is that we need no new symbols or calculation schemes for tens, hundreds or thousands, as was needed when manipulating Roman numerals.

While numerals for the counting numbers one, two and three were seen in all ancient civilisations – and some form of zero appeared in two or three of those civilisations (including India) – the crucial combination of zero and positional notation arose only in India and Central America.

Importantly, only the Indian system was suitable for efficient calculation.

Positional arithmetic can be in base-ten (or decimal) for humans, or in base-two (binary) for computers.

In binary, 10101 means:

(1 × 16) + (0 × 8) + (1 × 4) + (0 × 2) + 1

Which, in the more-familiar decimal notation, is 21.
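Both evaluations above follow the same routine in any base, which a few lines of code can make explicit (the function name is my own):

```python
# Evaluate a digit string in any base using positional notation:
# working left to right, multiply the running value by the base
# and add the next digit.
def positional_value(digits, base):
    value = 0
    for d in digits:  # most significant digit first
        value = value * base + int(d)
    return value

print(positional_value("7654", 10))  # → 7654
print(positional_value("10101", 2))  # → 21
```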

The rules we learned in primary school for addition, subtraction, multiplication and division can be easily extended to binary.

The binary system has been implemented in electronic circuits on computers, mostly because the multiplication table for binary arithmetic is much simpler than that of the decimal system.

Of course, computers can readily convert binary results to decimal notation for us humans.

As easy as counting from one to ten

Perhaps because we learn decimal arithmetic so early, we consider it “trivial”.

Indeed, the discovery of decimal arithmetic is given disappointingly brief mention in most western histories of mathematics.

In reality, decimal arithmetic is anything but “trivial”, since it eluded the best minds of the ancient world, including the Greek mathematical super-genius Archimedes of Syracuse.

Archimedes – who lived in the 3rd century BCE – saw far beyond the mathematics of his time, even anticipating numerous key ideas of modern calculus. He also used mathematics in engineering applications.

Nonetheless, he used a cumbersome Greek numeral system that hobbled his calculations.

Imagine trying to multiply the Roman numerals XXXI (31) and XIV (14).

First, one must rewrite the second argument as XIIII, then multiply the second by each letter of the first to obtain CXXXX CXXXX CXXXX XIIII.

These numerals can then be sorted by magnitude to arrive at CCCXXXXXXXXXXXXXIIII.

This can then be rewritten to yield CDXXXIV (434).

(For a bit of fun, try adding MCMLXXXIV and MMXI. First person to comment with the correct answer and their method gets a jelly bean.)

Thus, while possible, calculation with Roman numerals is significantly more time-consuming and error-prone than our decimal system (although it is harder to alter the amount payable on a Roman cheque).
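Indeed, just getting Roman numerals into a form fit for easy calculation is itself a small algorithm. This sketch, written for illustration, converts a numeral to its decimal value, handling subtractive pairs such as IV (4) and XL (40):

```python
# Convert a Roman numeral to its decimal value. A smaller symbol
# written before a larger one (as in IV or XIV) is subtracted.
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50,
          "C": 100, "D": 500, "M": 1000}

def roman_to_int(s):
    total = 0
    for i, ch in enumerate(s):
        v = VALUES[ch]
        if i + 1 < len(s) and VALUES[s[i + 1]] > v:
            total -= v   # subtractive pair, e.g. the I in XIV
        else:
            total += v
    return total

print(roman_to_int("XXXI") * roman_to_int("XIV"))  # → 434
```

Converting, multiplying in decimal and converting back is far quicker than manipulating the numerals directly, which is precisely the point of the worked example above.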

History lesson

Although decimal arithmetic was known in the Arab world by the 9th century, it took many centuries to make its way to Europe.

Italian mathematician Leonardo Fibonacci travelled the Mediterranean world in the 13th century, learning from the best Arab mathematicians of the time. Even then, it was several more centuries until decimal arithmetic was fully established in Europe.

Johannes Kepler and Isaac Newton – both giants in the world of physics – relied heavily on extensive decimal calculations (by hand) to devise their theories of planetary motion.

In a similar way, present-day scientists rely on massive computer calculations to test hypotheses and design products. Even our mobile phones do surprisingly sophisticated calculations to process voice and video.

But let us indulge in some alternate history of mathematics. What if decimal arithmetic had been discovered in India even earlier, say 300 BCE? (There are indications it was known by this date, just not well documented.)

And what if a cultural connection along the silk-road had been made between Indian mathematicians and Greek mathematicians at the time?

Such an exchange would have greatly enhanced both worlds, resulting in advances beyond the reach of each system on its own.

For example, a fusion of Indian arithmetic and Greek geometry might well have led to full-fledged trigonometry and calculus, thus enabling ancient astronomers to deduce the laws of motion and gravitation nearly two millennia before Newton.

In fact, the combination of mathematics, efficient arithmetic and physics might have accelerated the development of modern technology by more than two millennia.

It is clear from history that without mathematics, real progress in science and technology is not possible (try building a mobile phone without mathematics). But it’s also clear that mathematics alone is not sufficient.

The prodigious computational skills of ancient Indian mathematicians never flowered into advanced technology, nor did the great mathematical achievements of the Greeks, or many developments in China.

On the other hand, the Romans, who were not known for their mathematics, still managed to develop some impressive technology.

But a combination of advanced mathematics, computation, and technology makes a huge difference.

Our bodies and our brains today are virtually indistinguishable from those of ancient times.

With the earlier adoption of Indo-Arabic decimal arithmetic, the modern technological world of today might – for better or worse – have been achieved centuries ago.

And that’s something worth thinking about next time you’re out grocery shopping.


Credit of the article given to Jonathan Borwein (Jon), University of Newcastle and David H. Bailey, University of California, Davis


Getting Projections Right: Predicting Future Climate

Region-by-region projections of how climate is likely to change over the coming decades help to make the prospect of global warming more tangible and relevant.

Picturing the climate we are likely to have with unabated increases in greenhouse gas concentrations in, say, Melbourne, Sydney, or the Murray Darling, lets us weigh up the costs and benefits of actions to reduce greenhouse gas emissions.

Regional projections also let us plan how to adapt to any unavoidable changes in our climate. Planning changes to farming practices, water supply or natural ecosystem management, for example, requires some idea of what our future regional climate is likely to be.

Here in Australia we have had a long history of national climate change projections. Since 1990, CSIRO has released five updates of projected changes in temperature, rainfall, extreme events and many other key aspects of our climate system.

CSIRO’s last release was done with the Bureau of Meteorology in 2007. It provided the most detailed product available up to that time.

This release included the innovation (a world first amongst national projections at the time) of providing probabilities for the projected changes.

Why modelling?

The complexity of the climate system means that we cannot simply extrapolate past trends to forecast future conditions. Instead, we use climate models developed and utilised extensively over recent decades.

These are mathematical representations of the climate system, based on the laws of physics.

Results from all of the climate modelling centres around the world are considered in preparing Australian projections. We place greatest weight on the models that are best at representing our historical climate.

Global climate modelling has continued to develop over recent years. Most of the modelling centres are now running improved versions of their models compared to what was available in 2007.

As part of an international coordinated effort, a new database of the latest climate model output is being assembled for researchers to use ahead of the next report of the Intergovernmental Panel on Climate Change (IPCC). It is many times richer than any previously available.

Analysing this massive resource will be a focus of research for a large number of scientists in CSIRO, BoM and the universities over the next few years.

Putting the models to good use

While the science has been developing, so have the demands of users of this projection information. Policymakers at all levels of government, natural resource planners, industry, non-government organisations and individuals are all placing demands on climate projection science – demands that are growing in volume and complexity.

For example, researchers want regionally specific scenarios for changes in the frequency of hot days, extreme rainfall, fire, drought, cyclones, hail, evaporation, sunshine, coral bleaching temperatures, ocean acidification and sea level rise.

This type of information is particularly useful for risk assessments that can inform policy development and implementation.

For example, assessing future climate risks to infrastructure can place quite different demands on climate projection science compared to, say, assessing risks to agricultural enterprises.

Given these developments, the time is coming for the Australian climate research community to update and expand their projections. Planning has begun for a release in 2014. This will be just after the completion of the next IPCC assessment.

At that time, Australians will have the latest climate projections for the 21st century for a range of factors, including sea levels, seasonal-average temperatures and rainfall, as well as extreme weather events.

Resources permitting, these new projections will also include online services which will enable users to generate climate scenarios to suit the specific needs of many risk assessments.

Finding out more about summer rainfall

As climate scientists start to analyse these new model data, a major focus of attention will be simulated changes to summer rainfall over Australia.

Models have consistently indicated a drying trend for the winter rainfall regions in southern Australia – a result that also aligns with other evidence, such as observed trends.

On the other hand, models give inconsistent projections for summer rainfall change, ranging from large increase to large decrease. Researchers will be hoping to reduce this key uncertainty as they begin to analyse the results.

However, when it comes to projecting our future climate, there will always be some uncertainty to deal with.

Dealing with uncertainty

Climate projection scientists have to clearly convey the uncertainties while not letting these overwhelm the robust findings about regional climate change that the science provides.

Climate projection uncertainties can be presented in many different ways, such as through ranges of plausible change, as probabilistic estimates, or as alternative scenarios.

We shouldn’t necessarily be most interested in the most likely future. In some cases, it may be more prudent to plan for less likely, but higher risk, future climates.

It can be difficult to make a complex message as relevant as possible to a wide range of decision-makers. CSIRO climate scientists are tackling this by working with social scientists to help develop new and more effective communication methods. These should be ready in time for the next projections release.


Credit of the article given to Penny Whetton.


How linguists are unlocking the meanings of Shakespeare’s words using numbers

Today it would seem odd to describe a flower with the word “bastard”—why apply a term of personal abuse to a flower? But in Shakespeare’s time, “bastard” was a technical term describing certain plants.

Similarly, associating the word “bad” with success and talking of a “bad success” would be decidedly odd today. But it was not unusual then, when success meant outcome, which could be good or bad.

Corpus linguistics is a branch of linguistics which uses computers to explore the use of words in huge collections of language. It can spot nuances that might be overlooked by linguists working manually, or large patterns that a lifetime of studying may not reveal. And numbers, counts of words and keeping track of where the words are occurring, are key.

In my experience at conferences and the like, talk of numbers is not unanimously well received in the world of literary studies. Numbers are sometimes perceived as being reductive, or inappropriate when discussing creative works, or only accessible to specialists.

Yet, describing any pattern involves numbers. In the opening paragraphs above, I used the words “odd” and “unusual” as soft ways of describing frequencies—the numbers of occurrences (think also of, for example, “unique,” “rare,” “common”).

Even talking about “associations” involves numbers. Often associations evolve from an unusually high number of encounters among two or more things. And numbers help us to see things.

Changing meanings

Along with my team at Lancaster University, I have used computers to examine some 20,000 words gleaned from a million-word corpus (a collection of written texts) of Shakespeare’s plays, resulting in a new kind of dictionary.

People have created Shakespeare dictionaries before, but this is the first to use the full armoury of corpus techniques and the first to be comparative. It not only looks at words inside Shakespeare’s plays, but also compares them with a matching million-word corpus of contemporary early modern plays, along with a huge corpus of 320 million words of various writings of the period.

Of course, words in early modern England had lives outside Shakespeare. “Bastard” was generally a term for a hybrid plant, occurring in technical texts on horticulture.

It could be, and very occasionally was, used for personal abuse, as in King Lear, where Edmund is referred to as a “bastard.” But this is no general term of abuse, let alone banter, as you might see it used today. It is a pointed attack on him being of illegitimate parentage, genetically hybrid, suspect at his core.

The word “bad” is not now associated with the word “success,” yet 400 years ago it was, as were other negative words, including “disastrous,” “unfortunate,” “ill,” “unhappy” and “unlucky.”

We can tap into a word’s associations by examining its collocates, that is, words with which it tends to occur (rather like we make judgements about people partly on the basis of the company they keep). In this way we can see that the meaning of “success” was “outcome” and that outcome, given its collocates, could be good or bad.
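Collocate-counting can be sketched in a few lines: slide a window over the text and tally the neighbours of the target word. The toy corpus below is invented for illustration:

```python
# Count collocates: words appearing within a fixed window of a
# target word. The toy corpus and window size are illustrative.
from collections import Counter

def collocates(tokens, target, window=2):
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo = max(0, i - window)
            neighbours = tokens[lo:i] + tokens[i + 1:i + 1 + window]
            counts.update(neighbours)
    return counts

corpus = ("ill success did follow and unhappy success "
          "followed the good success of the day").split()
print(collocates(corpus, "success").most_common(3))
```

Run over a real million-word corpus, with frequencies compared against chance co-occurrence, counts like these are what reveal that “success” kept company with “ill” and “unhappy” as readily as with “good”.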

Highly frequent words

We can use intuition to guess some word patterns. It’s no surprise that in early modern English, the word “wicked” occurred very frequently in religious texts of the time. But less intuitively, so did “ourselves,” a word associated with sermons and plays, both of which have in common a habit of making statements about people on earth.

Highly frequent words, so often excluded by historical dictionaries and reference works, are often short words that seem insignificant. They have a wood-for-trees problem.

Yet corpus techniques highlight the interesting patterns. It turns out that a frequent sense of the humble preposition “by” is religious: to reinforce the sincerity of a statement by invoking the divine (for example, “by God”).

Numbers can also reveal what is happening inside Shakespeare’s works. Frequent words such as “alas” or “ah” are revealed to be heavily used by Shakespeare’s female characters, showing that they do the emotional work of lamentation in the plays, especially his histories.

Infrequent words

What of the infrequent? Words that occur only once in Shakespeare—so-called hapax legomena—are nuggets of interest. The single case of “bone-ache” in Troilus and Cressida, where it refers to syphilis, evokes the horrifying torment that disease would have been. In contrast, “ear-kissing” in King Lear is Shakespeare’s rather more pleasant and creative metaphor for whispering (interestingly, other writers used it for the notion of flattering).
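Finding hapax legomena is itself a simple counting exercise. The sketch below runs on an invented toy token list rather than the real Shakespeare corpus, but the method is the same at any scale:

```python
from collections import Counter

def hapax_legomena(tokens):
    """Return, sorted, the words that occur exactly once in the token list."""
    counts = Counter(tokens)
    return sorted(w for w, n in counts.items() if n == 1)

# Toy example, not real corpus data:
tokens = "to be or not to be that is the question".split()
print(hapax_legomena(tokens))
```

Here "to" and "be" each occur twice and so are excluded, while the remaining words are each hapaxes in this tiny sample.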

Another group of interesting infrequent words concerns words that seem to have their earliest occurrence in Shakespeare. Corpus techniques allowed us to navigate the troubled waters of spelling variation. Before spelling standardization, searching for the word “sweet,” for instance, would miss cases spelt “sweete,” “swete” or “svveet.”
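A crude way to handle such variation is to search for all known spellings at once. The sketch below uses a small hand-written variant list purely for illustration; tools such as VARD instead learn normalisation mappings from data rather than relying on a fixed table.

```python
import re

# Hypothetical variant table for illustration only; real normalisers are data-driven.
VARIANTS = {"sweet": ["sweet", "sweete", "swete", "svveet"]}

def find_variants(text, headword):
    """Match all listed early modern spellings of a headword, case-insensitively."""
    pattern = r"\b(" + "|".join(map(re.escape, VARIANTS[headword])) + r")\b"
    return re.findall(pattern, text, flags=re.IGNORECASE)

sample = "O swete love! How svveet and sweete thou art, sweet friend."
print(find_variants(sample, "sweet"))
```

A plain search for "sweet" would find only the final occurrence; the variant-aware search recovers all four.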

In this way, we can better establish whether a word written by a writer really is the earliest instance. Shakespearean firsts include the rather boring “branchless” (Antony and Cleopatra), a word probably not coined by Shakespeare but merely first recorded in his text. But there is also the more creative “ear-piercing” (Othello) and the distinctly modern-sounding “self-harming” (The Comedy of Errors and Richard II).

Why are these advances in historical corpus linguistics happening now? Much of the technology to produce these findings was not in place until relatively recently.

Programs to deal with spelling variation (such as Vard) or to analyse vast collections of electronic texts in sophisticated ways (such as CQPweb), to say nothing of the vast quantities of computer-readable early modern language data (such as EEBO-TCP), have only been widely used in the last 10 or so years. We are therefore on the cusp of a significant increase in our understanding and appreciation of major writers such as Shakespeare.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Jonathan Culpeper, The Conversation

Explainer: the point of pure mathematics

What is pure mathematics? What do pure mathematicians do? Why is pure mathematics important?

These are questions I’m often confronted with when people discover I do pure mathematics.

I always manage to provide an answer, but it never seems to fully satisfy.

So, I’ll attempt to give a more fully formed and mature response to these three questions. I apologise ahead of time for the oversimplifications I’ve had to make in order to be concise.

Broadly speaking, there are two different types of mathematics (and I can already hear protests) – pure and applied. Philosophers such as Bertrand Russell attempted to give rigorous definitions of this classification.

I capture the distinction in the following somewhat cryptic statement: pure mathematicians prove theorems, and applied mathematicians construct theories.

What this means is that the paradigms in which the two groups of people do mathematics are different.

Pure mathematicians are often driven by abstract problems. To make the abstract concrete, here are a couple of examples: “are there infinitely many twin primes?” or “does every true mathematical statement have a proof?”

To be more precise, mathematics is built out of axioms, and the nature of mathematical truth is governed by predicate logic.

A mathematical theorem is a true statement that is accompanied by a proof that illustrates its truth beyond all doubt by deduction using logic.

Unlike an empirical theory, it is not enough to simply construct an explanation that may change as exceptions arise.

Something a mathematician suspects of being true due to evidence, but not proof, is simply conjecture.

Applied

Applied mathematicians are typically motivated by problems arising from the physical world. They use mathematics to model and solve these problems.

These models are really theories and, as with any science, they are subject to testability and falsifiability. As the amount of information regarding the problem increases, these models will possibly change.

Pure and applied are not necessarily mutually exclusive. There are many great mathematicians who tread both grounds.

Pure

There are many problems pursued by pure mathematicians that have their roots in concrete physical problems – particularly those that arise from relativity or quantum mechanics.

Typically, in a deeper understanding of such phenomena, various “technicalities” arise (believe me when I tell you these technicalities are very difficult to explain). These become abstracted away into purely mathematical statements that pure mathematicians can attack.

Solving these mathematical problems then can have important applications.

Ok computer

Let me give a concrete example of how abstract thought led to the development of a device that underpins the functions of modern society: the computer.

The earliest computers were fixed-program machines – that is, they were purpose-built to perform only one task. Changing the program was a very costly and tedious affair.

The modern remnants of such a dinosaur would be a pocket calculator, which is built to only perform basic arithmetic. In contrast, a modern computer allows one to load a calculator program, or word-processing program, and you don’t have to switch machines to do it.

This paradigm shift occurred in the mid-1940s and is called the stored-program or von Neumann architecture.

The widely accessible, but lesser-known, story is that this concept has its roots in the investigation of an abstract mathematical problem called the Entscheidungsproblem (decision problem).

The Entscheidungsproblem was formulated in 1928 by the famous mathematician David Hilbert.

It approximately translates to this: “does there exist a procedure that can decide the truth or falsehood of a mathematical statement in a finite number of steps?”

This was answered in the negative by Alonzo Church and Alan Turing independently in 1936 and 1937. In his paper, Turing formulates an abstract machine, which we now call the Turing machine.

The machine possesses an infinitely long tape (memory), a head that can move a step at a time, read from and write to the tape, a finite instruction table which gives instructions to the head, and a finite set of states (such as “accept”, or “deny”). One initiates the machine with input on the tape.

Such a machine cannot exist outside of the realm of mathematics since it has an infinitely long tape.

But it is the tool used to define the notion of computability. That is, we say a problem is computable if we can encode it using a Turing machine.
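The components listed above (tape, head, instruction table, states) can be sketched as a small simulator. The code below is an illustrative sketch, not Turing's own formalism: the blank symbol `_`, the step limit, and the toy `INCREMENT` machine (which appends a 1 to a unary number) are all assumptions for demonstration, and a growable Python list stands in for the infinite tape.

```python
def run_turing_machine(table, tape, state="start", steps=1000):
    """Simulate a one-tape Turing machine.

    `table` maps (state, symbol) -> (write, move, next_state),
    where move is -1 (left) or +1 (right). The tape grows with
    blanks ("_") as needed, approximating an infinite tape.
    """
    tape = list(tape)
    head = 0
    for _ in range(steps):
        if state in ("accept", "deny"):   # halting states
            break
        symbol = tape[head]
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += move
        if head < 0:                      # extend the tape on the left
            tape.insert(0, "_")
            head = 0
        elif head == len(tape):           # extend the tape on the right
            tape.append("_")
    return state, "".join(tape).strip("_")

# Toy machine: scan right over a unary number and append one more "1".
INCREMENT = {
    ("start", "1"): ("1", +1, "start"),   # move right past the 1s
    ("start", "_"): ("1", +1, "accept"),  # write a 1 at the end, then accept
}
print(run_turing_machine(INCREMENT, "111"))
```

Note that the instruction table is passed in as data rather than wired into the simulator, which already hints at the universal-machine idea.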

One can then see the parallels of a Turing machine with a fixed-program machine.

Now, suppose there is a Turing machine U that can take the instruction table and states of an arbitrary Turing machine T (appropriately encoded), together with the input I to T on the same tape, and run T on the input I.

Such a machine is called a Universal Turing Machine.

In his 1937 paper, Turing proves an important existence theorem: there exists a universal Turing machine. This is the parallel of the stored-program concept, the basis of the modern programmable computer.

It is remarkable that an abstract problem concerning the foundations of mathematics laid the foundations for the modern computer.

It is perhaps a feature of pure mathematics that the mathematician is not constrained by the limitations of the physical world and can appeal to the imagination to create and construct abstract objects.

That is not to say the pure mathematician does not formalise physical concepts such as energy and entropy in order to do abstract mathematics.

In any case, this example should illustrate that the pursuit of purely mathematical problems is a worthwhile cause that can be of tremendous value to society.


Credit of the article given to Lashi Bandara, Australian National University


Mathematics: Why We Need More Qualified Teachers

There is a crisis in the education system, and it’s affecting the life chances of many young Australians. The number of secondary teaching graduates with adequate qualifications to teach mathematics is well below what it should be, and children’s education is suffering.

A report completed for the Australian Council of Deans of Science in 2006 documented the problem, but the situation has deteriorated since. The percentage of Year 12 students completing the more advanced mathematics courses continues to decline. This affects mathematics enrolments in the universities, and a number of them no longer offer a major in mathematics, worsening an already inadequate supply of qualified teachers.

Changing qualifications

To exacerbate an already serious problem, the Australian Institute for Teaching and School Leadership (AITSL) currently proposes that graduate entry secondary programs must comprise at least two years of full-time equivalent professional studies in education.

There will be no DipEd pathway, which allows graduates to enter the profession within a year. Forcing them to spend more time in education will lead to increased debt. You couldn’t blame people for changing their mind about becoming a teacher.

I believe the changes in qualifications will lead to a disaster, denying even more young people access to a quality mathematics education that gives them real opportunities in the modern world.

An unequal opportunity

This is a social justice issue because access to a decent mathematics education in Australia is now largely determined by where you live and parental income.

In the past there have been concerns regarding the participation of girls in mathematics and the effect on their careers and life chances.

Australia now seems incapable of responding to a situation where only the privileged have access to well-qualified teachers of mathematics.

The Northern Territory is a prime example. The contraction of mathematics at Charles Darwin University means the NT is now totally dependent on the rest of Australia for its secondary mathematics teachers. And how can talented mathematics students in the NT be encouraged to pursue mathematical careers when it means moving away?

Elsewhere most of regional Australia is largely dependent on mathematics teachers who complete their mathematics in the capital or large regional cities.

Examine the policy

In what is supposed to be a research-driven policy environment, has anyone considered the consequences of the AITSL proposal? And whether this will actually give teachers the skills they need for the positions they subsequently occupy?

In my own case I came to Melbourne with a BSc (Hons) from the University of Adelaide. In the early 1970s I completed a DipEd at La Trobe. The only real cost was some childcare. If I remember correctly, the government was so keen to get professional women into the workforce that they even helped with the cost of books. Would I have committed to a two-year course? I’m not sure, but I had no HECS debt and ongoing employment was just about guaranteed.

My first school had a very high percentage of students from a non-English speaking background. Many of the Year 7s had very poor achievement in mathematics and I turned my attention to finding out what could be done to help them reach a more appropriate standard.

In the course of this I met Associate Professor John Munro who stressed the importance of language in the learning of mathematics. To be a better mathematics teacher, I completed another degree in teaching English as a second language.

Later I coordinated a DipEd program. Many of our better students were of a mature age and struggling with money, family, jobs and a host of other things. They managed for a year. Requiring them to complete two would have seen many of them not enrol in the first place or drop out when it became too much.

Learn on the job

A two-year teaching qualification does not necessarily equip you for the teaching situation you find yourself in. If AITSL wants all teachers to have a second year, let that be achieved in work-related learning over, for example, 5-7 years.

Australia can’t afford to lose a single prospective teacher who is an articulate, well-qualified graduate in mathematics. If the one-year DipEd goes, many will be lost. They have too many options: new graduates will think about other courses, and career-change and mature-age graduates will decide it is all too hard.


Credit of the article given to Jan Thomas, Senior Fellow, Australian Mathematical Sciences Institute

