Hot And Bothered: The Uncertain Mathematics Of Global Warming

Uncertainty exists – but that’s no excuse for a lack of action.

These are painful times for those hoping to see an international consensus and substantive action on global warming.

In the US, Republican presidential front-runner Mitt Romney said in June 2011: “The world is getting warmer” and “humans have contributed” but in October 2011 he backtracked to: “My view is that we don’t know what’s causing climate change on this planet.”

His Republican challenger Rick Santorum added: “We have learned to be sceptical of ‘scientific’ claims, particularly those at war with our common sense” and Rick Perry, who suspended his campaign for the Republican presidential nomination last month, stated flatly: “It’s all one contrived phony mess that is falling apart under its own weight.”

Meanwhile, the scientific consensus has moved in the opposite direction. In a study published in October 2011, 97% of climate scientists surveyed agreed global temperatures have risen over the past 100 years. Only 5% disagreed that human activity is a significant cause of global warming.

The study concluded in the following way: “We found disagreement over the future effects of climate change, but not over the existence of anthropogenic global warming.

“Indeed, it is possible that the growing public perception of scientific disagreement over the existence of anthropogenic warming, which was stimulated by press accounts of [the UK’s] ‘Climategate’, is actually a misperception of the normal range of disagreements that may persist within a broad scientific consensus.”

More progress has been made in Europe, where the EU has established targets to reduce emissions by 20% (from 1990 levels) by 2020. The UK, which has been beset by similar denial movements, was nonetheless able to establish, as a legally binding target, an 80% reduction by 2050 and is a world leader on abatement.

In Australia, any prospect for consensus was lost when Tony Abbott used opposition to the Labor government’s proposed carbon market to replace Malcolm Turnbull as leader of the Federal Opposition in late 2009.

It used to be possible to hear right-wing politicians in Australia or the USA echo the Democratic congressman Henry Waxman, who said last year:

“If my doctor told me I had cancer, I wouldn’t scour the country to find someone to tell me that I don’t need to worry about it.”

But such rationality has largely left the debate in both the US and Oz. In Australia, a reformulated carbon tax policy was enacted in November only after a highly partisan debate.

In Canada, the debate is a tad more balanced. The centre-right Liberal government in British Columbia passed the first carbon tax in North America in 2008, but the governing Federal Conservative party now offers a reliable “anti-Kyoto” partnership with Washington.

Overviews of the evidence for global warming, together with responses to common questions, are available from various sources, including:

  • Seven Answers to Climate Contrarian Nonsense, in Scientific American
  • Climate change: A Guide for the Perplexed, in New Scientist
  • Cooling the Warming Debate: Major New Analysis Confirms That Global Warming Is Real, in Science Daily
  • Remind me again: how does climate change work?, on The Conversation

As these analyses acknowledge, all projections are based on mathematical models that carry a significant level of uncertainty, since they describe highly complex and only partially understood systems.

As Brian Schmidt, the Australian winner of the 2011 Nobel Prize in Physics, explained while addressing a National Forum on Mathematical Education:

“Climate models have uncertainty and the earth has natural variation … which not only varies year to year, but correlates decade to decade and even century to century. It is really hard to design a figure that shows this in a fair way — our brain cannot deal with the correlations easily.

“But we do have mathematical ways of dealing with this problem. The Australian academy reports currently indicate that the models with the effects of CO₂ are with 90% statistical certainty better at explaining the data than those without.

“Most of us who work with uncertainty know that 90% statistical uncertainty cannot be easily shown within a figure — it is too hard to see …”

“ … Since predicting the exact effects of climate change is not yet possible, we have to live with uncertainty and take the consensus view that warming can cover a wide range of possibilities, and that the view might change as we learn more.”
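Schmidt’s point, that one model family explains the data better than another with quantified statistical confidence, can be illustrated with a toy calculation. The sketch below is entirely ours (synthetic data and a simple linear trend standing in for CO₂ forcing, not the Australian Academy’s analysis); it compares the two fits with the Akaike Information Criterion:

```python
import numpy as np

# Toy illustration with synthetic data (not the Academy's analysis):
# compare a model of annual temperature anomalies that includes a
# forcing-like trend against one that does not, using AIC.
rng = np.random.default_rng(0)
years = np.arange(1900, 2011)
n = len(years)

# Synthetic record: a slow warming trend plus autocorrelated natural variation.
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(scale=0.1)   # AR(1) noise
anomalies = 0.007 * (years - years[0]) + noise

def aic(residuals, k):
    """Akaike Information Criterion for a least-squares fit with k parameters."""
    rss = np.sum(residuals ** 2)
    return n * np.log(rss / n) + 2 * k

# Model 1: natural variation around a constant (no forcing term).
m1 = aic(anomalies - anomalies.mean(), k=1)
# Model 2: constant plus linear trend (a crude stand-in for CO2 forcing).
m2 = aic(anomalies - np.polyval(np.polyfit(years, anomalies, 1), years), k=2)

# The lower AIC identifies the model that better explains the data
# after penalising the extra parameter; here it is the trend model.
print(f"AIC without trend: {m1:.1f}, with trend: {m2:.1f}")
```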

But uncertainty is no excuse for inaction. The proposed counter-measures (e.g. infrastructure renewal and modernisation, large-scale solar and wind power, better soil remediation and water management, not to mention carbon taxation) are affordable and most can be justified on their own merits, while the worst-case scenario — do nothing while the oceans rise and the climate changes wildly — is unthinkable.

Some in the first world protest that any green energy efforts are dwarfed by expanding energy consumption in China and elsewhere. Sure, China’s future energy needs are prodigious, but China also now leads the world in green energy investment.

By blaming others and focusing the debate on the level of human responsibility for warming and on the accuracy of predictions, the deniers have managed to derail long-term action in favour of short-term economic policies.

Who in the scientific community is promoting the denial of global warming? As it turns out, the leading figures in this movement have ties to conservative research institutes funded mostly by large corporations, and have a history of opposing the scientific consensus on issues such as tobacco and acid rain.

What’s more, those who lead the global warming denial movement – along with creationists, intelligent design writers and the “mathematicians” who flood our email inboxes with claims that pi is rational or other similar nonsense – are operating well outside the established boundaries of peer-reviewed science.

Austrian-born American physicist Fred Singer, arguably the leading figure of the denial movement, has only six peer-reviewed publications in the climate science field, and none since 1997.

After all, when issues such as these are “debated” in any setting other than a peer-reviewed journal or conference, one must ask: “If the author really has a solid argument, why isn’t he or she back in the office furiously writing up this material for submission to a leading journal, thereby assuring worldwide fame and glory, not to mention influence?”

In most cases, those who attempt to grab public attention through other means are themselves aware they are short-circuiting the normal process, and that they do not yet have the sort of solid data and airtight arguments that could withstand the withering scrutiny of scientific peer review.

When they press their views in public to a populace that does not understand how the scientific enterprise operates, they are being disingenuous.

With regard to claims that scientists are engaged in a “conspiracy” to hide the “truth” on an issue such as global warming or evolution, one should ask how a secret “conspiracy” could be maintained in a worldwide, multicultural community of hundreds of thousands of competitive researchers.

As Benjamin Franklin wrote in his Poor Richard’s Almanack: “Three can keep a secret, provided two of them are dead.” Or as one of your present authors quipped, tongue-in-cheek, in response to a state legislator who was skeptical of evolution: “You have no idea how humiliating this is to me — there is a secret conspiracy among leading scientists, but no-one deemed me important enough to be included!”

There’s another way to think about such claims: we have tens of thousands of senior scientists in their late fifties or early sixties who have seen their retirement savings decimated by the recent stock market plunge. These are scientists who now wonder if the day will ever come when they are financially well-off enough to do their research without the constant stress and distraction of applying for grants (the majority of which are never funded).

All one of these scientists has to do to garner both worldwide fame and considerable fortune (through book contracts, the lecture circuit and TV deals) is to call a news conference and expose “the truth”. So why isn’t this happening?

The system of peer-reviewed journals and conferences sponsored by major professional societies is the only proper forum for the presentation and debate of new ideas, in any field of science or mathematics.

It has been stunningly successful: errors have been uncovered, fraud has been rooted out and bogus scientific claims (such as the 1903 N-ray claim, the 1989 cold fusion claim, and the more recent assertion of an autism-vaccination link) have been debunked.

This all occurs with a level of reliability and at a speed that is hard to imagine in other human endeavours. Those who attempt to short-circuit this system are doing potentially irreparable harm to its integrity.

They may enrich themselves or their friends, but they are doing grievous damage to society at large.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jonathan Borwein (Jon), University of Newcastle, and David H. Bailey, University of California, Davis*

 


Applying math to design new materials and processes for drug manufacturing

Richard Braatz. Photo: Dominick Reuter

Trial-and-error experimentation underlies many biomedical innovations. This classic method — define a problem, test a proposed solution, learn from failure and try again — is the main route by which scientists discover new biomaterials and drugs today. This approach is also used to design ways of manufacturing these new materials, but the process is immensely time-consuming, producing a successful therapeutic product and its manufacturing process only after years of experiments, at considerable expense.

Richard Braatz, the Edwin R. Gilliland Professor of Chemical Engineering at MIT, applies mathematics to streamline the development of pharmaceuticals. Trained as an applied mathematician, Braatz is developing mathematical models to help scientists quickly and accurately design processes for manufacturing drug compounds with desired characteristics. Through mathematical simulations, Braatz has designed a system that significantly speeds the design of drug-manufacturing processes; he is now looking to apply the same mathematical approach to designing new biomaterials and nanoscale devices.

“Nanotechnology is very heavily experimental,” Braatz says. “There are researchers who do computations to gain insights into the physics or chemistry of nanoscale systems, but do not apply these computations for their design or manufacture. I want to push systematic design methods to the nanoscale, and to other areas where such methods aren’t really developed yet, such as biomaterials.”

From farm to formulas

Braatz’s own academic path was anything but systematic. He spent most of his childhood on an Oregon farm owned by his grandfather. Braatz says he absorbed an engineer’s way of thinking early on from his father, an electrician, by examining his father’s handiwork on the farm and reading his electrical manuals.

Braatz also developed a serious work ethic. From the age of 10, he awoke early every morning — even on school days — to work on the farm. In high school, he picked up a night job at the local newspaper, processing and delivering thousands of newspapers to stores and the post office, sometimes until just before dawn.

After graduating from high school in 1984, Braatz headed to Alaska for the summer. A neighbour had told him that work paid well up north, and Braatz took a job at a fish-processing facility, driving forklifts and hauling 100-pound bags of fishmeal 16 hours a day. He returned each summer for four years, eventually working his way up to plant operator, saving enough money each summer to pay for the next year’s tuition at Oregon State University.

As an undergraduate, Braatz first planned to major in electrical engineering. But finding the introductory coursework unstimulating — given the knowledge he’d absorbed from his father — he cast about for another major.

“There was no Internet back then, so you couldn’t Google; web searches didn’t exist,” Braatz says. “So I went to the library and opened an encyclopedia, and said, ‘OK, what other engineering [is] there?’”

Chemical engineering caught his eye; he had always liked and excelled at chemistry in high school. While pursuing a degree in chemical engineering, Braatz filled the rest of his schedule with courses in mathematics.

After graduation, Braatz went on to the California Institute of Technology, where he earned both a master’s and a PhD in chemical engineering. In addition to his research, Braatz took numerous math and math-heavy courses in electrical engineering, applied mechanics, chemical engineering and chemistry. The combination of real applications and mathematical theory revealed a field of study Braatz had not previously considered: applied mathematics.

“This training was a very good background for learning how to derive mathematical solutions to research problems,” Braatz says.

A systems approach

Soon after receiving his PhD, Braatz accepted an assistant professorship at the University of Illinois at Urbana-Champaign (UIUC). There, as an applied mathematician, he worked with researchers to tackle problems in a variety of fields: computer science, materials science, and electrical, chemical and mechanical engineering.

He spent eight years on a project spurred by a talk he attended at UIUC. In that talk, a representative of Merck described a major challenge in the pharmaceutical industry: controlling the size of crystals in the manufacture of any given drug. (The size and consistency of crystals determine, in part, a drug’s properties and overall efficacy.)

Braatz learned that while drug-manufacturing machinery was often monitored by sensors, much of the resulting data went unanalysed. He pored over the sensors’ data, and developed mathematical models to gain an understanding of what the sensors reveal about each aspect of the drug-crystallization process. Over the years, his team devised an integrated series of algorithms that combined efficiently designed experiments with mathematical models to yield a desired crystal size from a given drug solution. They worked the algorithms into a system that automatically adjusts settings at each phase of the manufacturing process to produce an optimal crystal size, based on a “recipe” given by the algorithms.
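To make the flavour of such a “recipe” concrete, here is a deliberately tiny sketch, a toy model of our own with made-up constants rather than Braatz’s actual algorithms: a batch cooling crystalliser in which a feedback controller adjusts the cooling rate so that supersaturation (the driving force for crystal growth) stays near a setpoint while the growing crystals deplete the solution.

```python
# Illustrative sketch only, with made-up constants (not Braatz's system).
# A batch cooling crystalliser: solubility falls as temperature falls,
# crystals grow in proportion to supersaturation, and a proportional
# controller picks the cooling rate that holds supersaturation on target.

def solubility(temp_c):
    return 0.2 + 0.01 * temp_c            # g solute per g solvent (toy numbers)

temp, conc = 40.0, solubility(40.0)       # start saturated at 40 deg C
size = 10.0                               # mean crystal size, microns
s_set, gain, dt = 0.05, 50.0, 1.0         # setpoint, controller gain, minutes

for minute in range(300):
    s = (conc - solubility(temp)) / solubility(temp)    # relative supersaturation
    cool_rate = min(0.5, max(0.0, gain * (s_set - s)))  # clamped, deg C per min
    temp = max(5.0, temp - cool_rate * dt)  # follow the cooling "recipe"
    size += 0.5 * max(s, 0.0) * dt          # toy growth kinetics, microns/min
    conc -= 0.0005 * max(s, 0.0)            # growth depletes the solution

print(f"final T = {temp:.1f} C, mean crystal size = {size:.1f} um, S = {s:.3f}")
```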

“Sometimes the recipes are very weird,” Braatz says. “It might be a strange path you have to follow to manufacture the right crystals.”

The automated system, which has since been adopted by Merck and other pharmaceutical companies, provides a big improvement in efficiency, Braatz says, avoiding the time-consuming trial-and-error approach many drug manufacturers had relied on to design a crystallization process for a new drug.

In 2010, Braatz moved to MIT, where he is exploring mathematical applications in nanotechnology and tissue engineering — in particular, models to help design new drug-releasing materials. Such materials have the potential to deliver controlled, continuous therapies, but designing them currently takes years of trial-and-error experiments.

Braatz’s group is designing mathematical models to give researchers instructions, for example, on how to design materials that locally release drugs into a body’s cells at a desired rate. Braatz says approaching such a problem from a systematic perspective could potentially save years of time in the development of a biomedical material of high efficacy.

“Anything is a win if you could reduce those experiments from 10 years to several years,” Braatz says. “We’re talking hundreds of millions, billions of dollars. And the effect on people’s lives, you can’t put a price tag on that.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Jennifer Chu, Massachusetts Institute of Technology


Good at Sudoku? Here’s Some You’ll Never Complete

There’s far more to the popular maths puzzle than putting numbers in a box.

Last month, a team led by Gary McGuire from University College Dublin in Ireland made an announcement: they had proven you can’t have a solvable Sudoku puzzle with fewer than 17 numbers already filled in.

Unlike most mathematical announcements, this was quickly picked up by the popular scientific media. Within a few days, the new finding had been announced in Nature and other outlets.

So where did this problem come from and why is its resolution interesting?

As you probably know, the aim of a Sudoku puzzle is to complete a partially-filled nine-by-nine grid of numbers. There are some constraints: the numbers one to nine must appear exactly once each in every row, column and three-by-three sub-grid.
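Those constraints translate directly into code. As a minimal illustration (a checker of our own for a completed grid, not a solver), the following Python function verifies that a filled-in grid obeys all three rules:

```python
def is_valid_solution(grid):
    """Check a completed 9x9 Sudoku grid (a list of 9 lists of 9 ints)."""
    digits = set(range(1, 10))
    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [[grid[3 * br + r][3 * bc + c]
              for r in range(3) for c in range(3)]
             for br in range(3) for bc in range(3)]
    # Every row, column and 3x3 box must contain each of 1..9 exactly once.
    return all(set(unit) == digits for unit in rows + cols + boxes)
```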

As with a crossword, a valid Sudoku puzzle must have a unique solution. There’s only one way to go from the initial configuration (with some numbers already filled in) to a completed grid.

Newspapers often grade their puzzles as easy, medium or hard, which will depend on how easy it is at every stage of solving the puzzle to fill in the “next” number. While a puzzle with a huge number of initial clues will usually be easy, it is not necessarily the case that a puzzle with few initial clues is difficult.

Reckon you can complete a 17-clue Sudoku puzzle? (answer below) Gordon Royle

When Sudoku-mania swept the globe in the mid-2000s, many mathematicians, programmers and computer scientists – amateur and professional – started to investigate Sudoku itself. They were less interested in solving individual puzzles, and more focused on asking and answering mathematical and/or computational questions about the entire universe of Sudoku puzzles and solutions.

As a mathematician specialising in the area of combinatorics (which can very loosely be defined as the mathematics of counting configurations and patterns), I was drawn to combinatorial questions about Sudoku.

I was particularly interested in the question of the smallest number of clues possible in a valid puzzle (that is, a puzzle with a unique solution).

In early 2005, I found a handful of 17-clue puzzles on a long-since forgotten Japanese-language website. By slightly altering these initial puzzles, I found a few more, then more, and gradually built up a “library” of 17-clue Sudoku puzzles which I made available online at the time.

Other people started to send me their 17-clue puzzles and I added any new ones to the list until, after a few years, I had collected more than 49,000 different 17-clue Sudoku puzzles.

By this time, new ones were few and far between, and I was convinced we had found almost all of the 17-clue puzzles. I was also convinced there was no 16-clue puzzle. I thought that demonstrating this would require some new theoretical insight, clever programming combined with massive computational power, or both.

Either way, I thought proving the non-existence of a 16-clue puzzle was likely to be too difficult a challenge.

The key to McGuire’s approach was to tackle the problem indirectly. The total number of essentially different completed puzzles (that is, completely filled-in grids, once symmetries are accounted for) is astronomical – 5,472,730,538 – and trying to test each of these to see if any choice of 16 cells from the completed grid forms a valid puzzle is far too time-consuming.
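To see just how time-consuming: there are C(81,16) ≈ 3.4 × 10¹⁶ ways of choosing 16 cells from an 81-cell grid, so testing every choice in every one of the 5,472,730,538 grids would mean examining on the order of 1.8 × 10²⁶ candidate puzzles. (That is our own back-of-the-envelope arithmetic, not a figure from McGuire’s paper, but it makes the scale of the direct approach clear.)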

Instead, McGuire and colleagues used a different, indirect approach.

An “unavoidable set” in a completed Sudoku grid is a set of cells whose entries can be rearranged to leave another valid completed Sudoku grid. For a puzzle to be uniquely completable, it must contain at least one clue from every unavoidable set.

If a completed grid contains the ten-clue configuration in the left picture, then any valid Sudoku puzzle must contain at least one of those ten clues. If it did not, then in any completed puzzle, those ten positions could either contain the left-hand configuration or the right-hand configuration and so the solution would not be unique.

(Image: the two interchangeable ten-cell configurations. Gordon Royle)

While finding all the unavoidable sets in a given grid is difficult, it’s only necessary to find enough unavoidable sets to show that no 16 clues can “hit” them all. In the process of resolving this question, McGuire’s team developed new techniques for solving the “hitting set” problem.

It’s a problem that has many other applications – any situation in which a small set of resources must be allocated while still ensuring that all needs are met by at least one of the selected resources (i.e. “hit”) can be modelled as a hitting set problem.
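In miniature, the computation looks like the sketch below, which is our own illustration: the small “unavoidable sets” are made up, standing in for the real ones extracted from a grid, and the search finds the smallest set of clues that intersects (“hits”) all of them.

```python
from itertools import combinations

# Toy hitting-set search: the real unavoidable sets come from a Sudoku grid;
# these small ones are made up purely to illustrate the idea. Cells are
# numbered 0..80; each unavoidable set lists cells whose entries could be
# rearranged into a different valid completed grid.
unavoidable_sets = [{0, 1, 9, 10}, {2, 3, 11}, {0, 4, 13}, {5, 14, 23}]
cells = sorted(set().union(*unavoidable_sets))

def hits_all(clues, families):
    """A clue set is viable only if it intersects every unavoidable set."""
    return all(clues & family for family in families)

# Find the smallest clue set that hits every unavoidable set.
for size in range(1, len(cells) + 1):
    solutions = [set(c) for c in combinations(cells, size)
                 if hits_all(set(c), unavoidable_sets)]
    if solutions:
        print(f"minimum hitting set size: {size}, e.g. {sorted(solutions[0])}")
        break
# McGuire's team ran a vastly more efficient version of this test, asking for
# each grid whether any 16 cells could hit all of its unavoidable sets.
```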

Once the theory and software were in place, it was then a matter of running the programs for each of the 5.5 billion completed grids. As you can imagine, this required substantial computing power.

After 7 million core-CPU hours on a supercomputer (the equivalent of a single computer running for 7 million hours) and a year of actual elapsed time, the result was announced a few weeks ago, on New Year’s Day.

So is it correct?

The results of any huge computation should be evaluated with some caution, if not outright suspicion, especially when the answer is simply “no, doesn’t exist”, because there are many possible sources of error.

But in this case, I feel the result is far more likely to be correct than otherwise, and I expect it to be independently verified before too long. In addition, McGuire’s team built on many different ideas, discussions and computer programs that were thrashed out between interested contributors to various online forums devoted to the mathematics of Sudoku. In this respect, many of the basic components of their work have already been thoroughly tested.

Solution to the 17-clue Sudoku puzzle, above. Gordon Royle

And so back to the question: why is the resolution of this problem interesting? And is it important?

Certainly, knowing that the smallest Sudoku puzzles have 17 clues is not in itself important. But the immense popularity of Sudoku meant that this question was popularised in a way that many similar questions have never been, and so it took on a special role as a “challenge question” testing the limits of human knowledge.

The school students to whom I often give outreach talks have no real concept of the limitations of computers and mathematics. In my past talks, these students were almost always astonished to learn that the answer to such a simple question was simply not known.

And now, in my future outreach talks, I will describe how online collaboration, theoretical development and significant computational power were combined to solve this problem, and how this process promises to play an increasing role in the future development of mathematics.

 

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Gordon Royle*

 


Researchers find classical musical compositions adhere to power law

A team of researchers led by Daniel Levitin of McGill University has found, after analysing close to two thousand pieces of classical music spanning four hundred years of history, that virtually all of them follow a one-over-f (1/f) power-law distribution. Levitin and his team have published the results of their work in the Proceedings of the National Academy of Sciences.

One-over-f equations describe the relative frequency of things that happen over time, and can be used to describe such naturally occurring events as annual river flooding or the beating of a human heart. They have been used to describe the way pitch is used in music as well, but until now, no one had thought to test the idea that they could describe the rhythm of the music too.
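In symbols (our notation, though this is the standard form of such laws), the spectral power S(f) at frequency f falls off as a power of the frequency:

$$S(f) \propto \frac{1}{f^{\beta}}$$

An exponent β near zero corresponds to unpredictable, white-noise-like sequences; larger β corresponds to smoother, more predictable structure.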

To find out whether this is the case, Levitin and his team analysed (by measuring note length line by line) close to 2,000 pieces of classical music from a wide group of noted composers. In so doing, they found that virtually every piece studied conformed to the power law. They also found that by adding another variable to the equation, an exponent called beta (β) that describes how predictable a given piece is compared with other pieces, they could solve for β and obtain a unique value for each composer.
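For a sense of the kind of fit involved (our reconstruction for illustration, not the team’s actual pipeline), one can estimate β as the negative slope of a log-log regression of spectral power against frequency:

```python
import numpy as np

def rhythm_beta(durations):
    """Estimate the 1/f^beta exponent of a sequence of note durations.

    A reconstruction for illustration, not Levitin's actual pipeline:
    beta is the negative slope of log(power) vs log(frequency).
    """
    x = np.asarray(durations, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x))
    keep = freqs > 0                       # drop the zero-frequency term
    slope, _ = np.polyfit(np.log(freqs[keep]), np.log(power[keep]), 1)
    return -slope

# Sanity check: i.i.d. random durations give beta near 0 (white noise),
# while cumulative sums give beta near 2 (brown noise).
rng = np.random.default_rng(1)
print(round(rhythm_beta(rng.random(1024)), 2))
print(round(rhythm_beta(np.cumsum(rng.random(1024) - 0.5)), 2))
```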

After looking at the results as a whole, they found that works written by some classical composers were far more predictable than others, and that certain genres in general were more predictable than others too. Beethoven was the most predictable of the group studied, while Mozart was the least. And symphonies were generally far more predictable than ragtimes, with other types falling somewhere in between. In solving for β, the team discovered that they had inadvertently developed a means for calculating a composer’s unique individual rhythm signature. In speaking with the university news group at McGill, Levitin said, “This was one of the most unanticipated and exciting findings of our research.”

Another interesting aspect of the research is that because the patterns are based on the power law, the music the team studied shares the same sorts of patterns as fractals: rhythmic elements that occur second most often appear only half as often as the most common, the third most common just a third as often, and so forth. Thus, it’s not difficult to imagine music forming fractal patterns that are unique to individual composers.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Bob Yirka, Phys.org