On the hunt for mathematical beauty

Alexei Borodin

For anyone who has ever taken a commercial flight, it’s an all-too-familiar scene: Hundreds of passengers sit around waiting for boarding to begin, then rush to be at the front of the line as soon as it does.

Boarding an aircraft can be a frustrating experience, with passengers often wondering if they will ever make it to their seats. But Alexei Borodin, a professor of mathematics at MIT, can predict how long it will take for you to board an airplane, no matter how long the line. That’s because Borodin studies difficult probability problems, using sophisticated mathematical tools to extract precise information from seemingly random groups.

“Imagine an airplane in which each row has one seat, and there are 100 seats,” Borodin says. “People line up in random order to fill the plane, and each person has a carry-on suitcase in their hand, which it takes them one minute to put into the overhead compartment.”

If the passengers all board the plane in an orderly fashion, starting from the rear seats and working their way forwards, it would be a very quick process, Borodin says. But in reality, people queue up in a random order, significantly slowing things down.

So how long would it take to board the aircraft? “It’s not an easy problem to solve, but it is possible,” Borodin says. “It turns out that it is approximately equal to twice the square root of the number of people in the queue.” So with a 100-seat airplane, boarding would take 20 minutes, he says.
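
For readers who like to tinker, the toy model lends itself to a quick simulation. The sketch below (in Python) assumes the idealised setup Borodin describes, plus a simplifying “blocked aisle” rule of our own; it is an illustration of the 2√n behaviour, not Borodin’s analysis.

```python
import numpy as np

# Monte Carlo sketch of the toy boarding model: one seat per row, a random
# queue, and one minute per passenger to stow a bag. Simplifying assumption:
# a passenger can start stowing only once every passenger ahead of them in
# the queue who is seated in an earlier row has finished (walking time is
# ignored). Under these assumptions the boarding time equals the length of
# the longest increasing subsequence of row numbers in queue order, whose
# average is known to grow like 2 * sqrt(n).

def boarding_time(rows):
    """Minutes needed to board for a given queue order of row numbers."""
    finish = []
    for i, row in enumerate(rows):
        # Wait for every earlier passenger seated in a row ahead of this one.
        wait = max((finish[j] for j in range(i) if rows[j] < row), default=0)
        finish.append(wait + 1)
    return max(finish)

def average_boarding_time(n_seats=100, trials=200, seed=0):
    rng = np.random.default_rng(seed)
    return np.mean([boarding_time(rng.permutation(n_seats)) for _ in range(trials)])

n = 100
print(f"simulated average : {average_boarding_time(n):.1f} minutes")
print(f"2 * sqrt(n)       : {2 * np.sqrt(n):.1f} minutes")
# For n = 100 the simulated mean sits a little below the asymptotic estimate;
# the two agree ever more closely as the plane gets longer.
```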

Borodin says he has enjoyed solving these kinds of tricky problems since he was a child growing up in the former Soviet Union. Born in the industrial city of Donetsk in eastern Ukraine, Borodin regularly took part in mathematical Olympiads in his home state. Held all over the world, these Olympiads set unusual problems for children to solve, requiring them to come up with imaginative solutions while working against the clock.

It is perhaps no surprise that Borodin had an interest in math from an early age: His father, Mikhail Borodin, is a professor of mathematics at Donetsk State University. “He was heavily involved in research while I was growing up,” Borodin says. “I guess children always look up to their parents, and it gave me an understanding that mathematics could be an occupation.”

In 1992, Borodin moved to Russia to study at Moscow State University. The dissolution of the USSR meant that, arriving in Moscow, Borodin found himself faced with a choice of whether to take Ukrainian citizenship, like his parents back in Donetsk, or Russian. It was a difficult decision, but for practical reasons Borodin opted for Russian citizenship.

Times were tough while Borodin was studying in Moscow. Politically there was a great deal of unrest in the city, including a coup attempt in 1993. Many scientists began leaving Russia, in search of a more stable life elsewhere.

Financially things were not easy for Borodin either, as he had just $15 each month to spend on food and accommodation. “But I still remember the times fondly,” he says. “I didn’t pay much attention to politics at the time, I was working too hard. And I had my friends, and my $15 per month to live on.”

After Borodin graduated from Moscow State University in 1997, a former adviser who had moved to the United States invited Borodin over to join him. So he began splitting his time between Moscow and Philadelphia, where he studied for his PhD at the University of Pennsylvania.

He then spent seven years at the California Institute of Technology before moving to MIT in 2010, where he has continued his research into probabilities in large random objects.

Borodin says there are no big mathematical problems he is desperate to solve. Instead, his greatest motivation is the pursuit of what he calls the beauty of the subject. While it may seem strange to talk about finding beauty in abstract mathematical constructions, many mathematicians view their work as an artistic endeavor.

“If one asks 100 mathematicians to describe this beauty, one is likely to get 100 different answers,” he says.

And yet all mathematicians tend to agree that something is beautiful when they see it, he adds, saying, “It is this search for new instances of mathematical beauty that largely drives my research.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Helen Knight, Massachusetts Institute of Technology


Cambridge University puts Newton’s papers online

In a long-overdue project, Cambridge University has put some of Isaac Newton’s original papers online for any and all to see, thanks to a hefty gift from the Polonsky Foundation (a supporter of education and the arts) and a grant from Britain’s Joint Information Systems Committee (JISC). Of particular interest to most will be Newton’s own annotated copy of Philosophiae Naturalis Principia Mathematica, considered by many to be one of the greatest works ever published by any scientist. For those looking for a little behind-the-scenes material, the University has also published Newton’s so-called “Waste Book,” a diary of sorts that Newton inherited from his stepfather and used for jotting down notes, including his ideas on calculus, while away from school during the Great Plague in 1665.

In viewing the material, which can be paged through in a PDF-like format by clicking arrows, it is easy to see that the digitization of Newton’s papers has come none too soon: many of the pages are tattered, smeared and even burned-looking in places. Putting the papers online has not only made them accessible to anyone with a computer and an Internet connection, it has also preserved them in an electronic form that will ensure they remain accessible to anyone who wishes to view them in the future.

It was in Principia Mathematica that Newton laid out his theories on the laws of motion and universal gravitation which some suggest laid the groundwork for Einstein’s theories on relativity. And if that weren’t enough, Newton is also widely credited with “inventing” calculus, a mathematical science without which the modern world would simply not exist.

In all, there are more than 4,000 pages of Newton’s work displayed on the site, which took a team of photographers the better part of this past summer to capture; it is also clear from the results that progress was slowed whenever pages needed restorative work before they could be presented. Those working on the project are to be commended, as the results show great care and dedication to a single purpose: namely, showcasing one of history’s brightest minds.

It is intriguing to see the notes Newton himself made on the first edition of Principia Mathematica in preparation for the second, and happily, the University has announced that it will be adding translations of all the text and notes as early as next year.

The University has also announced plans to make the works of other famous scientists available in the future, and will hopefully continue to add to the Newton material as well, as thus far only about 20% of its Newton collection has been made available online.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Bob Yirka, Phys.org


Mathematical methods help predict movement of oil and ash following environmental disasters

When oil started gushing into the Gulf of Mexico in late April 2010, friends asked George Haller whether he was tracking its movement. That’s because the McGill engineering professor has been working for years on ways to better understand patterns in the seemingly chaotic motion of oceans and air. Meanwhile, colleagues of Josefina Olascoaga in Miami were asking the geophysicist a similar question. Fortunately, she was.

For those involved in managing the fallout from environmental disasters like the Deepwater Horizon oil spill, it is essential to have tools that predict how the oil will move, so that they make the best possible use of resources to control the spill. Thanks to work done by Haller and Olascoaga, such tools now appear to be within reach. Olascoaga’s computational techniques and Haller’s theory for predicting the movement of oil in water are equally applicable to the spread of ash in the air, following a volcanic explosion.

“In complex systems such as oceans and the atmosphere, there are a lot of features that we can’t understand offhand,” Haller explains. “People used to attribute these to randomness or chaos. But it turns out, when you look at data sets, you can find hidden patterns in the way that the air and water move.” Over the past decade, the team has developed mathematical methods to describe these hidden structures that are now broadly called Lagrangian Coherent Structures (LCSs), after the French mathematician Joseph-Louis Lagrange.

“Everyone knows about the Gulf Stream, and about the winds that blow from the West to the East in Canada,” says Haller, “but within these larger movements of air or water, there are intriguing local patterns that guide individual particle motion.” Olascoaga adds, “Though invisible, if you can imagine standing in a lake or ocean with one foot in warm water and the other in the colder water right beside it, then you have experienced an LCS running somewhere between your feet.”

“Ocean flow is like a busy city with a network of roads,” Haller says, “except that roads in the ocean are invisible, in motion, and transient.” The method Haller and Olascoaga have developed allows them to detect the cores of LCSs. In the complex network of ocean flows, these are the equivalent of “traffic intersections” and they are crucial to understanding how the oil in a spill will move. These intersections unite incoming flow from opposite directions and eject the resulting mass of water. When such an LCS core emerges and builds momentum inside the spill, we know that oil is bound to seep out within the next four to six days. This means that the researchers are now able to forecast dramatic changes in pollution patterns that have previously been considered unpredictable.
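
To give a flavour of the mathematics, the sketch below computes one widely used LCS diagnostic, the finite-time Lyapunov exponent (FTLE), for a textbook “double gyre” flow; ridges of this field trace out the kind of hidden structures described above. The flow, parameters and grid are illustrative assumptions, and this is not Haller and Olascoaga’s specific core-detection method.

```python
import numpy as np

# FTLE field for the classic time-dependent "double gyre" test flow on
# [0,2] x [0,1]. All parameter values below are assumptions for illustration.

A, EPS, OMEGA = 0.1, 0.25, 2 * np.pi / 10

def velocity(x, y, t):
    """Velocity (u, v) of the double-gyre flow at position (x, y) and time t."""
    a = EPS * np.sin(OMEGA * t)
    b = 1 - 2 * EPS * np.sin(OMEGA * t)
    f = a * x**2 + b * x
    dfdx = 2 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def advect(x, y, t0, T, n_steps=200):
    """Integrate particle positions from t0 to t0 + T with fixed-step RK4."""
    dt = T / n_steps
    t = t0
    for _ in range(n_steps):
        k1x, k1y = velocity(x, y, t)
        k2x, k2y = velocity(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y, t + 0.5 * dt)
        k3x, k3y = velocity(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y, t + 0.5 * dt)
        k4x, k4y = velocity(x + dt * k3x, y + dt * k3y, t + dt)
        x = x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        y = y + dt / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        t += dt
    return x, y

def ftle_field(nx=201, ny=101, t0=0.0, T=15.0):
    """FTLE over a grid of initial conditions; high values mark LCS ridges."""
    xs, ys = np.linspace(0, 2, nx), np.linspace(0, 1, ny)
    X0, Y0 = np.meshgrid(xs, ys)
    XT, YT = advect(X0, Y0, t0, T)
    # Gradient of the flow map via finite differences on the grid
    dxTdx = np.gradient(XT, xs, axis=1); dxTdy = np.gradient(XT, ys, axis=0)
    dyTdx = np.gradient(YT, xs, axis=1); dyTdy = np.gradient(YT, ys, axis=0)
    # Largest eigenvalue of the Cauchy-Green strain tensor C = F^T F
    C11 = dxTdx**2 + dyTdx**2
    C12 = dxTdx * dxTdy + dyTdx * dyTdy
    C22 = dxTdy**2 + dyTdy**2
    lam_max = 0.5 * (C11 + C22) + np.sqrt(0.25 * (C11 - C22)**2 + C12**2)
    return np.log(np.maximum(lam_max, 1e-12)) / (2 * abs(T))

ftle = ftle_field()
print("FTLE range:", float(ftle.min()), "to", float(ftle.max()))
```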

So, although Haller wasn’t tracking the spread of oil during the Deepwater Horizon disaster, he and Olascoaga were able to join forces to develop a method that does not simply track: it actually forecasts major changes in the way that oil spills will move. The two researchers are confident that this new mathematical method will help those engaged in trying to control pollution make well-informed decisions about what to do.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of Miami

 


Bridging traditional economics and econophysics

In a new study, researchers of the Complexity Science Hub highlight the connecting elements between traditional financial market research and econophysics. “We want to create an overview of the models that exist in financial economics and those that researchers in physics and mathematics have developed so that everybody can benefit from it,” explains Matthias Raddant from the Complexity Science Hub and the University for Continuing Education Krems.

Scientists from both fields try to classify, or even predict, how the market will behave. They aim to create a large-scale correlation matrix describing how each stock is correlated with every other stock. “Progress, however, is often barely noticed, if at all, by researchers in other disciplines. Researchers in finance hardly know that physicists are researching similar topics and just call it something different. That’s why we want to build a bridge,” says Raddant.

What are the differences?

Experts in the traditional financial markets field are very concerned with accurately describing how volatile stocks are statistically. However, their fine-grained models no longer work adequately when the data set becomes too large and includes tens of thousands of stocks.

Physicists, on the other hand, can handle large amounts of data very well. Their motto is: “The more data I have, the nicer it is because then I can see certain regularities better,” explains Raddant. They also work based on correlations, but they model financial markets as evolving complex networks.

These networks describe dependencies that can reveal asset comovement, i.e., which stocks behave fundamentally similarly and therefore group together. However, physicists and mathematicians may not know what insights already exist in the finance literature and what factors need to be considered.
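
As a concrete, if simplified, illustration of this network-building step, the sketch below turns a matrix of synthetic stock-return correlations into distances and extracts the network’s backbone with a minimum spanning tree, a standard econophysics construction. The data are invented, and the code is not the specific model studied by Raddant and Di Matteo.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Build a correlation-based network from synthetic daily returns and keep
# only its minimum-spanning-tree backbone (a Mantegna-style filter).

rng = np.random.default_rng(0)
n_stocks, n_days = 20, 500

# Synthetic returns with a common "market" factor so that correlations exist.
market = rng.normal(0, 0.01, n_days)
returns = 0.7 * market + rng.normal(0, 0.01, (n_stocks, n_days))

corr = np.corrcoef(returns)            # n_stocks x n_stocks correlation matrix
dist = np.sqrt(2.0 * (1.0 - corr))     # map correlation to a metric distance
mst = minimum_spanning_tree(dist)      # sparse matrix holding the backbone edges

rows, cols = mst.nonzero()
for i, j in zip(rows, cols):
    print(f"stock {i:2d} -- stock {j:2d}  distance {mst[i, j]:.3f}")
```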

Different language

In their study, Raddant and his co-author, CSH external faculty member Tiziana Di Matteo of King’s College London, note that the mechanical parts that go into these models are often relatively similar, but their language is different. On the one hand, researchers in finance try to discover companies’ connecting features.

On the other hand, physicists and mathematicians are working on creating order out of many time series of stocks, where certain regularities occur. “What physicists and mathematicians call regularities, economists call properties of companies, for example,” says Raddant.

Avoiding research that gets lost

“Through this study, we wish to sensitize young scientists, in particular those working on an interdisciplinary basis in financial markets, to the connecting elements between the disciplines,” says Raddant. The aim is that researchers who do not come from financial economics know the vocabulary and the essential research questions they have to address. Otherwise, there is a risk of producing research that is of no interest to anyone in finance and financial economics.

On the other hand, scientists from the disciplines traditionally involved with financial markets must understand how to describe large data sets and statistical regularities with methods from physics and network science.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Complexity Science Hub Vienna.

 


Time for a change. Scholars say the calendar needs a serious overhaul!

Researchers at The Johns Hopkins University have discovered a way to make time stand still — at least when it comes to the yearly calendar.

Using computer programs and mathematical formulas, Richard Conn Henry, an astrophysicist in the Krieger School of Arts and Sciences, and Steve H. Hanke, an applied economist in the Whiting School of Engineering, have created a new calendar in which each new 12-month period is identical to the one which came before, and remains that way from one year to the next in perpetuity.

Under the Hanke-Henry Permanent Calendar, for instance, if Christmas fell on a Sunday in 2012 (and it would), it would also fall on a Sunday in 2013, 2014 and beyond. In addition, under the new calendar, the rhyme “30 days hath September, April, June and November,” would no longer apply, because September would have 31 days, as would March, June and December. All the rest would have 30. (Try creating a rhyme using that.)

“Our plan offers a stable calendar that is absolutely identical from year to year and which allows the permanent, rational planning of annual activities, from school to work holidays,” says Henry, who is also director of the Maryland Space Grant Consortium. “Think about how much time and effort are expended each year in redesigning the calendar of every single organization in the world and it becomes obvious that our calendar would make life much simpler and would have noteworthy benefits.”

Among the practical advantages would be the convenience afforded by birthdays and holidays (as well as work holidays) falling on the same day of the week every year. But the economic benefits are even more profound, according to Hanke, an expert in international economics, including monetary policy.

“Our calendar would simplify financial calculations and eliminate what we call the ‘rip off’ factor,” explains Hanke. “To determine how much interest accrues on mortgages, bonds, forward rate agreements, swaps and other instruments, day counts are required. Our current calendar is full of anomalies that have led to the establishment of a wide range of conventions that attempt to simplify interest calculations. Our proposed permanent calendar has a predictable 91-day quarterly pattern of two months of 30 days and a third month of 31 days, which does away with the need for artificial day count conventions.”

According to Hanke and Henry, their calendar is an improvement on the dozens of rival reform calendars proffered by individuals and institutions over the last century.

“Attempts at reform have failed in the past because all of the major ones have involved breaking the seven-day cycle of the week, which is not acceptable to many people because it violates the Fourth Commandment about keeping the Sabbath Day,” Henry explains. “Our version never breaks that cycle.”

Henry posits that his team’s version is far more convenient, sensible and easier to use than the current Gregorian calendar, which has been in place for four centuries – ever since 1582, when Pope Gregory altered a calendar that was instituted in 46 BC by Julius Caesar.

In an effort to bring Caesar’s calendar in synch with the seasons, the pope’s team removed 10 days from the calendar in October, so that Oct. 4 was followed immediately by Oct. 15. This adjustment was necessary in order to deal with the same knotty problem that makes designing an effective and practical new calendar such a challenge: the fact that each Earth year is 365.2422 days long.

Hanke and Henry deal with those extra “pieces” of days by dropping leap years entirely in favour of an extra week added at the end of December every five or six years. This brings the calendar in sync with the seasonal changes as the Earth circles the sun.
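
The arithmetic behind “every five or six years” is easy to check. The short calculation below uses only the 91-day quarter structure and the 365.2422-day year quoted above:

```python
# Back-of-the-envelope check of the leap-week arithmetic.
common_year = 4 * (30 + 30 + 31)            # four 91-day quarters = 364 days
tropical_year = 365.2422                    # mean solar year in days
shortfall = tropical_year - common_year     # ~1.24 days lost per calendar year
years_per_extra_week = 7 / shortfall        # ~5.6 years between leap weeks
print(common_year, round(shortfall, 4), round(years_per_extra_week, 1))
# -> 364 1.2422 5.6 : an extra week every five or six years keeps the
#    calendar in step with the seasons, as Hanke and Henry propose.
```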

In addition to advocating the adoption of this new calendar, Hanke and Henry encourage the abolition of world time zones and the adoption of “Universal Time” (formerly known as Greenwich Mean Time) in order to synchronize dates and times worldwide, streamlining international business.

“One time throughout the world, one date throughout the world,” they write in a January 2012 Global Asia article about their proposals. “Business meetings, sports schedules and school calendars would be identical every year. Today’s cacophony of time zones, daylight savings times and calendar fluctuations, year after year, would be over. The economy — that’s all of us — would receive a permanent ‘harmonization’ dividend.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Johns Hopkins University


Are Pi’s Days Numbered?

Pi defines the relationship between a circle’s radius and its area.

Some people have argued that Pi’s days are numbered and that other tools, such as tau, could do its job more efficiently. As someone who has studied Pi throughout his entire working life, my response to such challenges is unwavering: Pi is the gift that keeps on giving.

People call me Doctor Pi. I have played with Pi since I was a child and have studied it seriously for 30 years. Each year I discover new, unexpected and amusing things about Pi, its history and its computation. I never tire of it.

Erm, what is Pi?

Pi, written with the Greek letter π, has the value 3.14159 … and is arguably the most important number in mathematics. The area of a circle of radius r is πr², while its perimeter has length 2πr.

Some Pi facts? OK

  • Without Pi there is no theory of motion, no understanding of geometry or space/time.
  • Pi occurs in important fields of applied mathematics.
  • Pi is used throughout engineering, science and medicine and is studied for its own sake in number theory.
  • It fascinates specialists and hobbyists alike.

The history of Pi is a history of mathematics

The most famous names in mathematics – Leibniz, Euler, Gauss, Riemann – all play their part in Pi’s illustrious history. In approximately 250 BCE, Archimedes of Syracuse rigorously showed that the area of a circle is Pi times the square of its radius.

Isaac Newton computed Pi to at least 15 digits in 1666, and a raft of new formulas for calculating Pi discovered in the intervening years has vastly expanded our understanding of this irrational, irreplaceable number.

In my capacity as Doctor Pi – an affectionate name given to me by my students and colleagues – I have met Nobel Prize winners, pop stars and a variety of colourful characters, many of whom go potty for this number.

So why the broad attraction? What is the secret of Pi’s enduring appeal? It appears in The Simpsons (doh!), in Star Trek (beam me up!), and in British singer-songwriter Kate Bush’s lovely 2005 song Pi:

“Sweet and gentle and sensitive man / With an obsessive nature and deep fascination for numbers / And a complete infatuation with the calculation of Pi.”

In the song’s refrain, Bush recites the first 160 digits of Pi (but messes up after 50!). Pi shows up in the movie The Matrix, episodes of Law and Order, and Yann Martel’s Man Booker Prize-winning 2001 novel Life of Pi. No other piece of mathematics can command such attention.

Memorising Pi

The current Guinness World Record for reciting digits of Pi by rote is well in excess of 60,000 digits.

This is particularly impressive when you consider that Pi, having been proven irrational in the 18th century, has no known repetition or pattern within its infinite decimal representation.

A former colleague of mine, Simon Plouffe, was a Guinness World Record-holder a generation ago, after reciting Pi to approximately 4,700 digits.

Not surprisingly, there is a trend towards building mnemonics whereby the number of letters in a given word represents a digit in the series. For example “How I need a drink, alcoholic of course” represents 3.1415926. This mnemonic formed the basis of a Final Jeopardy! question in 2005.
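
Decoding such a mnemonic is a one-minute programming exercise. The snippet below counts the letters in each word, using the common piphilology convention (an assumption on our part) that a ten-letter word stands for the digit zero:

```python
import re

# Decode a Pi mnemonic by counting the letters in each word.
def mnemonic_to_digits(phrase):
    words = re.findall(r"[A-Za-z]+", phrase)
    return "".join(str(len(word) % 10) for word in words)  # 10 letters -> 0

print(mnemonic_to_digits("How I need a drink, alcoholic of course"))  # 31415926
```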

Some mnemonics are as long as 4,000 digits, but my current favourite is a 33-digit self-referential mnemonic published in New Scientist on Pi Day (March 14) last year.

Is Pi really infinite?

In a word: yes. Its decimal expansion never ends. So far, Pi has been calculated to five trillion (5,000,000,000,000) digits. This record was set in August 2010 on Shigeru Kondo’s US$18,000 homemade computer using software written by American university student Alex Yee.

Each such computation is a tour-de-force of computing science.

Estimates suggest that within the next ten to 15 years a quadrillion (1,000,000,000,000,000) digits of Pi will probably be computed. As relatively recently as 1961, Daniel Shanks, who himself calculated Pi to over 100,000 digits, declared that computing one billion digits would be “forever impossible”. As it transpired, this feat was achieved in 1989 by Yasumasa Kanada of Japan.
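
For anyone curious to try a far more modest computation at home, the Python library mpmath will produce Pi to thousands of digits in a fraction of a second on an ordinary laptop. This is only a demonstration of scale; record computations rely on specialised software such as the program mentioned above.

```python
import mpmath

mpmath.mp.dps = 1000            # work with 1,000 decimal places of precision
pi_val = +mpmath.pi             # evaluate Pi at the current precision
print(mpmath.nstr(pi_val, 60))  # print the first 60 significant digits
```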

It’s a kind of magic

Although it is very likely we will learn nothing new mathematically about Pi from computations to come, we just may discover something truly startling. Pi has seen off attacks in the past. It will see off attacks in the future. Pi, like its inherent magic, is infinite.

The battle continues.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Jonathan Borwein (Jon)


Hot and bothered: the uncertain mathematics of global warming

These are painful times for those hoping to see an international consensus and substantive action on global warming.

In the US, Republican presidential front-runner Mitt Romney said in June 2011: “The world is getting warmer” and “humans have contributed” but in October 2011 he backtracked to: “My view is that we don’t know what’s causing climate change on this planet.”

His Republican challenger Rick Santorum added: “We have learned to be sceptical of ‘scientific’ claims, particularly those at war with our common sense” and Rick Perry, who suspended his campaign to become the Republican presidential candidate last month, stated flatly: “It’s all one contrived phony mess that is falling apart under its own weight.”

Meanwhile, the scientific consensus has moved in the opposite direction. In a study published in October 2011, 97% of climate scientists surveyed agreed global temperatures have risen over the past 100 years. Only 5% disagreed that human activity is a significant cause of global warming.

The study concluded in the following way: “We found disagreement over the future effects of climate change, but not over the existence of anthropogenic global warming.

“Indeed, it is possible that the growing public perception of scientific disagreement over the existence of anthropogenic warming, which was stimulated by press accounts of [the UK’s] ‘Climategate’, is actually a misperception of the normal range of disagreements that may persist within a broad scientific consensus.”

More progress has been made in Europe, where the EU has established targets to reduce emissions by 20% (from 1990 levels) by 2020. The UK, which has been beset by similar denial movements, was nonetheless able to establish, as a legally binding target, an 80% reduction by 2050 and is a world leader on abatement.

In Australia, any prospect for consensus was lost when Tony Abbott used opposition to the Labor government’s proposed carbon market to replace Malcolm Turnbull as leader of the Federal Opposition in late 2009.

It used to be possible to hear right-wing politicians in Australia or the USA echo the Democratic congressman Henry Waxman who said last year:

“If my doctor told me I had cancer, I wouldn’t scour the country to find someone to tell me that I don’t need to worry about it.”

But such rationality has largely left the debate in both the US and Oz. In Australia, a reformulated carbon tax policy was enacted in November only after a highly partisan debate.

In Canada, the debate is a tad more balanced. The centre-right Liberal government in British Columbia passed the first carbon tax in North America in 2008, but the governing Federal Conservative party now offers a reliable “anti-Kyoto” partnership with Washington.

Overviews of the evidence for global warming, together with responses to common questions, are available from various sources, including:

  • Seven Answers to Climate Contrarian Nonsense, in Scientific American
  • Climate change: A Guide for the Perplexed, in New Scientist
  • Cooling the Warming Debate: Major New Analysis Confirms That Global Warming Is Real, in Science Daily
  • Remind me again: how does climate change work? on The Conversation

It should be acknowledged in these analyses that all projections are based on mathematical models with a significant level of uncertainty regarding highly complex and only partially understood systems.

As 2011 Australian Nobel Prize winner Brian Schmidt explained while addressing a National Forum on Mathematical Education:

“Climate models have uncertainty and the earth has natural variation … which not only varies year to year, but correlates decade to decade and even century to century. It is really hard to design a figure that shows this in a fair way — our brain cannot deal with the correlations easily.

“But we do have mathematical ways of dealing with this problem. The Australian academy reports currently indicate that the models with the effects of CO₂ are with 90% statistical certainty better at explaining the data than those without.

“Most of us who work with uncertainty know that 90% statistical uncertainty cannot be easily shown within a figure — it is too hard to see …”

“ … Since predicting the exact effects of climate change is not yet possible, we have to live with uncertainty and take the consensus view that warming can cover a wide range of possibilities, and that the view might change as we learn more.”

But uncertainty is no excuse for inaction. The proposed counter-measures (e.g. infrastructure renewal and modernisation, large-scale solar and wind power, better soil remediation and water management, not to mention carbon taxation) are affordable and most can be justified on their own merits, while the worst-case scenario — do nothing while the oceans rise and the climate changes wildly — is unthinkable.

Some in the first world protest that any green energy efforts are dwarfed by expanding energy consumption in China and elsewhere. Sure, China’s future energy needs are prodigious, but China also now leads the world in green energy investment.

By blaming others and focusing the debate on the level of human responsibility for warming and on the accuracy of predictions, the deniers have managed to derail long-term action in favour of short-term economic policies.

Who in the scientific community is promoting the denial of global warming? As it turns out, the leading figures in this movement have ties to conservative research institutes funded mostly by large corporations, and have a history of opposing the scientific consensus on issues such as tobacco and acid rain.

What’s more, those who lead the global warming denial movement – along with creationists, intelligent design writers and the “mathematicians” who flood our email inboxes with claims that pi is rational or other similar nonsense – are operating well outside the established boundaries of peer-reviewed science.

Austrian-born American physicist Fred Singer, arguably the leading figure of the denial movement, has only six peer-reviewed publications in the climate science field, and none since 1997.

After all, when issues such as these are “debated” in any setting other than a peer-reviewed journal or conference, one must ask: “If the author really has a solid argument, why isn’t he or she back in the office furiously writing up this material for submission to a leading journal, thereby assuring worldwide fame and glory, not to mention influence?”

In most cases, those who attempt to grab public attention through other means are themselves aware they are short-circuiting the normal process, and that they do not yet have the sort of solid data and airtight arguments that could withstand the withering scrutiny of scientific peer review.

When they press their views in public to a populace that does not understand how the scientific enterprise operates, they are being disingenuous.

With regards to claims scientists are engaged in a “conspiracy” to hide the “truth” on an issue such as global warming or evolution, one should ask how a secret “conspiracy” could be maintained in a worldwide, multicultural community of hundreds of thousands of competitive researchers.

As Benjamin Franklin wrote in his Poor Richard’s Almanac: “Three can keep a secret, provided two of them are dead.” Or as one of your present authors quipped, tongue-in-cheek, in response to a state legislator who was skeptical of evolution: “You have no idea how humiliating this is to me — there is a secret conspiracy among leading scientists, but no-one deemed me important enough to be included!”

There’s another way to think about such claims: we have tens of thousands of senior scientists in their late fifties or early sixties who have seen their retirement savings decimated by the recent stock market plunge. These are scientists who now wonder if the day will ever come when they are financially well-off enough to do their research without the constant stress and distraction of applying for grants (the majority of which are never funded).

All one of these scientists has to do to garner both worldwide fame and considerable fortune (through book contracts, the lecture circuit and TV deals) is to call a news conference and expose “the truth”. So why isn’t this happening?

The system of peer-reviewed journals and conferences sponsored by major professional societies is the only proper forum for the presentation and debate of new ideas, in any field of science or mathematics.

It has been stunningly successful: errors have been uncovered, fraud has been rooted out and bogus scientific claims (such as the 1903 N-ray claim, the 1989 cold fusion claim, and the more-recent assertion of an autism-vaccination link) have been debunked.

This all occurs with a level of reliability and at a speed that is hard to imagine in other human endeavours. Those who attempt to short-circuit this system are doing potentially irreparable harm to the integrity of the system.

They may enrich themselves or their friends, but they are doing grievous damage to society at large.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Jonathan Borwein (Jon), University of Newcastle and David H. Bailey, University of California, Davis

 


Applying math to design new materials and processes for drug manufacturing

Richard Braatz. Photo: Dominick Reuter

Trial-and-error experimentation underlies many biomedical innovations. This classic method — define a problem, test a proposed solution, learn from failure and try again — is the main route by which scientists discover new biomaterials and drugs today. This approach is also used to design ways of manufacturing these new materials, but the process is immensely time-consuming, producing a successful therapeutic product and its manufacturing process only after years of experiments, at considerable expense.

Richard Braatz, the Edwin R. Gilliland Professor of Chemical Engineering at MIT, applies mathematics to streamline the development of pharmaceuticals. Trained as an applied mathematician, Braatz is developing mathematical models to help scientists quickly and accurately design processes for manufacturing drug compounds with desired characteristics. Through mathematical simulations, Braatz has designed a system that significantly speeds the design of drug-manufacturing processes; he is now looking to apply the same mathematical approach to designing new biomaterials and nanoscale devices.

“Nanotechnology is very heavily experimental,” Braatz says. “There are researchers who do computations to gain insights into the physics or chemistry of nanoscale systems, but do not apply these computations for their design or manufacture. I want to push systematic design methods to the nanoscale, and to other areas where such methods aren’t really developed yet, such as biomaterials.”

From farm to formulas

Braatz’s own academic path was anything but systematic. He spent most of his childhood on an Oregon farm owned by his grandfather. Braatz says he absorbed an engineer’s way of thinking early on from his father, an electrician, by examining his father’s handiwork on the farm and reading his electrical manuals.

Braatz also developed a serious work ethic. From the age of 10, he awoke early every morning — even on school days — to work on the farm. In high school, he picked up a night job at the local newspaper, processing and delivering thousands of newspapers to stores and the post office, sometimes until just before dawn.

After graduating from high school in 1984, Braatz headed to Alaska for the summer. A neighbour had told him that work paid well up north, and Braatz took a job at a fish-processing facility, driving forklifts and hauling 100-pound bags of fishmeal 16 hours a day. He returned each summer for four years, eventually working his way up to plant operator, saving enough money each summer to pay for the next year’s tuition at Oregon State University.

As an undergraduate, Braatz first planned to major in electrical engineering. But finding the introductory coursework unstimulating — given the knowledge he’d absorbed from his father — he cast about for another major.

“There was no Internet back then, so you couldn’t Google; web searches didn’t exist,” Braatz says. “So I went to the library and opened an encyclopedia, and said, ‘OK, what other engineering [is] there?’”

Chemical engineering caught his eye; he had always liked and excelled at chemistry in high school. While pursuing a degree in chemical engineering, Braatz filled the rest of his schedule with courses in mathematics.

After graduation, Braatz went on to the California Institute of Technology, where he earned both a master’s and a PhD in chemical engineering. In addition to his research, Braatz took numerous math and math-heavy courses in electrical engineering, applied mechanics, chemical engineering and chemistry. The combination of real applications and mathematical theory revealed a field of study Braatz had not previously considered: applied mathematics.

“This training was a very good background for learning how to derive mathematical solutions to research problems,” Braatz says.

A systems approach

Soon after receiving his PhD, Braatz accepted an assistant professorship at the University of Illinois at Urbana-Champaign (UIUC). There, as an applied mathematician, he worked with researchers to tackle problems in a variety of fields: computer science, materials science, and electrical, chemical and mechanical engineering.

He spent eight years on a project spurred by a talk he attended at UIUC. In that talk, a representative of Merck described a major challenge in the pharmaceutical industry: controlling the size of crystals in the manufacture of any given drug. (The size and consistency of crystals determine, in part, a drug’s properties and overall efficacy.)

Braatz learned that while drug-manufacturing machinery was often monitored by sensors, much of the resulting data went unanalysed. He pored over the sensors’ data, and developed mathematical models to gain an understanding of what the sensors reveal about each aspect of the drug-crystallization process. Over the years, his team devised an integrated series of algorithms that combined efficiently designed experiments with mathematical models to yield a desired crystal size from a given drug solution. They worked the algorithms into a system that automatically adjusts settings at each phase of the manufacturing process to produce an optimal crystal size, based on a “recipe” given by the algorithms.

“Sometimes the recipes are very weird,” Braatz says. “It might be a strange path you have to follow to manufacture the right crystals.”

The automated system, which has since been adopted by Merck and other pharmaceutical companies, provides a big improvement in efficiency, Braatz says, avoiding the time-consuming trial-and-error approach many drug manufacturers had relied on to design a crystallization process for a new drug.
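
To give a feel for what “automatically adjusting settings” can mean, the sketch below shows a drastically simplified feedback loop that cools a solution so as to hold the supersaturation (the driving force for crystal growth) near a target value. Every equation and number in it is invented for illustration; it is not Braatz’s algorithm or the system adopted by industry.

```python
# Toy illustration of feedback-controlled cooling crystallization.
# All model equations and parameter values are assumptions made for this sketch.

def solubility(temp_c):
    """Assumed solubility (g solute per g solvent) as a function of temperature."""
    return 0.10 + 0.004 * temp_c

def simulate(setpoint=0.010, hours=8.0, dt=1.0 / 60):
    temp = 40.0                              # starting temperature, deg C
    conc = solubility(temp) + 0.012          # dissolved solute, initially supersaturated
    size = 10e-6                             # seed crystal size, metres
    k_growth, k_ctrl, k_mass = 1e-3, 200.0, 50.0   # assumed gains / lumped factors
    for _ in range(int(hours / dt)):
        s = conc - solubility(temp)          # supersaturation, the growth driving force
        growth = k_growth * max(s, 0.0)      # crystal growth rate, metres per hour
        size += growth * dt
        conc -= k_mass * growth * dt         # solute consumed by the growing crystals
        # Proportional controller: cool faster when supersaturation drops below target.
        temp -= k_ctrl * (setpoint - s) * dt
        temp = max(temp, 5.0)                # never cool below 5 deg C
    return temp, s, size

final_temp, final_s, final_size = simulate()
print(f"final temperature {final_temp:.1f} C, crystal size {final_size * 1e6:.0f} microns")
```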

In 2010, Braatz moved to MIT, where he is exploring mathematical applications in nanotechnology and tissue engineering — in particular, models to help design new drug-releasing materials. Such materials have the potential to deliver controlled, continuous therapies, but designing them currently takes years of trial-and-error experiments.

Braatz’s group is designing mathematical models to give researchers instructions, for example, on how to design materials that locally release drugs into a body’s cells at a desired rate. Braatz says approaching such a problem from a systematic perspective could potentially save years of time in the development of a biomedical material of high efficacy.

“Anything is a win if you could reduce those experiments from 10 years to several years,” Braatz says. “We’re talking hundreds of millions, billions of dollars. And the effect on people’s lives, you can’t put a price tag on that.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Jennifer Chu, Massachusetts Institute of Technology


Driving the best possible bargain now isn’t the best long-term strategy, according to game theory

Conventional wisdom says that you should never leave money on the table when negotiating. But research in my field suggests this could be exactly the wrong approach.

There’s mounting evidence that a short-term win at the bargaining table can mean a loss in terms of overall trust and cooperation. That can leave everyone—including the “winner”—worse off.

As a former executive, I’ve managed large contracts as both a buyer and a seller. Now, as a business professor, I study these trading partner relationships, exploring what works in practice. My work supports what economic theorists and social scientists have been arguing for years: The best results come when people collaborate to create long-term value instead of fighting for short-term wins.

What game are you playing?

Research into the art, science and practice of collaborative approaches dates back to the 1940s, when the mathematician John von Neumann and the economist Oskar Morgenstern used mathematical analysis to model competition and cooperation in living things.

Interest in collaborative approaches grew when researchers John Nash, John C. Harsanyi and Reinhard Selten won a Nobel Memorial Prize in Economic Sciences in 1994. Their work inspired academics around the world to delve deeper into what’s known as game theory.

Game theory is the study of the outcome of strategic interactions among decision makers. By using rigorous statistical methods, researchers can model what happens when people choose to cooperate or choose to take an aggressive, power-based approach to negotiation.

Many business leaders are taught strategies focusing on using their power and playing to win—often at the other party’s expense. In game theory, this is known as a zero-sum game, and it’s an easy trap to fall into.

But not every game has a clear winner or loser. In economics, a win-win game is called a nonzero-sum game. In this sort of situation, people aren’t fighting over whose slice of a pie will be larger. They’re working to grow the pie for everyone.

A second dimension of game theory is whether people are playing a one-shot or a repeated game. Think of a one-shot game as being like a trip to the flea market: you probably won’t see your trading partner again, so if you’re a jerk to them, the risk of facing the consequences is low.

An interesting twist uncovered by studying repeated games is that when one party uses their power in a negotiation, it creates the urge for the other party to retaliate.

The University of Michigan’s Robert Axelrod, a mathematician turned game theorist, dubbed this a “tit-for-tat” strategy. His research, perhaps best known from his book “The Evolution of Cooperation,” uses statistics to show that when individuals cooperate, they come out better than when they don’t.
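
The effect is easy to reproduce in a few lines of code. The sketch below runs a small Axelrod-style round-robin of the iterated prisoner’s dilemma; the payoff values are the conventional textbook ones and the strategy pool is chosen for illustration, not taken from Axelrod’s tournament.

```python
# Minimal round-robin tournament in the iterated prisoner's dilemma.
# Payoffs (assumed, conventional values): 3,3 mutual cooperation; 1,1 mutual
# defection; 5 and 0 for one-sided defection.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return "C" if not their_hist else their_hist[-1]   # copy the opponent's last move

def grim_trigger(my_hist, their_hist):
    return "D" if "D" in their_hist else "C"           # cooperate until betrayed once

def always_defect(my_hist, their_hist):
    return "D"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

strategies = {"tit-for-tat": tit_for_tat,
              "grim trigger": grim_trigger,
              "always defect": always_defect}

totals = {name: 0 for name in strategies}
for name_a, strat_a in strategies.items():
    for name_b, strat_b in strategies.items():
        totals[name_a] += play(strat_a, strat_b)[0]

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:14s} {total}")
# The cooperative strategies end with the highest totals, even though
# "always defect" wins any single head-to-head encounter.
```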

The case for leaving money on the table

Another Nobel laureate, American economist Oliver Williamson, has offered negotiating advice that most would call a paradigm shift—and some, a heresy.

That advice? Always leave money on the table—especially when you’ll be returning to the same “game” again. Why? According to Williamson, it sends a powerful signal of trustworthiness and credibility to one’s negotiating partner when someone consciously chooses to cooperate and build trust.

The opposite approach leads to lost trust and what the Nobel laureate economist Oliver Hart calls “shading.” This is a retaliatory behaviour that happens when a party isn’t getting the outcome it expected from a deal and feels the other party is to blame.

Simply put, noncollaborative approaches cause distrust and create friction, which add transaction costs and inefficiencies.

The million-dollar question is whether collaborative approaches work in practice. And from my vantage point as a scholar, the answer is yes. In fields ranging from health care to high-tech, I see growing real-world evidence backing up the insights of game theory.

The lessons are simple yet profound: Playing a game together to achieve mutual interests is better than playing exclusively with self-interest in mind.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Kate Vitasek, The Conversation

 


A revolution in knot theory

This knot has Gauss code O1U2O3U1O2U3. Credit: Graphic by Sam Nelson.

In the 19th century, Lord Kelvin made the inspired guess that elements are knots in the “ether”. Hydrogen would be one kind of knot, oxygen a different kind of knot—and so forth throughout the periodic table of elements. This idea led Peter Guthrie Tait to prepare meticulous and quite beautiful tables of knots, in an effort to elucidate when two knots are truly different. From the point of view of physics, Kelvin and Tait were on the wrong track: the atomic viewpoint soon made the theory of ether obsolete. But from the mathematical viewpoint, a gold mine had been discovered: The branch of mathematics now known as “knot theory” has been burgeoning ever since.

In his article “The Combinatorial Revolution in Knot Theory”, to appear in the December 2011 issue of the Notices of the AMS, Sam Nelson describes a novel approach to knot theory that has gained currency in the past several years and the mysterious new knot-like objects discovered in the process.

As sailors have long known, many different kinds of knots are possible; in fact, the variety is infinite. A *mathematical* knot can be imagined as a knotted circle: Think of a pretzel, which is a knotted circle of dough, or a rubber band, which is the “un-knot” because it is not knotted. Mathematicians study the patterns, symmetries, and asymmetries in knots and develop methods for distinguishing when two knots are truly different.

Mathematically, one thinks of the string out of which a knot is formed as being a one-dimensional object, and the knot itself lives in three-dimensional space. Drawings of knots, like the ones done by Tait, are projections of the knot onto a two-dimensional plane. In such drawings, it is customary to draw over-and-under crossings of the string as broken and unbroken lines. If three or more strands of the knot are on top of each other at a single point, we can move the strands slightly without changing the knot so that every point on the plane sits below at most two strands of the knot. A planar knot diagram is a picture of a knot, drawn in a two-dimensional plane, in which every point of the diagram represents at most two points in the knot. Planar knot diagrams have long been used in mathematics as a way to represent and study knots.

As Nelson reports in his article, mathematicians have devised various ways to represent the information contained in knot diagrams. One example is the Gauss code, which is a sequence of letters and numbers wherein each crossing in the knot is assigned a number and the letter O or U, depending on whether the crossing goes over or under. The Gauss code for a simple knot might look like this: O1U2O3U1O2U3.
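
A Gauss code is also easy to manipulate by computer. The short sketch below parses a code and checks the basic bookkeeping condition that every crossing appears exactly twice, once as an over-crossing and once as an under-crossing; deciding whether a code is planar, the question raised next, is much harder.

```python
import re
from collections import defaultdict

# Parse a Gauss code such as "O1U2O3U1O2U3" and check the basic consistency
# condition: every crossing number must appear exactly twice, once as an
# over-strand (O) and once as an under-strand (U). This does NOT test
# planarity/realizability, which is the subtler question in the article.

def parse_gauss_code(code):
    return re.findall(r"([OU])(\d+)", code.upper())

def is_consistent(code):
    crossings = defaultdict(list)
    for letter, number in parse_gauss_code(code):
        crossings[number].append(letter)
    return all(sorted(v) == ["O", "U"] for v in crossings.values())

print(is_consistent("O1U2O3U1O2U3"))  # True  (the code from the caption above)
print(is_consistent("O1U2O1U2"))      # False (crossing 1 appears twice as an over-strand)
```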

In the mid-1990s, mathematicians discovered something strange. There are Gauss codes for which it is impossible to draw planar knot diagrams but which nevertheless behave like knots in certain ways. In particular, those codes, which Nelson calls *nonplanar Gauss codes*, work perfectly well in certain formulas that are used to investigate properties of knots. Nelson writes: “A planar Gauss code always describes a [knot] in three-space; what kind of thing could a nonplanar Gauss code be describing?” As it turns out, there are “virtual knots” that have legitimate Gauss codes but do not correspond to knots in three-dimensional space. These virtual knots can be investigated by applying combinatorial techniques to knot diagrams.

Just as new horizons opened when people dared to consider what would happen if -1 had a square root—and thereby discovered complex numbers, which have since been thoroughly explored by mathematicians and have become ubiquitous in physics and engineering—mathematicians are finding that the equations they used to investigate regular knots give rise to a whole universe of “generalized knots” that have their own peculiar qualities. Although they seem esoteric at first, these generalized knots turn out to have interpretations as familiar objects in mathematics. “Moreover,” Nelson writes, “classical knot theory emerges as a special case of the new generalized knot theory.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to American Mathematical Society