Tiny Balls Fit Best Inside A Sausage, Physicists Confirm

Mathematicians have long been fascinated by the most efficient way of packing spheres in a space, and now physicists have confirmed that the best place to put them is into a sausage shape, at least for small numbers of balls.

Simulations show microscopic plastic balls within a cell membrane

What is the most space-efficient way to pack tennis balls or oranges? Mathematicians have studied this “sphere-packing” problem for centuries, but surprisingly little attention has been paid to replicating the results in the real world. Now, physical experiments involving microscopic plastic balls have confirmed what mathematicians had long suspected – with a small number of balls, it is best to stick them in a sausage.

Kepler was the first person to tackle sphere packing, suggesting in 1611 that a pyramid would be the best way to pack cannonballs for long voyages, but this answer was only fully proven by mathematicians in 2014.

This proof only considers the best way of arranging an infinite number of spheres, however. For finite sphere packings, simply placing the balls in a line, or sausage, is more efficient until there are around 56 spheres. At this point, the balls experience what mathematicians call the “sausage catastrophe” and something closer to pyramid packing becomes more efficient.
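To see why a line of balls does so well at first, here is a minimal sketch (my own illustration, not part of the mathematical proof) of how densely N touching unit spheres fill their convex hull when arranged as a sausage. The hull is a cylinder with two hemispherical caps, and the density works out to 2N/(3N−1), which tends to 2/3 for long sausages, below the roughly 0.74 achieved by infinite pyramid-style packing, which is why compact clusters eventually win.

```python
from math import pi, sqrt

def sausage_density(n: int, r: float = 1.0) -> float:
    """Fraction of the convex hull filled by n touching spheres laid in a row.

    The hull of a 'sausage' of n spheres of radius r is a spherocylinder:
    a cylinder of length 2*r*(n - 1) plus two hemispherical end caps.
    """
    sphere_volume = n * (4 / 3) * pi * r**3
    hull_volume = pi * r**2 * (2 * r * (n - 1)) + (4 / 3) * pi * r**3
    return sphere_volume / hull_volume

for n in (1, 2, 9, 56):
    print(n, round(sausage_density(n), 4))

# Densities approach 2/3 as n grows, while infinite pyramid (face-centred
# cubic) packing reaches pi / (3 * sqrt(2)) ≈ 0.7405, so for large enough n
# a compact cluster can beat the sausage.
print(round(pi / (3 * sqrt(2)), 4))
```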

But what about back in the real world? Sphere-packing theories assume that the balls are perfectly hard and don’t attract or repel each other, but this is rarely true in real life – think of the squish of a tennis ball or an orange.

One exception is microscopic polystyrene balls, which are very hard and basically inert. Hanumantha Rao Vutukuri at the University of Twente in the Netherlands and his team, who were unaware of mathematical sphere-packing theories, were experimenting with inserting these balls into empty cell membranes and were surprised to find them forming sausages.

“One of my students observed a linear packing, but it was quite puzzling,” says Vutukuri. “We thought that there was some fluke, so he repeated it a couple of times and every time he observed similar results. I was wondering, ‘why is this happening?’ It’s a bit counterintuitive.”

After reading up on sphere packing, Vutukuri and his team decided to investigate and carried out simulations for different numbers of polystyrene balls in a bag. They then compared their predictions with experiments using up to nine real polystyrene balls that had been squeezed into cell membranes immersed in a liquid solution. They could then shrink-wrap the balls by changing the concentration of the solution, causing the membranes to tighten, and see what formation the balls settled in using a microscope.

“For up to nine spheres, we showed, both experimentally and in simulations, that the sausage is the best packed,” says team member Marjolein Dijkstra at Utrecht University, the Netherlands. With more than nine balls, the membrane became deformed by the pressure of the balls. The team ran simulations for up to 150 balls and reproduced the sausage catastrophe, where packing the balls in polyhedral clusters suddenly becomes more efficient, at somewhere between 56 and 70 balls.

The sausage formation for a small number of balls is unintuitive, says Erich Müller at Imperial College London, but makes sense because of the large surface area of the membrane with respect to the balls at low numbers. “When dimensions become really, really small, then the wall effects become very important,” he says.

The findings could have applications in drug delivery, such as how to most efficiently fit hard antibiotic molecules, like gold, inside cell-like membranes, but the work doesn’t obviously translate at this point, says Müller.

 

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Alex Wilkins*


What The Mathematics of Knots Reveals About The Shape of The Universe

Knot theory is linked to many other branches of science, including those that tell us about the cosmos.

The mathematical study of knots started with a mistake. In the 1800s, mathematician and physicist William Thomson, also known as Lord Kelvin, suggested that the elemental building blocks of matter were knotted vortices in the ether: invisible microscopic currents in the background material of the universe. His theory fell by the wayside fairly quickly, but this first attempt to classify how curves could be knotted grew into the modern mathematical field of knot theory. Today, knot theory is not only connected to many branches of theoretical mathematics but also to other parts of science, like physics and molecular biology. It’s not obvious what your shoelace has to do with the shape of the universe, but the two may be more closely related than you think.

As it turns out, a tangled necklace offers a better model of a knot than a shoelace: to a mathematician, a knot is a loop in three-dimensional space rather than a string with loose ends. Just as a physical loop of string can stretch and twist and rotate, so can a mathematical knot – these loops are floppy rather than fixed. If we studied strings with free ends, they could wiggle around and untie themselves, but a loop stays knotted unless it’s cut.

Most questions in knot theory come in two varieties: sorting knots into classes and using knots to study other mathematical objects. I’ll try to give a flavour of both, starting with the simplest possible example: the unknot.

Draw a circle on a piece of paper. Congratulations, you’ve just constructed an unknot! This is the name for any loop in three-dimensional space that is the boundary of a disc. When you draw a circle on a piece of paper, you can see this disc as the space inside the circle, and your curve continues to be an unknot if you crumple the paper up, toss it through the air, flatten it out and then do some origami. As long as the disc is intact, no matter how distorted, the boundary is always an unknot.

Things get more interesting when you start with just the curve. How can you tell if it’s an unknot? There may secretly be a disc that can fill in the loop, but with no limits on how deformed the disc could be, it’s not clear how you can figure this out.

Two unknots

It turns out that this question is both hard and important: the first step in studying complicated objects is distinguishing them from simple ones. It’s also a question that gets answered inside certain bacterial cells each time they replicate. In the nuclei of these cells, the DNA forms a loop, rather than a strand with loose ends, and sometimes these loops end up knotted. However, the DNA can replicate only when the loop is an unknot, so the basic life processes of the cell require a process for turning a potentially complicated loop into an unknotted one.

A class of proteins called topoisomerases unknot tangled loops of DNA by cutting a strand, moving the free ends and then reattaching them. In a mathematical context, this operation is called a “crossing change”, and it’s known that any loop can be turned into the unknot by some number of crossing changes. However, there’s a puzzle in this process, since random crossing changes are unlikely to simplify a knot. Each topoisomerase operates locally, but collectively they’re able to reliably unknot the DNA for replication. Topoisomerases were discovered more than 50 years ago, but biologists are still studying how they unknot DNA so effectively.

When mathematicians want to identify a knot, they don’t turn to a protein to unknot it for them.  Instead, they rely on invariants, mathematical objects associated with knots. Some invariants are familiar things like numbers, while others are elaborate algebraic structures. The best invariants have two properties: they’re practical to compute, given the input of a specific knot, and they distinguish many different classes of knots from each other. It’s easy to define an invariant with only one of these properties, but a computable and effective knot invariant is a rare find.

The modern era of knot theory began with the introduction of an invariant called the Jones Polynomial in the 1980s. Vaughan Jones was studying statistical mechanics when he discovered a process that assigns a polynomial – a type of simple algebraic expression – to any knot. The method he used was technical, but the essential feature is that no amount of wiggling, stretching or twisting changes the output. The Jones Polynomial of an unknot is 1, no matter how complicated the associated disc might be.

Jones’s discovery caught the attention of other researchers, who found simpler techniques for computing the same polynomial. The result was an invariant that satisfies both the conditions listed above: the Jones Polynomial can be computed from a drawing of a knot on paper, and many thousands of knots can be distinguished by the fact that they have different Jones Polynomials.

However, there are still many things we don’t know about the Jones Polynomial, and one of the most tantalising questions is which knots it can detect. Most invariants distinguish some knots while lumping others together, and we say an invariant detects a knot if all the examples sharing a certain value are actually deformations of each other. There are certainly pairs of distinct knots with the same Jones Polynomial, but after decades of study, we still don’t know whether any knot besides the unknot has the polynomial 1. With computer assistance, experts have examined nearly 60 trillion examples of distinct knots without finding any new knots whose Jones Polynomials equal 1.

The Jones Polynomial has applications beyond knot detection. To see this, let’s return to the definition of an unknot as a loop that bounds a disc. In fact, every knot is the boundary of some surface – what distinguishes an unknot is that this surface is particularly simple. There’s a precise way to rank the complexity of surfaces, and we can use this to rank the complexity of knots. In this classification, the simplest knot is the unknot, and the second simplest is the trefoil, which is shown below.

Trefoil knot

To build a surface with a trefoil boundary, start with a strip of paper. Twist it three times and then glue the ends together. This is more complicated than a disc, but still pretty simple. It also gives us a new question to investigate: given an arbitrary knot, where does it fit in the ranking of knot complexity? What’s the simplest surface it can bound? Starting with a curve and then hunting for a surface may seem backwards, but in some settings, the Jones Polynomial answers this question: the coefficients of the knot polynomial can be used to estimate the complexity of the surfaces it bounds.

Joan Licata

Knots also help us classify other mathematical objects. We can visually distinguish the two-dimensional surface of a sphere from the surface of a torus (the shape of a ring donut), but an ant walking on one of these surfaces might need knot theory to tell them apart. On the surface of a torus, there are loops that can’t be pulled any tighter, while any loop lying on a sphere can contract to a point.

We live inside a universe of three physical dimensions, so like the ant on a surface, we lack a bird’s eye view that could help us identify its global shape. However, we can ask the analogous question: can each loop we encounter shrink without breaking, or is there a shortest representative? Mathematicians can classify three-dimensional spaces by the existence of the shortest knots they contain. Presently, we don’t know if some knots twisting through the universe are unfathomably long or if every knot can be made as small as one of Lord Kelvin’s knotted vortices.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Joan Licata*


How Mathematics Can Help You Divide Anything Up Fairly

Whether you are sharing a cake or a coastline, maths can help make sure everyone is happy with their cut, says Katie Steckles.

One big challenge in life is dividing things fairly. From sharing a tasty snack to allocating resources between nations, having a strategy to divvy things up equitably will make everyone a little happier.

But it gets complicated when the thing you are dividing isn’t an indistinguishable substance: maybe the cake you are sharing has a cherry on top, and the piece with the cherry (or the area of coastline with good fish stocks) is more desirable. Luckily, maths – specifically game theory, which deals with strategy and decision-making when people interact – has some ideas.

When splitting between two parties, you might know a simple rule, proven to be mathematically optimal: I cut, you choose. One person divides the cake (or whatever it is) and the other gets to pick which piece they prefer.

Since the person cutting the cake doesn’t choose which piece they get, they are incentivised to cut the cake fairly. Then when the other person chooses, everyone is satisfied – the cutter would be equally happy with either piece, and the chooser gets their favourite of the two options.

This results in what is called an envy-free allocation – neither participant can claim they would rather have the other person’s share. This also takes care of the problem of non-homogeneous objects: if some parts of the cake are more desirable, the cutter can position their cut so the two pieces are equal in value to them.

What if there are more people? It is more complicated, but still possible, to produce an envy-free allocation with several so-called fair-sharing algorithms.

Let’s say Ali, Blake and Chris are sharing a cake three ways. Ali cuts the cake into three pieces, equal in value to her. Then Blake judges if there are at least two pieces he would be happy with. If Blake says yes, Chris chooses a piece (happily, since he gets free choice); Blake chooses next, pleased to get one of the two pieces he liked, followed by Ali, who would be satisfied with any of the pieces. If Blake doesn’t think Ali’s split was equitable, Chris looks to see if there are two pieces he would take. If yes, Blake picks first, then Chris, then Ali.

If both Blake and Chris reject Ali’s initial chop, then there must be at least one piece they both thought was no good. This piece goes to Ali – who is still happy, because she thought the pieces were all fine – and the remaining two pieces get smooshed back together (that is a mathematical term) to create one piece of cake for Blake and Chris to perform “I cut, you choose” on.
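For readers who like to see the logic laid out explicitly, here is a minimal Python sketch of the three-way procedure described above. It assumes each person can put a number on any piece (as a fraction of the whole cake) and that a piece is acceptable to someone if they value it at a third or more; the valuations at the bottom are invented purely for illustration.

```python
def allocate(pieces, value_blake, value_chris):
    """Share three pieces cut by Ali (equal in her eyes) among Ali, Blake and Chris.

    pieces      : three piece labels
    value_blake : dict piece -> Blake's valuation (fraction of the whole cake)
    value_chris : dict piece -> Chris's valuation

    Returns an allocation, or the fallback instruction when both Blake and
    Chris reject Ali's cut.
    """
    third = 1 / 3
    ok_blake = [p for p in pieces if value_blake[p] >= third]
    ok_chris = [p for p in pieces if value_chris[p] >= third]

    if len(ok_blake) >= 2:
        # Chris picks freely; Blake still has an acceptable piece left; Ali takes the rest.
        chris = max(pieces, key=lambda p: value_chris[p])
        blake = max((p for p in ok_blake if p != chris), key=lambda p: value_blake[p])
        ali = next(p for p in pieces if p not in (chris, blake))
        return {"Chris": chris, "Blake": blake, "Ali": ali}

    if len(ok_chris) >= 2:
        # Blake rejected the cut but Chris is happy with two pieces, so Blake picks first.
        blake = max(pieces, key=lambda p: value_blake[p])
        chris = max((p for p in ok_chris if p != blake), key=lambda p: value_chris[p])
        ali = next(p for p in pieces if p not in (blake, chris))
        return {"Blake": blake, "Chris": chris, "Ali": ali}

    # Both rejected the cut, so some piece is unacceptable to both; it goes to Ali,
    # and the other two pieces are recombined for "I cut, you choose".
    ali = next(p for p in pieces if p not in ok_blake and p not in ok_chris)
    leftovers = [p for p in pieces if p != ali]
    return {"Ali": ali, "recombine for I cut, you choose": leftovers}


# Invented example valuations for Ali's three pieces.
pieces = ["left", "middle", "right"]
print(allocate(pieces,
               value_blake={"left": 0.40, "middle": 0.35, "right": 0.25},
               value_chris={"left": 0.30, "middle": 0.30, "right": 0.40}))
```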

While this seems long-winded, it ensures mathematically optimal sharing – and while it does get even more complicated, it can be extended to larger groups. So whether you are sharing a treat or a divorce settlement, maths can help prevent arguments.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Katie Steckles*


Mathematicians Find 27 Tickets That Guarantee UK National Lottery Win

Buying a specific set of 27 tickets for the UK National Lottery will mathematically guarantee that you win something.

Buying 27 tickets ensures a win in the UK National Lottery

You can guarantee a win in every draw of the UK National Lottery by buying just 27 tickets, say a pair of mathematicians – but you won’t necessarily make a profit.

While there are many variations of lottery in the UK, players in the standard “Lotto” choose six numbers from 1 to 59, paying £2 per ticket. Six numbers are randomly drawn and prizes are awarded for tickets matching two or more.

David Cushing and David Stewart at the University of Manchester, UK, claim that despite there being 45,057,474 combinations of draws, it is possible to guarantee a win with just 27 specific tickets. They say this is the optimal number, as the same can’t be guaranteed with 26.
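The headline numbers are easy to verify; a quick sketch (just confirming the figures quoted in this article, not reconstructing the tickets themselves):

```python
from math import comb

print(comb(59, 6))   # 45,057,474 possible draws of six balls from 59
print(27 * 2)        # £54: the cost of 27 tickets at £2 each
```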

The proof of their idea relies on a mathematical field called finite geometry and involves placing each of the numbers from 1 to 59 in pairs or triplets on a point within one of five geometrical shapes, then using these to generate lottery tickets based on the lines within the shapes. The five shapes offer 27 such lines, meaning that 27 tickets bought using those numbers, at a cost of £54, will hit every possible winning combination of two numbers.

The 27 tickets that guarantee a win on the UK National Lottery

Their research yielded a specific list of 27 tickets (see above), but they say subsequent work has shown that there are two other combinations of 27 tickets that will also guarantee a win.

“We’ve been thinking about this problem for a few months. I can’t really explain the thought process behind it,” says Cushing. “I was on a train to Manchester and saw this [shape] and that’s the best logical [explanation] I can give.”

Looking at the winning numbers from the 21 June Lotto draw, the pair found their method would have won £1810. But the same numbers played on 1 July would have matched just two balls on three of the tickets – still a technical win, but giving a prize of just three “lucky dip” tries on a subsequent lottery, each of which came to nothing.

Stewart says proving that 27 tickets could guarantee a win was the easiest part of the research, while proving it is impossible to guarantee a win with 26 was far trickier. He estimates that the number of calculations needed to verify that would be 10¹⁶⁵, far more than the number of atoms in the universe. “There’d be absolutely no way to brute force this,” he says.

The solution was a computer programming language called Prolog, developed in France in 1971, which Stewart says is the “hero of the story”. Unlike traditional computer languages where a coder sets out precisely what a machine should do, step by step, Prolog instead takes a list of known facts surrounding a problem and works on its own to deduce whether or not a solution is possible. It takes these facts and builds on them or combines them in order to slowly understand the problem and whittle down the array of possible solutions.

“You end up with very, very elegant-looking programs,” says Stewart. “But they are quite temperamental.”

Cushing says the research shouldn’t be taken as a reason to gamble more, particularly as it doesn’t guarantee a profit, but hopes instead that it encourages other researchers to delve into using Prolog on thorny mathematical problems.

A spokesperson from Camelot, the company that operates the lottery, told New Scientist that the paper made for “interesting reading”.

“Our approach has always been to have lots of people playing a little, with players individually spending small amounts on our games,” they say. “It’s also important to bear in mind that, ultimately, Lotto is a lottery. Like all other National Lottery draw-based games, all of the winning Lotto numbers are chosen at random – any one number has the same and equal chance of being drawn as any other, and every line of numbers entered into a draw has the same and equal chance of winning as any other.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Matthew Sparkes*


Make Mine a Double: Moore’s Law And The Future of Mathematics

Our present achievements will look like child’s play in a few years.

What do iPhones, Twitter, Netflix, cleaner cities, safer cars, state-of-the-art environmental management and modern medical diagnostics have in common? They are all made possible by Moore’s Law.

Moore’s Law stems from a seminal 1965 article by Intel co-founder Gordon Moore. He wrote:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year … Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least ten years. That means, by 1975, the number of components per integrated circuit for minimum cost will be 65,000.”

Moore noted that in 1965 engineering advances were enabling a doubling in semiconductor density every 12 months, but this rate was later modified to roughly 18 months. Informally, we may think of this as doubling computer performance.
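To get a sense of what an 18-month doubling time compounds to, here is a quick back-of-the-envelope calculation (my own arithmetic, not a figure from Moore's article) over a 45-year span:

```python
# Doubling every 18 months, compounded over 45 years.
years = 45
doublings = years * 12 / 18               # one doubling per 18 months -> 30 doublings
growth_factor = 2 ** doublings
print(doublings, f"{growth_factor:.2e}")  # 30.0  1.07e+09, roughly a billion-fold increase
```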

In any event, Moore’s Law has now continued unabated for 45 years, defying several confident predictions it would soon come to a halt, and represents a sustained exponential rate of progress that is without peer in the history of human technology. Here is a graph of Moore’s Law, shown with the transistor count of various computer processors:

Where we’re at with Moore’s Law

At the present time, researchers are struggling to keep Moore’s Law on track. Processor clock rates have stalled, as chip designers have struggled to control energy costs and heat dissipation, but the industry’s response has been straightforward — simply increase the number of processor “cores” on a single chip, together with associated cache memory, so that aggregate performance continues to track or exceed Moore’s Law projections.

The capacity of leading-edge DRAM main memory chips continues to advance apace with Moore’s Law. The current state of the art in computer memory devices is a 3D design, which will be jointly produced by IBM and Micron Technology, according to a December 2011 announcement by IBM representatives.

As things stand, the best bet for the future of Moore’s Law is nanotubes — submicroscopic tubes of carbon atoms that have remarkable properties.

According to a recent New York Times article, Stanford researchers have created prototype electronic devices by first growing billions of carbon nanotubes on a quartz surface, then coating them with an extremely fine layer of gold atoms. They then used a piece of tape (literally!) to pick the gold atoms up and transfer them to a silicon wafer. The researchers believe that commercial devices could be made with these components as early as 2017.

Moore’s Law in science and maths

So what does this mean for researchers in science and mathematics?

Plenty, as it turns out. A scientific laboratory typically uses hundreds of high-precision devices that rely crucially on electronic designs, and with each step of Moore’s Law, these devices become ever cheaper and more powerful. One prominent case is DNA sequencers. When scientists first completed sequencing a human genome in 2001, at a cost of several hundred million US dollars, observers were jubilant at the advances in equipment that had made this possible.

Now, only ten years later, researchers expect to reduce this cost to only US$1,000 within two years and genome sequencing may well become a standard part of medical practice. This astounding improvement is even faster than Moore’s Law!

Applied mathematicians have benefited from Moore’s Law in the form of scientific supercomputers, which typically employ hundreds of thousands of state-of-the-art components. These systems are used for tasks such as climate modelling, product design and biological structure calculations.

Today, the world’s most powerful system is a Japanese supercomputer that recently ran the industry-standard Linpack benchmark test at more than ten “petaflops,” or, in other words, 10 quadrillion floating-point operations per second.

Below is a graph of the Linpack performance of the world’s leading-edge systems over the time period 1993-2011, courtesy of the website Top 500. Note that over this 18-year period, the performance of the world’s number one system has advanced more than five orders of magnitude. The current number one system is more powerful than the sum of the world’s top 500 supercomputers just four years ago.

 

Linpack performance over time.

Pure mathematicians have been a relative latecomer to the world of high-performance computing. The present authors well remember the era, just a decade or two ago, when the prevailing opinion in the community was that “real mathematicians don’t compute.”

But thanks to a new generation of mathematical software tools, not to mention the ingenuity of thousands of young, computer-savvy mathematicians worldwide, remarkable progress has been achieved in this arena as well (see our 2011 AMS Notices article on exploratory experimentation in mathematics).

In 1963 Daniel Shanks, who had calculated pi to 100,000 digits, declared that computing one billion digits would be “forever impossible.” Yet this level was reached in 1989. In 1989, famous British physicist Roger Penrose, in the first edition of his best-selling book The Emperor’s New Mind, declared that humankind would likely never know whether a string of ten consecutive sevens occurs in the decimal expansion of pi. Yet this was found just eight years later, in 1997.

Computers are certainly being used for more than just computing and analysing digits of pi. In 2003, the American mathematician Thomas Hales completed a computer-based proof of Kepler’s conjecture, namely the long-hypothesised fact that the simple way the grocer stacks oranges is in fact the optimal packing for equal-diameter spheres. Many other examples could be cited.

Future prospects

So what does the future hold? Assuming that Moore’s Law continues unabated at approximately the same rate as the present, and that obstacles in areas such as power management and system software can be overcome, we will see, by the year 2021, large-scale supercomputers that are 1,000 times more powerful and capacious than today’s state-of-the-art systems — “exaflops” computers (see NAS Report). Applied mathematicians eagerly await these systems for calculations, such as advanced climate models, that cannot be done on today’s systems.

Pure mathematicians will use these systems as well to intuit patterns, compute integrals, search the space of mathematical identities, and solve intricate symbolic equations. If, as one of us discussed in a recent Conversation article, such facilities can be combined with machine intelligence, such as a variation of the hardware and software that enabled an IBM system to defeat the top human contestants in the North American TV game show Jeopardy!, we may see a qualitative advance in mathematical discovery and even theory formation.

It is not a big leap to imagine that within the next ten years tailored and massively more powerful versions of Siri (Apple’s new iPhone assistant) will be an integral part of mathematics, not to mention medicine, law and just about every other part of human life.

Some observers, such as those in the Singularity movement, are even more expansive, predicting a time just a few decades hence when technology will advance so fast that at the present time we cannot possibly conceive or predict the outcome.

Your present authors do not subscribe to such optimistic projections, but even if more conservative predictions are realised, it is clear that the digital future looks very bright indeed. We will likely look back at the present day with the same technological disdain with which we currently view the 1960s.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jonathan Borwein (Jon) and David H. Bailey*

 


Commutative Diagrams Explained

Have you ever come across the words “commutative diagram” before? Perhaps you’ve read or heard someone utter a sentence that went something like

“For every [bla bla] there exists a [yadda yadda] such that the following diagram commutes.”

and perhaps it left you wondering what it all meant. I was certainly mystified when I first came across a commutative diagram. And it didn’t take long to realize that the phrase “the following diagram commutes” was (and is) quite ubiquitous. It appears in theorems, propositions, lemmas and corollaries almost everywhere!

So what’s the big deal with diagrams? And what does commute mean anyway?? It turns out the answer is quite simple.

Do you know what a composition of two functions is?

Then you know what a commutative diagram is!

A commutative diagram is simply the picture behind function composition.

Truly, it is that simple. To see this, suppose A and B are sets and f is a function from A to B. Since f maps (i.e. assigns) elements in A to elements in B, it is often helpful to denote that process by an arrow.

And there you go. That’s an example of a diagram. But suppose we have another function g from B to another set C, and suppose f and g are composable. Let’s denote their composition by h = g∘f. Then both g and h can be depicted as arrows, too.

But what is the arrow A→C really? I mean, really? Really it’s just the arrows f and g lined up side-by-side.

But maybe we think that drawing h’s arrow curved upwards like that takes up too much space, so let’s bend the diagram a bit and redraw it like this:

This little triangle is the paradigm example of a commutative diagram. It’s a diagram because it’s a schematic picture of arrows that represent functions. And it commutes because the diagonal function IS EQUAL TO the composition of the vertical and horizontal functions, i.e. h(a) = g(f(a)) for every a ∈ A. So a diagram “commutes” if all paths that share a common starting and ending point are the same. In other words, your diagram commutes if it doesn’t matter how you commute from one location to another in the diagram.
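If you like to think in code, “the triangle commutes” is just the assertion that composing the two legs agrees with the diagonal everywhere. Here is a tiny check with made-up functions, chosen so that the triangle does commute:

```python
# Hypothetical example: A is a finite set of integers, and h = g ∘ f by construction.
A = range(-5, 6)

def f(a):          # f : A -> B (here B is the integers)
    return a + 1

def g(b):          # g : B -> C (here C is the integers)
    return 3 * b

def h(a):          # the diagonal arrow A -> C
    return 3 * a + 3

# The triangle commutes exactly when both paths from A to C agree on every element.
assert all(h(a) == g(f(a)) for a in A)
print("the diagram commutes on A")
```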

But be careful.

Not every diagram is a commutative diagram.

The picture on the right is a bona fide diagram of real-valued functions, but it is definitely not commutative. If we trace the number 1 around the diagram, it maps to 0 along the diagonal arrow, but it maps to 1 itself if we take the horizontal-then-vertical route. And 0 ≠ 1. So to indicate if/when a given diagram is commutative, we have to say it explicitly. Or sometimes folks will use the symbols shown below to indicate commutativity:

I think now is a good time to decode another phrase that often accompanies the commutative-diagram parlance. Returning to our f, g, h example, we assumed that f, g and h = g∘f already existed. But suppose we only knew about the existence of f: A→B and some other map, say, z: A→C. Then we might like to know, “Does there exist a map g: B→C such that z = g∘f?” Perhaps the answer is no. Or perhaps the answer is yes, but only under certain hypotheses.* Well, if such a g does exist, then we’ll say “…there exists a map g such that the following diagram commutes:

but folks might also say

“…there exists a map g such that z factors through g”

The word “factors” means just what you think it means. The diagram commutes if and only if z = g∘f, and that notation suggests that we may think of g as a factor of z, analogous to how 2 is a factor of 6 since 6 = 2⋅3.
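A concrete (entirely made-up) instance of factoring: take f(x) = 2x and z(x) = 4x². Then g(y) = y² does the job, since g(f(x)) = (2x)² = 4x² = z(x), so z factors as g∘f.

```python
A = range(-10, 11)

def f(x):          # f : A -> B, the given map
    return 2 * x

def z(x):          # z : A -> C, the map we would like to factor
    return 4 * x * x

def g(y):          # candidate g : B -> C
    return y * y

# z factors through f via g precisely when z = g ∘ f on all of A.
assert all(z(x) == g(f(x)) for x in A)
print("z = g ∘ f, so the triangle commutes")
```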

By the way, we’ve only chatted about sets and functions so far, but diagrams make sense in any context in which you have mathematical objects and arrows. So we can talk about diagrams of groups and group homomorphisms, or vector spaces and linear transformations, or topological spaces and continuous maps, or smooth manifolds and smooth maps, or functors and natural transformations and so on. Diagrams make sense in any category. And as you can imagine, there are more complicated diagrams than triangular ones. For instance, suppose we have two more maps i: A→D and j: D→C such that h is equal to not only g∘f but also j∘i. Then we can express the equality g∘f = h = j∘i by a square:

Again, commutativity simply tells us that the three ways of getting from A to C are all equivalent. And diagrams can get really crazy and involve other shapes too. They can even be three-dimensional! Here are some possibilities where I’ve used bullets in lieu of letters for the source and target of the arrows.

No matter the shape, the idea is the same: Any map can be thought of as a path or a process from A to B, from start to finish. And we use diagrams to capitalize on that by literally writing down “A” and “B” (or “∙” and “∙”) and by literally drawing a path, in the form of an arrow, between them.

 

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*

 


Everything You Need To Know About Statistics (But Were Afraid To Ask)

Does the thought of p-values and regressions make you break out in a cold sweat? Never fear – read on for answers to some of those burning statistical questions that keep you up 87.9% of the night.

  • What are my hypotheses?

There are two types of hypothesis you need to get your head around: null and alternative. The null hypothesis always states the status quo: there is no difference between two populations, there is no effect of adding fertiliser, there is no relationship between weather and growth rates.

Basically, nothing interesting is happening. Generally, scientists conduct an experiment seeking to disprove the null hypothesis. We build up evidence, through data collection, against the null, and if the evidence is sufficient we can say with a degree of probability that the null hypothesis is not true.

We then accept the alternative hypothesis. This hypothesis states the opposite of the null: there is a difference, there is an effect, there is a relationship.

  • What’s so special about 5%?

One of the most common numbers you stumble across in statistics is alpha = 0.05 (or in some fields 0.01 or 0.10). Alpha denotes the fixed significance level for a given hypothesis test. Before starting any statistical analyses, along with stating hypotheses, you choose a significance level you’re testing at.

This states the threshold at which you are prepared to accept the possibility of a Type I Error – otherwise known as a false positive – rejecting a null hypothesis that is actually true.

  • Type what error?

Most often we are concerned primarily with reducing the chance of a Type I Error over its counterpart (Type II Error – accepting a false null hypothesis). It all depends on what the impact of either error will be.

Take a pharmaceutical company testing a new drug; if the drug actually doesn’t work (a true null hypothesis) then rejecting this null and asserting that the drug does work could have huge repercussions – particularly if patients are given this drug over one that actually does work. The pharmaceutical company would be concerned primarily with reducing the likelihood of a Type I Error.

Sometimes, a Type II Error could be more important. Environmental testing is one such example; if the effect of toxins on water quality is examined, and in truth the null hypothesis is false (that is, the presence of toxins does affect water quality) a Type II Error would mean accepting a false null hypothesis, and concluding there is no effect of toxins.

The down-stream issues could be dire, if toxin levels are allowed to remain high and there is some health effect on people using that water.

Do you know the difference between continuous and categorical variables?

  • What is a p-value, really?

Because p-values are thrown about in science like confetti, it’s important to understand what they do and don’t mean. A p-value expresses the probability of getting a given result from a hypothesis test, or a more extreme result, if the null hypothesis were true.

Given we are trying to reject the null hypothesis, what this tells us is the odds of getting our experimental data if the null hypothesis is correct. If the odds are sufficiently low we feel confident in rejecting the null and accepting the alternative hypothesis.

What is sufficiently low? As mentioned above, the typical fixed significance level is 0.05. So if the probability portrayed by the p-value is less than 5% you reject the null hypothesis. But a fixed significance level can be deceiving: if 5% is significant, why is 6% not?

It pays to remember that such probabilities are continuous, and any given significance level is arbitrary. In other words, don’t throw your data away simply because you get a p-value of 6-10%.
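A quick way to internalise what the 5% significance level means is to simulate many experiments in which the null hypothesis really is true and count how often the test rejects it anyway. A rough sketch, using a two-sample t-test purely as an example analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_experiments, false_positives = 0.05, 10_000, 0

for _ in range(n_experiments):
    # Two groups drawn from the *same* population, so the null hypothesis is true.
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)
    if stats.ttest_ind(group_a, group_b).pvalue < alpha:
        false_positives += 1

# Roughly 5% of tests reject a true null hypothesis: that is the Type I error rate.
print(false_positives / n_experiments)
```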

  • How much replication do I have?

This is probably the biggest issue when it comes to experimental design, in which the focus is on ensuring the right type of data, in large enough quantities, is available to answer given questions as clearly and efficiently as possible.

Pseudoreplication refers to the over-inflation of degrees of freedom (a mathematical restriction put in place when we calculate a parameter – e.g. a mean – from a sample). How would this work in practice?

Say you’re researching cholesterol levels by taking blood from 20 male participants.

Each male is tested twice, giving 40 test results. But the level of replication is not 40, it’s actually only 20 – a requisite for replication is that each replicate is independent of all others. In this case, two blood tests from the same person are intricately linked.

If you were to analyse the data with a sample size of 40, you would be committing the sin of pseudoreplication: inflating your degrees of freedom (which incidentally helps to create a significant test result). Thus, if you start an experiment understanding the concept of independent replication, you can avoid this pitfall.
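To see the damage pseudoreplication does, here is a small simulation (invented numbers, not real cholesterol data). Under a true null hypothesis, treating the 40 correlated measurements as independent rejects far more often than the nominal 5%, while averaging down to 20 independent subject means behaves as it should.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_sims = 0.05, 5_000
reject_pseudo = reject_correct = 0

for _ in range(n_sims):
    # 20 subjects; each subject's true level varies, and is measured twice with small noise.
    subject_means = rng.normal(loc=0.0, scale=1.0, size=20)      # null: population mean is 0
    measurements = np.repeat(subject_means, 2) + rng.normal(scale=0.1, size=40)

    # Pseudoreplicated analysis: pretend all 40 measurements are independent.
    if stats.ttest_1samp(measurements, 0.0).pvalue < alpha:
        reject_pseudo += 1

    # Correct analysis: one averaged value per subject, 20 independent replicates.
    per_subject = measurements.reshape(20, 2).mean(axis=1)
    if stats.ttest_1samp(per_subject, 0.0).pvalue < alpha:
        reject_correct += 1

print("false positive rate, 40 'replicates':", reject_pseudo / n_sims)   # well above 0.05
print("false positive rate, 20 subjects:   ", reject_correct / n_sims)   # close to 0.05
```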

  • How do I know what analysis to do?

There is a key piece of prior knowledge that will help you determine how to analyse your data. What kind of variable are you dealing with? There are two most common types of variable:

1) Continuous variables. These can take any value. Were you to measure the time until a reaction was complete, the results might be 30 seconds, two minutes and 13 seconds, or three minutes and 50 seconds.

2) Categorical variables. These fit into – you guessed it – categories. For instance, you might have three different field sites, or four brands of fertiliser. All continuous variables can be converted into categorical variables.

With the above example we could categorise the results into less than one minute, one to three minutes, and greater than three minutes. Categorical variables cannot be converted back to continuous variables, so it’s generally best to record data as “continuous” where possible to give yourself more options for analysis.

Deciding which to use between the two main types of analysis is easy once you know what variables you have:

ANOVA (Analysis of Variance) is used to compare a categorical variable with a continuous variable – for instance, fertiliser treatment versus plant growth in centimetres.

Linear Regression is used when comparing two continuous variables – for instance, time versus growth in centimetres.

Though there are many analysis tools available, ANOVA and linear regression will get you a long way in looking at your data. So if you can start by working out what variables you have, it’s an easy second step to choose the relevant analysis.
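Both analyses are only a couple of lines in practice. A minimal sketch with invented data (fertiliser brand versus growth for the ANOVA, time versus growth for the regression):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# ANOVA: categorical predictor (fertiliser brand) vs continuous response (growth in cm).
growth_brand_a = rng.normal(10, 2, size=15)
growth_brand_b = rng.normal(12, 2, size=15)
growth_brand_c = rng.normal(11, 2, size=15)
anova = stats.f_oneway(growth_brand_a, growth_brand_b, growth_brand_c)
print("ANOVA F:", anova.statistic, "p-value:", anova.pvalue)

# Linear regression: continuous predictor (time in days) vs continuous response (growth in cm).
time = np.arange(1, 31)
growth = 0.5 * time + rng.normal(0, 1, size=time.size)
fit = stats.linregress(time, growth)
print("slope:", fit.slope, "p-value:", fit.pvalue)
```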

Ok, so perhaps that’s not everything you need to know about statistics, but it’s a start. Go forth and analyse!

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Sarah-Jane O’Connor*

 


The Easy Tricks That Can Maximise Your Lottery Winnings

From avoiding the number seven to picking numbers over 31, mathematician Peter Rowlett has a few psychological strategies for improving your chances when playing the lottery.

Would you think I was daft if I bought a lottery ticket for the numbers 1, 2, 3, 4, 5 and 6? There is no way those are going to be drawn, right? That feeling should – and, mathematically, does – actually apply to any set of six numbers you could pick.

Lotteries are ancient. Emperor Augustus, for example, organised one to fund repairs to Rome. Early lotteries involved selling tickets and drawing lots, but the idea of people guessing which numbers would be drawn from a machine comes from Renaissance Genoa. A common format is a game that draws six balls from 49, studied by mathematician Leonhard Euler in the 18th century.

The probabilities Euler investigated are found by counting the number of possible draws. There are 49 balls that could be drawn first. For each of these, there are 48 balls that can be drawn next, so there are 49×48 ways to draw two balls. This continues, so there are 49×48×47×46×45×44 ways to draw six balls. But this number counts all the different arrangements of any six balls as a unique solution.

How many ways can we rearrange six balls? Well, we have six choices for which to put first, then for each of these, five choices for which to put second, and so on. So the number of ways of arranging six balls is 6×5×4×3×2×1, a number called 6! (six factorial). We divide 49×48×47×46×45×44 by 6! to get 13,983,816, so the odds of a win are near 1 in 14 million.
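That arithmetic takes only a couple of lines to reproduce:

```python
from math import comb, factorial

print((49 * 48 * 47 * 46 * 45 * 44) // factorial(6))   # ordered draws divided by 6! = 13,983,816
print(comb(49, 6))                                      # the same count, computed directly
```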

Since all combinations of numbers are equally likely, how can you maximise your winnings? Here is where maths meets psychology: you win more if fewer people share the prize, so choose numbers others don’t. Because people often use dates, numbers over 31 are chosen less often, as well as “unlucky” numbers like 13. A lot of people think of 7 as their favourite number, so perhaps avoid it. People tend to avoid patterns so are less likely to pick consecutive or regularly spaced numbers as they feel less random.

In July, David Cushing and David Stewart at the University of Manchester, UK, published a list of 27 lottery tickets that guarantee a win in the UK National Lottery, which uses 59 balls and offers a prize for matching two or more. But a win doesn’t always mean a profit – for almost 99 per cent of possible draws, their tickets match at most three balls, earning prizes that may not exceed the cost of the tickets!

So, is a lottery worth playing? Since less than half the proceeds are given out in prizes, you would probably be better off saving your weekly ticket money. But a lecturer of mine made an interesting cost-benefit argument. He was paid enough that he could lose the cost of a ticket each week without really noticing. But if he won the jackpot, his life would be changed. So, given that lottery profit is often used to support charitable causes, it might just be worth splurging.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Peter Rowlett*


Mathematicians Have Finally Proved That Bach was a Great Composer

Converting hundreds of compositions by Johann Sebastian Bach into mathematical networks reveals that they store lots of information and convey it very effectively.

Johann Sebastian Bach is considered one of the great composers of Western classical music. Now, researchers are trying to figure out why – by analysing his music with information theory.

Suman Kulkarni at the University of Pennsylvania and her colleagues wanted to understand how the ability to recall or anticipate a piece of music relates to its structure. They chose to analyse Bach’s body of work because he produced an enormous number of pieces with many different structures, including religious hymns called chorales and fast-paced, virtuosic toccatas.

First, the researchers translated each composition into an information network by representing each note as a node and each transition between notes as an edge connecting them. Using this network, they compared the quantity of information in each composition. Toccatas, which were meant to entertain and surprise, contained more information than chorales, which were composed for more meditative settings like churches.
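As a toy version of the idea (not the team's actual pipeline), you can turn a sequence of notes into a transition network and score it with Shannon entropy: the less predictable the note-to-note transitions, the more information the piece carries per note. The two sequences below are invented for illustration.

```python
from collections import Counter
from math import log2

def transition_entropy(notes):
    """Shannon entropy (bits) of the note-to-note transitions in a sequence.

    Each distinct note is a node; each consecutive pair of notes is a directed
    edge. Higher entropy means the transitions are less predictable, i.e. the
    sequence carries more information.
    """
    transitions = Counter(zip(notes, notes[1:]))
    total = sum(transitions.values())
    return -sum((c / total) * log2(c / total) for c in transitions.values())

# Made-up toy sequences: a repetitive "chorale-like" line vs a wandering "toccata-like" one.
repetitive = ["C", "D", "C", "D", "C", "D", "C", "D"]
wandering  = ["C", "E", "G", "B", "D", "F", "A", "C"]
print(transition_entropy(repetitive))   # low entropy
print(transition_entropy(wandering))    # higher entropy
```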

Kulkarni and her colleagues also used information networks to compare Bach’s music with listeners’ perception of it. They started with an existing computer model based on experiments in which participants reacted to a sequence of images on a screen. The researchers then measured how surprising an element of the sequence was. They adapted information networks based on this model to the music, with the links between each node representing how probable a listener thought it would be for two connected notes to play successively – or how surprised they would be if that happened. Because humans do not learn information perfectly, networks showing people’s presumed note changes for a composition rarely line up exactly with the network based directly on that composition. Researchers can then quantify that mismatch.

In this case, the mismatch was low, suggesting Bach’s pieces convey information rather effectively. However, Kulkarni hopes to fine-tune the computer model of human perception to better match real brain scans of people listening to the music.

“There is a missing link in neuroscience between complicated structures like music and how our brains respond to it, beyond just knowing the frequencies [of sounds]. This work could provide some nice inroads into that,” says Randy McIntosh at Simon Fraser University in Canada. However, there are many more factors that affect how someone perceives music – for example, how long a person listens to a piece and whether or not they have musical training. These still need to be accounted for, he says.

Information theory also has yet to reveal whether Bach’s composition style was exceptional compared with other types of music. McIntosh says his past work found some general similarities between musicians as different from Bach as the rock guitarist Eddie Van Halen, but more detailed analyses are needed.

“I would love to perform the same analysis for different composers and non-Western music,” says Kulkarni.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Karmela Padavic-Callaghan*


Peer Review: The Fallacy of Fine-Tuning

We are a product of evolution, and are not surprised that our bodies seem to be well-suited to the environment.

Our leg bones are strong enough to allow for Earth’s gravitational pull – not too weak to shatter, not so massively over-engineered as to be wasteful.

But it could also be claimed we are special and the environment was formed and shaped for us.

This, as we know, is the basis of many religious ideas.

In recent years, such ideas have been expanded beyond Earth to look at the entire universe and our place within it.

The so-called Fine-Tuning Argument – that the laws of physics have been specially-tuned, potentially by some Supreme Being, to allow human life to arise – is the focus of Victor J. Stenger’s book.

Stenger presents the mathematics underpinning cosmic evolution, the lifetime of stars, the quantum nature of atoms and so on. His central argument is that “fine-tuning” claims are fatally flawed.

He points out that some key areas of physics – such as the equality of the charges on the electron and proton – are set by conservation laws determined by symmetries in the universe, and so are not free to play with.

Some flaws in the theory, he argues, run deeper.

A key component of the fine-tuning argument is that there are many parameters governing our universe, and that changing any one of these would likely produce a sterile universe unlike our own.

But think of baking a cake. Arbitrarily doubling only the flour, or sugar or vanilla essence may end in a cooking disaster, but doubling all the ingredients results in a perfectly tasty cake.

The interrelationships between the laws of physics are somewhat more complicated, but the idea is the same.

A hypothetical universe in which gravity was stronger, the masses of the fundamental particles smaller and the electromagnetic force weaker may well turn out to be a universe that appears a little different to our own, but is still capable of producing long-lived stars and heavy chemical elements, the basic requirements for complex life.

Stenger backs up such points with his own research, and provides access to a web-based program he wrote called MonkeyGod.

The program allows you to conjure up universes with differing underlying physics. And, as Stenger shows, randomly plucking universe parameters from thin air can still produce universes quite capable of harbouring life.

This book is a good read for those wanting to understand the fine-tuning issues in cosmology, and it’s clear Stenger really understands the science.

But while many of the discussions are robust, I felt that in places some elements of the fine-tuning argument were brushed aside with little real justification.

As a case in point, Stenger falls back on multiverse theory and the anthropic principle, whereby we occupy but one of an almost infinite sea of different universes, each with different laws of physics.

In multiverse theory, most universes would be sterile (though we should not be surprised to find ourselves in a habitable universe).

While such a multiverse – the staple of superstring and brane ideas of the cosmos – is often sold as science fact, it actually lies much closer to the world of science speculation (or, to many, fiction).

We are not out of the fine-tuning waters yet, but Stenger’s book is a good place to start getting to grips with the issues.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Geraint Lewis*