Eleven games and activities for parents to encourage maths in early learning

How can parents best help their children with their schooling without actually doing it for them? This article is part of our series on Parents’ Role in Education, focusing on how best to support learning from early childhood to Year 12.

Before beginning official schooling, parents can give their young children a boost in learning mathematics by noticing, exploring and talking about maths during everyday activities at home or out and about.

New research shows that parents play a key role in helping their children learn mathematics concepts involving time, shape, measurement and number. This mathematical knowledge developed before school is predictive of literacy and numeracy achievements in later grades.

One successful approach for strengthening the role of parents in mathematics learning is Let’s Count, implemented by The Smith Family. This builds on parents’ strengths and capabilities as the first mathematics educators of their children.

The Let’s Count longitudinal evaluation findings show that when early years educators encourage parents and families to confidently notice, explore and talk about mathematics in everyday activities, their young children’s learning flourishes.

Indeed, children whose families had taken part in Let’s Count showed greater mathematical skills than those in a comparison group whose families had not participated. For example, they were more successful with correctly making a group of seven (89% versus 63%); continuing patterns (56% versus 34%); and counting collections of 20 objects (58% versus 37%).

These findings, among many others, are a strong endorsement of the power of families helping their children to learn about mathematics in everyday contexts.

What parents can do to promote maths every day

Discussing and exploring mathematics with children requires no special resources. What is needed instead is parents’ awareness of how mathematics arises in everyday activities, and the confidence to engage with it.

However, our research shows that one of the biggest barriers to this is parents’ lack of confidence in leading maths education at home.

Through examining international research, we identified the types of activities that are important for early maths learning and easy for parents to use. These include:

  1. Comparing objects and describing which is longer, shorter, heavier, or holds less.
  2. Playing with and describing 2D shapes and 3D objects.
  3. Describing where things are positioned, for example, north, outside, behind, opposite.
  4. Describing, copying, and extending patterns found in everyday situations.
  5. Using time-words to describe points in time, events and routines (including days, months, seasons and celebrations).
  6. Comparing and talking about the duration of everyday events and the sequence in which they occur.
  7. Saying number names forward in sequence to ten (and eventually to 20 and beyond).
  8. Using numbers to describe and compare collections.
  9. Using perceptual and conceptual subitising (recognising quantities based on visual patterns), counting and matching to compare the number of items in one collection with another.
  10. Showing different ways to make a total (at first with models and small numbers).
  11. Matching number names, symbols and quantities up to ten.

Games to play using everyday situations

Neuroscience research has provided crucial evidence about the importance of early nurturing and support for learning, brain development, and the development of positive dispositions for learning.

Early brain development or “learning” is all about the quality of children’s sensory and motor experiences within positive and nurturing relationships and environments. This explains why programs such as Let’s Count are successful.

Sometimes it can be difficult to come up with activities and games to play that boost children’s mathematics learning, but there are plenty. For example, talk with your children as you prepare meals together. Talk about measuring and comparing ingredients and amounts.

You can play children’s card games and games involving dice, such as Snakes and Ladders, or maps, shapes and money. You can also read stories and notice the mathematics – the sequence of events, and the descriptions of characters and settings.

Although these activities may seem simple and informal, they build on what children notice and question, give families the chance to talk about mathematical ideas and language, and show children that maths is used throughout the day.

Parents are encouraged to provide learning opportunities that are engaging and relevant to their children. Shutterstock

Make it relevant to them

Most importantly, encouraging maths and numeracy in young children relies on making it appealing and relevant to them.

For example, when you take your child for a walk down the street, in the park or on the beach, bring their attention to the objects around them – houses, cars, trees, signs.

Talk about the shapes and sizes of the objects, talk about and look for similarities and differences (for example: let’s find a taller tree or a heavier rock), count the number of cars parked in the street or time how long it takes to reach the next corner.

Discuss the temperature or the speed of your walking pace.

Collect leaves or shells, and make repeating patterns on the sand or grass, or play Mathematical I Spy (I spy with my little eye, something that’s taller than mum).

It is never too soon to begin these activities. Babies who are only weeks old notice differences in shapes and the number of objects in their line of sight.

So, from the earliest of ages, talk with your child about the world around them, being descriptive and using mathematical words. As they grow, build on what they notice about shapes, numbers, and measures. This is how you teach them mathematics.

For more insights like this, visit our website at www.international-maths-challenge.com.
Credit of the article given to Sivanes Phillipson, Ann Gervasoni


Maths: why many great discoveries would be impossible without it


Despite the fact that mathematics is often described as the underpinning science, it is often not given enough credit when scientific discoveries are presented. But the contribution of mathematics and statistics is essential and has transformed entire areas of research – many discoveries would not have been possible without it. In fact, as a mathematician, I have contributed to scientific discoveries and provided solutions to problems that biology was yet to solve.

Seven years ago, I attended a lecture on some biological research that was taking place at Heriot-Watt University. My colleagues had an unsolved problem which related to the movement of bag-like structures called vesicles which move hormones and neurotransmitters such as insulin or serotonin around cells and the body.

Their problem was that vesicles were known to follow specific tracks along the cell skeleton, which lead to special molecules that cause the vesicle to release its contents into the cell. However, when the biologists themselves tried to find these tracks, they were not in the expected places.

A bag that carries hormones to their location. OpenStax, CC BY

It is important to understand how vesicles behave, or in fact misbehave, as they have been linked to conditions such as diabetes and neurological disorders. The biologists were struggling to find a way to understand the vesicles – but I had a solution in my mathematical toolkit.

Maths can beat biology

After two years of collaboration I told my colleagues: “my model and computer experiments are better than your microscope!”

What I meant by this rather confident statement was that by using mathematics to model how molecules move in a cell we could predict and run multiple experiments on a computer at a smaller scale and faster rate than a microscope. It could allow us to uncover things that the biologist’s resources could not, and might even point us in the direction of target molecules for future treatments of diabetes and neurological disorders.

The mathematical model allowed us to recognise that the movement of vesicles requires energy – and the maths captures this through an energy landscape. The model imagines a vesicle as a cyclist riding across that landscape: there may be easy, level sections, but also hills that require more energy input to get over. We wanted to test whether vesicles actually avoid these hills.
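
To make the cyclist picture concrete, here is a minimal sketch in Python (an illustration, not the authors' actual model) of a Metropolis-style random walk on a toy energy landscape. Moves that climb hills are only sometimes accepted, so the walker drifts into, and then follows, the low-energy valley:

    import numpy as np

    rng = np.random.default_rng(0)

    def energy(x, y):
        # Toy landscape: a low-energy "valley" along y = 0, hills either side.
        return y ** 2 + 0.1 * np.sin(3 * x)

    kT = 0.2                        # how much "spare energy" the walker has
    pos = np.array([0.0, 1.5])      # start on a hillside
    path = [pos.copy()]
    for _ in range(5000):
        new = pos + rng.normal(scale=0.05, size=2)
        dE = energy(*new) - energy(*pos)
        # Downhill moves are always accepted; uphill moves only rarely.
        if dE <= 0 or rng.random() < np.exp(-dE / kT):
            pos = new
        path.append(pos.copy())

    path = np.array(path)
    print("mean distance from the valley:", np.abs(path[:, 1]).mean())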

After seven years of working in partnership with the biologists, my colleagues and I proved our hypothesis was correct. Vesicles do follow lower energy “valleys” in the landscape, avoiding molecules which create the high energy hills in the energy landscape – taking the easiest path. The overall result is just the same as the biologists had found – the vesicles end up in the same end location and they reuse similar routes over and over again. But the difference lies in the way in which they do it, and it was not by following the cell skeleton as biologists had first believed – they take an easier route. It really shows the power of maths and how it can change the way we see things.

Mathematical models allow you to capture many gigabytes of raw data in a compact form in a way that is impossible for a biologist with a microscope. You can make modifications to the model easily and show how vesicle behaviour may change during disease, when they are disrupted or mutated. It could then reveal which molecules to target in future treatment studies – and lay the groundwork for larger and more thorough modelling of complex biological processes.

A modelled energy landscape. Shutterstock

The integration of cutting-edge microscopy with cell biology and mathematical modelling could be applied to many other problems in bio-medicine and will accelerate discovery in the years to come. The movement of molecules and other cell components is just one example of where the power of mathematics is unrivalled, but it is by no means its limit.

Useful is an understatement

Maths is often criticised by the public for lacking in “real-world” applications, but it is being applied to many real-world problems all the time. Groundwater contamination, financial and economic forecasting, plume heights in volcanic eruptions, the modelling of biological processes and drug delivery are just a few places where maths is making a huge difference.

I’m proud to say that I co-authored a paper with my biology colleagues, and I hope to see more mathematicians coming to the fore for science research in the future. Mathematics plays a central role in so many of the world’s scientific breakthroughs and deserves a headline role in more academic publications. Power to the mathematician – they’re behind more discoveries than you think.

For more insights like this, visit our website at www.international-maths-challenge.com.
Credit of the article given to Gabriel Lord


Unraveling the Mathematics of Smell


Scientists have created a “map” of odor molecules, which could ultimately be used to predict new scent combinations 

The human nose finds it simple to distinguish the aroma of fresh coffee from the stink of rotten eggs, but the underlying biochemistry is complicated. Researchers have now created an olfactory “map”—a geometric model of how molecules combine to produce various scents. This map could inspire a way to predict how people might perceive certain odor combinations and help to drive the development of new fragrances, scientists say.

Researchers have been trying for years to tame the elaborate landscape of odor molecules. Neuroscientists want to better understand how we process scents; perfume and food manufacturers want better ways to synthesize familiar aromas for their products. The new approach may appeal to both camps.

One earlier strategy for mapping the olfactory system involves grouping odor molecules that have similar molecular structures and using those similarities to predict the scents of novel combinations. But that avenue often leads to a dead end. “It’s not necessary that chemicals with the same chemical structures will be perceived similarly,” says Tatyana Sharpee, a neurobiologist at the Salk Institute for Biological Studies in La Jolla, Calif., and lead author of the study, which appeared in August in Science Advances.

Sharpee and her colleagues analyzed odor molecules found in four familiar and unmistakable scents: strawberries, tomatoes, blueberries and mouse urine. The researchers calculated how often and in what concentrations certain molecules turned up together in these scents. They then created a mathematical model in which molecules that occurred together frequently were represented as closer in space and molecules that rarely did so were farther apart. The result was a “saddle”-shaped surface—a hallmark of a field called hyperbolic geometry, which obeys different rules from the geometry most people learn in school.
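
The “occurs together often, so place it closer” step can be sketched in a few lines of Python. The co-occurrence counts below are invented, and ordinary Euclidean multidimensional scaling stands in for the hyperbolic embedding the study actually used:

    import numpy as np
    from sklearn.manifold import MDS

    # Invented co-occurrence counts for four hypothetical odor molecules:
    # entry [i][j] = how often molecules i and j turn up together.
    co = np.array([[20.0, 15.0,  2.0,  1.0],
                   [15.0, 20.0,  3.0,  1.0],
                   [ 2.0,  3.0, 20.0, 12.0],
                   [ 1.0,  1.0, 12.0, 20.0]])

    # Frequent co-occurrence becomes a small distance.
    dist = 1.0 / (1.0 + co)
    np.fill_diagonal(dist, 0.0)

    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dist)
    print(coords)   # molecules 0-1 and 2-3 land in two tight pairs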

The researchers envision an algorithm, trained on this hyperbolic geometry model, that can predict the scents of new odor combinations—or even help to synthesize them. One of Sharpee’s collaborators, behavioral neuroscientist Brian Smith of Arizona State University, wants to use this method to create olfactory environments in places devoid of natural scents.

Such a tool would be useful to scientists and odor manufacturers alike, says olfactory neuroscientist Joel Mainland of the Monell Chemical Senses Center in Philadelphia, who was not involved in the study. The ultimate goal is to know enough about how odors work to replicate natural smells without the natural sources, Mainland says: “We want to identify a strawberry flavor without worrying about replicating the ingredients that are in a strawberry.”

For more insights like this, visit our website at www.international-maths-challenge.com.

Credit of the article given to Stephen Ornes


There’s a mathematical formula for choosing the fastest queue

It seems obvious. You arrive at the checkouts and see one queue is much longer than the other, so you join the shorter one. But, before long, the people in the bigger line zoom past you and you’ve barely moved towards the exit.

When it comes to queuing, the intuitive choice is often not the fastest one. Why do queues feel like they slow down as soon as you join them? And is there a way to decide beforehand which line is really the best one to join? Mathematicians have been studying these questions for years. So can they help us spend less time waiting in line?

The intuitive strategy seems to be to join the shortest queue. After all, a short queue could indicate it has an efficient server, and a long queue could imply it has an inexperienced server or customers who need a lot of time. But generally this isn’t true.

Without the right information, it could even be disadvantageous to join the shortest queue. For example, if the short queue at the supermarket has two very full trolleys and the long queue has four relatively empty baskets, many people would actually join the longer queue. If the servers are equally efficient, the important quantity here is the number of total items in the queue, not the number of customers. But if the trolleys weren’t very full but the hand baskets were, it wouldn’t be so easy to estimate and the choice wouldn’t be so clear.

This simple example introduces the concept of service time distribution. This is a random variable that measures how long it will take a customer to be served. It carries information about the average (mean) service time and about the standard deviation, which represents how much service times fluctuate depending on how long different customers need.

The other important variable is how often customers join the queue (the arrival rate). This depends on the average amount of time that passes between two consecutive customers entering the shop. The more people that arrive to use a service at a specific time, the longer the queues will be.

Never mind the queue, I picked the wrong shop. Shutterstock

Depending on what these variables are, the shortest queue might be the best one to join – or it might not. For example, in a fish and chip shop you might have two servers both taking orders and accepting money. Then it is most often better to join the shortest queue since the time the servers’ tasks take doesn’t vary much.

Unfortunately, in practice, it’s hard to know exactly what the relevant variables are when you enter a shop. So you can still only guess what the fastest queue to join will be, or rely on tricks of human psychology, such as joining the leftmost queue because most right-handed people automatically turn right.

Did you get it right?

Once you’re in the queue, you’ll want to know whether you made the right choice. For example, is your server the fastest? It is easy to observe the actual queue length and you can try to compare it to the average. This is directly related to the mean and standard deviation of the service time via something called the Pollaczek-Khinchine formula, first established in 1930. This also uses the mean inter-arrival time between customers.
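
For the curious, here is a sketch of the mean-wait version of the Pollaczek-Khinchine formula for a single-server queue with random (Poisson) arrivals; the function and parameter names are mine:

    def pk_mean_wait(arrival_rate, mean_service, sd_service):
        # Pollaczek-Khinchine mean waiting time for an M/G/1 queue.
        # arrival_rate: customers per minute; times are in minutes.
        rho = arrival_rate * mean_service            # utilisation, must be < 1
        second_moment = sd_service ** 2 + mean_service ** 2
        return arrival_rate * second_moment / (2 * (1 - rho))

    # One customer every two minutes, 1.5-minute average service.
    # The wait grows with the variability of service, not just its mean:
    print(pk_mean_wait(0.5, 1.5, 0.1))   # steady server: about 2.3 minutes
    print(pk_mean_wait(0.5, 1.5, 1.5))   # erratic server: 4.5 minutes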

Unfortunately, if you try to measure the time the first person in the queue takes to get served, you’ll likely end up feeling like you chose the wrong line. This is known as Feller’s paradox or the inspection paradox. Technically, this isn’t an actual logical paradox but it does go against our intuition. If you start measuring the time between customers when you join a queue, it is more likely that the first customer you see will take longer than average to be served. This will make you feel like you were unlucky and chose the wrong queue.

The inspection paradox works like this: suppose a bank offers two services. One service takes either zero or five minutes, with equal probability. The other service takes either ten or 20 minutes, again with equal probability. It is equally likely for a customer to choose either service and so the bank’s average service time is 8.75 minutes.

If you join the queue when a customer is in the middle of being served then their service can’t take zero minutes. They must be using either the five, ten or 20 minute service. This pushes the time that customer will take to be served to more than 11 minutes on average, above the true average of 8.75 minutes. In fact, two out of three times you encounter this situation, the customer will want either the 10 or 20 minute service. This will make it seem like the line is moving more slowly than it should, all because a customer is already there and you have extra information.
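
You can check this with a short simulation. Arriving at a random instant samples a service in proportion to its length – you can never walk in on the zero-minute service, and you usually walk in on a long one – so the service you observe averages about 15 minutes, even further above 8.75 than the simple argument above suggests:

    import random
    from statistics import mean

    services = [0, 5, 10, 20]      # the bank's four equally likely services
    print(mean(services))          # the plain average: 8.75 minutes

    # Weight each service by its length: the chance of arriving during a
    # service is proportional to how long that service lasts.
    observed = random.choices(services, weights=services, k=100_000)
    print(mean(observed))          # about 15 minutes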

So while you can use maths to try to determine the fastest queue, in the absence of accurate data – and for your own peace of mind – you’re often better just taking a gamble and not looking at the other options once you’ve made your mind up.

For more insights like this, visit our website at www.international-maths-challenge.com.
Credit of the article given to Enrico Scalas, Nicos Georgiou


The Unforgiving Math That Stops Epidemics


Not getting a flu shot could endanger more than just one’s own health, herd immunity calculations show   

As the annual flu season approaches, medical professionals are again encouraging people to get flu shots. Perhaps you are among those who rationalize skipping the shot on the grounds that “I never get the flu” or “if I get sick, I get sick” or “I’m healthy, so I’ll get over it.” What you might not realize is that these vaccination campaigns for flu and other diseases are about much more than your health. They’re about achieving a collective resistance to disease that goes beyond individual well-being—and that is governed by mathematical principles unforgiving of unwise individual choices.

When talking about vaccination and disease control, health authorities often invoke “herd immunity.” This term refers to the level of immunity in a population that’s needed to prevent an outbreak from happening. Low levels of herd immunity are often associated with epidemics, such as the measles outbreak in 2014-2015 that was traced to exposures at Disneyland in California. A study investigating cases from that outbreak demonstrated that measles vaccination rates in the exposed population may have been as low as 50 percent. This number was far below the threshold needed for herd immunity to measles, and it put the population at risk of disease.

The necessary level of immunity in the population isn’t the same for every disease. For measles, a very high level of immunity needs to be maintained to prevent its transmission because the measles virus is possibly the most contagious known organism. If people infected with measles enter a population with no existing immunity to it, they will on average each infect 12 to 18 others. Each of those infections will in turn cause 12 to 18 more, and so on until the number of individuals who are susceptible to the virus but haven’t caught it yet is down to almost zero. The number of people infected by each contagious individual is known as the “basic reproduction number” of a particular microbe (abbreviated R0), and it varies widely among germs. The calculated R0 of the West African Ebola outbreak was found to be around 2 in a 2014 publication, similar to the R0 computed for the 1918 influenza pandemic based on historical data.

If the Ebola virus’s R0 sounds surprisingly low to you, that’s probably because you have been misled by the often hysterical reporting about the disease. The reality is that the virus is highly infectious only in the late stages of the disease, when people are extremely ill with it. The ones most likely to be infected by an Ebola patient are caregivers, doctors, nurses and burial workers—because they are the ones most likely to be present when the patients are “hottest” and most likely to transmit the disease. The scenario of an infectious Ebola patient boarding an aircraft and passing on the disease to other passengers is extremely unlikely because an infectious patient would be too sick to fly. In fact, we know of cases of travelers who were incubating Ebola virus while flying, and they produced no secondary cases during those flights.

Note that the R0 isn’t related to how severe an infection is, but to how efficiently it spreads. Ebola killed about 40 percent of those infected in West Africa, while the 1918 influenza epidemic had a case-fatality rate of about 2.5 percent. In contrast, polio and smallpox historically spread to about 5 to 7 people each, which puts them in the same range as the modern-day HIV virus and pertussis (the bacterium that causes whooping cough).

Determining the R0 of a particular microbe is a matter of more than academic interest. If you know how many secondary cases to expect from each infected person, you can figure out the level of herd immunity needed in the population to keep the microbe from spreading. This is calculated by taking the reciprocal of R0 and subtracting it from 1. For measles, with an R0 of 12 to 18, you need somewhere between 92 percent (1 – 1/12) and 95 percent (1 – 1/18) of the population to have effective immunity to keep the virus from spreading. For flu, it’s much lower—only around 50 percent. And yet we rarely attain even that level of immunity with vaccination.
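
That arithmetic is easy to check. Here is a minimal sketch using the R0 values quoted in this article (taking 6 as a round figure inside the 5-to-7 range for polio and smallpox):

    def herd_immunity_threshold(r0):
        # Each case infects r0 others in a fully susceptible population,
        # and r0 * s others when only a fraction s is still susceptible.
        # Spread stops when r0 * s <= 1, i.e. immunity >= 1 - 1/r0.
        return 1 - 1 / r0

    for disease, r0 in [("measles, low estimate", 12),
                        ("measles, high estimate", 18),
                        ("1918 flu / Ebola", 2),
                        ("polio or smallpox", 6)]:
        print(f"{disease}: R0 = {r0}, threshold = "
              f"{herd_immunity_threshold(r0):.1%}")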

Once we understand the concept of R0, so much about patterns of infectious disease makes sense. It explains, for example, why there are childhood diseases—infections that people usually encounter when young, and against which they often acquire lifelong immunity after the infections resolve. These infections include measles, mumps, rubella and (prior to its eradication) smallpox—all of which periodically swept through urban populations in the centuries prior to vaccination, usually affecting children.

Do these viruses have some unusual affinity for children? Before vaccination, did they just go away after each outbreak and only return to cities at approximately five- to 10-year intervals? Not usually. After a large outbreak, viruses linger in the population, but the level of herd immunity is high because most susceptible individuals have been infected and (if they survived) developed immunity. Consequently, the viruses spread slowly: In practice, their R0 is just slightly above 1. This is known as the “effective reproduction number”—the rate at which the microbe is actually transmitted in a population that includes both susceptible and non-susceptible individuals (in other words, a population where some immunity already exists). Meanwhile, new susceptible children are born into the population. Within a few years, the population of young children who have never been exposed to the disease dilutes the herd immunity in the population to a level below what’s needed to keep outbreaks from occurring. The virus can then spread more rapidly, resulting in another epidemic.

An understanding of the basic reproduction number also explains why diseases spread so rapidly in new populations: Because those hosts lack any immunity to the infection, the microbe can achieve its maximum R0. This is why diseases from invading Europeans spread so rapidly and widely among indigenous populations in the Americas and Hawaii during their first encounters. Having never been exposed to these microbes before, the non-European populations had no immunity to slow their spread.

If we further understand what constellation of factors contributes to an infection’s R0, we can begin to develop interventions to interrupt the transmission. One aspect of the R0 is the average number and frequency of contacts that an infected individual has with others susceptible to the infection. Outbreaks happen more frequently in large urban areas because individuals living in crowded cities have more opportunities to spread the infection: They are simply in contact with more people and have a higher likelihood of encountering someone who lacks immunity. To break this chain of transmission during an epidemic, health authorities can use interventions such as isolation (keeping infected individuals away from others) or even quarantine (keeping individuals who have been exposed to infectious individuals—but are not yet sick themselves—away from others).

Other factors that can affect the R0 involve both the host and the microbe. When an infected person has contact with someone who is susceptible, what is the likelihood that the microbe will be transmitted? Frequently, hosts can reduce the probability of transmission through their behaviors: by covering coughs or sneezes for diseases transmitted through the air, by washing their contaminated hands frequently, and by using condoms to contain the spread of sexually transmitted diseases.

These behavioral changes are important, but we know they’re far from perfect and not particularly efficient in the overall scheme of things. Take hand-washing, for example. We’ve known of its importance in preventing the spread of disease for 150 years. Yet studies have shown that hand-washing compliance even by health care professionals is astoundingly low — less than half of doctors and nurses wash their hands when they’re supposed to while caring for patients. It’s exceedingly difficult to get people to change their behavior, which is why public health campaigns built around convincing people to behave differently can sometimes be less effective than vaccination campaigns.

How long a person can actively spread the infection is another factor in the R0. Most infections can be transmitted for only a few days or weeks. Adults with influenza can spread the virus for about a week, for example. Some microbes can linger in the body and be transmitted for months or years. HIV is most infectious in the early stages when concentrations of the virus in the blood are very high, but even after those levels subside, the virus can be transmitted to new partners for many years. Interventions such as drug treatments can decrease the transmissibility of some of these organisms.

The microbes’ properties are also important. While hosts can purposely protect themselves, microbes don’t choose their traits. But over time, evolution can shape them in a manner that increases their chances of transmission, such as by enabling measles to linger longer in the air and allowing smallpox to survive longer in the environment.

By bringing together all these variables (size and dynamics of the host population, levels of immunity in the population, presence of interventions, microbial properties, and more), we can map and predict the spread of infections in a population using mathematical models. Sometimes these models can overestimate the spread of infection, as was the case with the models for the Ebola outbreak in 2014. One model predicted up to 1.4 million cases of Ebola by January 2015; in reality, the outbreak ended in 2016 with only 28,616 cases. On the other hand, models used to predict the transmission of cholera during an outbreak in Yemen have been more accurate.

The difference between the two? By the time the Ebola model was published, interventions to help control the outbreak were already under way. Campaigns had begun to raise awareness of how the virus was transmitted, and international aid had arrived, bringing in money, personnel and supplies to contain the epidemic. These interventions decreased the Ebola virus R0 primarily by isolating the infected and instituting safe burial practices, which reduced the number of susceptible contacts each case had. Shipments of gowns, gloves and soap that health care workers could use to protect themselves while treating patients reduced the chance that the virus would be transmitted. Eventually, those changes meant that the effective R0 fell below 1—and the epidemic ended. (Unfortunately, comparable levels of aid and interventions to stop cholera in Yemen have not been forthcoming.)

Catch-up vaccinations and the use of isolation and quarantine also likely helped to end the Disneyland measles epidemic, as well as a slightly earlier measles epidemic in Ohio. Knowing the factors that contribute to these outbreaks can aid us in stopping epidemics in their early stages. But to prevent them from happening in the first place, a population with a high level of immunity is, mathematically, our best bet for keeping disease at bay.

For more insights like this, visit our website at www.international-maths-challenge.com.

Credit of the article given to Tara C. Smith & Quanta Magazine


How to avoid a sucker bet – with a little help from maths

Sitting in a bar, you start chatting to a man who issues you a challenge. He hands you five red and two black cards. After shuffling, you lay them on the bar, face down. He bets you that you cannot turn over three red cards. And to help you, he explains the odds.

When you draw the first card, the odds are 5-2 (five red cards, two black cards) in favour of picking a red card. The second draw is 4-2 (or 2-1) and the third draw is 3-2. Each time you draw a card the odds appear to be in your favour, in that you have more chance of drawing a red card than a black card. So, do you accept the bet?

If you answered yes, perhaps it’s time for you to go over your maths. It’s a foolish bet. The odds given above are only for a perfect draw. The real odds of you being able to carry out this feat are actually 5-2 against you. That is, for every seven times you play, you’ll lose five times.
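
One line of counting confirms the 5-2 figure: of the C(7,3) = 35 equally likely sets of three cards you might turn over, only C(5,3) = 10 are all red.

    from math import comb

    p_win = comb(5, 3) / comb(7, 3)   # 10 / 35
    print(p_win)                      # 0.2857... = 2/7, so 5-2 against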

Odds against you

This type of bet is often called a proposition bet, which is defined as a wager on something that seems like a good idea, but for which the odds are actually against you, often very much against you, perhaps even making it impossible for you to win.

Let’s assume that you took the bet and, almost inevitably, lost money. But this is just for fun, right? So your new “friend” suggests a way that you can get your money back. He takes two more red cards and hands them to you, so you now have seven red cards and two black cards. You shuffle the nine cards and lay them out, face down, in a three by three grid. He bets you even money that you can’t pick out a straight line (vertical, horizontal or diagonal) that has only red cards.

Nine Card Hustle. Image created by Graham Kendall

Intuitively, this might sound like a better bet, and the odds are indeed evens if the two black cards are next to each other in a corner (see image): of the eight lines to choose from, four contain only red cards and four contain a black card. Most layouts, though, are worse.

If the black cards are in opposite corners, you can only win by choosing the centre row, the centre column or the other diagonal, so the odds are 5-3 against you. With one black card in the centre and one in a corner it is worse still: only two winning lines remain, making it 6-2 (or 3-1) against. Of the 36 ways the two black cards can land, only two layouts (black cards on the centres of opposite edges) actually favour you, 12 give you an even chance and 22 put the odds against you. Counting every layout against every line you might pick, the bet has 120 ways of succeeding against 168 ways of losing – a winning chance of just 5/12. Hardly an even-chance bet.
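
A brute-force check of these numbers, enumerating every placement of the two black cards against every line you might pick:

    from itertools import combinations

    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows of the 3x3 grid
             (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
             (0, 4, 8), (2, 4, 6)]               # diagonals

    wins = losses = 0
    for blacks in combinations(range(9), 2):     # 36 black-card placements
        for line in lines:                       # 8 lines you might pick
            if set(blacks) & set(line):
                losses += 1
            else:
                wins += 1
    print(wins, losses)    # 120 wins v 168 losses: 120/288 = 5/12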

Have another go

Try to evaluate the odds for this proposition bet.

You shuffle a pack of cards and cut it into three piles. You are offered even money that one of the cards on top of the piles will be a picture card (a jack, queen or king). That is, if a picture card shows up, you lose. Do you think this is a good bet?

One way of reasoning is that there are only 12 losing cards against 40 winning cards, so the odds look better than evens. But this is the wrong way of looking at it. It is really what’s known as a combinatorics problem: we are choosing three cards at random, and what matters is the chance that none of them is a picture card.

There are 22,100 ways of choosing three cards from a 52-card deck. Of these, 12,220 will contain at least one picture card – so you lose – meaning that 9,880 will not contain a picture card – when you win. If you translate this to odds, you will lose five times out of every nine times you play (roughly 5-4 against you). The even-chance bet you have been offered is not the good value that you thought it was, and you will lose money if you play a few times.
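
The counts are quick to verify:

    from math import comb

    total = comb(52, 3)          # 22,100 ways to pick the three top cards
    safe = comb(40, 3)           # 9,880 ways using no picture card
    print(total, safe, total - safe)   # 22100 9880 12220
    print(safe / total)                # 0.447: you win under half the time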

 

A Final Example

We can all agree that you have a 50/50 chance of guessing heads or tails in a coin toss. But if you toss the coin ten times, would you expect to see five heads and five tails? If you were offered odds of 2-1 to try this, would you take the bet? You’d be a sucker if you did.

Five heads and five tails will occur more often than any other single combination, but there are many other ways that ten flips of a coin can land. In fact, only 252 of the 1,024 possible sequences give exactly five heads and five tails, so the bet is roughly 3-1 against you.
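
Again, easy to check:

    from math import comb

    p = comb(10, 5) / 2 ** 10    # 252 of the 1,024 possible sequences
    print(p)                     # 0.246: roughly 3-1 against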

Another name for a proposition bet is the “sucker” bet, and there is no surprise who the sucker is. But don’t feel too bad. We are all generally very poor at evaluating true odds. A famous example is the Monty Hall Problem. Even mathematicians could not agree on the right answer to this seemingly simple problem.

We have focused on bets where it is difficult, especially when under the pressure of deciding whether to bet or not, to calculate the true odds. But there are many other proposition bets that do not rely on calculating odds. And there are many other sucker bets, with probably the most famous being Three-card Monte.

If faced with this type of bet, what is the best thing you can do? I’d suggest you simply walk away.

For more insights like this, visit our website at www.international-maths-challenge.com.
Credit of the article given to Graham Kendall


The Mathematics of (Hacking) Passwords


The science and art of password setting and cracking continues to evolve, as does the war between password users and abusers

At one time or another, we have all been frustrated by trying to set a password, only to have it rejected as too weak. We are also told to change our choices regularly. Obviously such measures add safety, but how exactly?

I will explain the mathematical rationale for some standard advice, including clarifying why six characters are not enough for a good password and why you should never use only lowercase letters. I will also explain how hackers can uncover passwords even when stolen data sets lack them.

Choose#W!sely@*

Here is the logic behind setting hack-resistant passwords. When you are asked to create a password of a certain length and combination of elements, your choice will fit into the realm of all unique options that conform to that rule—into the “space” of possibilities. For example, if you were told to use six lowercase letters—such as afzjxd, auntie, secret or wwwwww—the space would contain 26^6, or 308,915,776, possibilities. In other words, there are 26 possible choices for the first letter, 26 possible choices for the second, and so forth. These choices are independent: you do not have to use different letters, so the size of the password space is the product of the possibilities, or 26 x 26 x 26 x 26 x 26 x 26 = 26^6.

If you are told to select a 12-character password that can include uppercase and lowercase letters, the 10 digits and 10 symbols (say, !, @, #, $, %, ^, &, ?, / and +), you would have 72 possibilities for each of the 12 characters of the password. The size of the possibility space would then be 72^12 (19,408,409,961,765,342,806,016, or close to 19 × 10^21).

That is more than 62 trillion times the size of the first space. A computer running through all the possibilities for your 12-character password one by one would take 62 trillion times longer. If your computer spent a second visiting the six-character space, it would have to devote two million years to examining each of the passwords in the 12-character space. The multitude of possibilities makes it impractical for a hacker to carry out a plan of attack that might have been feasible for the six-character space.

Calculating the size of these spaces by computer usually involves counting the number of binary digits in the number of possibilities, N. That bit count is derived from this formula: 1 + integer(log2(N)). In the formula, the value of log2(N) is a real number with many decimal places, such as log2(26^6) = 28.202638…. The “integer” in the formula indicates that the decimal portion of that log value is omitted, rounding down to a whole number—as in integer(28.202638…) = 28. For the example of six lowercase letters above, the computation results in 29 bits; for the more complex, 12-character example, it is 75 bits. (Mathematicians refer to the possibility spaces as having entropy of 29 and 75 bits, respectively.) The French National Cybersecurity Agency (ANSSI) recommends spaces having a minimum of 100 bits when it comes to passwords or secret keys for encryption systems that absolutely must be secure. Encryption involves representing data in a way that ensures it cannot be retrieved unless a recipient has a secret code-breaking key. In fact, the agency recommends a possibility space of 128 bits to guarantee security for several years. It considers 64 bits to be very small (very weak); 64 to 80 bits to be small; and 80 to 100 bits to be medium (moderately strong).
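
A sketch of that bit count, applied to the examples in this article:

    from math import log2

    def entropy_bits(alphabet_size, length):
        # Binary digits in the number of possible passwords:
        # 1 + integer(log2(alphabet_size ** length)).
        return 1 + int(length * log2(alphabet_size))

    print(entropy_bits(26, 6))     # 29 bits: six lowercase letters
    print(entropy_bits(72, 12))    # 75 bits: 12 characters, 72 symbols
    print(entropy_bits(200, 16))   # 123 bits: the "truly strong" example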

Moore’s law (which says that the computer-processing power available at a certain price doubles roughly every two years) explains why a relatively weak password will not suffice for long-term use: over time computers using brute force can find passwords faster. Although the pace of Moore’s law appears to be decreasing, it is wise to take it into account for passwords that you hope will remain secure for a long time.

For a truly strong password as defined by ANSSI, you would need, say, a sequence of 16 characters, each taken from a set of 200 characters. This would make a 123-bit space, which would render the password close to impossible to memorize. Therefore, system designers are generally less demanding and accept low- or medium-strength passwords. They insist on long ones only when the passwords are automatically generated by the system, and users do not have to remember them.

There are other ways to guard against password cracking. The simplest is well known and used by credit cards: after three unsuccessful attempts, access is blocked. Alternative ideas have also been suggested, such as doubling the waiting time after each successive failed attempt but allowing the system to reset after a long period, such as 24 hours. These methods, however, are ineffective when an attacker is able to access the system without being detected or if the system cannot be configured to interrupt and disable failed attempts.

How Long Does It Take to Search All Possible Passwords?

For a password to be difficult to crack, it should be chosen randomly from a large set, or “space,” of possibilities. The size, T, of the possibility space is based on the length, A, of the list of valid characters in the password and the number of characters, N, in the password.

The size of this space (T = A^N) may vary considerably.

Each of the following examples specifies values of A, N and T, and the number of hours, D, that hackers would have to spend to try every permutation of characters one by one. X is the number of years that will have to pass before the space can be checked in less than one hour, assuming that Moore’s law (the doubling of computing capacity every two years) remains valid. I also assume that in 2019, a computer can explore a billion possibilities per second. I represent this set of assumptions with the following three relationships and consider five possibilities based on values of A and N:

 

Relationships

T = A^N
D = T/(10^9 × 3,600)
X = 2 log2[T/(10^9 × 3,600)]

Results

_________________________________

If A = 26 and N = 6, then T = 308,915,776
D = 0.0000858 computing hour
X = 0; it is already possible to crack all passwords in the space in under an hour

_________________________________

If A = 26 and N = 12, then T = 9.5 × 10^16
D = 26,508 computing hours
X = 29 years before passwords can be cracked in under an hour

_________________________________

If A = 100 and N = 10, then T = 10^20
D = 27,777,777 computing hours
X = 49 years before passwords can be cracked in under an hour

_________________________________

If A = 100 and N = 15, then T = 10^30
D = 2.7 × 10^17 computing hours
X = 115 years before passwords can be cracked in under an hour

________________________________

If A = 200 and N = 20, then T = 1.05 × 10^46
D = 2.9 × 10^33 computing hours
X = 222 years before passwords can be cracked in under an hour
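
A short sketch that reproduces the table above under its stated assumptions (a billion guesses per second in 2019, capacity doubling every two years; years are truncated to whole numbers, and already-crackable spaces floored at zero):

    from math import log2

    def search_hours(A, N, rate=1e9):
        return A ** N / (rate * 3600)          # D: hours to try the space

    for A, N in [(26, 6), (26, 12), (100, 10), (100, 15), (200, 20)]:
        D = search_hours(A, N)
        X = max(0, int(2 * log2(D)))           # years until D < 1 hour
        print(f"A={A}, N={N}: D={D:.3g} hours, X={X} years")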

Weaponizing Dictionaries and Other Hacker Tricks

Quite often an attacker succeeds in obtaining encrypted passwords or password “fingerprints” (which I will discuss more fully later) from a system. If the hack has not been detected, the interloper may have days or even weeks to attempt to derive the actual passwords.

To understand the subtle processes exploited in such cases, take another look at the possibility space. When I spoke earlier of bit size and password space (or entropy), I implicitly assumed that the user consistently chooses passwords at random. But typically the choice is not random: people tend to select a password they can remember (locomotive) rather than an arbitrary string of characters (xdichqewax).

This practice poses a serious problem for security because it makes passwords vulnerable to so-called dictionary attacks. Lists of commonly used passwords have been collected and classified according to how frequently they are used. Attackers attempt to crack passwords by going through these lists systematically. This method works remarkably well because, in the absence of specific constraints, people naturally choose simple words, surnames, first names and short sentences, which considerably limits the possibilities. In other words, the nonrandom selection of passwords essentially reduces possibility space, which decreases the average number of attempts needed to uncover a password.

If you use password or iloveyou, you are not as clever as you thought! Of course, lists differ according to the country where they are collected and the Web sites involved; they also vary over time.
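
To see why these lists are so dangerous, here is a minimal sketch of a dictionary attack against a stolen password fingerprint (fingerprints are explained in more detail below; the “dictionary” here is tiny and purely illustrative):

    import hashlib

    # Make a fingerprint from a weak password so the example is
    # self-contained; an attacker would have stolen this value.
    stolen = hashlib.sha256("iloveyou".encode()).hexdigest()

    # Hash each common password and compare with the stolen fingerprint.
    for guess in ["123456", "password", "qwerty", "iloveyou"]:
        if hashlib.sha256(guess.encode()).hexdigest() == stolen:
            print("cracked:", guess)    # prints: cracked: iloveyou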

For four-digit passwords (for example, the PIN code of SIM cards on smartphones), the results are even less imaginative. In 2013, based on a collection of 3.4 million passwords each containing four digits, the DataGenetics Web site reported that the most commonly used four-digit sequence (representing 11 percent of choices) was 1234, followed by 1111 (6 percent) and 0000 (2 percent). The least-used four-digit password was 8068. Careful, though, this ranking may no longer be true now that the result has been published. The 8068 choice appeared only 25 times among the 3.4 million four-digit sequences in the database, which is much less than the 340 uses that would have occurred if each four-digit combination had been used with the same frequency. The 20 most frequently used four-digit sequences are: 1234; 1111; 0000; 1212; 7777; 1004; 2000; 4444; 2222; 6969; 9999; 3333; 5555; 6666; 1122; 1313; 8888; 4321; 2001; 1010.

Even without a password dictionary, using differences in frequency of letter use (or double letters) in a language makes it possible to plan an effective attack. Some attack methods also take into account that, to facilitate memorization, people may choose passwords that have a certain structure—such as A1=B2=C3, AwX2AwX2 or O0o.lli. (which I used for a long time)—or that are derived by combining several simple strings, such as password123 or johnABC0000. Exploiting such regularities makes it possible for hackers to speed up detection.

Advice for Web Sites

Web sites, too, follow various rules of thumb. The National Institute of Standards and Technology recently published a notice recommending the use of dictionaries to filter users’ password choices.

Among the rules that a good Web server designer absolutely must adhere to is, do not store plaintext lists of usernames and passwords on the computer used to operate the Web site.

The reason is obvious: hackers could access the computer containing this list, either because the site is poorly protected or because the system or processor contains a serious flaw unknown to anyone except the attackers (a so-called zero-day flaw), who can exploit it.

One alternative is to encrypt the passwords on the server: use a secret code that transforms them via an encryption key into what will appear to be random character sequences to anyone who does not possess the decryption key. This method works, but it has two disadvantages. First, it requires decrypting the stored password every time to compare it with the user’s entry, which is inconvenient. Second, and more seriously, the decryption necessary for this comparison requires storing the decryption key in the Web site computer’s memory. This key may therefore be detected by an attacker, which brings us back to the original problem.

A better way to store passwords is through what are called hash functions that produce “fingerprints.” For any data in a file—symbolized as F—a hash function generates a fingerprint. (The process is also called condensing or hashing.) The fingerprint—h(F)—is a fairly short word associated with F but produced in such a way that, in practice, it is impossible to deduce F from h(F). Hash functions are said to be one-way: getting from F to h(F) is easy; getting from h(F) to F is practically impossible. In addition, the hash functions used have the characteristic that even if it is possible for two data inputs, F and F’, to have the same fingerprint (known as a collision), in practice for a given F, it is almost impossible to find an F’ with a fingerprint identical to F.

Using such hash functions allows passwords to be securely stored on a computer. Instead of storing the list of paired usernames and passwords, the server stores only the list of username/fingerprint pairs.

When a user wishes to connect, the server will read the individual’s password, compute the fingerprint and determine whether it corresponds to the list of stored username/fingerprint pairs associated with that username. That maneuver frustrates hackers because even if they have managed to access the list, they will be unable to derive the users’ passwords, inasmuch as it is practically impossible to go from fingerprint to password. Nor can they generate another password with an identical fingerprint to fool the server because it is practically impossible to create collisions.
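
Here is a minimal sketch of that scheme, using SHA-256 as the hash function. (Real systems go further – notably per-user salts and deliberately slow hashes such as bcrypt or Argon2 – but the principle is the same.)

    import hashlib

    def fingerprint(password):
        return hashlib.sha256(password.encode()).hexdigest()

    # The server stores only username/fingerprint pairs, never passwords.
    store = {"alice": fingerprint("correct horse battery staple")}

    def check_login(user, password):
        # Recompute the fingerprint and compare: h(F) is easy to get from
        # F, but F cannot in practice be recovered from h(F).
        return store.get(user) == fingerprint(password)

    print(check_login("alice", "letmein"))                       # False
    print(check_login("alice", "correct horse battery staple"))  # True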

For more insights like this, visit our website at www.international-maths-challenge.com.

Credit of the article given to Jean-Paul Delahaye


“The Danger of a Single Story” in Mathematics


The Lathisms podcast shares the varied stories of Hispanic and Latinx mathematicians

Writer Chimamanda Ngozi Adichie’s popular TED talk is called “The danger of a single story.” In it, she talks about the importance of reading and writing many stories of many people rather than putting a person—or an entire continent of people—into one box. “The single story creates stereotypes,” she says, “and the problem with stereotypes is not that they are untrue but that they are incomplete.”

If someone were asked to tell the story of a “typical” mathematician, they might talk about a shy, socially awkward white man who is a “genius,” whatever that means. He was a fast learner in school and can perform feats of calculation almost instantaneously in his head. He thinks about nothing other than his research, often to the detriment of practical tasks required for everyday living. Some mathematicians do fit these descriptions, but many more don’t. When that story becomes the dominant narrative of who mathematicians are, people who don’t fit the mold feel like there’s no place for them in mathematics. One of the great privileges of working as a math writer is getting to hear the stories of so many mathematicians when I talk to them for articles or podcasts. There really is no one kind of person who becomes a mathematician.

This fall, I’m happy to share a project, created by Lathisms and sponsored by a Tensor-SUMMA grant from the Mathematical Association of America, to share more stories of mathematicians. Lathisms was founded in 2016 by four Hispanic mathematicians, Alexander Diaz-Lopez, Pamela Harris, Alicia Prieto Langarica, and Gabriel Sosa. Hispanic and Latinx people are underrepresented in mathematics, and Lathisms aims to increase visibility of Hispanic and Latinx mathematicians. Since 2016, the organizers have created a calendar every Hispanic Heritage month (September 15-October 15) where each day has a different featured Hispanic or Latinx mathematician, including a picture and short biography of each of them.

This year, Lathisms decided to extend the celebration of Hispanic and Latinx mathematicians by adding a podcast, hosted by me, where you can listen to these mathematicians tell their stories in their own words. Starting at the end of August, we have published a new episode every Friday. The episodes feature mathematicians from past years’ Lathisms calendars as well as some of this year’s honorees. Some of them grew up in the U.S., others in Latin America. Some grew up in poverty, and others were better off. Some knew they wanted to be mathematicians from a young age, and others didn’t know anything about possible mathematics careers until college. Some work in pure math, others in applied. Some focus on research, others outreach.

So far we’ve shared conversations with Carlos Castillo-Chavez, who is one of the most prolific advisors of U.S. Latinx math Ph.D. students; Erika Camacho, who does mathematical modeling of eye diseases; Federico Ardila, who mentioned “the danger of a single story” when we talked and finds inspiration and mentorship from both students and teachers; and Nicolas Garcia Trillos, who just started a new job in the statistics department at the University of Wisconsin-Madison and talked about the many ways there are to be a good mathematician and how that helps him get “unstuck” in his work. In the coming weeks, we will share many more stories. Tune in on Fridays to find them.

You can find the podcast at the Lathisms website or on iTunes. Transcripts are available already for some episodes and will be provided for all episodes. I hope these conversations will be helpful for teachers who want to make sure their students are aware of the diversity of mathematicians, for Hispanic and Latinx students and early-career mathematicians who are looking for role models and collaborators, and for anyone who wants to hear about mathematicians’ many different stories.

For more insights like this, visit our website at www.international-maths-challenge.com.

Credit of the article given to Evelyn Lamb


Pi in the Sky

Elegant new visualization maps the digits of pi as a star catalogue

The mind of Martin Krzywinski is a rich and dizzying place, teeming with fascinating questions, ideas, and inspiration. Krzywinski is a scientist and data visualizer whose primary line of work involves genome analysis for cancer research. In his spare time, though, he explores his many different interests as a scientific and visual thinker through creative projects. For the past few years, one such project has occupied him on a recurring basis each March: reimagining the digits of pi in a novel, science-based, and visually compelling way.

Today, this delightful March 14th (“Pi Day”) tradition brings us the digits of pi mapped onto the night sky, as a star catalogue. Like the infinitely long sequence of pi, space has no discernible end, but we earthbound observers can only see so far. So Krzywinski places a cap at 12 million digits and groups each successive series of 12 numerals to define a latitude, longitude and brightness, resulting in a field of a million stars, randomly arranged.
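
The precise encoding is Krzywinski’s own; purely for illustration, here is one plausible reading of “12 digits per star”, splitting each group into five digits of latitude, five of longitude and two of brightness (that split is my assumption):

    from mpmath import mp, nstr

    mp.dps = 130                                   # more digits than we need
    digits = nstr(mp.pi, 125).replace(".", "")     # "31415926535..."

    stars = []
    for i in range(0, 120, 12):                    # ten demo stars
        g = digits[i:i + 12]
        lat = int(g[:5]) / 99999 * 180 - 90        # degrees, -90..90
        lon = int(g[5:10]) / 99999 * 360 - 180     # degrees, -180..180
        brightness = int(g[10:]) / 99              # 0..1
        stars.append((lat, lon, brightness))

    print(stars[0])   # the star encoded by pi's first 12 digits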

Just as humans throughout history have found figures and narratives among the stars, this new array of celestial bodies also yields a story. As a way to honor our evolutionary ancestors, Krzywinski connects the dots to create shapes of extinct animals from around the globe.

Carrée projection of “Pi in the Sky” star chart
Credit: Martin Krzywinski

But he couldn’t possibly stop there, so Krzywinski takes the visualization a step further, experimenting with different projections to re-create the map in various spatial iterations.

Azimuthal projections of “Pi in the Sky” star chart
Credit: Martin Krzywinski

Hammer/Aitoff projection of “Pi in the Sky” star chart
Credit: Martin Krzywinski

To read more about the visualization, including descriptions of the animals depicted, and a poem written by the artist’s collaborator Paolo Marcazzan, visit Martin Krzywinski’s website. There, you can also explore his previous Pi Day visualizations and even purchase them as posters.

For more insights like this, visit our website at www.international-maths-challenge.com.

Credit of the article given to Amanda Montañez


Peculiar Pattern Found in “Random” Prime Numbers


Last digits of nearby primes have “anti-sameness” bias

Two mathematicians have found a strange pattern in prime numbers—showing that the numbers are not distributed as randomly as theorists often assume.

“Every single person we’ve told this ends up writing their own computer program to check it for themselves,” says Kannan Soundararajan, a mathematician at Stanford University in California, who reported the discovery with his colleague Robert Lemke Oliver in a paper submitted to the arXiv preprint server on March 11. “It is really a surprise,” he says.

Prime numbers near to each other tend to avoid repeating their last digits, the mathematicians say: that is, a prime that ends in 1 is less likely to be followed by another ending in 1 than one might expect from a random sequence. “As soon as I saw the numbers, I could see it was true,” says mathematician James Maynard of the University of Oxford, UK. “It’s a really nice result.”

Although prime numbers are used in a number of applications, such as cryptography, this ‘anti-sameness’ bias has no practical use or even any wider implication for number theory, as far as Soundararajan and Lemke Oliver know. But, for mathematicians, it’s both strange and fascinating.

Not so random

A clear rule determines exactly what makes a prime: it’s a whole number that can’t be exactly divided by anything except 1 and itself. But there’s no discernible pattern in the occurrence of the primes. Beyond the obvious—after the numbers 2 and 5, primes can’t be even or end in 5—there seems to be little structure that can help to predict where the next prime will occur.

As a result, number theorists find it useful to treat the primes as a ‘pseudorandom’ sequence, as if it were created by a random-number generator.

But if the sequence were truly random, then a prime with 1 as its last digit should be followed by another prime ending in 1 one-quarter of the time. That’s because after the number 5, there are only four possibilities—1, 3, 7 and 9—for prime last digits. And these are, on average, equally represented among all primes, according to a theorem proved around the end of the nineteenth century, one of the results that underpin much of our understanding of the distribution of prime numbers. (Another is the prime number theorem, which quantifies how much rarer the primes become as numbers get larger.)

Instead, Lemke Oliver and Soundararajan saw that in the first billion primes, a 1 is followed by a 1 about 18% of the time, by a 3 or a 7 each 30% of the time, and by a 9 22% of the time. They found similar results when they started with primes that ended in 3, 7 or 9: variation, but with repeated last digits the least common. The bias persists but slowly decreases as numbers get larger.
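
In the spirit of the write-your-own-program checks mentioned at the start, this small script (mine, not the researchers’) tallies successor last digits for the primes below two million. The exact percentages differ from the billion-prime figures above, but the repeated digit comes out rarest here too:

    from collections import Counter

    N = 2_000_000
    sieve = bytearray([1]) * N
    sieve[0] = sieve[1] = 0
    for i in range(2, int(N ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, N, i)))
    primes = [n for n in range(7, N) if sieve[n]]   # primes after 5

    last = [p % 10 for p in primes]
    pairs = Counter(zip(last, last[1:]))            # consecutive-prime digits
    total_from_1 = sum(pairs[(1, d)] for d in (1, 3, 7, 9))
    for d in (1, 3, 7, 9):
        print(f"1 -> {d}: {pairs[(1, d)] / total_from_1:.1%}")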

The k-tuple conjecture

The mathematicians were able to show that the pattern they saw holds true for all primes, if a widely accepted but unproven statement called the Hardy–Littlewood k-tuple conjecture is correct. This describes the distributions of pairs, triples and larger prime clusters more precisely than the basic assumption that the primes are evenly distributed.

The idea behind it is that there are some configurations of primes that can’t occur, and that this makes other clusters more likely. For example, consecutive numbers cannot both be prime—one of them is always an even number. So if the number n is prime, it is slightly more likely that n + 2 will be prime than random chance would suggest. The k-tuple conjecture quantifies this observation in a general statement that applies to all kinds of prime clusters. And by playing with the conjecture, the researchers show how it implies that repeated final digits are rarer than chance would suggest.

At first glance, it would seem that this is because gaps between primes that are multiples of 10 (20, 30, 100 and so on) are disfavoured. But the finding gets much more general—and even more peculiar. A prime’s last digit is its remainder when it is divided by 10. But the mathematicians found that the anti-sameness bias holds for any divisor. Take 6, for example. All primes have a remainder of 1 or 5 when divided by 6 (otherwise, they would be divisible by 2 or 3) and the two remainders are on average equally represented among all primes. But the researchers found that a prime that has a remainder of 1 when divided by 6 is more likely to be followed by one that has a remainder of 5 than by another that has a remainder of 1. From a 6-centric point of view, then, gaps of multiples of 6 seem to be disfavoured.

Paradoxically, checking every possible divisor makes it appear that almost all gaps are disfavoured, suggesting that a subtler explanation than a simple accounting of favoured and disfavoured gaps must be at work. “It’s a completely weird thing,” says Soundararajan.

Mystifying phenomenon

The researchers have checked primes up to a few trillion, but they think that they have to invoke the k-tuple conjecture to show that the pattern persists. “I have no idea how you would possibly formulate the right conjecture without assuming it,” says Lemke Oliver.

Without assuming unproven statements such as the k-tuple conjecture and the much-studied Riemann hypothesis, mathematicians’ understanding of the distribution of primes dries up. “What we know is embarrassingly little,” says Lemke Oliver. For example, without assuming the k-tuple conjecture, mathematicians have proved that the last-digit pairs 1–1, 3–3, 7–7 and 9–9 occur infinitely often, but they cannot prove that the other pairs do. “Perversely, given our work, the other pairs should be more common,” says Lemke Oliver.

He and Soundararajan feel that they have a long way to go before they understand the phenomenon on a deep level. Each has a pet theory, but none of them is really satisfying. “It still mystifies us,” says Soundararajan.

For more insights like this, visit our website at www.international-maths-challenge.com.

Credit of the article given to Evelyn Lamb & Nature magazine