The Unforgiving Math That Stops Epidemics

Credit: Peter Dazeley Getty Images

Not getting a flu shot could endanger more than just one’s own health, herd immunity calculations show   

As the annual flu season approaches, medical professionals are again encouraging people to get flu shots. Perhaps you are among those who rationalize skipping the shot on the grounds that “I never get the flu” or “if I get sick, I get sick” or “I’m healthy, so I’ll get over it.” What you might not realize is that these vaccination campaigns for flu and other diseases are about much more than your health. They’re about achieving a collective resistance to disease that goes beyond individual well-being—and that is governed by mathematical principles unforgiving of unwise individual choices.

When talking about vaccination and disease control, health authorities often invoke “herd immunity.” This term refers to the level of immunity in a population that’s needed to prevent an outbreak from happening. Low levels of herd immunity are often associated with epidemics, such as the measles outbreak in 2014-2015 that was traced to exposures at Disneyland in California. A study investigating cases from that outbreak estimated that measles vaccination rates in the exposed population may have been as low as 50 percent. This number was far below the threshold needed for herd immunity to measles, and it put the population at risk of disease.

The necessary level of immunity in the population isn’t the same for every disease. For measles, a very high level of immunity needs to be maintained to prevent its transmission because the measles virus is possibly the most contagious known organism. If people infected with measles enter a population with no existing immunity to it, they will on average each infect 12 to 18 others. Each of those infections will in turn cause 12 to 18 more, and so on until the number of individuals who are susceptible to the virus but haven’t caught it yet is down to almost zero. The number of people infected by each contagious individual is known as the “basic reproduction number” of a particular microbe (abbreviated R0), and it varies widely among germs. A 2014 publication calculated the R0 of the West African Ebola outbreak to be around 2, similar to the R0 computed for the 1918 influenza pandemic based on historical data.

If the Ebola virus’s R0 sounds surprisingly low to you, that’s probably because you have been misled by the often hysterical reporting about the disease. The reality is that the virus is highly infectious only in the late stages of the disease, when people are extremely ill with it. The ones most likely to be infected by an Ebola patient are caregivers, doctors, nurses and burial workers—because they are the ones most likely to be present when the patients are “hottest” and most likely to transmit the disease. The scenario of an infectious Ebola patient boarding an aircraft and passing on the disease to other passengers is extremely unlikely because an infectious patient would be too sick to fly. In fact, we know of cases of travelers who were incubating Ebola virus while flying, and they produced no secondary cases during those flights.

Note that the R0 isn’t related to how severe an infection is, but to how efficiently it spreads. Ebola killed about 40 percent of those infected in West Africa, while the 1918 influenza epidemic had a case-fatality rate of about 2.5 percent. In contrast, polio and smallpox historically spread to about 5 to 7 people each, which puts them in the same range as the modern-day HIV virus and pertussis (the bacterium that causes whooping cough).

Determining the R0 of a particular microbe is a matter of more than academic interest. If you know how many secondary cases to expect from each infected person, you can figure out the level of herd immunity needed in the population to keep the microbe from spreading. This is calculated by taking the reciprocal of R0 and subtracting it from 1. For measles, with an R0 of 12 to 18, you need somewhere between 92 percent (1 – 1/12) and 95 percent (1 – 1/18) of the population to have effective immunity to keep the virus from spreading. For flu, it’s much lower—only around 50 percent. And yet we rarely attain even that level of immunity with vaccination.
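That reciprocal rule takes only a few lines to compute. Here is a minimal Python sketch; the flu value of R0 = 2 is an assumption back-solved from the article's 50 percent figure rather than a number the article states directly:

```python
def herd_immunity_threshold(r0: float) -> float:
    """Share of the population that must be immune: 1 - 1/R0."""
    return 1 - 1 / r0

for disease, r0 in [("measles, R0 = 12", 12), ("measles, R0 = 18", 18), ("flu, R0 = 2", 2)]:
    print(f"{disease}: {herd_immunity_threshold(r0):.0%} immunity needed")
# measles, R0 = 12: 92%; measles, R0 = 18: 94% (the article rounds to 95); flu: 50%
```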

Once we understand the concept of R0, so much about patterns of infectious disease makes sense. It explains, for example, why there are childhood diseases—infections that people usually encounter when young, and against which they often acquire lifelong immunity after the infections resolve. These infections include measles, mumps, rubella and (prior to its eradication) smallpox—all of which periodically swept through urban populations in the centuries prior to vaccination, usually affecting children.

Do these viruses have some unusual affinity for children? Before vaccination, did they just go away after each outbreak and only return to cities at approximately five- to 10-year intervals? Not usually. After a large outbreak, viruses linger in the population, but the level of herd immunity is high because most susceptible individuals have been infected and (if they survived) developed immunity. Consequently, the viruses spread slowly: In practice, their R0 is just slightly above 1. This is known as the “effective reproduction number”—the rate at which the microbe is actually transmitted in a population that includes both susceptible and non-susceptible individuals (in other words, a population where some immunity already exists). Meanwhile, new susceptible children are born into the population. Within a few years, the population of young children who have never been exposed to the disease dilutes the herd immunity in the population to a level below what’s needed to keep outbreaks from occurring. The virus can then spread more rapidly, resulting in another epidemic.
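The arithmetic behind that slow spread is the standard relationship between R0 and the share of people still susceptible. The sketch below illustrates it; the 93 and 80 percent immunity figures are made-up examples, not numbers from the article:

```python
def effective_r(r0: float, immune_fraction: float) -> float:
    """Effective reproduction number when only part of the population is susceptible."""
    return r0 * (1 - immune_fraction)

# Just after a measles epidemic, almost everyone is immune:
print(effective_r(15, 0.93))  # ~1.05: the virus barely sustains itself
# A few years of births dilute that immunity:
print(effective_r(15, 0.80))  # 3.0: well above 1, so another epidemic can ignite
```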

An understanding of the basic reproduction number also explains why diseases spread so rapidly in new populations: Because those hosts lack any immunity to the infection, the microbe can achieve its maximum R0. This is why diseases from invading Europeans spread so rapidly and widely among indigenous populations in the Americas and Hawaii during their first encounters. Having never been exposed to these microbes before, the non-European populations had no immunity to slow their spread.

If we further understand what constellation of factors contributes to an infection’s R0, we can begin to develop interventions to interrupt the transmission. One aspect of the R0 is the average number and frequency of contacts that an infected individual has with others susceptible to the infection. Outbreaks happen more frequently in large urban areas because individuals living in crowded cities have more opportunities to spread the infection: They are simply in contact with more people and have a higher likelihood of encountering someone who lacks immunity. To break this chain of transmission during an epidemic, health authorities can use interventions such as isolation (keeping infected individuals away from others) or even quarantine (keeping individuals who have been exposed to infectious individuals—but are not yet sick themselves—away from others).

Other factors that can affect the R0 involve both the host and the microbe. When an infected person has contact with someone who is susceptible, what is the likelihood that the microbe will be transmitted? Frequently, hosts can reduce the probability of transmission through their behaviors: by covering coughs or sneezes for diseases transmitted through the air, by washing their contaminated hands frequently, and by using condoms to contain the spread of sexually transmitted diseases.

These behavioral changes are important, but we know they’re far from perfect and not particularly efficient in the overall scheme of things. Take hand-washing, for example. We’ve known of its importance in preventing the spread of disease for 150 years. Yet studies have shown that hand-washing compliance even by health care professionals is astoundingly low — less than half of doctors and nurses wash their hands when they’re supposed to while caring for patients. It’s exceedingly difficult to get people to change their behavior, which is why public health campaigns built around convincing people to behave differently can sometimes be less effective than vaccination campaigns.

How long a person can actively spread the infection is another factor in the R0. Most infections can be transmitted for only a few days or weeks. Adults with influenza can spread the virus for about a week, for example. Some microbes can linger in the body and be transmitted for months or years. HIV is most infectious in the early stages when concentrations of the virus in the blood are very high, but even after those levels subside, the virus can be transmitted to new partners for many years. Interventions such as drug treatments can decrease the transmissibility of some of these organisms.

The microbes’ properties are also important. While hosts can purposely protect themselves, microbes don’t choose their traits. But over time, evolution can shape them in a manner that increases their chances of transmission, such as by enabling measles to linger longer in the air and allowing smallpox to survive longer in the environment.

By bringing together all these variables (size and dynamics of the host population, levels of immunity in the population, presence of interventions, microbial properties, and more), we can map and predict the spread of infections in a population using mathematical models. Sometimes these models can overestimate the spread of infection, as was the case with the models for the Ebola outbreak in 2014. One model predicted up to 1.4 million cases of Ebola by January 2015; in reality, the outbreak ended in 2016 with only 28,616 cases. On the other hand, models used to predict the transmission of cholera during an outbreak in Yemen have been more accurate.

The difference between the two? By the time the Ebola model was published, interventions to help control the outbreak were already under way. Campaigns had begun to raise awareness of how the virus was transmitted, and international aid had arrived, bringing in money, personnel and supplies to contain the epidemic. These interventions decreased the Ebola virus R0 primarily by isolating the infected and instituting safe burial practices, which reduced the number of susceptible contacts each case had. Shipments of gowns, gloves and soap that health care workers could use to protect themselves while treating patients reduced the chance that the virus would be transmitted. Eventually, those changes meant that the effective R0 fell below 1—and the epidemic ended. (Unfortunately, comparable levels of aid and interventions to stop cholera in Yemen have not been forthcoming.)

Catch-up vaccinations and the use of isolation and quarantine also likely helped to end the Disneyland measles epidemic, as well as a slightly earlier measles epidemic in Ohio. Knowing the factors that contribute to these outbreaks can aid us in stopping epidemics in their early stages. But to prevent them from happening in the first place, a population with a high level of immunity is, mathematically, our best bet for keeping disease at bay.


Credit of the article given to Tara C. Smith & Quanta Magazine


How to avoid a sucker bet – with a little help from maths

Sitting in a bar, you start chatting to a man who issues you a challenge. He hands you five red and two black cards. After shuffling, you lay them on the bar, face down. He bets you that you cannot turn over three red cards. And to help you, he explains the odds.

When you draw the first card, the odds are 5-2 (five red cards, two black cards) in favour of picking a red card. The second draw is 4-2 (or 2-1) and the third draw is 3-2. Each time you draw a card the odds appear to be in your favour, in that you have more chance of drawing a red card than a black card. So, do you accept the bet?

If you answered yes, perhaps it’s time for you to go over your maths. It’s a foolish bet. The odds given above are only for a perfect draw. The real odds of you being able to carry out this feat are actually 5-2 against you. That is, for every seven times you play, you’ll lose five times.
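The true probability follows from multiplying the three draws together; a quick check in Python with exact fractions:

```python
from fractions import Fraction

# P(red, then red, then red) = 5/7 * 4/6 * 3/5
p_win = Fraction(5, 7) * Fraction(4, 6) * Fraction(3, 5)
print(p_win, 1 - p_win)  # 2/7 vs 5/7: the true odds are 5-2 against you
```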

Odds against you

This type of bet is often called a proposition bet, which is defined as a wager on something that seems like a good idea, but for which the odds are actually against you, often very much against you, perhaps even making it impossible for you to win.

Let’s assume that you took the bet and, almost inevitably, lost money. But this is just for fun, right? So your new “friend” suggests a way that you can get your money back. He takes two more red cards and hands them to you, so you now have seven red cards and two black cards. You shuffle the nine cards and lay them out, face down, in a three by three grid. He bets you even money that you can’t pick out a straight line (vertical, horizontal or diagonal) that has only red cards.

Nine Card Hustle. Image created by Graham Kendall

Intuitively, this might sound like a better bet, and the odds are indeed evens if the two black cards land next to each other in a corner (see image): of the eight lines to choose from, four contain only red cards and four contain a black card. A few layouts are even better for you (with the black cards on opposite edge midpoints, five of the eight lines are all red), but on average the grid works against you.

If the black cards land in opposite corners, only three lines win for you (the centre row, the centre column and the other diagonal), so the odds are 5-3 against. With one black card in the centre and the other in a corner, only two winning lines remain. Counting every possible placement of the two black cards and every line you might pick, only 120 of the 288 equally likely combinations succeed, against 168 that lose. Hardly an even-chance bet.
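Those counts are easy to check by brute force, enumerating every placement of the two black cards and every line you might point to:

```python
from itertools import combinations

# The eight lines of a 3x3 grid, with cells indexed 0-8, row by row.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

wins = losses = 0
for blacks in combinations(range(9), 2):     # every placement of the 2 black cards
    for line in LINES:                       # every line you might choose
        if set(blacks) & set(line):          # the line contains a black card
            losses += 1
        else:                                # the line is all red
            wins += 1

print(wins, losses)  # 120 168
```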

Have another go

Try to evaluate the odds for this proposition bet.

You shuffle a pack of cards and cut it into three piles. You are offered even money that one of the cards on top of the piles will be a picture card (a jack, queen or king). That is, if a picture card shows up, you lose. Do you think this is a good bet?

One way of reasoning is that there are only 12 losing cards against 40 winning cards, so the odds look better than evens. But this is the wrong way of looking at it. It is really what’s known as a combinatorics problem: we are simply choosing three cards at random from the deck.

There are 22,100 ways of choosing three cards from a 52-card deck. Of these, 12,220 will contain at least one picture card – so you lose – while the other 9,880 contain no picture card – when you win. If you translate this to odds, you will lose five times out of every nine times you play (5-4 against you). The even-chance bet you have been offered is not the good value that you thought it was, and you will lose money if you play a few times.
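The same numbers fall straight out of Python's built-in binomial coefficient:

```python
from math import comb

total = comb(52, 3)    # 22,100 ways to choose the three top cards
safe = comb(40, 3)     # 9,880 ways to avoid all 12 picture cards
print(total, safe, total - safe)  # 22100 9880 12220
print((total - safe) / total)     # ~0.553: you lose about five times in nine
```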

 

A Final Example

We can all agree that you have a 50/50 chance of guessing heads or tails in a coin toss. But if you toss the coin ten times, would you expect to see five heads and five tails? If you were offered odds of 2-1 to try this, would you take the bet? You’d be a sucker if you did.

Five heads and five tails will occur more often than any other combination, but there are many other ways that ten flips of a coin can land. In fact, exactly five heads and five tails turns up in only 252 of the 1,024 possible sequences, so the true odds are roughly 3-1 against you.
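A two-line check confirms the count:

```python
from math import comb

p = comb(10, 5) / 2**10   # 252/1024
print(p, (1 - p) / p)     # ~0.246, so roughly 3-1 against
```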

Another name for a proposition bet is the “sucker” bet, and there is no surprise who the sucker is. But don’t feel too bad. We are all generally very poor at evaluating true odds. A famous example is the Monty Hall Problem. Even mathematicians could not agree on the right answer to this seemingly simple problem.

We have focused on bets where it is difficult, especially when under the pressure of deciding whether to bet or not, to calculate the true odds. But there are many other proposition bets that do not rely on calculating odds. And there are many other sucker bets, with probably the most famous being Three-card Monte.

If faced with this type of bet, what is the best thing you can do? I’d suggest you simply walk away.

Credit of the article given to Graham Kendall


The Mathematics of (Hacking) Passwords

Credit: Gaetan Charbonneau Getty Images

The science and art of password setting and cracking continues to evolve, as does the war between password users and abusers

At one time or another, we have all been frustrated by trying to set a password, only to have it rejected as too weak. We are also told to change our choices regularly. Obviously such measures add safety, but how exactly?

I will explain the mathematical rationale for some standard advice, including clarifying why six characters are not enough for a good password and why you should never use only lowercase letters. I will also explain how hackers can uncover passwords even when stolen data sets lack them.

Choose#W!sely@*

Here is the logic behind setting hack-resistant passwords. When you are asked to create a password of a certain length and combination of elements, your choice will fit into the realm of all unique options that conform to that rule—into the “space” of possibilities. For example, if you were told to use six lowercase letters—such as, afzjxd, auntie, secret, wwwwww—the space would contain 26^6, or 308,915,776, possibilities. In other words, there are 26 possible choices for the first letter, 26 possible choices for the second, and so forth. These choices are independent: you do not have to use different letters, so the size of the password space is the product of the possibilities, or 26 x 26 x 26 x 26 x 26 x 26 = 26^6.

If you are told to select a 12-character password that can include uppercase and lowercase letters, the 10 digits and 10 symbols (say, !, @, #, $, %, ^, &, ?, / and +), you would have 72 possibilities for each of the 12 characters of the password. The size of the possibility space would then be 72^12 (19,408,409,961,765,342,806,016, or close to 19 x 10^21).

That is more than 62 trillion times the size of the first space. A computer running through all the possibilities for your 12-character password one by one would take 62 trillion times longer. If your computer spent a second visiting the six-character space, it would have to devote two million years to examining each of the passwords in the 12-character space. The multitude of possibilities makes it impractical for a hacker to carry out a plan of attack that might have been feasible for the six-character space.

Calculating the size of these spaces by computer usually involves counting the number of binary digits in the number of possibilities, N. That bit count is derived from this formula: 1 + integer(log2(N)). In the formula, the value of log2(N) is a real number with many decimal places, such as log2(26^6) = 28.202638…. The “integer” in the formula indicates that the decimal portion of that log value is omitted, rounding down to a whole number—as in integer(28.202638…) = 28. For the example of six lowercase letters above, the computation results in 29 bits; for the more complex, 12-character example, it is 75 bits. (Mathematicians refer to the possibility spaces as having entropy of 29 and 75 bits, respectively.) The French National Cybersecurity Agency (ANSSI) recommends spaces having a minimum of 100 bits when it comes to passwords or secret keys for encryption systems that absolutely must be secure. Encryption involves representing data in a way that ensures it cannot be retrieved unless a recipient has a secret code-breaking key. In fact, the agency recommends a possibility space of 128 bits to guarantee security for several years. It considers 64 bits to be very small (very weak); 64 to 80 bits to be small; and 80 to 100 bits to be medium (moderately strong).
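That bit-count formula translates directly into a few lines of Python:

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> int:
    """Bit size of a password space: 1 + integer(log2(A^N))."""
    possibilities = alphabet_size ** length
    return 1 + math.floor(math.log2(possibilities))

print(entropy_bits(26, 6))   # six lowercase letters -> 29 bits
print(entropy_bits(72, 12))  # 72 characters, length 12 -> 75 bits
```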

Moore’s law (which says that the computer-processing power available at a certain price doubles roughly every two years) explains why a relatively weak password will not suffice for long-term use: over time computers using brute force can find passwords faster. Although the pace of Moore’s law appears to be decreasing, it is wise to take it into account for passwords that you hope will remain secure for a long time.

For a truly strong password as defined by ANSSI, you would need, say, a sequence of 16 characters, each taken from a set of 200 characters. This would make a 123-bit space, which would render the password close to impossible to memorize. Therefore, system designers are generally less demanding and accept low- or medium-strength passwords. They insist on long ones only when the passwords are automatically generated by the system, and users do not have to remember them.

There are other ways to guard against password cracking. The simplest is well known and used by credit cards: after three unsuccessful attempts, access is blocked. Alternative ideas have also been suggested, such as doubling the waiting time after each successive failed attempt but allowing the system to reset after a long period, such as 24 hours. These methods, however, are ineffective when an attacker is able to access the system without being detected or if the system cannot be configured to interrupt and disable failed attempts.

How Long Does It Take to Search All Possible Passwords?

For a password to be difficult to crack, it should be chosen randomly from a large set, or “space,” of possibilities. The size, T, of the possibility space is based on the length, A, of the list of valid characters in the password and the number of characters, N, in the password.

The size of this space (T = A^N) may vary considerably.

Each of the following examples specifies values of A, N and T, and the number of hours, D, that hackers would have to spend to try every permutation of characters one by one. X is the number of years that will have to pass before the space can be checked in less than one hour, assuming that Moore’s law (the doubling of computing capacity every two years) remains valid. I also assume that in 2019, a computer can explore a billion possibilities per second. I represent this set of assumptions with the following three relationships and consider five possibilities based on values of A and N:

 

Relationships

T = A^N
D = T/(10^9 × 3,600)
X = 2 log2[T/(10^9 × 3,600)]

Results

_________________________________

If A = 26 and N = 6, then T = 308,915,776
D = 0.0000858 computing hour
X = 0; it is already possible to crack all passwords in the space in under an hour

_________________________________

If A = 26 and N = 12, then T = 9.5 × 10^16
D = 26,508 computing hours
X = 29 years before passwords can be cracked in under an hour

_________________________________

If A = 100 and N = 10, then T = 10^20
D = 27,777,777 computing hours
X = 49 years before passwords can be cracked in under an hour

_________________________________

If A = 100 and N = 15, then T = 10^30
D = 2.7 × 10^17 computing hours
X = 115 years before passwords can be cracked in under an hour

________________________________

If A = 200 and N = 20, then T = 1.05 × 10^46
D = 2.9 × 10^33 computing hours
X = 222 years before passwords can be cracked in under an hour
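All five results can be reproduced with a short script that applies the three relationships above (10^9 guesses per second, Moore's-law doubling every two years):

```python
import math

GUESSES_PER_HOUR = 10**9 * 3600

for A, N in [(26, 6), (26, 12), (100, 10), (100, 15), (200, 20)]:
    T = A ** N                    # size of the password space
    D = T / GUESSES_PER_HOUR      # hours to try every password once
    X = max(0, 2 * math.log2(D))  # years until D shrinks below one hour
    print(f"A={A:3} N={N:2}  T={T:.3g}  D={D:.3g} hours  X={X:.0f} years")
```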

Weaponizing Dictionaries and Other Hacker Tricks

Quite often an attacker succeeds in obtaining encrypted passwords or password “fingerprints” (which I will discuss more fully later) from a system. If the hack has not been detected, the interloper may have days or even weeks to attempt to derive the actual passwords.

To understand the subtle processes exploited in such cases, take another look at the possibility space. When I spoke earlier of bit size and password space (or entropy), I implicitly assumed that the user consistently chooses passwords at random. But typically the choice is not random: people tend to select a password they can remember (locomotive) rather than an arbitrary string of characters (xdichqewax).

This practice poses a serious problem for security because it makes passwords vulnerable to so-called dictionary attacks. Lists of commonly used passwords have been collected and classified according to how frequently they are used. Attackers attempt to crack passwords by going through these lists systematically. This method works remarkably well because, in the absence of specific constraints, people naturally choose simple words, surnames, first names and short sentences, which considerably limits the possibilities. In other words, the nonrandom selection of passwords essentially reduces possibility space, which decreases the average number of attempts needed to uncover a password.
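As a minimal sketch of the idea (the tiny word list and the use of unsalted SHA-256 fingerprints, discussed later in this article, are illustrative assumptions rather than a real attack tool):

```python
import hashlib

# Toy frequency-ordered list; real attack dictionaries hold millions of entries.
COMMON_PASSWORDS = ["123456", "password", "iloveyou", "qwerty", "locomotive"]

def dictionary_attack(stolen_fingerprint):
    """Try the most common choices first; human habits shrink the search space."""
    for candidate in COMMON_PASSWORDS:
        if hashlib.sha256(candidate.encode()).hexdigest() == stolen_fingerprint:
            return candidate
    return None

stolen = hashlib.sha256(b"iloveyou").hexdigest()
print(dictionary_attack(stolen))  # 'iloveyou', recovered on the third try
```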

If you use password or iloveyou, you are not as clever as you thought! Of course, lists differ according to the country where they are collected and the Web sites involved; they also vary over time.

For four-digit passwords (for example, the PIN code of SIM cards on smartphones), the results are even less imaginative. In 2013, based on a collection of 3.4 million passwords each containing four digits, the DataGenetics Web site reported that the most commonly used four-digit sequence (representing 11 percent of choices) was 1234, followed by 1111 (6 percent) and 0000 (2 percent). The least-used four-digit password was 8068. Careful, though, this ranking may no longer be true now that the result has been published. The 8068 choice appeared only 25 times among the 3.4-million four-digit sequences in the database, which is much less than the 340 uses that would have occurred if each four-digit combination had been used with the same frequency. The first 20 series of four digits are: 1234; 1111; 0000; 1212; 7777; 1004; 2000; 4444; 2222; 6969; 9999; 3333; 5555; 6666; 1122; 1313; 8888; 4321; 2001; 1010.

Even without a password dictionary, using differences in frequency of letter use (or double letters) in a language makes it possible to plan an effective attack. Some attack methods also take into account that, to facilitate memorization, people may choose passwords that have a certain structure—such as A1=B2=C3, AwX2AwX2 or O0o.lli. (which I used for a long time)—or that are derived by combining several simple strings, such as password123 or johnABC0000. Exploiting such regularities makes it possible for hackers to speed up detection.

Advice for Web Sites

Web sites, too, follow various rules of thumb. The National Institute of Standards and Technology recently published a notice recommending the use of dictionaries to filter users’ password choices.

Among the rules that a good Web server designer absolutely must adhere to is, do not store plaintext lists of usernames and passwords on the computer used to operate the Web site.

The reason is obvious: hackers could access the computer containing this list, either because the site is poorly protected or because the system or processor contains a serious flaw unknown to anyone except the attackers (a so-called zero-day flaw), who can exploit it.

One alternative is to encrypt the passwords on the server: use a secret code that transforms them via an encryption key into what will appear to be random character sequences to anyone who does not possess the decryption key. This method works, but it has two disadvantages. First, it requires decrypting the stored password every time to compare it with the user’s entry, which is inconvenient. Second, and more seriously, the decryption necessary for this comparison requires storing the decryption key in the Web site computer’s memory. This key may therefore be detected by an attacker, which brings us back to the original problem.

A better way to store passwords is through what are called hash functions that produce “fingerprints.” For any data in a file—symbolized as F—a hash function generates a fingerprint. (The process is also called condensing or hashing.) The fingerprint—h(F)—is a fairly short word associated with F but produced in such a way that, in practice, it is impossible to deduce F from h(F). Hash functions are said to be one-way: getting from F to h(F) is easy; getting from h(F) to F is practically impossible. In addition, the hash functions used have the characteristic that even if it is possible for two data inputs, F and F’, to have the same fingerprint (known as a collision), in practice for a given F, it is almost impossible to find an F’ with a fingerprint identical to F.

Using such hash functions allows passwords to be securely stored on a computer. Instead of storing the list of paired usernames and passwords, the server stores only the list of username/fingerprint pairs.

When a user wishes to connect, the server will read the individual’s password, compute the fingerprint and determine whether it corresponds to the list of stored username/fingerprint pairs associated with that username. That maneuver frustrates hackers because even if they have managed to access the list, they will be unable to derive the users’ passwords, inasmuch as it is practically impossible to go from fingerprint to password. Nor can they generate another password with an identical fingerprint to fool the server because it is practically impossible to create collisions.
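Here is a minimal sketch of that scheme, assuming SHA-256 as the hash function (the article does not name one, and production systems use deliberately slow, salted password hashes such as bcrypt or Argon2):

```python
import hashlib

users = {}  # username -> fingerprint; no plaintext passwords are stored

def fingerprint(password: str) -> str:
    """h(F): easy to compute, practically impossible to invert."""
    return hashlib.sha256(password.encode()).hexdigest()

def register(username: str, password: str) -> None:
    users[username] = fingerprint(password)

def login(username: str, password: str) -> bool:
    # Recompute the fingerprint and compare it with the stored one.
    return users.get(username) == fingerprint(password)

register("alice", "Choose#W!sely@*")
print(login("alice", "Choose#W!sely@*"))  # True
print(login("alice", "password123"))      # False
```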


Credit of the article given to Jean-Paul Delahaye


“The Danger of a Single Story” in Mathematics

Credit: George Coppock Getty Images

The Lathisms podcast shares the varied stories of Hispanic and Latinx mathematicians

Writer Chimamanda Ngozi Adichie’s popular TED talk is called “The danger of a single story.” In it, she talks about the importance of reading and writing many stories of many people rather than putting a person—or an entire continent of people—into one box. “The single story creates stereotypes,” she says, “and the problem with stereotypes is not that they are untrue but that they are incomplete.”

If someone were asked to tell the story of a “typical” mathematician, they might talk about a shy, socially awkward white man who is a “genius,” whatever that means. He was a fast learner in school and can perform feats of calculation almost instantaneously in his head. He thinks about nothing other than his research, often to the detriment of practical tasks required for everyday living. Some mathematicians do fit these descriptions, but many more don’t. When that story becomes the dominant narrative of who mathematicians are, people who don’t fit the mold feel like there’s no place for them in mathematics. One of the great privileges of working as a math writer is getting to hear the stories of so many mathematicians when I talk to them for articles or podcasts. There really is no one kind of person who becomes a mathematician.

This fall, I’m happy to announce a project, created by Lathisms and sponsored by a Tensor-SUMMA grant from the Mathematical Association of America, to share more stories of mathematicians. Lathisms was founded in 2016 by four Hispanic mathematicians, Alexander Diaz-Lopez, Pamela Harris, Alicia Prieto Langarica, and Gabriel Sosa. Hispanic and Latinx people are underrepresented in mathematics, and Lathisms aims to increase visibility of Hispanic and Latinx mathematicians. Since 2016, the organizers have created a calendar every Hispanic Heritage Month (September 15-October 15) where each day has a different featured Hispanic or Latinx mathematician, including a picture and short biography of each of them.

This year, Lathisms decided to extend the celebration of Hispanic and Latinx mathematicians by adding a podcast, hosted by me, where you can listen to these mathematicians tell their stories in their own words. Starting at the end of August, we have published a new episode every Friday. The episodes feature mathematicians featured in past years’ Lathisms calendars as well as some of this year’s mathematicians. Some of them grew up in the U.S., others in Latin America. Some grew up in poverty, and others were better off. Some knew they wanted to be mathematicians from a young age, and others didn’t know anything about possible mathematics careers until college. Some work in pure math, others in applied. Some focus on research, others outreach.

So far we’ve shared conversations with Carlos Castillo-Chavez, who is one of the most prolific advisors of U.S. Latinx math Ph.D. students; Erika Camacho, who does mathematical modeling of eye diseases; Federico Ardila, who mentioned “the danger of a single story” when we talked and finds inspiration and mentorship from both students and teachers; and Nicolas Garcia Trillos, who just started a new job in the statistics department at the University of Wisconsin, Madison and talked about the many ways there are to be a good mathematician and how that helps him get “unstuck” in his work. In the coming weeks, we will share many more stories. Tune in on Fridays to find them.

You can find the podcast at the Lathisms website or on iTunes. Transcripts are available already for some episodes and will be provided for all episodes. I hope these conversations will be helpful for teachers who want to make sure their students are aware of the diversity of mathematicians, for Hispanic and Latinx students and early-career mathematicians who are looking for role models and collaborators, and for anyone who wants to hear about mathematicians’ many different stories.


Credit of the article given to Evelyn Lamb


Pi in the Sky

Elegant new visualization maps the digits of pi as a star catalogue

The mind of Martin Krzywinski is a rich and dizzying place, teeming with fascinating questions, ideas, and inspiration. Krzywinski is a scientist and data visualizer whose primary line of work involves genome analysis for cancer research. In his spare time, though, he explores his many different interests as a scientific and visual thinker through creative projects. For the past few years, one such project has occupied him on a recurring basis each March: reimagining the digits of pi in a novel, science-based, and visually compelling way.

Today, this delightful March 14th (“Pi Day”) tradition brings us the digits of pi mapped onto the night sky, as a star catalogue. Like the infinitely long sequence of pi, space has no discernible end, but we earthbound observers can only see so far. So Krzywinski places a cap at 12 million digits and groups each successive series of 12 numerals to define a latitude, longitude and brightness, resulting in a field of a million stars, randomly arranged.
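The article doesn't spell out Krzywinski's exact encoding, so the following Python fragment is only a hypothetical illustration of the idea: each 12-digit group becomes a longitude, a latitude and a brightness, with the 5-5-2 split of the digits being my assumption:

```python
# First 36 digits of pi after the decimal point: enough for three "stars".
PI_DIGITS = "141592653589793238462643383279502884"

def star(group):
    """Map 12 consecutive digits to (longitude, latitude, brightness)."""
    lon = int(group[0:5]) / 99999 * 360 - 180  # first 5 digits -> longitude
    lat = int(group[5:10]) / 99999 * 180 - 90  # next 5 digits -> latitude
    mag = int(group[10:12]) / 99               # last 2 digits -> brightness
    return lon, lat, mag

stars = [star(PI_DIGITS[i:i + 12]) for i in range(0, len(PI_DIGITS), 12)]
print(stars)
```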

Just as humans throughout history have found figures and narratives among the stars, this new array of celestial bodies also yields a story. As a way to honor our evolutionary ancestors, Krzywinski connects the dots to create shapes of extinct animals from around the globe.

Plate carrée projection of “Pi in the Sky” star chart
Credit: Martin Krzywinski

But he couldn’t possibly stop there, so Krzywinski takes the visualization a step further, experimenting with different projections to re-create the map in various spatial iterations.

Azimuthal projections of “Pi in the Sky” star chart
Credit: Martin Krzywinski

Hammer/Aitoff projection of “Pi in the Sky” star chart
Credit: Martin Krzywinski

To read more about the visualization, including descriptions of the animals depicted, and a poem written by the artist’s collaborator Paolo Marcazzan, visit Martin Krzywinski’s website. There, you can also explore his previous Pi Day visualizations and even purchase them as posters.


Credit of the article given to Amanda Montañez


Peculiar Pattern Found in “Random” Prime Numbers

Credit: ©iStock.com

Last digits of nearby primes have “anti-sameness” bias

Two mathematicians have found a strange pattern in prime numbers—showing that the numbers are not distributed as randomly as theorists often assume.

“Every single person we’ve told this ends up writing their own computer program to check it for themselves,” says Kannan Soundararajan, a mathematician at Stanford University in California, who reported the discovery with his colleague Robert Lemke Oliver in a paper submitted to the arXiv preprint server on March 11. “It is really a surprise,” he says.

Prime numbers near to each other tend to avoid repeating their last digits, the mathematicians say: that is, a prime that ends in 1 is less likely to be followed by another ending in 1 than one might expect from a random sequence. “As soon as I saw the numbers, I could see it was true,” says mathematician James Maynard of the University of Oxford, UK. “It’s a really nice result.”

Although prime numbers are used in a number of applications, such as cryptography, this ‘anti-sameness’ bias has no practical use or even any wider implication for number theory, as far as Soundararajan and Lemke Oliver know. But, for mathematicians, it’s both strange and fascinating.

Not so random

A clear rule determines exactly what makes a prime: it’s a whole number that can’t be exactly divided by anything except 1 and itself. But there’s no discernible pattern in the occurrence of the primes. Beyond the obvious—after the numbers 2 and 5, primes can’t be even or end in 5—there seems to be little structure that can help to predict where the next prime will occur.

As a result, number theorists find it useful to treat the primes as a ‘pseudorandom’ sequence, as if it were created by a random-number generator.

But if the sequence were truly random, then a prime with 1 as its last digit should be followed by another prime ending in 1 one-quarter of the time. That’s because after the number 5, there are only four possibilities—1, 3, 7 and 9—for prime last digits. And these are, on average, equally represented among all primes, according to a theorem proved around the end of the nineteenth century, one of the results that underpin much of our understanding of the distribution of prime numbers. (Another is the prime number theorem, which quantifies how much rarer the primes become as numbers get larger.)

Instead, Lemke Oliver and Soundararajan saw that in the first billion primes, a 1 is followed by a 1 about 18% of the time, by a 3 or a 7 each 30% of the time, and by a 9 22% of the time. They found similar results when they started with primes that ended in 3, 7 or 9: variation, but with repeated last digits the least common. The bias persists but slowly decreases as numbers get larger.
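The bias is easy to see for yourself. The sketch below sieves the primes up to two million and tallies the last digit of the prime that follows each prime ending in 1; with this much smaller sample the percentages differ somewhat from the first-billion figures quoted above, but the deficit of repeated digits is plain:

```python
from collections import Counter

def primes_up_to(limit):
    """Basic sieve of Eratosthenes."""
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            for m in range(p * p, limit + 1, p):
                flags[m] = False
    return [n for n in range(2, limit + 1) if flags[n]]

primes = [p for p in primes_up_to(2_000_000) if p > 5]
followers = Counter(q % 10 for p, q in zip(primes, primes[1:]) if p % 10 == 1)
total = sum(followers.values())
for digit in (1, 3, 7, 9):
    print(digit, f"{100 * followers[digit] / total:.1f}%")  # 1 is the rarest follower
```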

The k-tuple conjecture

The mathematicians were able to show that the pattern they saw holds true for all primes, if a widely accepted but unproven statement called the Hardy–Littlewood k-tuple conjecture is correct. This describes the distributions of pairs, triples and larger prime clusters more precisely than the basic assumption that the primes are evenly distributed.

The idea behind it is that there are some configurations of primes that can’t occur, and that this makes other clusters more likely. For example, apart from 2 and 3, consecutive numbers cannot both be prime—one of them is always an even number. So if the number n is prime, it is slightly more likely that n + 2 will be prime than random chance would suggest. The k-tuple conjecture quantifies this observation in a general statement that applies to all kinds of prime clusters. And by playing with the conjecture, the researchers show how it implies that repeated final digits are rarer than chance would suggest.

At first glance, it would seem that this is because gaps between primes that are multiples of 10 (20, 30, 100 and so on) are disfavoured. But the finding gets much more general—and even more peculiar. A prime’s last digit is its remainder when it is divided by 10. But the mathematicians found that the anti-sameness bias holds for any divisor. Take 6, for example. All primes other than 2 and 3 have a remainder of 1 or 5 when divided by 6 (otherwise, they would be divisible by 2 or 3), and the two remainders are on average equally represented among all primes. But the researchers found that a prime that has a remainder of 1 when divided by 6 is more likely to be followed by one that has a remainder of 5 than by another that has a remainder of 1. From a 6-centric point of view, then, gaps of multiples of 6 seem to be disfavoured.

Paradoxically, checking every possible divisor makes it appear that almost all gaps are disfavoured, suggesting that a subtler explanation than a simple accounting of favoured and disfavoured gaps must be at work. “It’s a completely weird thing,” says Soundararajan.

Mystifying phenomenon

The researchers have checked primes up to a few trillion, but they think that they have to invoke the k-tuple conjecture to show that the pattern persists. “I have no idea how you would possibly formulate the right conjecture without assuming it,” says Lemke Oliver.

Without assuming unproven statements such as the k-tuple conjecture and the much-studied Riemann hypothesis, mathematicians’ understanding of the distribution of primes dries up. “What we know is embarrassingly little,” says Lemke Oliver. For example, without assuming the k-tuple conjecture, mathematicians have proved that the last-digit pairs 1–1, 3–3, 7–7 and 9–9 occur infinitely often, but they cannot prove that the other pairs do. “Perversely, given our work, the other pairs should be more common,” says Lemke Oliver.

He and Soundararajan feel that they have a long way to go before they understand the phenomenon on a deep level. Each has a pet theory, but none of them is really satisfying. “It still mystifies us,” says Soundararajan.


Credit of the article given to Evelyn Lamb & Nature magazine


Why prime numbers still fascinate mathematicians, 2,300 years later

Primes still have the power to surprise. Chris-LiveLoveClick/shutterstock.com

On March 20, American-Canadian mathematician Robert Langlands received the Abel Prize, celebrating lifetime achievement in mathematics. Langlands’ research demonstrated how concepts from geometry, algebra and analysis could be brought together by a common link to prime numbers.

When the King of Norway presents the award to Langlands in May, he will honor the latest in a 2,300-year effort to understand prime numbers, arguably the biggest and oldest data set in mathematics.

As a mathematician devoted to this “Langlands program,” I’m fascinated by the history of prime numbers and how recent advances tease out their secrets. Why have they captivated mathematicians for millennia?

How to find primes

To study primes, mathematicians strain whole numbers through one virtual mesh after another until only primes remain. This sieving process produced tables of millions of primes in the 1800s. It allows today’s computers to find billions of primes in less than a second. But the core idea of the sieve has not changed in over 2,000 years.

“A prime number is that which is measured by the unit alone,” mathematician Euclid wrote in 300 B.C. This means that prime numbers can’t be evenly divided by any smaller number except 1. By convention, mathematicians don’t count 1 itself as a prime number.

Euclid proved the infinitude of primes – they go on forever – but history suggests it was Eratosthenes who gave us the sieve to quickly list the primes.

 

Here’s the idea of the sieve. First, filter out multiples of 2, then 3, then 5, then 7 – the first four primes. If you do this with all numbers from 2 to 100, only prime numbers will remain.

With eight filtering steps, one can isolate the primes up to 400. With 168 filtering steps, one can isolate the primes up to 1 million. That’s the power of the sieve of Eratosthenes.
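The whole procedure fits in a few lines of Python:

```python
def sieve(limit):
    """Sieve of Eratosthenes: cross out multiples of each prime in turn."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(sieve(100))         # filtering by 2, 3, 5 and 7 leaves the 25 primes up to 100
print(len(sieve(10**6)))  # 78,498 primes below one million, in well under a second
```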

Tables and tables

An early figure in tabulating primes is John Pell, an English mathematician who dedicated himself to creating tables of useful numbers. He was motivated partly by the ancient arithmetic problems of Diophantos, but also by a personal quest to organize mathematical truths. Thanks to his efforts, the primes up to 100,000 were widely circulated by the early 1700s. By 1800, independent projects had tabulated the primes up to 1 million.

To automate the tedious sieving steps, a German mathematician named Carl Friedrich Hindenburg used adjustable sliders to stamp out multiples across a whole page of a table at once. Another low-tech but effective approach used stencils to locate the multiples. By the mid-1800s, mathematician Jakob Kulik had embarked on an ambitious project to find all the primes up to 100 million.

This “big data” of the 1800s might have only served as a reference table, if Carl Friedrich Gauss hadn’t decided to analyze the primes for their own sake. Armed with a list of primes up to 3 million, Gauss began counting them, one “chiliad,” or group of 1000 units, at a time. He counted the primes up to 1,000, then the primes between 1,000 and 2,000, then between 2,000 and 3,000 and so on.

Gauss discovered that, as he counted higher, the primes gradually become less frequent according to an “inverse-log” law. Gauss’s law doesn’t show exactly how many primes there are, but it gives a pretty good estimate. For example, his law predicts 72 primes between 1,000,000 and 1,001,000. The correct count is 75 primes, about a 4 percent error.
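You can rerun Gauss's check for the range quoted above. The sketch below counts the actual primes between 1,000,000 and 1,001,000 by trial division and compares the count with the inverse-log estimate:

```python
import math

def is_prime(n):
    """Trial division, fast enough for a thousand numbers near a million."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

a, b = 1_000_000, 1_001_000
actual = sum(is_prime(n) for n in range(a, b))
estimate = sum(1 / math.log(n) for n in range(a, b))  # density ~ 1/log(n)
print(actual, round(estimate))  # 75 actual primes vs an estimate of about 72
```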

A century after Gauss’ first explorations, his law was proved in the “prime number theorem.” The percent error approaches zero at bigger and bigger ranges of primes. The Riemann hypothesis, a million-dollar prize problem today, also describes how accurate Gauss’ estimate really is.

The prime number theorem and Riemann hypothesis get the attention and the money, but both followed up on earlier, less glamorous data analysis.

Modern prime mysteries

Today, our data sets come from computer programs rather than hand-cut stencils, but mathematicians are still finding new patterns in primes.

Except for 2 and 5, all prime numbers end in the digit 1, 3, 7 or 9. In the 1800s, it was proven that these possible last digits are equally frequent. In other words, if you look at the primes up to a million, about 25 percent end in 1, 25 percent end in 3, 25 percent end in 7, and 25 percent end in 9.

A few years ago, Stanford number theorists Robert Lemke Oliver and Kannan Soundararajan were caught off guard by quirks in the final digits of primes. An experiment looked at the last digit of a prime, as well as the last digit of the very next prime. For example, the next prime after 23 is 29: One sees a 3 and then a 9 in their last digits. Does one see 3 then 9 more often than 3 then 7, among the last digits of primes?

Frequency of last-digit pairs, among successive prime numbers up to 100 million. Matching colors correspond to matching gaps. M.H. Weissman, CC BY

Number theorists expected some variation, but what they found far exceeded expectations. Primes are separated by different gaps; for example, 23 is six numbers away from 29. But 3-then-9 primes like 23 and 29 are far more common than 7-then-3 primes, even though both come from a gap of six.

Mathematicians soon found a plausible explanation. But, when it comes to the study of successive primes, mathematicians are (mostly) limited to data analysis and persuasion. Proofs – mathematicians’ gold standard for explaining why things are true – seem decades away.

Credit of the article given to Martin H. Weissman


Mathematicians Measure Infinities, and Find They’re Equal

Credit: Saul Gravy Getty Images

Proof rests on a surprising link between infinity size and the complexity of mathematical theories

In a breakthrough that disproves decades of conventional wisdom, two mathematicians have shown that two different variants of infinity are actually the same size. The advance touches on one of the most famous and intractable problems in mathematics: whether there exist infinities between the infinite size of the natural numbers and the larger infinite size of the real numbers.

The problem was first identified over a century ago. At the time, mathematicians knew that “the real numbers are bigger than the natural numbers, but not how much bigger. Is it the next biggest size, or is there a size in between?” said Maryanthe Malliaris of the University of Chicago, co-author of the new work along with Saharon Shelah of the Hebrew University of Jerusalem and Rutgers University.

In their new work, Malliaris and Shelah resolve a related 70-year-old question about whether one infinity (call it p) is smaller than another infinity (call it t). They proved the two are in fact equal, much to the surprise of mathematicians.

“It was certainly my opinion, and the general opinion, that p should be less than t,” Shelah said.

Malliaris and Shelah published their proof last year in the Journal of the American Mathematical Society and were honored this past July with one of the top prizes in the field of set theory. But their work has ramifications far beyond the specific question of how those two infinities are related. It opens an unexpected link between the sizes of infinite sets and a parallel effort to map the complexity of mathematical theories.

Many Infinities

The notion of infinity is mind-bending. But the idea that there can be different sizes of infinity? That’s perhaps the most counterintuitive mathematical discovery ever made. It emerges, however, from a matching game even kids could understand.

Suppose you have two groups of objects, or two “sets,” as mathematicians would call them: a set of cars and a set of drivers. If there is exactly one driver for each car, with no empty cars and no drivers left behind, then you know that the number of cars equals the number of drivers (even if you don’t know what that number is).

In the late 19th century, the German mathematician Georg Cantor captured the spirit of this matching strategy in the formal language of mathematics. He proved that two sets have the same size, or “cardinality,” when they can be put into one-to-one correspondence with each other—when there is exactly one driver for every car. Perhaps more surprisingly, he showed that this approach works for infinitely large sets as well.

Consider the natural numbers: 1, 2, 3 and so on. The set of natural numbers is infinite. But what about the set of just the even numbers, or just the prime numbers? Each of these sets would at first seem to be a smaller subset of the natural numbers. And indeed, over any finite stretch of the number line, there are about half as many even numbers as natural numbers, and still fewer primes.

Yet infinite sets behave differently. Cantor showed that there’s a one-to-one correspondence between the elements of each of these infinite sets.

1 2 3 4 5 (natural numbers)
2 4 6 8 10 (evens)
2 3 5 7 11 (primes)

Because of this, Cantor concluded that all three sets are the same size. Mathematicians call sets of this size “countable,” because you can assign one counting number to each element in each set.

After he established that the sizes of infinite sets can be compared by putting them into one-to-one correspondence with each other, Cantor made an even bigger leap: He proved that some infinite sets are even larger than the set of natural numbers.

Consider the real numbers, which are all the points on the number line. The real numbers are sometimes referred to as the “continuum,” reflecting their continuous nature: There’s no space between one real number and the next. Cantor was able to show that the real numbers can’t be put into a one-to-one correspondence with the natural numbers: Even after you create an infinite list pairing natural numbers with real numbers, it’s always possible to come up with another real number that’s not on your list. Because of this, he concluded that the set of real numbers is larger than the set of natural numbers. Thus, a second kind of infinity was born: the uncountably infinite.

What Cantor couldn’t figure out was whether there exists an intermediate size of infinity—something between the size of the countable natural numbers and the uncountable real numbers. He guessed not, a conjecture now known as the continuum hypothesis.

In 1900, the German mathematician David Hilbert made a list of 23 of the most important problems in mathematics. He put the continuum hypothesis at the top. “It seemed like an obviously urgent question to answer,” Malliaris said.

In the century since, the question has proved itself to be almost uniquely resistant to mathematicians’ best efforts. Do in-between infinities exist? We may never know.

Forced Out

Throughout the first half of the 20th century, mathematicians tried to resolve the continuum hypothesis by studying various infinite sets that appeared in many areas of mathematics. They hoped that by comparing these infinities, they might start to understand the possibly non-empty space between the size of the natural numbers and the size of the real numbers.

Many of the comparisons proved to be hard to draw. In the 1960s, the mathematician Paul Cohen explained why. Cohen developed a method called “forcing” that demonstrated that the continuum hypothesis is independent of the axioms of mathematics—that is, it couldn’t be proved within the framework of set theory. (Cohen’s work complemented work by Kurt Gödel in 1940 that showed that the continuum hypothesis couldn’t be disproved within the usual axioms of mathematics.)

Cohen’s work won him the Fields Medal (one of math’s highest honors) in 1966. Mathematicians subsequently used forcing to resolve many of the comparisons between infinities that had been posed over the previous half-century, showing that these too could not be answered within the framework of set theory. (Specifically, Zermelo-Fraenkel set theory plus the axiom of choice.)

Some problems remained, though, including a question from the 1940s about whether p is equal to t. Both p and t are orders of infinity that quantify the minimum size of collections of subsets of the natural numbers in precise (and seemingly unique) ways.

The details of the two sizes don’t much matter. What’s more important is that mathematicians quickly figured out two things about the sizes of p and t. First, both sets are larger than the natural numbers. Second, p is always less than or equal to t. Therefore, if p is less than t, then p would be an intermediate infinity—something between the size of the natural numbers and the size of the real numbers. The continuum hypothesis would be false.

Mathematicians tended to assume that the relationship between p and t couldn’t be proved within the framework of set theory, but they couldn’t establish the independence of the problem either. The relationship between p and t remained in this undetermined state for decades. When Malliaris and Shelah found a way to solve it, it was only because they were looking for something else.

An Order of Complexity

Around the same time that Paul Cohen was forcing the continuum hypothesis beyond the reach of mathematics, a very different line of work was getting under way in the field of model theory.

For a model theorist, a “theory” is the set of axioms, or rules, that define an area of mathematics. You can think of model theory as a way to classify mathematical theories—an exploration of the source code of mathematics. “I think the reason people are interested in classifying theories is they want to understand what is really causing certain things to happen in very different areas of mathematics,” said H. Jerome Keisler, emeritus professor of mathematics at the University of Wisconsin, Madison.

In 1967, Keisler introduced what’s now called Keisler’s order, which seeks to classify mathematical theories on the basis of their complexity. He proposed a technique for measuring complexity and managed to prove that mathematical theories can be sorted into at least two classes: those that are minimally complex and those that are maximally complex. “It was a small starting point, but my feeling at that point was there would be infinitely many classes,” Keisler said.

It isn’t always obvious what it means for a theory to be complex. Much work in the field is motivated in part by a desire to understand that question. Keisler describes complexity as the range of things that can happen in a theory—and theories where more things can happen are more complex than theories where fewer things can happen.

A little more than a decade after Keisler introduced his order, Shelah published an influential book, which included an important chapter showing that there are naturally occurring jumps in complexity—dividing lines that distinguish more complex theories from less complex ones. After that, little progress was made on Keisler’s order for 30 years.

Then, in her 2009 doctoral thesis and other early papers, Malliaris reopened the work on Keisler’s order and provided new evidence for its power as a classification program. In 2011, she and Shelah started working together to better understand the structure of the order. One of their goals was to identify more of the properties that make a theory maximally complex according to Keisler’s criterion.

Malliaris and Shelah eyed two properties in particular. They already knew that the first one causes maximal complexity. They wanted to know whether the second one did as well. As their work progressed, they realized that this question was parallel to the question of whether p and t are equal. In 2016, Malliaris and Shelah published a 60-page paper that solved both problems: They proved that the two properties are equally complex (they both cause maximal complexity), and they proved that p equals t.

“Somehow everything lined up,” Malliaris said. “It’s a constellation of things that got solved.”

This past July, Malliaris and Shelah were awarded the Hausdorff Medal, one of the top prizes in set theory. The honor reflects the surprising, and surprisingly powerful, nature of their proof. Most mathematicians had expected that p was less than t, and that a proof of that inequality would be impossible within the framework of set theory. Malliaris and Shelah proved that the two infinities are equal. Their work also revealed that the relationship between p and t has much more depth to it than mathematicians had realized.

“I think people thought that if by chance the two cardinals were provably equal, the proof would maybe be surprising, but it would be some short, clever argument that doesn’t involve building any real machinery,” said Justin Moore, a mathematician at Cornell University who has published a brief overview of Malliaris and Shelah’s proof.

Instead, Malliaris and Shelah proved that p and t are equal by cutting a path between model theory and set theory that is already opening new frontiers of research in both fields. Their work also finally puts to rest a problem that mathematicians had hoped would help settle the continuum hypothesis. Still, the overwhelming feeling among experts is that this apparently unresolvable proposition is false: While infinity is strange in many ways, it would be almost too strange if there weren’t many more sizes of it than the ones we’ve already found.


Article credit: Kevin Hartnett, Quanta Magazine

 


Maria Agnesi, the greatest female mathematician you’ve never heard of

The outmoded gender stereotype that women lack mathematical ability suffered a major blow in 2014, when Maryam Mirzakhani became the first woman to receive the Fields Medal, math’s most prestigious award.

An equally important blow was struck by the Italian mathematician Maria Gaetana Agnesi in the 18th century. Agnesi was the first woman to write a mathematics textbook and to be appointed to a university chair in math, yet her life was marked by paradox.

Though brilliant, rich and famous, she eventually opted for a life of poverty and service to the poor. Her remarkable story serves as a source for mathematical inspiration even today.

Early years

Born May 16, 1718, in Milan, Agnesi was the eldest of her wealthy silk merchant father’s 21 children. By age 5 she could speak French, and by 11 she was known to Milanese society as the “seven-tongued orator” for her mastery of modern and classical languages. In part to give Agnesi the best education possible, her father invited leading intellectuals of the day to the family’s home, where his daughter’s gifts shone.

When Agnesi was 9, she recited from memory a Latin oration, likely composed by one of her tutors. The oration decried the widespread prejudice against educating women in the arts and sciences, which had been grounded in the view that a life of managing a household would require no such learning. Agnesi presented a clear and convincing argument that women should be free to pursue any kind of knowledge available to men.

Agnesi eventually became tired of displaying her intellect and expressed a desire to enter a convent. When her father’s second wife died, however, she assumed responsibility for his household and the education of her many younger siblings.

Through this role, she recognized that teachers and students needed a comprehensive mathematics textbook to introduce Italian students to the many recent Enlightenment-era mathematical discoveries.

Agnesi’s textbook

Portrait of Maria Agnesi by an unknown artist.

Agnesi found a special appeal in mathematics. Most knowledge derived from experience, she believed, is fallible and open to dispute. From mathematics, however, come truths that are wholly certain, the contemplation of which brings particularly great joy. In writing her textbook, she was not only teaching a useful skill, but opening to her students the door to such contemplation.

Published in two volumes in 1748, Agnesi’s work was entitled “Basic Principles of Analysis.” It was composed not in Latin, as was the custom for great mathematicians such as Newton and Euler, but in the Italian vernacular, to make it more accessible to students.

Hers was one of the first textbooks in the relatively new field of calculus, and it helped to shape the education of mathematics students for several generations that followed. Beyond Italy, contemporary scholars in Paris and Cambridge translated the textbook for use in their university classrooms.

Agnesi’s textbook was praised in 1749 by the French Academy: “It took much skill and sagacity to reduce to almost uniform methods discoveries scattered among the works of many mathematicians very different from each other. Order, clarity, and precision reign in all parts of this work. … We regard it as the most complete and best made treatise.”

In offering similarly fine words of praise, another contemporary mathematician, Jean-Etienne Montucla, also revealed some of the mathematical sexism that persists down to the present day. He wrote: “We cannot but behold with the greatest astonishment how a person of a sex that seems so little fitted to tread the thorny paths of these abstract sciences penetrates so deeply as she has done into all the branches of algebra.”

Agnesi dedicated the “Basic Principles” to Empress Maria Theresa of Austria, who acknowledged the favor with a letter of thanks and a diamond-bearing box and ring. Pope Benedict XIV praised the work and predicted that it would enhance the reputation of the Italians. He also appointed her to the chair of mathematics at the University of Bologna, though she never traveled there to accept it.

A life of service

A passionate advocate for the education of women and the poor, Agnesi believed that the natural sciences and math should play an important role in an educational curriculum. As a person of deep religious faith, however, she also believed that scientific and mathematical studies must be viewed in the larger context of God’s plan for creation.

When Maria’s father died in 1752, she was free to answer a religious calling and devote herself to her other great passion: service to the poor, sick and homeless. She began by founding a small hospital in her home. She eventually gave away her wealth, including the gifts she had received from the empress. When she died at age 80, she was buried in a pauper’s grave.

To this day, some mathematicians express surprise at Maria’s apparent turn from learning and mathematics to a religious vocation. To her, however, it made perfect sense. In her view, human beings are capable of both knowing and loving, and while it is important for the mind to marvel at many truths, it’s ultimately even more important for the heart to be moved by love.

“Man always acts to achieve goals; the goal of the Christian is the glory of God,” she wrote. “I hope my studies have brought glory to God, as they were useful to others, and derived from obedience, because that was my father’s will. Now I have found better ways and means to serve God, and to be useful to others.”

Though few remember Agnesi today, her pioneering role in the history of mathematics serves as an inspiring story of triumph over gender stereotypes. She helped to blaze a trail for women in math and science for generations to follow. Agnesi excelled at math, but she also loved it, perceiving in its mastery an opportunity to serve both her fellow human beings and a higher order.

Article credit: Richard Gunderman and David Gunderman


Mathematicians Are Overselling the Idea That “Math Is Everywhere”

Credit: PK Flickr (CC BY 2.0)

The mathematics that is most important to society is the province of the exceptional few—and that’s always been true

Most people never become mathematicians, but everyone has a stake in mathematics. Almost since the dawn of human civilization, societies have vested special authority in mathematical experts. The question of how and why the public should support elite mathematics remains as pertinent as ever, and in the last five centuries (especially the last two) it has been joined by the related question of what mathematics most members of the public should know.

Why does mathematics matter to society at large? Listen to mathematicians, policymakers, and educators and the answer seems unanimous: mathematics is everywhere, therefore everyone should care about it. Books and articles abound with examples of the math that their authors claim is hidden in every facet of everyday life or unlocks powerful truths and technologies that shape the fates of individuals and nations. Take math professor Jordan Ellenberg, author of the bestselling book How Not to Be Wrong, who asserts “you can find math everywhere you look.”

To be sure, numbers and measurement figure regularly in most people’s lives, but this risks conflating basic numeracy with the kind of math that most affects your life. When we talk about math in public policy, especially the public’s investment in mathematical training and research, we are not talking about simple sums and measures. For most of its history, the mathematics that makes the most difference to society has been the province of the exceptional few. Societies have valued and cultivated math not because it is everywhere and for everyone but because it is difficult and exclusive. Recognizing that math has elitism built into its historical core, rather than pretending it is hidden all around us, furnishes a more realistic understanding of how math fits into society and can help the public demand a more responsible and inclusive discipline.

In the first agricultural societies in the cradle of civilization, math connected the heavens and the earth. Priests used astronomical calculations to mark the seasons and interpret divine will, and their special command of mathematics gave them power and privilege in their societies. As early economies grew larger and more complex, merchants and craftsmen incorporated more and more basic mathematics into their work, but for them mathematics was a trick of the trade rather than a public good. For millennia, advanced math remained the concern of the well-off, as either a philosophical pastime or a means to assert special authority.

The first relatively widespread suggestions that anything beyond simple practical math ought to have a wider reach date to what historians call the Early Modern period, beginning around five centuries ago, when many of our modern social structures and institutions started to take shape. Just as Martin Luther and other early Protestants began to insist that Scripture should be available to the masses in their own languages, scientific writers like Welsh polymath Robert Recorde used the relatively new technology of the printing press to promote math for the people. Recorde’s 1543 English arithmetic textbook began with an argument that “no man can do any thing alone, and much less talk or bargain with another, but he shall still have to do with number” and that numbers’ uses were “unnumerable” (pun intended).

Far more influential and representative of this period, however, was Recorde’s contemporary John Dee, who used his mathematical reputation to gain a powerful position advising Queen Elizabeth I. Dee hewed so closely to the idea of math as a secret and privileged kind of knowledge that his detractors accused him of conjuring and other occult practices. In the seventeenth century’s Scientific Revolution, the new promoters of an experimental science that was (at least in principle) open to any observer were suspicious of mathematical arguments as inaccessible, tending to shut down diverse perspectives with a false sense of certainty. During the eighteenth-century Enlightenment, by contrast, the savants of the French Academy of Sciences parlayed their mastery of difficult mathematics into a special place of authority in public life, weighing in on philosophical debates and civic affairs alike while closing their ranks to women, minorities, and the lower social classes.

Societies across the world were transformed in the nineteenth century by wave after wave of political and economic revolution, but the French model of privileged mathematical expertise in service to the state endured. The difference was in who got to be part of that mathematical elite. Being born into the right family continued to help, but in the wake of the French Revolution successive governments also took a greater interest in primary and secondary education, and strong performance in examinations could help some students rise despite their lower birth. Political and military leaders received a uniform education in advanced mathematics at a few distinguished academies which prepared them to tackle the specialized problems of modern states, and this French model of state involvement in mass education combined with special mathematical training for the very best found imitators across Europe and even across the Atlantic. Even while basic math reached more and more people through mass education, math remained something special that set the elite apart. More people could potentially become elites, but math was definitely not for everyone.

Entering the twentieth century, the system of channeling students through elite training continued to gain importance across the Western world, but mathematics itself became less central to that training. Partly this reflected the changing priorities of government, but partly it was a matter of advanced mathematics leaving the problems of government behind. Where once Enlightenment mathematicians counted practical and technological questions alongside their more philosophical inquiries, later modern mathematicians turned increasingly to forbiddingly abstract theories without the pretense of addressing worldly matters directly.

The next turning point, which continues in many ways to define the relations between math and society today, was World War II. Fighting a war on that scale, the major combatants encountered new problems in logistics, weapons design and use, and other areas that mathematicians proved especially capable of solving. It wasn’t that the most advanced mathematics suddenly got more practical, but that states found new uses for those with advanced mathematical training and mathematicians found new ways to appeal to states for support. After the war, mathematicians won substantial support from the United States and other governments on the premise that regardless of whether their peacetime research was useful, they now had proof that highly trained mathematicians would be needed in the next war.

Some of those wartime activities continue to occupy mathematical professionals, both in and beyond the state—from security scientists and code-breakers at technology companies and the NSA to operations researchers optimizing factories and supply chains across the global economy. Postwar electronic computing offered another area where mathematicians became essential. In all of these areas, it is the special mathematical advances of an elite few that motivate the public investments mathematicians continue to receive today. It would be great if everyone were confident with numbers, could write a computer program, and could evaluate statistical evidence; these are all important aims for primary and secondary education. But we should not confuse them with the main goals and rationales of public support for mathematics, which have always been about math at the top rather than math for everyone.

Imagining math to be everywhere makes it all too easy to ignore the very real politics of who gets to be part of the mathematical elite that really counts—for technology, security, and economics, for the last war and the next one. Instead, if we see that this kind of mathematics has historically been built by and for the very few, we are called to ask who gets to be part of that few and what responsibilities come with their expertise. We have to recognize that elite mathematics today, while much more inclusive than it was one or five or fifty centuries ago, remains a discipline that vests special authority in those who, by virtue of gender, race, and class, are often already among our society’s most powerful. If math were really everywhere, it would already belong to everyone equally. But when it comes to accessing and supporting math, there is much work to be done. Math isn’t everywhere.


Article credit: New York University