How the 18th-century ‘probability revolution’ fueled the casino gambling craze

The first commercial gambling operations emerged, coincidentally or not, at the same time as the study of mathematical probability in the mid-1600s.

By the early 1700s, commercial gambling operations were widespread in European cities such as London and Paris. But in many of the games that were offered, players faced steep odds.

Then, in 1713, Jacob Bernoulli’s “Golden Theorem,” known now as the law of large numbers or of long averages, was published posthumously.

But gambling entrepreneurs were slow to embrace this theorem, which showed how it could actually be more profitable for the house to offer a smaller edge rather than a larger one.

In our book “The Gambling Century: Commercial Gaming in Britain from Restoration to Regency,” we explain how it took government efforts to ban and regulate betting for gambling operators to finally understand just how much money could be made off a minuscule house edge.

The illusion of even odds in games that were the ancestors of roulette and blackjack proved immensely profitable, sparking a “probability revolution” that transformed gambling in Britain and beyond.

A new theorem points to sneaky big profits

The law of large numbers refers to events governed by chance.

When you flip a coin, for example, you have a 50% – or “even money” – chance of getting heads or tails. Were you to flip a coin 10 times, it’s quite possible that heads will turn up seven times and tails three times. But after 100, or 1000, or 10,000 flips, the ratio of “heads” to “tails” will be closer and closer to the mathematical “mean of probability” – that is, half heads and half tails.
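To see this in action, here is a minimal Python sketch (our own illustration, not part of the original article); the seed is arbitrary and the exact fractions will vary from run to run, but the heads fraction drifts toward one half as the number of flips grows:

import random

# Flip a fair coin many times and watch the heads fraction approach 0.5.
random.seed(1)  # arbitrary seed so the run is repeatable
for flips in (10, 100, 1_000, 10_000):
    heads = sum(random.random() < 0.5 for _ in range(flips))
    print(f"{flips:>6} flips: {heads / flips:.3f} heads")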

Mathematician Jacob Bernoulli developed what’s known today as the law of large numbers. Oxford Science Archive/Print Collector via Getty Images

This principle was popularized by writers such as Abraham De Moivre, who applied it to games of chance.

De Moivre explained how, over time, someone with even the smallest statistical “edge” would eventually win almost all of the money that was staked.

This is what happens in roulette. The game has 36 numbers, 18 of which are red and 18 of which are black. However, there are also two green house numbers – “0” and “00” – which, if the ball lands on them, means that the house can take everyone’s wager. This gives the house a small edge.

Imagine 10 players with $100 apiece. Half of them bet $10 on red and the other half bet $10 on black. Assuming that the wheel strictly aligns with the mean of probability, the house will break even for 18 of 19 spins. But on the 19th spin, the ball will land on one of the green “house numbers,” allowing the house to collect all the money staked from all bettors.

After 100 spins, the house will have won half of the players’ money. After 200 spins, they’ll have won all of it.

Even with a single house number – the single 0 on the roulette wheels introduced in Monte Carlo by the casino entrepreneur Louis Blanc – the house would win everything after 400 spins.

This eventuality, as De Moivre put it, “will seem almost incredible given the smallness of the odds.”
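To make the article’s stylized arithmetic concrete, here is a minimal Python sketch (our own illustration; the function name and parameters are invented for the example). It assumes, as above, that $100 is staked per spin and that the house breaks even except when the ball lands on a green house number:

# Stylized model: the house breaks even on every spin except when the ball
# lands on a green house number, when it collects the whole $100 staked.
def expected_house_take(spins, house_numbers, stake_per_spin=100):
    slots = 36 + house_numbers        # 18 red + 18 black + the green house numbers
    edge = house_numbers / slots      # chance the ball lands on a house number
    return spins * edge * stake_per_spin

print(expected_house_take(100, 2))    # double-zero wheel: about $526 after 100 spins
print(expected_house_take(200, 2))    # about $1,053, roughly all of the players' $1,000
print(expected_house_take(400, 1))    # single-zero wheel: about $1,081 after 400 spins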

Hesitating to test the math

As De Moivre anticipated, gamblers and gambling operators were slow to adopt these findings.

De Moivre’s complex mathematical equations were over the heads of gamblers who hadn’t mastered simple arithmetic.

Gambling operators didn’t initially buy into the Golden Theorem, either, seeing it as unproven and therefore risky.

Instead, they played it safe by promoting games with long odds.

One was the Royal Oak Lottery, a game played with a polyhedral die with 32 faces, like a soccer ball. Players could bet on individual numbers or combinations of two or four numbers, giving them, at best, 7-to-1 odds of winning.

Faro was another popular game of chance in which the house, or “bank” as it was then known, gave players the opportunity to defer collecting their winnings for chances at larger payouts at increasingly steep odds.

Faro was a popular game of chance in which players could delay collecting their winnings for the chance to win even bigger sums. Boston Public Library

These games – and others played against a bank – were highly profitable to gambling entrepreneurs, who operated out of taverns, coffeehouses and other similar venues. “Keeping a common gaming house” was illegal, but with the law riddled with loopholes, enforcement was lax and uneven.

Public outcry against the Royal Oak Lottery was such that the Lottery Act of 1699 banned it. A series of laws enacted in the 1730s and 1740s classified faro and other games as illegal lotteries, on the grounds that the odds of winning or losing were not readily apparent to players.

The law of averages put into practice

Early writers on probability had asserted that the “house advantage” did not have to be very large for a gambling operation to profit enormously. The government’s effort to ban games of chance now obliged gaming operators to put the law of long averages into practice.

Further statutes outlawed games of chance played with dice, cards, wheels or any other device featuring “numbers or figures.”

None of these measures deterred gambling operators from the pursuit of profit.

Since this language did not explicitly include letters, the game of EO, standing for “even odd,” was introduced in the mid 1740s, after the last of these gambling statutes was enacted. It was played on a wheel with 40 slots, all but two of which were marked either “E” or “O.” As in roulette, an ivory ball was rolled along the edge of the wheel as it was spun. If the ball landed in one of the two blank “bar holes,” the house would automatically win, similar to the “0” and “00” in roulette.

EO’s defenders could argue that it was not an unlawful lottery because the odds of winning or losing were now readily apparent to players and appeared to be virtually equal. The key, of course, is that the two bar holes among the 40 slots ensured the odds weren’t truly equal, giving the house a 1-in-20 edge.

Although this logic might not stand up in court, overburdened law enforcement was happy for a reason to look the other way. EO proliferated; legislation to outlaw it was proposed in 1782 but failed.

In the 19th century, roulette became a big draw at Monte Carlo’s casinos.Hulton Archive/Getty Images

The allure of ‘even money’

Gambling operators may have even realized that evening the odds drew more players, who, in turn, staked more.

After EO appeared in Britain, gambling operations both there and on the continent of Europe introduced “even money” betting options into both new and established games.

For example, the game of biribi, which was popular in France throughout the 18th century, involved players betting on numbers from 1 to 72, which were shown on a betting cloth. Numbered beads would then be drawn from a bag to determine the winning number.

In one iteration from around 1720, players could bet on individual numbers, on vertical columns of six numbers, or other options that promised large payouts against steeper odds.

By the end of the 18th century, however, one biribi cloth featured even money options: Players could bet on any number between 36 and 70 being drawn, or on any number between 1 and 35. Players could also select red or black numbers, making it a likely inspiration for roulette.

In Britain, the Victorian ethos of morality and respectability eventually won out. Parliament outlawed games of chance played for money in public or private in 1845, restrictions that were not lifted until 1960.

By 1845, however, British gamblers could travel by steamship and train to one of the many European resorts cropping up across the continent, where the probability revolution had transformed casino gambling into the formidable business enterprise it is today.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to The Conversation

 


Triangular Numbers and Euler’s Number Triangle

There is a nice identity stating that a square number can be written as the sum of two consecutive triangular numbers.

Here we are writing t^d_n for the nth triangular number of dimension d (d = 2 gives the flat polygonals, d = 3 the pyramidal polygonals, etc.).

There is also a nice relationship that connects cubes to polygonal numbers. It turns out that a cube of spheres can be unfolded into a packed-hexagonal pyramid. The “packed hexagonals” or “centered hexagonals” are not quite the usual hexagonal numbers – instead these are hexagons of dots with the gaps filled in. The picture below shows how square numbers fill the gaps of the hexagonals perfectly to form the “packed hexagonals,” and how these in turn can be stacked to form a cube. Here we are writing ph^d_n for the packed hexagonals, h^d_n for the hexagonals, s^d_n for the squares, and t^d_n for the triangular numbers.

Combining this result with the “triangulation” identities we have:

This gives us three nice identities for powers of n:

It turns out that these identities generalize to other positive integer powers of n. Every n^d can be written as a sum of the t^d_i, where i ranges from n down to n + 1 − d (any term with i less than 1 is zero).
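For the first few powers, the decompositions take the following form (a reconstruction in LaTeX, using the standard formula t^d_n = C(n + d − 1, d); the coefficients are the Eulerian numbers introduced below):

\begin{aligned}
n^2 &= t^2_n + t^2_{n-1},\\
n^3 &= t^3_n + 4\,t^3_{n-1} + t^3_{n-2},\\
n^4 &= t^4_n + 11\,t^4_{n-1} + 11\,t^4_{n-2} + t^4_{n-3},
\qquad\text{where } t^d_n = \binom{n+d-1}{d}.
\end{aligned}

The coefficients can be recovered by the following procedure.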

1. Write out the sequence of n^d for at least 2d − 2 terms. Take the finite difference of this sequence d − 2 times (this reduces the sequence down to “2-dimensional” numbers, allowing us to use the 2-dimensional triangular numbers in our calculations).

2. The first term of the new sequence should be 1. Eliminate it by subtracting t^2_n from the sequence, making sure the t^2_n values are subtracted from the corresponding terms. This means that our sum begins with t^d_n, with a coefficient of 1.

3. The sequence now has a new first term, A. Eliminate it by subtracting A·t^2_n from the sequence; A is the coefficient of t^d_(n−1).

4. Repeat step 3, eliminating the first term of the sequence each time with a multiple of t^2_n; each multiple is the coefficient of the next t^d_i.

5. The process ends when every term of the n^d sequence has been eliminated, which happens at the dth step.
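Here is a minimal Python sketch of this procedure (our own illustration; the function name and the number of terms generated are arbitrary). Note that the finite difference here treats the term before the first as zero, so that the first term of the reduced sequence is 1, as described in step 2:

def power_coefficients(d, terms=12):
    # Recover the coefficients A in  n^d = sum_k A[k] * t^d_(n-k)
    # using the finite-difference / elimination procedure described above.
    seq = [n**d for n in range(1, terms + 1)]        # n^d for n = 1, 2, ...
    def diff(s):                                     # finite difference, with s_0 taken as 0
        return [s[0]] + [s[i] - s[i - 1] for i in range(1, len(s))]
    for _ in range(d - 2):                           # reduce to "2-dimensional" numbers
        seq = diff(seq)
    tri = [n * (n + 1) // 2 for n in range(1, terms + 1)]   # t^2_n = 1, 3, 6, 10, ...
    coeffs, shift = [], 0
    while any(seq) and shift < terms:                # eliminate leading terms in turn
        a = seq[shift]
        coeffs.append(a)
        for i in range(shift, terms):
            seq[i] -= a * tri[i - shift]             # subtract a * t^2, shifted by `shift`
        shift += 1
    return coeffs

print(power_coefficients(3))   # [1, 4, 1]
print(power_coefficients(4))   # [1, 11, 11, 1]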

Carrying out this process for a few more powers of n, we end up with:

In general, we seem to have:

where the coefficients A(i,k) have the nice properties that each row of coefficients is symmetric and that the coefficients for n^i sum to i! (i factorial).

The coefficients are naturally analogous to the binomial coefficients, and can be arranged in a triangle like Pascal’s.

These coefficients are known as Eulerian numbers, and the construction above is known as Euler’s Number Triangle (not to be confused with the geometric construction called the Euler Triangle).
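For reference, here are the first few rows of the triangle as usually tabulated (each row is symmetric, and the row sums are the factorials 1, 2, 6, 24, 120, 720):

1
1   1
1   4    1
1  11   11    1
1  26   66   26    1
1  57  302  302   57    1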

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*

 


Study uses topological data analysis to identify voting deserts


In past years, elections in the U.S. have been marked by stories of long waiting lines at the voting polls. Add other barriers, like long commutes and inadequate transportation, and voting can become inaccessible. But these voting deserts are difficult to quantify.

In a paper, “Persistent Homology for Resource Coverage: A Case Study of Access to Polling Sites” in SIAM Review, SFI External Professor Mason Porter (UCLA) and his students applied topological data analysis, which gives a set of mathematical tools that can quantify shape and structure in data, to the problem of quantifying voting deserts in LA County, Chicago, Atlanta, Jacksonville, New York City, and Salt Lake City.

Using a type of topological data analysis called persistent homology, Porter and his co-authors used estimates of average waiting times and commute times to examine where the voting deserts are located.

Applying persistent homology to a data set can reveal clusters and holes in that data, and it offers a way to measure how long those holes persist. The combination of waiting times and commute times in the data creates a pattern, with holes filling in as time passes.

The longer the hole takes to fill, the more inaccessible voting is to people in that area. “We are basically playing connect-the-dots in a more sophisticated way, trying to fill in what’s there,” says Porter.
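As a rough illustration of the underlying machinery (not the weighted filtration used in the paper, which incorporates waiting and commute times), here is a minimal Python sketch using the GUDHI library on made-up site coordinates; long-lived one-dimensional features play the role of the slow-to-fill holes described above:

import numpy as np
import gudhi

# Hypothetical 2-D coordinates standing in for polling sites.
rng = np.random.default_rng(42)
sites = rng.uniform(0, 10, size=(60, 2))

# Build a Vietoris-Rips filtration on the sites and compute persistent homology.
rips = gudhi.RipsComplex(points=sites.tolist(), max_edge_length=5.0)
st = rips.create_simplex_tree(max_dimension=2)
st.persistence()                                    # compute all persistence pairs
holes = st.persistence_intervals_in_dimension(1)    # birth/death of 1-D "holes"

# Long-lived holes (large death minus birth) are regions that take a long time
# to fill in, the analogue of a poorly covered area in this toy setting.
lifetimes = sorted((death - birth for birth, death in holes), reverse=True)
print(lifetimes[:5])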

Moving forward, Porter hopes to use this strategy to determine voting deserts more accurately. Identifying voting deserts could help make voting more accessible, but doing so requires better-quality data than what was available to him and his students.

“This is a proof of concept,” Porter said. “We had to make some very severe approximations, in terms of what data we had access to.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Santa Fe Institute


Ninth Dedekind number discovered: Scientists solve long-known problem in mathematics

Making history with 42 digits, scientists at Paderborn University and KU Leuven have unlocked a decades-old mystery of mathematics with the so-called ninth Dedekind number.

Experts worldwide have been searching for the value since 1991. The Paderborn scientists arrived at the exact sequence of numbers with the help of the Noctua supercomputer located there. The results will be presented in September at the International Workshop on Boolean Functions and their Applications (BFA) in Norway.

What started as a master’s thesis project by Lennart Van Hirtum, then a computer science student at KU Leuven and now a research associate at the University of Paderborn, has become a huge success. The scientists join an illustrious group with their work. Earlier numbers in the series were found by mathematician Richard Dedekind himself when he defined the problem in 1897, and later by greats of early computer science such as Randolph Church and Morgan Ward. “For 32 years, the calculation of D(9) was an open challenge, and it was questionable whether it would ever be possible to calculate this number at all,” Van Hirtum says.

The previous number in the Dedekind sequence, the 8th Dedekind number, was found in 1991 using a Cray 2, the most powerful supercomputer at the time. “It therefore seemed conceivable to us that it should be possible by now to calculate the 9th number on a large supercomputer,” says Van Hirtum, describing the motivation for the ambitious project, which he initially implemented jointly with the supervisors of his master’s thesis at KU Leuven.

Grains of sand, chess and supercomputers

The main subject of Dedekind numbers are so-called monotone Boolean functions. Van Hirtum explains, “Basically, you can think of a monotone Boolean function in two, three, and infinite dimensions as a game with an n-dimensional cube. You balance the cube on one corner and then color each of the remaining corners either white or red. There is only one rule: you must never place a white corner above a red one. This creates a kind of vertical red-white intersection.

“The object of the game is to count how many different cuts there are. Their number is what is defined as the Dedekind number. Even if it doesn’t seem like it, the numbers quickly become gigantic in the process: the 8th Dedekind number already has 23 digits.”
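To make the definition concrete, here is a minimal brute-force Python sketch (our own illustration, entirely unrelated to the P-coefficient method described below). It counts monotone Boolean functions directly and reproduces the first Dedekind numbers, D(0) through D(4):

from itertools import product

def dedekind(n):
    # Count monotone Boolean functions of n variables by brute force.
    # Feasible only for small n: there are 2**(2**n) candidate functions.
    points = list(product((0, 1), repeat=n))            # corners of the n-cube
    le_pairs = [(a, b) for a in points for b in points  # pairs with a <= b coordinatewise
                if all(x <= y for x, y in zip(a, b))]
    count = 0
    for bits in product((0, 1), repeat=len(points)):    # every Boolean function
        f = dict(zip(points, bits))
        if all(f[a] <= f[b] for a, b in le_pairs):      # keep it if it is monotone
            count += 1
    return count

print([dedekind(n) for n in range(5)])   # [2, 3, 6, 20, 168]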

Comparably large—but incomparably easier to calculate—numbers are known from a legend concerning the invention of the game of chess. “According to this legend, the inventor of the chess game asked the king for only a few grains of rice on each square of the chess board as a reward: one grain on the first square, two grains on the second, four on the third, and twice as many on each of the following squares. The king quickly realized that this request was impossible to fulfill, because so much rice does not exist in the whole world.

“The number of grains of rice on the complete board would have 20 digits—an unimaginable amount, but still less than D(8). When you realize these orders of magnitude, it is obvious that both an efficient computational method and a very fast computer would be needed to find D(9),” Van Hirtum said.

Milestone: Years become months

To calculate D(9), the scientists used a technique developed by master’s thesis advisor Patrick De Causmaecker known as the P-coefficient formula. It provides a way to calculate Dedekind numbers not by counting, but by a very large sum. This allows D(8) to be decoded in just eight minutes on a normal laptop. But, “What takes eight minutes for D(8) becomes hundreds of thousands of years for D(9). Even if you used a large supercomputer exclusively for this task, it would still take many years to complete the calculation,” Van Hirtum points out.

The main problem is that the number of terms in this formula grows incredibly fast. “In our case, by exploiting symmetries in the formula, we were able to reduce the number of terms to ‘only’ 5.5×10^18 – an enormous amount. By comparison, the number of grains of sand on Earth is about 7.5×10^18, which is nothing to sneeze at, but for a modern supercomputer, 5.5×10^18 operations are quite manageable,” the computer scientist said.

The problem: calculating these terms on normal processors is slow, and GPUs, currently the fastest hardware accelerator technology for many AI applications, are not efficient for this algorithm.

The solution: Application-specific hardware using highly specialized and parallel arithmetic units—so-called FPGAs (field programmable gate arrays). Van Hirtum developed an initial prototype for the hardware accelerator and began looking for a supercomputer that had the necessary FPGA cards. In the process, he became aware of the Noctua 2 computer at the “Paderborn Center for Parallel Computing (PC2)” at the University of Paderborn, which has one of the world’s most powerful FPGA systems.

Prof. Dr. Christian Plessl, head of PC2, explains, “When Lennart Van Hirtum and Patrick De Causmaeker contacted us, it was immediately clear to us that we wanted to support this moonshot project. Solving hard combinatorial problems with FPGAs is a promising field of application and Noctua 2 is one of the few supercomputers worldwide with which the experiment is feasible at all. The extreme reliability and stability requirements also pose a challenge and test for our infrastructure. The FPGA expert consulting team worked closely with Lennart to adapt and optimize the application for our environment.”

After several years of development, the program ran on the supercomputer for about five months. And then the time had come: on March 8, the scientists found the 9th Dedekind number: 286386577668298411128469151667598498812366.

Today, three years after the start of the Dedekind project, Van Hirtum is working as a fellow of the NHR Graduate School at the Paderborn Center for Parallel Computing to develop the next generation of hardware tools in his Ph.D. The NHR (National High Performance Computing) Graduate School is the joint graduate school of the NHR centers. He will report on his extraordinary success together with Patrick De Causmaecker on June 27 at 2 p.m. in Lecture Hall O2 of the University of Paderborn.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Universität Paderborn


AI can teach math teachers how to improve student skills

When middle school math teachers completed an online professional development program that uses artificial intelligence to improve their math knowledge and teaching skills, their students’ math performance improved.

My colleagues and I developed this online professional development program, which relies on a virtual facilitator that can—among other things—present problems to the teacher around teaching math and provide feedback on the teacher’s answers.

Our goal was to enhance teachers’ mastery of knowledge and skills required to teach math effectively. These include understanding why the mathematical rules and procedures taught in school work. The program also focuses on common struggles students have as they learn a particular math concept and how to use instructional tools and strategies to help them overcome these struggles.

We then conducted an experiment in which 53 middle school math teachers were randomly assigned to either this AI-based professional development or no additional training. On average, teachers spent 11 hours to complete the program. We then gave 1,727 of their students a math test. While students of these two groups of teachers started off with no difference in their math performance, the students taught by teachers who completed the program increased their mathematics performance by 0.18 of a standard deviation more on average. This is a statistically significant gain that is equal to the average math performance difference between sixth and seventh graders in the study.

Why it matters

This study demonstrates the potential for using AI technologies to create effective, widely accessible professional development for teachers. This is important because teachers often have limited access to high-quality professional development programs to improve their knowledge and teaching skills. Time conflicts or living in rural areas that are far from in-person professional development programs can prevent teachers from receiving the support they need.

Additionally, many existing in-person professional development programs for teachers have been shown to enhance participants’ teaching knowledge and practices but to have little impact on student achievement.

Effective professional development programs include opportunities for teachers to solve problems, analyse students’ work and observe teaching practices. Teachers also receive real-time support from the program facilitators. This is often a challenge for asynchronous online programs.

Our program addresses the limitations of asynchronous programs because the AI-supported virtual facilitator acts as a human instructor. It gives teachers authentic teaching activities to work on, asks questions to gauge their understanding and provides real-time feedback and guidance.

What’s next

Advancements in AI technologies will allow researchers to develop more interactive, personalized learning environments for teachers. For example, the language processing systems used in generative AI programs such as ChatGPT can improve the ability of these programs to analyse teachers’ responses more accurately and provide more personalized learning opportunities. Also, AI technologies can be used to develop new learning materials so that programs similar to ours can be developed faster.

More importantly, AI-based professional development programs can collect rich, real-time interaction data. Such data makes it possible to investigate how learning from professional development occurs and therefore how programs can be made more effective. Despite billions of dollars being spent each year on professional development for teachers, research suggests that how teachers learn through professional development is not yet well understood.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Yasemin Copur-Gencturk, The Conversation

 


Pi Day: How To Calculate Pi Using A Cardboard Tube And A Load Of Balls

Grab a few balls and get calculating pi

Pi Day, which occurs every 14 March – or 3/14, in the US date format – celebrates the world’s favourite mathematical constant. This year, why not try an experiment to calculate its value? All you will need is a cardboard tube and a series of balls, each 100 times lighter than the last. You have those lying around the house, right?

This experiment was first formulated by mathematician Gregory Galperin in 2001. It works because of a mathematical trick involving the masses of a pair of balls and the law of conservation of energy.

First, take the tube and place one end up against a wall. Place two balls of equal mass in the tube. Let’s say that the ball closer to the wall is red, and the other is blue.

Next, bounce the blue ball off the red ball. If you have rolled the blue ball hard enough, there should be three collisions: the blue ball hits the red one, the red ball hits the wall, and the red ball bounces back to hit the blue ball once more. Not-so-coincidentally, three is also the first digit of pi.

To calculate pi a little bit more precisely, replace the red ball with one that is 100 times less massive than the blue ball – a ping pong ball might work, so we will call this the white ball.

When you perform the experiment again, you will find that the blue ball hits the white ball, the white ball hits the wall and then the white ball continues to bounce back and forth between the blue ball and the wall as it slows down. If you count the bounces, you’ll find that there are 31 collisions. That gives you the first two digits of pi: 3.1.

Galperin calculated that if you continue the same way, you will keep getting more digits of pi. If you replace the white ball with another one that is 10,000 times less massive than the blue ball, you will find that there are 314 collisions, and so on. If you have enough balls, you can count as many digits of pi as you like.
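You can also let a computer do the counting. The sketch below (our own illustration; the names and mass ratios are chosen for the example) tracks only the velocities, using the standard formulas for one-dimensional elastic collisions and a perfectly reflecting wall; floating-point rounding limits how far it can go, but for the first few mass ratios it returns 3, 31, 314, 3141:

def count_collisions(n):
    # Galperin's setup: a heavy ball (mass 100**n) rolled at a light ball (mass 1)
    # that sits between it and the wall; negative velocity means "toward the wall".
    M, m = 100.0 ** n, 1.0
    v_heavy, v_light = -1.0, 0.0
    count = 0
    while v_light < 0 or v_heavy < v_light:      # another collision is still unavoidable
        if v_heavy < v_light:                    # heavy ball catches the light ball
            v_heavy, v_light = (
                ((M - m) * v_heavy + 2 * m * v_light) / (M + m),
                ((m - M) * v_light + 2 * M * v_heavy) / (M + m),
            )
        else:                                    # light ball bounces off the wall
            v_light = -v_light
        count += 1
    return count

print([count_collisions(n) for n in range(4)])   # [3, 31, 314, 3141]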

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Leah Crane*


Air pollution found to impair performance in matriculation exams in mathematical subjects

Researchers from the University of Oulu, Finland, investigated how air pollution affects students’ performance in matriculation exams, particularly in mathematical subjects. The study revealed that performance declines in exams involving thinking and memorization when fine particulate matter (PM2.5) levels in the school’s vicinity increase even slightly.

The research is the first to examine the same student’s performance across exams measuring the same skill within a short time frame. Skills here refer to linguistic and mathematical abilities, measured by exams in the Finnish language, writing, reading, mathematics, and physics.

Researchers from the University of Oulu Business School examined the effects of very short-term exposure to air pollution on students’ performance in matriculation exams in Finland from 2006 to 2016.

According to the study, a one-unit increase in PM2.5 particle concentration (particles smaller than 2.5 micrometers) reduced the average student’s performance in a mathematical exam by approximately 0.13 percentage points compared to performance in a similar exam with lower fine particulate concentrations.

The study found no impact on linguistic skills due to an increase in fine particulate matter, and there were no gender differences observed.

Researchers were surprised to find significant effects on matriculation exam performance in Finland, a country with relatively low air pollution levels. This is the first time such effects have been demonstrated in Finland. The researchers emphasize that even in countries like Finland, where air pollution levels generally comply with the World Health Organization’s recommendations, reducing air pollution remains crucial.

“Increasing evidence suggests that exposure to air pollution during exams may have a decisive impact on the progression of students into further studies, especially if matriculation exam results are used as a significant selection criterion,” says University Researcher Marko Korhonen.

The primary data for the study came from Statistics Finland, covering all matriculation exams in Finland from spring 2006 to autumn 2016, including 22 academic terms. The study included over 370,000 final exams from Finnish high schools, involving 172,414 students from 253 schools in 54 municipalities.

Student performance was assessed using hourly air quality measurements from monitoring points located near the exam venues. The structure of Finnish high school final exams, where students take multiple exams in different courses, allowed the examination of each student’s test results in various final exams. Exams were conducted on different days in the same schools, and air quality was measured during the exams near each school.

The study, titled “The impact of ambient PM2.5 air pollution on student performance: Evidence from Finnish matriculation examinations,” has been published in Economics Letters.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of Oulu

 


Millennium Prize: The Poincaré Conjecture

The problem’s been solved … but the sweet treats were declined. Back to the Cutting Board

In 1904, French mathematician Henri Poincaré asked a key question about three-dimensional spaces (“manifolds”).

Imagine a piece of rope in which a knot is tied and whose ends are then glued together. This is what mathematicians call a knot. A link is a collection of knots that are tangled together.

It has been observed that DNA, which is coiled up within cells, occurs in closed knotted form.

Complex molecules such as polymers are tangled in knotted forms. There are deep connections between knot theory and ideas in mathematical physics. The outsides of a knot or link in space give important examples of three-dimensional spaces.

Torus. Fropuff

Back to Poincaré and his conjecture. He asked if the 3-sphere (which can be formed by either adding a point at infinity to ordinary three-dimensional Euclidean space or by gluing two solid three-dimensional balls together along their boundary 2-spheres) was the only three-dimensional space in which every loop can be continuously shrunk to a point.

Poincaré had introduced important ideas in the structure and classification of surfaces and their higher dimensional analogues (“manifolds”), arising from his work on dynamical systems.

Donuts to go, please

A good way to visualise Poincaré’s conjecture is to examine the boundary of a ball (a two-dimensional sphere) and the boundary of a donut (called a torus). Any loop of string on a 2-sphere can be shrunk to a point while keeping it on the sphere, whereas if a loop goes around the hole in the donut, it cannot be shrunk without leaving the surface of the donut.

Many attempts were made on the Poincaré conjecture, until in 2003 a wonderful solution was announced by a young Russian mathematician, Grigori “Grisha” Perelman.

This is a brief account of the ideas used by Perelman, which built on work of two other outstanding mathematicians, Bill Thurston and Richard Hamilton.

3D spaces

Thurston made enormous strides in our understanding of three-dimensional spaces in the late 1970s. In particular, he realised that essentially all the work that had been done since Poincaré fitted into a single theme.

He observed that known three-dimensional spaces could be divided into pieces in a natural way, so that each piece had a uniform geometry, similar to the flat plane and the round sphere. (To see this geometry on a torus, one must embed it into four-dimensional space!).

Thurston made a bold “geometrisation conjecture” that this should be true for all three-dimensional spaces. He had many brilliant students who further developed his theories, not least by producing powerful computer programs that could test any given space to try to find its geometric structure.

Thurston made spectacular progress on the geometrisation conjecture, which includes the Poincaré conjecture as a special case. The geometrisation conjecture predicts that any three-dimensional space in which every loop shrinks to a point should have a round metric – it would be a 3-sphere and Poincaré’s conjecture would follow.

In 1982, Richard Hamilton published a beautiful paper introducing a new technique in geometric analysis which he called Ricci flow. Hamilton had been looking for analogues of a flow of functions, so that the energy of the function decreases until it reaches a minimum. This type of flow is closely related to the way heat spreads in a material.

Hamilton reasoned that there should be a similar flow for the geometric shape of a space, rather than a function between spaces. He used the Ricci tensor, a key feature of Einstein’s field equations for general relativity, as the driving force for his flow.

He showed that, for three-dimensional spaces where the Ricci curvature is positive, the flow gradually changes the shape until the metric satisfies Thurston’s geometrisation conjecture.

Hamilton attracted many outstanding young mathematicians to work in this area. Ricci flow and other similar flows have become a huge area of research with applications in areas such as moving interfaces, fluid mechanics and computer graphics.

Ricci flow. CBN

Hamilton outlined a marvellous program to use Ricci flow to attack Thurston’s geometrisation conjecture. The idea was to keep evolving the shape of a space under Ricci flow.

Hamilton and his collaborators found the space might form a singularity, where a narrow neck became thinner and thinner until the space splits into two smaller spaces.

Hamilton worked hard to try to fully understand this phenomenon and to allow the pieces to keep evolving under Ricci flow until the geometric structure predicted by Thurston could be found.

Perelman

This is when Perelman burst on to the scene. He had produced some brilliant results at a very young age and was a researcher at the famous Steklov Institute in St Petersburg. Perelman got a Miller fellowship to visit UC Berkeley for three years in the early 1990s.

I met him there around 1992. He then “disappeared” from the mathematical scene for nearly ten years and re-emerged to announce that he had completed Hamilton’s Ricci flow program, in a series of papers he posted on the electronic repository arXiv.

His papers created enormous excitement and within several months a number of groups had started to work through Perelman’s strategy.

Eventually everyone was convinced that Perelman had indeed succeeded and both the geometrisation and Poincaré conjecture had been solved.

Perelman was awarded a Fields Medal (the mathematical equivalent of a Nobel prize) and was also offered a million dollars for solving one of the Millennium Prize problems set by the Clay Mathematics Institute.

He turned down both these awards, preferring to live a quiet life in St Petersburg. Mathematicians are still finding new ways to use the solution to the geometrisation conjecture, which is one of the outstanding mathematical results of this era.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Hyam Rubinstein*

 


How linguists are unlocking the meanings of Shakespeare’s words using numbers

Today it would seem odd to describe a flower with the word “bastard”—why apply a term of personal abuse to a flower? But in Shakespeare’s time, “bastard” was a technical term describing certain plants.

Similarly, associating the word “bad” with success and talking of a “bad success” would be decidedly odd today. But it was not unusual then, when success meant outcome, which could be good or bad.

Corpus linguistics is a branch of linguistics which uses computers to explore the use of words in huge collections of language. It can spot nuances that might be overlooked by linguists working manually, or large patterns that a lifetime of studying may not reveal. And numbers, counts of words and keeping track of where the words are occurring, are key.

In my experience at conferences and the like, talk of numbers is not unanimously well received in the world of literary studies. Numbers are sometimes perceived as being reductive, or inappropriate when discussing creative works, or only accessible to specialists.

Yet, describing any pattern involves numbers. In the paragraphs above, I used the words “odd” and “unusual” as soft ways of describing frequencies – the numbers of occurrences (think also of, for example, “unique,” “rare,” “common”).

Even talking about “associations” involves numbers. Often associations evolve from an unusually high number of encounters among two or more things. And numbers help us to see things.

Changing meanings

Along with my team at Lancaster University, I have used computers to examine some 20,000 words gleaned from a million-word corpus (a collection of written texts) of Shakespeare’s plays, resulting in a new kind of dictionary.

People have created Shakespeare dictionaries before, but this is the first to use the full armory of corpus techniques and the first to be comparative. It not only looks at words inside Shakespeare’s plays, but also compares them with a matching million-word corpus of contemporary early modern plays, along with a huge corpus of 320 million words of various writings of the period.

Of course, words in early modern England had lives outside Shakespeare. “Bastard” was generally a term for a hybrid plant, occurring in technical texts on horticulture.

It could be, and very occasionally was, used for personal abuse, as in King Lear, where Edmund is referred to as a “bastard.” But this is no general term of abuse, let alone banter, as you might see it used today. It is a pointed attack on him being of illegitimate parentage, genetically hybrid, suspect at his core.

The word “bad” is not now associated with the word “success,” yet 400 years ago it was, as were other negative words, including “disastrous,” “unfortunate,” “ill,” “unhappy” and “unlucky.”

We can tap into a word’s associations by examining its collocates, that is, words with which it tends to occur (rather like we make judgements about people partly on the basis of the company they keep). In this way we can see that the meaning of “success” was “outcome” and that outcome, given its collocates, could be good or bad.
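As a toy illustration of collocation counting (our own sketch, not the software used in the project; real corpus work also applies statistical association measures such as log-likelihood), the following Python snippet counts the words that appear within a few words of a chosen node word:

from collections import Counter
import re

def collocates(text, node, window=4):
    # Count words occurring within `window` words of each occurrence of `node`.
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            counts.update(t for j, t in enumerate(tokens[lo:hi], start=lo) if j != i)
    return counts

sample = "ill success and bad success were common; good success was rarer still"
print(collocates(sample, "success").most_common(5))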

Highly frequent words

We can use intuition to guess some word patterns. It’s no surprise that in early modern English, the word “wicked” occurred very frequently in religious texts of the time. But less intuitively, so did “ourselves,” a word associated with sermons and plays, both of which have in common a habit of making statements about people on earth.

Highly frequent words, so often excluded by historical dictionaries and reference works, are often short words that seem insignificant. They have a wood-for-trees problem.

Yet corpus techniques highlight the interesting patterns. It turns out that a frequent sense of the humble preposition “by” is religious: to reinforce the sincerity of a statement by invoking the divine (for example, “by God”).

Numbers can also reveal what is happening inside Shakespeare’s works. Frequent words such as “alas” or “ah” are revealed to be heavily used by Shakespeare’s female characters, showing that they do the emotional work of lamentation in the plays, especially his histories.

Infrequent words

What of the infrequent? Words that occur only once in Shakespeare—so-called hapax legomena—are nuggets of interest. The single case of “bone-ache” in Troilus and Cressida evokes the horrifying torture that syphilis, which it applies to, would have been. In contrast, “ear-kissing” in King Lear is Shakespeare’s rather more pleasant and creative metaphor for whispering (interestingly, other writers used it for the notion of flattering).

Another group of interesting infrequent words concerns words that seem to have their earliest occurrence in Shakespeare. Corpus techniques allowed us to navigate the troubled waters of spelling variation. Before spelling standardization, searching for the word “sweet,” for instance, would miss cases spelt “sweete,” “swete” or “svveet.”

In this way, we can better establish whether a word written by a writer really is the earliest instance. Shakespearean firsts include the rather boring “branchless” (Antony and Cleopatra), a word probably not coined by Shakespeare but merely first recorded in his text. But there is also the more creative “ear-piercing” (Othello) and the distinctly modern-sounding “self-harming” (The Comedy of Errors and Richard II).

Why are these advances in historical corpus linguistics happening now? Much of the technology to produce these findings was not in place until relatively recently.

Programs to deal with spelling variation (such as Vard) or to analyse vast collections of electronic texts in sophisticated ways (such as CQPweb), to say nothing of the vast quantities of computer-readable early modern language data (such as EEBO-TCP), have only been widely used in the last 10 or so years. We are therefore on the cusp of a significant increase in our understanding and appreciation of major writers such as Shakespeare.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Jonathan Culpeper, The Conversation

 

 


A mathematical understanding of project schedules

Complex projects are made up of many activities, the durations of which vary according to a power law; this model can be used to predict overall project duration and delay.

We have all been frustrated when a project is delayed because one sub-task cannot begin before another ends. It is less well known that the process of scheduling projects efficiently can be described in mathematical terms.

Now, Alexei Vazquez, of technology company Nodes & Links and based in Cambridge, U.K., has shown that the distribution of activity lengths in a project follows the mathematical relationship of power law scaling. He has published his findings in The European Physical Journal B.

Any relationship in which one quantity varies as a power of another (such as squared or cubed) is known as a power law. These can be applied to a wide range of physical (e.g., cloud sizes or solar flares), biological (e.g. species frequencies in a habitat) and man-made (e.g. income distribution) phenomena.

In Vazquez’ analysis of projects, the quantities that depend on power laws were the duration of each of the activities that make up the project and the slack times between each activity, or “floats.”

Vazquez analysed data on 118 construction projects, together comprising more than 1,000 activities, that was stored in a database belonging to his company. The activity durations in a given project fitted a power law with a negative exponent (i.e., there were more short-duration activities, and a “tail” of small numbers of longer ones); the value of the exponent varied from project to project. The distribution of float times for the activities in a project can be expressed in a similar but independent power law.
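As an illustration of how such an exponent can be estimated from a list of activity durations, here is a minimal Python sketch using a standard maximum-likelihood estimator for power-law tails; this is our own example with synthetic data, not necessarily the fitting method used in the paper:

import numpy as np

def power_law_exponent(durations, x_min):
    # Maximum-likelihood estimate of alpha for a power-law tail
    # p(x) ~ x**(-alpha) for x >= x_min (continuous approximation).
    x = np.asarray(durations, dtype=float)
    tail = x[x >= x_min]
    return 1.0 + len(tail) / np.sum(np.log(tail / x_min))

# Synthetic activity durations (in days) drawn from a power law with alpha = 2.5.
rng = np.random.default_rng(0)
durations = (1.0 - rng.random(2000)) ** (-1.0 / 1.5)
print(power_law_exponent(durations, x_min=1.0))   # should come out close to 2.5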

He explained that these power law scalings arise from different processes: in the case of the activities, from a historical process in which a generic activity fragments over time into a number of more specialized ones. Furthermore, he showed that estimation of delays associated with a whole project depends on the power law scaling of the activities but not of the floats. This analysis has the potential to forecast delays in planned projects accurately, minimizing the annoyance caused by those long waits.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Clare Sansom, SciencePOD