On Constructing Functions, Part 3

This post is the third example in an ongoing list of various sequences of functions which converge to different things in different ways.

‍Example 3

A sequence of continuous functions $\{f_n:\mathbb{R}\to[0,\infty)\}$ which converges to 0 in the $L^1$ norm, but does not converge to 0 uniformly.

There are four criteria we want our functions to satisfy:

  1. First off is the uniform convergence. Observe that “$\{f_n\}$ does not converge to 0 uniformly” can mean one of three things:
  • converges to 0 pointwise only
  • converges to something other than 0 (pointwise or uniformly)
  • does not converge at all

So it’s up to you to decide which one feels more comfortable to work with. Here we’ll choose the second option.

  2. Next, “$\{f_n\}$ converges to 0 in the $L^1$ norm” means that we want to choose our sequence so that the area under the curve of the $f_n$ gets smaller and smaller as $n\to\infty$.
  3. Further, we also want the $f_n$ to be nonnegative (the image of each $f_n$ must lie in $[0,\infty)$); notice this allows us to drop the absolute value sign in the $L^1$ norm: $\int|f_n| \Rightarrow \int f_n$.
  4. Lastly, the functions must be continuous.

A slick* but very simple solution is a sequence of triangles of decreasing area, each with height 1: explicitly, let $f_n(x)=\max(0,\,1-n|x|)$, a triangular spike of height 1 over the base $[-\frac{1}{n},\frac{1}{n}]$.

This works because: At $x=0$ we have $f_n(0)=1$ for all $n$, so there’s no way the sequence can converge to zero (much less uniformly). In fact we have $f_n\to f$ pointwise, where
$$f(x)=\begin{cases}1 & \text{if } x=0,\\ 0 & \text{otherwise.}\end{cases}$$
The area of each triangle is $\frac{1}{n}$, which clearly goes to zero for $n$ large; this guarantees $f_n\to 0$ in the $L^1$ norm. Further, each $f_n$ is nonnegative since we’ve defined it to equal zero as soon as the edges of the triangle reach the $x$-axis. And lastly, each $f_n$ is continuous: it is piecewise linear, and the pieces agree where they meet.

The details: Let $\epsilon>0$ and $x\in\mathbb{R}$. If $x=0$, then $f_n(x)=1$ for all $n$ and so $f_n(x)\to 1$. Otherwise $x>0$ or $x<0$. If $x>1$, then $f_n(x)=0$ for all $n$. Otherwise, if $x\in(0,1]$, choose $N>\frac{1}{x}$. Then whenever $n>N$ we have $\frac{1}{n}<x$, so $x$ lies beyond the base of the $n$th triangle and $f_n(x)=0<\epsilon$. The case when $x<0$ follows a similar argument.

Lastly, $f_n\to 0$ in the $L^1$ norm since, as we mentioned, the areas decrease to 0. Explicitly:
$$\int_{\mathbb{R}}|f_n| \;=\; \int_{-1/n}^{0}(1+nx)\,dx \;+\; \int_{0}^{1/n}(1-nx)\,dx \;=\; \frac{1}{2n}+\frac{1}{2n} \;=\; \frac{1}{n} \;\to\; 0.$$
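To watch the two modes of convergence pull apart numerically, here is a minimal Python sketch (the grid resolution and the values of $n$ are arbitrary choices):

```python
import numpy as np

def f(n, x):
    # Triangle of height 1 on [-1/n, 1/n]: f_n(x) = max(0, 1 - n|x|)
    return np.maximum(0.0, 1.0 - n * np.abs(x))

x, dx = np.linspace(-1.5, 1.5, 300001, retstep=True)
for n in [1, 10, 100, 1000]:
    l1 = f(n, x).sum() * dx   # Riemann sum approximating the L1 norm, ~ 1/n
    sup = f(n, x).max()       # sup norm is pinned at 1 (attained at x = 0)
    print(f"n={n:5d}  L1~{l1:.4f}  sup={sup}")
```

The $L^1$ column shrinks like $\frac{1}{n}$ while the sup column never moves, which is exactly the point of the example.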

‍*I can brag because this particular example came from a friend. My own attempt at a solution was not nearly as intuitive.

Constructing the Tensor Product of Modules

The Basic Idea

Today we talk tensor products. Specifically this post covers the construction of the tensor product between two modules over a ring. But before jumping in, I think now’s a good time to ask, “What are tensor products good for?” Here’s a simple example where such a question might arise:

Suppose you have a vector space $V$ over a field $F$. For concreteness, let’s consider the case when $V$ is the set of all $2\times 2$ matrices with entries in $\mathbb{R}$, and let $F=\mathbb{R}$. In this case we know what “$F$-scalar multiplication” means: if $M\in V$ is a matrix and $c\in\mathbb{R}$, then the new matrix $cM$ makes perfect sense. But what if we want to multiply $M$ by complex scalars too? How can we make sense of something like $(3+4i)M$? That’s precisely what the tensor product is for! We need to create a set of elements of the form
(complex number) “times” (matrix)
so that the mathematics still makes sense. With a little massaging, this set will turn out to be $\mathbb{C}\otimes_{\mathbb{R}}V$.
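As a small worked example of where we’re headed: in $\mathbb{C}\otimes_{\mathbb{R}}V$, complex scalars will act on the first factor, so for instance
$$(3+4i)\cdot\big((1+i)\otimes M\big) \;=\; \big((3+4i)(1+i)\big)\otimes M \;=\; (-1+7i)\otimes M.$$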

So in general, if $F$ is an arbitrary field and $V$ an $F$-vector space, the tensor product answers the question “How can I define scalar multiplication by some larger field which contains $F$?” And of course this holds if we replace the word “field” by “ring” and consider the same scenario with modules.

Now this isn’t the only thing tensor products are good for (far from it!), but I think it’s the most intuitive one since it is readily seen from the definition (which is given below).

So with this motivation in mind, let’s go!

‍From English to Math

Let $R$ be a ring with 1, let $M$ be a right $R$-module and $N$ a left $R$-module, and suppose $A$ is any abelian group. Our goal is to create an abelian group $M\otimes_R N$, called the tensor product of $M$ and $N$, together with an $R$-balanced map $i:M\times N\to M\otimes_R N$, such that for any $R$-balanced map $\varphi:M\times N\to A$ there is a unique abelian group homomorphism $\Phi:M\otimes_R N\to A$ such that $\varphi=\Phi\circ i$, i.e. so the diagram below commutes.

Notice that the statement above has the same flavor as the universal mapping property of free groups!

Definition: Let $X$ be a set. A group $F$ is said to be a free group on $X$ if there is a function $i:X\to F$ such that for any group $G$ and any set map $\varphi:X\to G$, there exists a unique group homomorphism $\Phi:F\to G$ such that the following diagram commutes (i.e. $\varphi=\Phi\circ i$):

The difference is that in the definition above, $\varphi$ may be any set map; here, in particular, we just want ours to be $R$-balanced:

Definition: Let $R$ be a ring with 1. Let $M$ be a right $R$-module, $N$ a left $R$-module, and $A$ an abelian group. A map $\varphi:M\times N\to A$ is called $R$-balanced if for all $m,m_1,m_2\in M$, all $n,n_1,n_2\in N$, and all $r\in R$,
$$\varphi(m_1+m_2,n)=\varphi(m_1,n)+\varphi(m_2,n),$$
$$\varphi(m,n_1+n_2)=\varphi(m,n_1)+\varphi(m,n_2),$$
$$\varphi(mr,n)=\varphi(m,rn).$$

So how do we get from the free group on $M\times N$ to the tensor product? By “replacing” the free group $F$ by a certain quotient group $F/H$! (We’ll define $H$ precisely below.)
These observations give us a road map to construct the tensor product. And so we begin:

‍Step 1

Let $F$ be the free abelian group generated by $M\times N$ and let $A$ be an abelian group. Then by definition (of free groups), if $\varphi:M\times N\to A$ is any set map, and $M\times N\hookrightarrow F$ by inclusion, then there is a unique abelian group homomorphism $\Phi:F\to A$ so that the following diagram commutes.

Step 2

The problem is that the inclusion map $M\times N\hookrightarrow F$ is not $R$-balanced! To fix this, we must “modify” the target space $F$ by replacing it with the quotient $F/H$, where $H\le F$ is the subgroup of $F$ generated by elements of the form

  • $(m_1+m_2,n)-(m_1,n)-(m_2,n)$
  • $(m,n_1+n_2)-(m,n_1)-(m,n_2)$
  • $(mr,n)-(m,rn)$

where $m,m_1,m_2\in M$, $n,n_1,n_2\in N$, and $r\in R$. Why elements of this form? Because if we define the map $i:M\times N\to F/H$ by
$$i(m,n)=(m,n)+H,$$
we’ll see that $i$ is indeed $R$-balanced! Let’s check:
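Each generator of $H$ becomes zero in the quotient, and the balanced-map identities fall right out. For instance,
$$i(m_1+m_2,n)=(m_1+m_2,n)+H=(m_1,n)+(m_2,n)+H=i(m_1,n)+i(m_2,n)$$
since $(m_1+m_2,n)-(m_1,n)-(m_2,n)\in H$, and
$$i(mr,n)=(mr,n)+H=(m,rn)+H=i(m,rn)$$
since $(mr,n)-(m,rn)\in H$. Additivity in the second coordinate is checked the same way.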

So, are we done now? Can we really just replace $F$ with $F/H$ and replace the inclusion map with the map $i$, and still retain the existence of a unique homomorphism $\Phi:F/H\to A$? No! Of course not. $F/H$ is not a free group generated by $M\times N$, so the diagram below is bogus, right?

Not totally. We haven’t actually disturbed any structure!

How can we relate the pink and blue lines? We’d really like them to be the same. But we’re in luck because they basically are!

‍Step 3

A homomorphism $f:F\to A$ descends to a well-defined homomorphism on the quotient $F/H$ as long as $H\subseteq\ker(f)$, that is, as long as $f(h)=0$ for all $h\in H$. And notice that this condition, $f(H)=0$, forces $f$ to be $R$-balanced!

Let’s check:
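Since $f$ is a homomorphism on $F$ and $f(H)=0$, applying $f$ to the first type of generator of $H$ gives
$$0=f\big((m_1+m_2,n)-(m_1,n)-(m_2,n)\big)=f(m_1+m_2,n)-f(m_1,n)-f(m_2,n),$$
i.e. $f(m_1+m_2,n)=f(m_1,n)+f(m_2,n)$; the other two types of generators give $f(m,n_1+n_2)=f(m,n_1)+f(m,n_2)$ and $f(mr,n)=f(m,rn)$ in exactly the same way.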

Sooooo… homomorphisms $f:F\to A$ such that $H\subseteq\ker(f)$ are the same as $R$-balanced maps from $M\times N$ to $A$! (Technically, I should say homomorphisms $f$ restricted to $M\times N$.) In other words, we have
$$\{\text{abelian group homomorphisms } F/H\to A\} \;\cong\; \{R\text{-balanced maps } M\times N\to A\}.$$

In conclusion, to say “abelian group homomorphisms from $F/H$ to $A$ are the same as (isomorphic to) $R$-balanced maps from $M\times N$ to $A$” is simply the hand-wavy way of saying:

Whenever $i:M\times N\to F/H$ is an $R$-balanced map and $\varphi:M\times N\to A$ is an $R$-balanced map, where $A$ is an abelian group, there exists a unique abelian group homomorphism $\Phi:F/H\to A$ such that the following diagram commutes:

And this is just what we want! The last step is merely the final touch:

‍Step 4

We simply define the abelian quotient group $F/H$ to be the tensor product of $M$ and $N$,
$$M\otimes_R N := F/H,$$
whose elements are cosets,
$$m\otimes n := (m,n)+H \quad\text{for } m\in M \text{ and } n\in N,$$
where $m\otimes n$ is referred to as a simple tensor. And there you have it! The tensor product, constructed.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*


On Constructing Functions, Part 2

This post is the second example in an ongoing list of various sequences of functions which converge to different things in different ways.

‍Example 2

A sequence of functions $\{f_n:\mathbb{R}\to\mathbb{R}\}$ which converges to 0 uniformly but does not converge to 0 in $L^1$.

Take $f_n=\frac{1}{n}\chi_{(0,n)}$. This works because: The sequence tends to 0 as $n\to\infty$ since the height of each function tends to 0, while the region where $f_n$ takes on this decreasing height tends towards all of $\mathbb{R}^+$ (namely $(0,n)$ as $n\to\infty$), and $f_n$ is already 0 on $\mathbb{R}^-\cup\{0\}$. The convergence is uniform because the number of times we have to keep “squishing” the rectangles until their height is less than $\epsilon$ does not depend on $x$.

The details: Let $\epsilon>0$, choose $N\in\mathbb{N}$ so that $N>\frac{1}{\epsilon}$, and let $n>N$. Fix $x\in\mathbb{R}$.

  • Case 1 ($x\le 0$ or $x\ge n$): Then $f_n(x)=0$ and so $|f_n(x)-0|=0<\epsilon$.
  • Case 2 ($0<x<n$): Then $f_n(x)=\frac{1}{n}$ and so $|f_n(x)-0|=\frac{1}{n}<\frac{1}{N}<\epsilon$.

Finally, $f_n\not\to 0$ in $L^1$ since
$$\int_{\mathbb{R}}|f_n| \;=\; \int_{(0,n)}\frac{1}{n} \;=\; \frac{1}{n}\,\lambda\big((0,n)\big) \;=\; 1.$$
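Here is a minimal Python sketch of the same contrast (the sampling grid and the values of $n$ are arbitrary):

```python
import numpy as np

def f(n, x):
    # f_n = 1/n on the interval (0, n), and 0 elsewhere
    return np.where((x > 0) & (x < n), 1.0 / n, 0.0)

x, dx = np.linspace(-1.0, 1100.0, 1_100_001, retstep=True)  # covers (0, n) for n <= 1000
for n in [1, 10, 100, 1000]:
    sup = f(n, x).max()      # sup |f_n| = 1/n -> 0, so the convergence is uniform
    l1 = f(n, x).sum() * dx  # Riemann sum for the L1 norm, which stays ~ 1
    print(f"n={n:5d}  sup={sup:.4f}  L1~{l1:.3f}")
```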

Remark: Here’s a question you could ask: wouldn’t $f_n=n\chi_{(0,\frac{1}{n})}$ work here too? Both tend to 0 everywhere, and both involve rectangles of area 1. The answer is “kinda.” The problem is that the convergence of $n\chi_{(0,\frac{1}{n})}$ to 0 is only pointwise (the heights blow up, so it is certainly not uniform). BUT Egoroff’s Theorem gives us a way to actually “make” it uniform!

On the notation above: For a measurable set $X\subset\mathbb{R}$, denote the set of all Lebesgue integrable functions $f:X\to\mathbb{R}$ by $L^1(X)$. Then a sequence of functions $\{f_n\}$ is said to converge in $L^1$ to a function $f$ if $\lim_{n\to\infty}\int|f_n-f|=0$.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*

 


On Constructing Functions, Part 1

Given a sequence of real-valued functions $\{f_n\}$, the phrase “$f_n$ converges to a function $f$” can mean a few things:

  • $f_n$ converges uniformly
  • $f_n$ converges pointwise
  • $f_n$ converges almost everywhere (a.e.)
  • $f_n$ converges in $L^1$ (the set of Lebesgue integrable functions)
  • and so on…

Other factors come into play if the $f_n$ are required to be continuous, defined on a compact set, integrable, etc. So since I do not have the memory of an elephant (whatever that phrase means…), I’ve decided to keep a list of different sequences that converge (or don’t converge) to different functions in different ways. With each example I’ll also include a little (and hopefully intuitive) explanation for why. Having these sequences close at hand is especially useful when analysing the behavior of certain functions or constructing counterexamples.

The first sequence we’ll look at is one which converges almost everywhere, but does not converge in $L^1$ (the set of Lebesgue integrable functions).

‍Example 1

A sequence of functions $\{f_n:\mathbb{R}\to\mathbb{R}\}$ which converges to 0 almost everywhere but does not converge to 0 in $L^1$.

Take $f_n=n\chi_{[0,\frac{1}{n}]}$. This works because: Recall that to say $f_n\to 0$ almost everywhere means $f_n\to 0$ pointwise on $\mathbb{R}$ except on a set of measure 0. Here, the set of measure zero is the singleton $\{0\}$ (at $x=0$, $f_n(x)=n$, and we can’t make this less than $\epsilon$ for any $\epsilon>0$). So $f_n$ converges to 0 pointwise everywhere else: if $x<0$ or $x>1$ then $f_n(x)=0$ for all $n$, and if $x\in(0,1]$ we can choose $n$ appropriately:

The details: Let $\epsilon>0$ and $x\in(0,1]$, and choose $N\in\mathbb{N}$ so that $N>\frac{1}{x}$. Then whenever $n>N$, we have $n>\frac{1}{x}$, which implies $x>\frac{1}{n}$ and so $f_n(x)=0$. Hence $|f_n(x)-0|=0<\epsilon$.

Further*, $f_n\not\to 0$ in $L^1$ since
$$\int_{\mathbb{R}}|f_n| \;=\; \int_{[0,\frac{1}{n}]} n \;=\; n\,\lambda\big([0,\tfrac{1}{n}]\big) \;=\; 1.$$
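Here is a minimal Python sketch of the same tension, using exact rational arithmetic (the sample points and the values of $n$ are arbitrary):

```python
from fractions import Fraction

def f(n, x):
    # f_n = n on [0, 1/n], and 0 elsewhere
    return n if 0 <= x <= Fraction(1, n) else 0

# Pointwise: for each fixed x > 0, f_n(x) is eventually 0...
for x in [Fraction(1, 2), Fraction(1, 10), Fraction(1, 100)]:
    print(x, [f(n, x) for n in (1, 5, 20, 500)])

# ...but the integral n * lambda([0, 1/n]) = n * (1/n) is 1 for every n,
# so f_n does not converge to 0 in L1.
print([n * Fraction(1, n) for n in (1, 5, 20, 500)])
```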

Remark: Notice that Egoroff’s theorem applies here! We just proved that $f_n\to 0$ pointwise a.e. on $\mathbb{R}$, but Egoroff says that on a set of finite measure, say $(0,1]$, we can actually get uniform convergence off a subset of arbitrarily small measure.

In particular, for each $\epsilon>0$ we are guaranteed the existence of a subset $E\subset(0,1]$ such that $f_n\to 0$ uniformly on $E$ and $\lambda((0,1]\setminus E)<\epsilon$. In fact, it should be clear that this subset can be taken to be something like $E=(\frac{\epsilon}{2},1]$ (the “zero region” in the graph above). Then no matter where $x$ is in $E$, we can always find $n$ large enough, namely all $n$ which satisfy $\frac{1}{n}<\frac{\epsilon}{2}$, so that $f_n(x)=0$, i.e. $f_n\to 0$ uniformly on $E$. And indeed, $\lambda\big((0,1]\setminus(\tfrac{\epsilon}{2},1]\big)=\frac{\epsilon}{2}<\epsilon$ as claimed.

On the notation above: For a measurable set $X\subset\mathbb{R}$, denote the set of all Lebesgue integrable functions $f:X\to\mathbb{R}$ by $L^1(X)$. Then a sequence of functions $\{f_n\}$ is said to converge in $L^1$ to a function $f$ if $\lim_{n\to\infty}\int|f_n-f|=0$.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*


Make mine a double: Moore’s Law and the future of mathematics

What do iPhones, Twitter, Netflix, cleaner cities, safer cars, state-of-the-art environmental management and modern medical diagnostics have in common? They are all made possible by Moore’s Law.

Moore’s Law stems from a seminal 1965 article by Intel founder Gordon Moore. He wrote:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year … Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least ten years. That means, by 1975, the number of components per integrated circuit for minimum cost will be 65,000.”

Moore noted that in 1965 engineering advances were enabling a doubling in semiconductor density every 12 months, but this rate was later modified to roughly 18 months. Informally, we may think of this as doubling computer performance.

In any event, Moore’s Law has now continued unabated for 45 years, defying several confident predictions it would soon come to a halt, and represents a sustained exponential rate of progress that is without peer in the history of human technology. Here is a graph of Moore’s Law, shown with the transistor count of various computer processors:

Where we’re at with Moore’s Law

At the present time, researchers are struggling to keep Moore’s Law on track. Processor clock rates have stalled, as chip designers have struggled to control energy costs and heat dissipation, but the industry’s response has been straightforward — simply increase the number of processor “cores” on a single chip, together with associated cache memory, so that aggregate performance continues to track or exceed Moore’s Law projections.

The capacity of leading-edge DRAM main memory chips continues to advance apace with Moore’s Law. The current state of the art in computer memory devices is a 3D design, which will be jointly produced by IBM and Micron Technology, according to a December 2011 announcement by IBM representatives.

As things stand, the best bet for the future of Moore’s Law are nanotubes — submicroscopic tubes of carbon atoms that have remarkable properties.

According to a recent New York Times article, Stanford researchers have created prototype electronic devices by first growing billions of carbon nanotubes on a quartz surface, then coating them with an extremely fine layer of gold atoms. They then used a piece of tape (literally!) to pick the gold atoms up and transfer them to a silicon wafer. The researchers believe that commercial devices could be made with these components as early as 2017.

Moore’s Law in science and maths

So what does this mean for researchers in science and mathematics?

Plenty, as it turns out. A scientific laboratory typically uses hundreds of high-precision devices that rely crucially on electronic designs, and with each step of Moore’s Law, these devices become ever cheaper and more powerful. One prominent case is DNA sequencers. When scientists first completed sequencing a human genome in 2001, at a cost of several hundred million US dollars, observers were jubilant at the advances in equipment that had made this possible.

Now, only ten years later, researchers expect the cost of sequencing a genome to drop to US$1,000 within two years, and genome sequencing may well become a standard part of medical practice. This astounding improvement is even faster than Moore’s Law!

Applied mathematicians have benefited from Moore’s Law in the form of scientific supercomputers, which typically employ hundreds of thousands of state-of-the-art components. These systems are used for tasks such as climate modelling, product design and biological structure calculations.

Today, the world’s most powerful system is a Japanese supercomputer that recently ran the industry-standard Linpack benchmark test at more than ten “petaflops,” or, in other words, 10 quadrillion floating-point operations per second.

Below is a graph of the Linpack performance of the world’s leading-edge systems over the time period 1993-2011, courtesy of the website Top 500. Note that over this 18-year period, the performance of the world’s number one system has advanced more than five orders of magnitude. The current number one system is more powerful than the sum of the world’s top 500 supercomputers just four years ago.

Linpack performance over time.

Pure mathematicians have been a relative latecomer to the world of high-performance computing. The present authors well remember the era, just a decade or two ago, when the prevailing opinion in the community was that “real mathematicians don’t compute.”

But thanks to a new generation of mathematical software tools, not to mention the ingenuity of thousands of young, computer-savvy mathematicians worldwide, remarkable progress has been achieved in this arena as well (see our 2011 AMS Notices article on exploratory experimentation in mathematics).

In 1963 Daniel Shanks, who had calculated pi to 100,000 digits, declared that computing one billion digits would be “forever impossible.” Yet this level was reached in 1989. That same year, the famous British physicist Roger Penrose, in the first edition of his best-selling book The Emperor’s New Mind, declared that humankind would likely never know whether a string of ten consecutive sevens occurs in the decimal expansion of pi. Yet this was found just eight years later, in 1997.

Computers are certainly being used for more than just computing and analysing digits of pi. In 2003, the American mathematician Thomas Hales completed a computer-based proof of Kepler’s conjecture, namely the long-hypothesised fact that the simple way the grocer stacks oranges is in fact the optimal packing for equal-diameter spheres. Many other examples could be cited.

Future prospects

So what does the future hold? Assuming that Moore’s Law continues unabated at approximately the same rate as the present, and that obstacles in areas such as power management and system software can be overcome, we will see, by the year 2021, large-scale supercomputers that are 1,000 times more powerful and capacious than today’s state-of-the-art systems — “exaflops” computers (see NAS Report). Applied mathematicians eagerly await these systems for calculations, such as advanced climate models, that cannot be done on today’s systems.

Pure mathematicians will use these systems as well to intuit patterns, compute integrals, search the space of mathematical identities, and solve intricate symbolic equations. If, as one of us discussed in a recent Conversation article, such facilities can be combined with machine intelligence, such as a variation of the hardware and software that enabled an IBM system to defeat the top human contestants in the North American TV game show Jeopardy!, we may see a qualitative advance in mathematical discovery and even theory formation.

It is not a big leap to imagine that within the next ten years tailored and massively more powerful versions of Siri (Apple’s new iPhone assistant) will be an integral part of mathematics, not to mention medicine, law and just about every other part of human life.

Some observers, such as those in the Singularity movement, are even more expansive, predicting a time just a few decades hence when technology will advance so fast that at the present time we cannot possibly conceive or predict the outcome.

Your present authors do not subscribe to such optimistic projections, but even if more conservative predictions are realised, it is clear that the digital future looks very bright indeed. We will likely look back at the present day with the same technological disdain with which we currently view the 1960s.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Jonathan Borwein (Jon), University of Newcastle and David H. Bailey, University of California, Davis

 


Mathematical models may help shed light on body clock disruptions

Researchers are using mathematical models to better understand the effects of disruptions like daylight savings time, working night shifts, jet lag or even late-night phone scrolling on the body’s circadian rhythms.

Researchers at the University of Waterloo and the University of Oxford have developed a new model to help scientists better understand the resilience of the brain’s master clock: the cluster of neurons in the brain that coordinates the body’s other internal rhythms. They also hope to suggest ways to help improve this resilience in individuals with weak or impaired circadian rhythms. The study, “Can the Clocks Tick Together Despite the Noise? Stochastic Simulations and Analysis,” appears in the SIAM Journal on Applied Dynamical Systems.

Sustained disruptions to circadian rhythm have been linked to diabetes, memory loss, and many other disorders.

“Current society is experiencing a rapid increase in demand for work outside of traditional daylight hours,” said Stéphanie Abo, a Ph.D. student in applied mathematics and the study’s lead author. “This greatly disrupts how we are exposed to light, as well as other habits such as eating and sleeping patterns.”

Humans’ circadian rhythms, or internal clocks, are the roughly 24-hour cycles many body systems follow, usually alternating between wakefulness and rest. Scientists are still working to understand the cluster of neurons known as suprachiasmatic nucleus (SCN) or master clock.

Using mathematical modeling techniques and differential equations, the team of applied mathematics researchers modeled the SCN as a macroscopic, or big-picture, system composed of a seemingly infinite number of neurons. They were especially interested in understanding the system’s couplings: the connections between neurons in the SCN that allow it to achieve a shared rhythm.
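The study’s actual stochastic model is more elaborate, but the flavor of coupled-clock models can be sketched with a generic noisy Kuramoto-type system: oscillators with roughly 24-hour intrinsic periods, nudged toward a common phase by a coupling strength K. Everything below (the Python framing, the form of the coupling, and every parameter value) is an illustrative assumption, not the paper’s model:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, noise = 200, 1.5, 0.3                  # oscillators, coupling, noise level
dt, steps = 0.01, 20000                      # time step in hours, number of steps
omega = rng.normal(2 * np.pi / 24, 0.01, N)  # intrinsic ~24-hour frequencies
theta = rng.uniform(0, 2 * np.pi, N)         # random initial phases

for _ in range(steps):
    z = np.exp(1j * theta).mean()            # mean field of the population
    pull = K * np.abs(z) * np.sin(np.angle(z) - theta)  # pull toward shared phase
    theta += (omega + pull) * dt + noise * np.sqrt(dt) * rng.normal(size=N)

# Order parameter r: near 1 means a strong shared rhythm, near 0 means none.
print(f"synchrony r = {abs(np.exp(1j * theta).mean()):.2f}")
```

Raising the noise or weakening K in a sketch like this washes out the shared rhythm, echoing the weakening of coupling described below.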

Frequent and sustained disturbances to the body’s circadian rhythms eliminated the shared rhythm, implying a weakening of the signals transmitted between SCN neurons.

Abo said they were surprised to find that “a small enough disruption can actually make the connections between neurons stronger.”

“Mathematical models allow you to manipulate body systems with specificity that cannot be easily or ethically achieved in the body or a petri dish,” Abo said. “This allows us to do research and develop good hypotheses at a lower cost.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of Waterloo

 


Math teachers hold a bias against girls when the teachers think gender equality has been achieved, says study

Math teachers who believe women no longer face discrimination tend to be biased against girls’ ability in math. This is what we found through an experiment we conducted with over 400 elementary and middle school math teachers across the United States. Our findings were published in a peer-reviewed article that appeared in April 2023 in the International Journal of STEM Education.

For our experiment, we asked teachers to evaluate a set of student solutions to math problems. The teachers didn’t know that gender- and race-specific names, such as Tanisha and Connor, had been randomly assigned to the solutions. We did this so that if they evaluated identical student work differently, it would be because of the gender- and race-specific names they saw, not the differences in student work. The idea was to see if the teachers had any unconscious biases.

After the teachers evaluated the student solutions, we asked a series of questions about their beliefs and experiences. We asked if they felt society had achieved gender equality. We asked them whether they felt anxious about doing math. We asked whether they felt students’ ability in math was fixed or could be improved. We also asked teachers to think about their own experience as math students and to report how frequently they experienced feelings of unequal treatment because of their race or gender.

We then investigated if these beliefs and experiences were related to how they evaluated the math ability of students of different genders or racial groups.

Consistent with our prior work, we found that implicit bias against girls arises in ambiguous situations—in this case, when student solutions were not completely correct.

Further, teachers who believed that U.S. society had achieved gender equality tended to rate a student’s ability higher when they saw a male student name than when they saw a female student name attached to the same student work.

Teachers’ unconscious gender biases in math classes have been documented repeatedly.

Our study identifies factors that underlie such biases; namely, that biases are stronger among teachers who believe that gender discrimination is not a problem in the United States. Understanding the relationship between teachers’ beliefs and biases can help teacher educators create effective and targeted interventions to remove such biases from classrooms.

Our findings also shed light on potential reasons that males tend to have higher confidence in math and stick with math-intensive college majors even when they’re not high performers.

One big remaining question is how to create targeted interventions to help teachers overcome such biases. Evidence suggests that unconscious biases come into play in situations where stereotypes might emerge. Further, research suggests that these unconscious biases can be suppressed only when people are aware of them and motivated to restrain them.

Since bias may take on different forms in different fields, a one-time, one-size-fits-all anti-bias training may not have a lasting effect. We think it’s worthwhile to investigate if it’s more effective to provide implicit bias training programs that are specific to the areas where bias is revealed.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Yasemin Copur-Gencturk, Ian Thacker and Joseph Cimpian, The Conversation


Generating Random Walks in Mathematics

With connections to the study of gambling, Brownian motion, fractals, and more, random walks are a favourite topic in recreational mathematics.

The diagram above (from Energy transformations during horizontal walking by F. G. Benedict and H. Murschhauser, published in 1915) suggests one method for generating walking data. Creating random walk simulations in Fathom or TinkerPlots is a little more straightforward.

First simulation – using sliders to determine a ‘base angle’

This first example lets you set up random walks where the direction chosen is based on an angle k*2pi/n for a fixed n (whose value is determined by a slider) and a random k (a random integer between 1 and n).

First, create a slider n, then create the attributes below, and finally add the data (any number is fine – start with ~500 cases). The attribute formulas were entered in TinkerPlots, but would work equally well in Fathom; a rough Python stand-in is sketched below.
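For readers without either program, here is a minimal Python sketch of the walk just described (the variable names are mine, and n plays the slider’s role):

```python
import numpy as np

rng = np.random.default_rng()
n, cases = 6, 500                        # n = the slider value; ~500 cases
k = rng.integers(1, n + 1, size=cases)   # random k between 1 and n
angles = k * 2 * np.pi / n               # each step's direction: k * 2*pi / n
x = np.concatenate(([0.0], np.cumsum(np.cos(angles))))
y = np.concatenate(([0.0], np.cumsum(np.sin(angles))))
distance = np.hypot(x, y)                # distance from the origin at each step
```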

Plots of (x,y) will show the walk, and plots of (step, distance) will show how the distance from the origin changes over the course of the walk. Different values for n provide walks with their own particular geometries.

The walks start at (0,0) and wander about the plane from there. Re-randomizing (CTRL-Y) generates new walks.

The simulation gives lots of nice pictures of random walks. You could generate statistics from these by adding measures and measure collections.

One limitation of this simulation is that it is difficult to determine exactly when the walker has returned to the start (0,0). This turns out to be an interesting question for random walks on the plane (see the Wikipedia entry for more on this). Because of the inexactness in the positions calculated using sine and cosine, the walker seems never to return to the origin. There are several ways of dealing with this, but one is to design a simpler simulation that uses exact values – one that sticks to lattice points (x, y), where x and y are both integers.

Second simulation – sticking to Integer lattice points

This second simulation can be thought of as an ‘urban walker’ where all paths must follow a strictly laid out grid, like some downtown streets. The exactness of the positions means that we can detect with confidence when the walker has crossed back over their starting point. For this simulation, no slider is required – just enter the attributes and add cases (a Python stand-in follows below).
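Again as a rough Python stand-in, with a crossed_start flag that is exact because every coordinate is an integer:

```python
import numpy as np

rng = np.random.default_rng()
cases = 500
step = rng.integers(0, 4, size=cases)      # 0..3 picks north, south, east or west
dx = np.array([0, 0, 1, -1])[step]
dy = np.array([1, -1, 0, 0])[step]
x = np.concatenate(([0], np.cumsum(dx)))
y = np.concatenate(([0], np.cumsum(dy)))
crossed_start = (x == 0) & (y == 0)        # True wherever the walker is back home
print("returns to the origin:", int(crossed_start[1:].sum()))
```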

Using the crossed_start attribute as a filter or to gather measures, you will find that walks often quickly pass over the starting point. You will also find that as you increase the number of cases, the straight ‘etch-a-sketch’ lines of the urban walk take on very interesting fractal-like contours.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*

 


New Research Disproves a Long-Held ‘Cognitive Illusion’ That Hockey Goaltenders Improve Under Pressure

The good news is that—statistically speaking—there is reason to believe Edmonton Oilers goalie Stuart Skinner will improve against the Florida Panthers in the Stanley Cup final.

The bad news is it may not be enough to make a difference.

That’s according to a new study, “Do NHL goalies get hot in the playoffs?” by Likang Ding, a doctoral student studying operations and information systems in the Alberta School of Business. The study is published on the arXiv preprint server.

Ding’s statistical analysis—in the final stage of review for publication—disproves the long-held and prevailing “hot hand” theory that if a goalie is performing well, he’ll continue to perform as well or better as pressure intensifies.

The term “hot hand” derives from basketball, where it is believed a shooter is more likely to score if their previous attempts were successful.

“Our main finding is the nonexistence of the hot-hand phenomenon (for hockey goaltenders),” says Ding. “That is, no positive influence of recent save performance on the save probability for the next shot.”

Instead, Ding and co-authors Ivor Cribben, Armann Ingolfsson and Monica Tran found that, by a small margin, “better past performance may result in a worse future performance.”

That could mean Panthers goaltender Sergei Bobrovsky is due for a slight slump, given his relatively hot streak of late. But according to Ding, that decline may amount to no more than about 1%—certainly nothing to count on.

The reverse is also true, says Ding. If a goalie is underperforming, as Skinner has on occasion during the playoffs, statistics would forecast a slight uptick in his save percentage.

The explanation in that case might be the “motivation effect”; when a goaltender’s recent save performance has been below his average, his effort and focus increase, “causing the next-shot save probability to be higher.”

Here Ding quotes Hall of Fame goaltender Ken Dryden, who once said, “If a shot beats you, make sure you stop the next one, even if it is harder to stop than the one before.”

Though it wasn’t part of his current study, Ding says he reviewed Skinner’s stats before the finals and found a worse-than-average performance, “so I’m hoping he will come back eventually.”

Ding wanted to take a closer look at the hot hand theory because it is crucial in understanding coaches’ decisions about which goaltender to start in a given game. It could mean the second goalie deserves a chance to enter the fray, get used to the pace and stay fresh, even if it might seem risky.

Ding’s data set includes information about all shots on goal in the NHL playoffs from 2008 to 2016, amounting to 48,431 shots faced by 93 goaltenders over 795 games and nine playoff seasons.

The hot hand theory has been around for at least as long as professional sports and is often applied to a range of human endeavour to support the notion that “success breeds success”—an appealing, almost intuitive assumption.

And yet, a series of studies in the 1980s focused on basketball shooting percentages showed there was no statistical evidence to support the theory, says Ding, attributing it instead to a psychological tendency to see patterns in random data.

The hot hand theory remained controversial after the statistical methods used in those studies were later shown to be biased, says Ding. But even once the bias was corrected, the theory has since been largely disproven.

Nobel Prize-winning cognitive scientist Daniel Kahneman once called the phenomenon “a massive and widespread cognitive illusion.” Ding’s study is one more confirming the consensus that the hot hand is no more than wishful thinking.

For more insights like this, visit our website at www.international-maths-challenge.com.

Credit of the article given to Geoff McMaster, University of Alberta


Farey definition, property, and algorithm

Here is an outline of how you can go about generating this data. The definition and properties of Farey sequences here are from Hardy & Wright’s An Introduction to the Theory of Numbers (which I am slowly working my way through).
The Farey sequence of order n is the increasing sequence of irreducible fractions between 0 and 1 whose denominator does not exceed n. So, the elements of the sequence are of the form h/k, where 0 ≤ h ≤ k ≤ n and h and k are relatively prime.

The main theorem about Farey sequences provides them with their characteristic property (Theorem 29 in TofN): if h/k, h″/k″, and h′/k′ are successive terms in a Farey sequence, then h″/k″ is the mediant of h/k and h′/k′. If h/k and h′/k′ are two reduced fractions, their mediant is given by (h+h′)/(k+k′).

It’s nice when a theorem tells you how to implement an algorithm. This property tells us that Farey sequences can be built iteratively or recursively, beginning with F1 = {0/1, 1/1}. The algorithm to do this is a nice one – it’s probably not often used as a textbook exercise in recursion because it helps to have some data structure or class to represent the fractions, and a way of telling if integers are relatively prime (you can use the Euclidean algorithm to implement a gcd() function).

Here is an outline of how to calculate the next Farey sequence, given that you have one already.

0) input a Farey sequence oldSequence (initial sequence will be {0/1, 1/1})

1) create a new empty sequence newSequence

2) iterate over oldSequence and find its order by finding the largest denominator that occurs; store this in n

3) set n= n+1

4) iterate over oldSequence, looking at each pair of adjacent elements (left and right)

4.1) add left to newSequence
4.2) if the denominators of left and right sum to n, form their mediant
4.2.1) if the numerator and denominator of the mediant are relatively prime, add mediant to newSequence

5) add the last element of oldSequence to newSequence

Note that you only need to add in new elements where the denominators of existing adjacent elements sum to the n value – when this happens you form the mediant of the two adjacent elements. Furthermore, the mediant is only added if the fraction can’t be reduced.

Below is a small Python sketch corresponding to the above – the same logic ports directly to Java (or another language) using an ArrayList and a Fraction class that has fields num (numerator) and den (denominator). Python’s built-in Fraction type keeps the arithmetic exact.
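```python
from math import gcd
from fractions import Fraction

def next_farey(old_sequence):
    """Given a Farey sequence as an increasing list of Fractions,
    return the Farey sequence of the next order (steps 0-5 above)."""
    n = max(f.denominator for f in old_sequence) + 1          # steps 2 and 3
    new_sequence = []                                          # step 1
    for left, right in zip(old_sequence, old_sequence[1:]):    # step 4
        new_sequence.append(left)                              # step 4.1
        if left.denominator + right.denominator == n:          # step 4.2
            num = left.numerator + right.numerator             # mediant numerator
            if gcd(num, n) == 1:                               # step 4.2.1
                new_sequence.append(Fraction(num, n))
    new_sequence.append(old_sequence[-1])                      # step 5
    return new_sequence

seq = [Fraction(0, 1), Fraction(1, 1)]  # F1
for _ in range(4):
    seq = next_farey(seq)
print(seq)  # F5
```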

Here are the first five Farey sequences that you get from the algorithm:

F1 = {0/1, 1/1}
F2 = {0/1, 1/2, 1/1}
F3 = {0/1, 1/3, 1/2, 2/3, 1/1}
F4 = {0/1, 1/4, 1/3, 1/2, 2/3, 3/4, 1/1}
F5 = {0/1, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1/1}

The image at the top of the post was generated by implementing the algorithm in Processing and using the result to draw the associated Ford circles – you could do something similar in regular Java (or another language). If you draw the Ford circles associated with the sequence, the circle for a fraction “frac” will be centered at (x, y) and have a radius r, where

x = (scale)*frac.num/frac.den

y = r

r = (scale)/(2*(frac.den)^2)

where “scale” is some scaling factor (probably in the 100’s) that increases the size of the image.

Here I decided to draw two copies of each circle, one on top of the other.

 

The fact that it contains only fractions between 0 and 1, and that it contains all reduced fractions with denominators up to n, connects Farey sequences to Euler’s totient function. Euler’s totient function is an arithmetic function that, for a given k, counts the integers less than k that are relatively prime to it. For k > 1, this is exactly the number of fractions with denominator k that appear in the Farey sequence of order k (and in every higher-order Farey sequence).

The Farey algorithm, how to draw Ford circles, and the connection to Euler’s totient function are described nicely in J.H. Conway and R.K. Guy’s The Book of Numbers – a great companion to a book like TofN.

 

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*

 


What game theory can teach us about standing up to bullies

In a time of income inequality and ruthless politics, people with outsized power or an unrelenting willingness to browbeat others often seem to come out ahead.

New research from Dartmouth, however, shows that being uncooperative can help people on the weaker side of the power dynamic achieve a more equal outcome—and even inflict some loss on their abusive counterpart.

The findings provide a tool based in game theory—the field of mathematics focused on optimizing competitive strategies—that could be applied to help equalize the balance of power in labor negotiations or international relations, and could even be used to integrate cooperation into interconnected artificial intelligence systems such as driverless cars.

Published in PNAS Nexus, the study takes a fresh look at what are known in game theory as “zero-determinant strategies,” developed by renowned scientists William Press, now at the University of Texas at Austin, and the late Freeman Dyson of the Institute for Advanced Study in Princeton, New Jersey.

Zero-determinant strategies dictate that “extortionists” control situations to their advantage by becoming less and less cooperative—though just cooperative enough to keep the other party engaged—and by never being the first to concede when there’s a stalemate. Theoretically, they will always outperform their opponent by demanding and receiving a larger share of what’s at stake.
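To make this concrete, here is a minimal Python sketch of one classic Press-Dyson extortionate strategy in the iterated prisoner’s dilemma. The payoff values and cooperation probabilities below are the standard textbook illustration (extortion factor 3), not numbers from the Dartmouth study; against a fully cooperative opponent, the extortioner’s surplus over the punishment payoff comes out to three times the opponent’s:

```python
import random

# Conventional prisoner's dilemma payoffs (an assumption): T, R, P, S = 5, 3, 1, 0.
payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
# Press-Dyson extortion with factor 3: probability of cooperating after each
# previous-round outcome, keyed by (my last move, their last move).
p_coop = {("C", "C"): 11/13, ("C", "D"): 1/2, ("D", "C"): 7/26, ("D", "D"): 0.0}

def play(rounds=200_000, seed=1):
    rng = random.Random(seed)
    sx = sy = 0
    prev = ("C", "C")
    for _ in range(rounds):
        x = "C" if rng.random() < p_coop[prev] else "D"
        y = "C"  # a compliant opponent; an "unbending" one would refuse at times
        a, b = payoff[(x, y)]
        sx, sy = sx + a, sy + b
        prev = (x, y)
    return sx / rounds, sy / rounds

sx, sy = play()
print(f"extortioner {sx:.2f}, cooperator {sy:.2f}, ratio {(sx - 1) / (sy - 1):.2f}")
```

Chen and Fu’s insight is that an unbending opponent’s occasional refusals change those incentives, until offering a fair split becomes the extortioner’s best response.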

But the Dartmouth paper uses mathematical models of interactions to uncover an “Achilles heel” to these seemingly uncrackable scenarios, said senior author Feng Fu, an associate professor of mathematics. Fu and first author Xingru Chen, who received her Ph.D. in mathematics from Dartmouth in 2021, discovered an “unbending strategy” in which resistance to being steamrolled not only causes an extortionist to ultimately lose more than their opponent but can result in a more equal outcome as the overbearing party compromises in a scramble to get the best payoff.

“Unbending players who choose not to be extorted can resist by refusing to fully cooperate. They also give up part of their own payoff, but the extortioner loses even more,” said Chen, who is now an assistant professor at the Beijing University of Posts and Telecommunications.

“Our work shows that when an extortioner is faced with an unbending player, their best response is to offer a fair split, thereby guaranteeing an equal payoff for both parties,” she said. “In other words, fairness and cooperation can be cultivated and enforced by unbending players.”

These scenarios frequently play out in the real world, Fu said. Labor relations provide a poignant model. A large corporation can strong-arm suppliers and producers such as farmworkers to accept lower prices for their effort by threatening to replace them and cut them off from a lucrative market. But a strike or protest can turn the balance of power back toward the workers’ favour and result in more fairness and cooperation, such as when a labor union wins some concessions from an employer.

While the power dynamic in these scenarios is never equal, Fu said, his and Chen’s work shows that unbending players can reap benefits by defecting from time to time and sabotaging what extortioners are truly after—the highest payoff for themselves.

“The practical insight from our work is for weaker parties to be unbending and resist being the first to compromise, thereby transforming the interaction into an ultimatum game in which extortioners are incentivized to be fairer and more cooperative to avoid ‘lose-lose’ situations,” Fu said.

“Consider the dynamics of power between dominant entities such as Donald Trump and the lack of unbending from the Republican Party, or, on the other hand, the military and political resistance to Russia’s invasion of Ukraine that has helped counteract incredible asymmetry,” he said. “These results can be applied to real-world situations, from social equity and fair pay to developing systems that promote cooperation among AI agents, such as autonomous driving.”

Chen and Fu’s paper expands the theoretical understanding of zero-determinant interactions while also outlining how the outsized power of extortioners can be checked, said mathematician Christian Hilbe, leader of the Dynamics of Social Behaviour research group at the Max Planck Institute for Evolutionary Biology in Germany.

“Among the technical contributions, they stress that even extortioners can be outperformed in some games. I don’t think that has been fully appreciated by the community before,” said Hilbe, who was not involved in the study but is familiar with it. “Among the conceptual insights, I like the idea of unbending strategies, behaviours that encourage an extortionate player to eventually settle at a fairer outcome.”

Behavioural research involving human participants has shown that extortioners may constitute a significant portion of our everyday interactions, said Hilbe, who published a 2016 paper in the journal PLOS ONE reporting just that. He also co-authored a 2014 study in Nature Communications that found people playing against a computerized opponent strongly resisted when the computer engaged in threatening conduct, even when it reduced their own payout.

“The empirical evidence to date suggests that people do engage in these extortionate behaviours, especially in asymmetric situations, and that the extorted party often tries to resist it, which is then costly to both parties,” Hilbe said.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Morgan Kelly, Dartmouth College