What does a physicist see when looking at the NFT market?

The market for collectible digital assets, or non-fungible tokens, is an intriguing example of a complex system with non-trivial dynamics and its own, original logic of financial transactions. At the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow, its global statistical features have now been analysed in detail.

In the past, the value of money was determined by the amount of precious metals it contained. Today, we attribute it to certain sequences of digital zeros and ones, simply agreeing that they correspond to coins or banknotes. Non-fungible tokens (NFTs) operate by a similar convention: their owners assign a measurable value to certain sets of ones and zeros, treating them as virtual equivalents of assets such as works of art or properties.

NFTs are closely linked to the cryptocurrency markets but change their holders in a different way to, for example, bitcoins. While each bitcoin is exactly the same and has the same value, each NFT is a unique entity with an individually determined value, integrally linked to information about its current owner.

“Trading in digital assets treated in this way is not guided by the logic of typical currency markets, but by the logic of markets trading in objects of a collector’s nature, such as paintings by famous painters,” explains Prof. Stanislaw Drozdz (IFJ PAN, Cracow University of Technology).

“We have already become familiar with the statistical characteristics of cryptocurrency markets through previous analyses. The question of the characteristics of a new, very young and at the same time fundamentally different market, also built on blockchain technology, therefore arose very naturally.”

The market for NFTs was initiated in 2017 with the blockchain created for the Ethereum cryptocurrency. The popularization of the idea and the rapid growth of trading took place during the pandemic. At that time, a record-breaking transaction was made at an auction organized by the famous British auction house Christie’s, when the art token Everydays: The First 5000 Days, created by Mike Winkelmann, was sold for $69 million.

Tokens are generally grouped into collections of different sizes, and the less frequently certain characteristics of a token occur in a collection, the higher its value tends to be. Statisticians from IFJ PAN examined publicly available data from the CryptoSlam (cryptoslam.io) and Magic Eden (magiceden.io) portals on five popular collections running on the Solana cryptocurrency blockchain.

These were sets of images and animations known as Blocksmith Labs Smyths, Famous Fox Federation, Lifinity Flares, Okay Bears, and Solana Monkey Business, each containing several thousand tokens with an average transaction value of close to a thousand dollars.

“We focused on analysing changes in the financial parameters of a collection such as its capitalization, minimum price, the number of transactions executed on individual tokens per unit of time (hour), the time interval between successive transactions, or the value of transaction volume. The data covered the period from the launch date of a particular collection up to and including August 2023,” says Dr. Marcin Watorek (PK).

For stabilized financial markets, the presence of certain power laws is characteristic; they signal that large events are more likely than a typical Gaussian probability distribution would suggest. Such laws already appear to operate in the fluctuations of NFT market parameters, for example in the distribution of times between individual trades or in volume fluctuations.
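
To see what such heavy tails mean in practice, here is a small illustrative sketch (not from the study; it uses Student-t samples as a generic stand-in for a power-law-tailed distribution): extreme events occur orders of magnitude more often than a Gaussian of the same scale would predict.

import numpy as np

# Illustrative stand-in data (not from the study): Student-t(3) samples have a
# power-law tail, Gaussian samples do not. Both are rescaled to unit standard
# deviation so the comparison is fair.
rng = np.random.default_rng(1)
n = 1_000_000
gauss = rng.normal(size=n)
heavy = rng.standard_t(df=3, size=n) / np.sqrt(3.0)

threshold = 4.0  # count "4-sigma" events
print((np.abs(gauss) > threshold).mean())  # roughly 6e-5
print((np.abs(heavy) > threshold).mean())  # orders of magnitude more frequent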

Among the statistical parameters analysed by the researchers from IFJ PAN was the Hurst exponent, which describes a system’s reluctance to change its trend. The value of this exponent falls below 0.5 when the system tends to oscillate: a rise increases the probability of a subsequent fall (and vice versa).

In contrast, values above 0.5 indicate the existence of a certain long-term memory: after a rise, another rise is more likely, and after a fall, another fall. For the token collections studied, the values of the Hurst exponent were between 0.6 and 0.8, a level characteristic of established, reputable markets. In practice, this property means that the trading prices of tokens from a given collection change in a similar manner in many cases.
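
For readers who want to experiment, here is a rough sketch of one simple way to estimate a Hurst exponent from a price-like series (an illustrative lag-scaling method, not the detrended multifractal analysis used in the study):

import numpy as np

def hurst_exponent(series, max_lag=50):
    """Estimate H from how the spread of increments grows with the lag:
    for a persistent series (H > 0.5) the standard deviation of
    series[t + lag] - series[t] grows faster than sqrt(lag)."""
    lags = np.arange(2, max_lag)
    tau = [np.std(series[lag:] - series[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(tau), 1)
    return slope

rng = np.random.default_rng(0)
random_walk = np.cumsum(rng.normal(size=10_000))
print(hurst_exponent(random_walk))  # close to 0.5: no long-term memory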

The existence of a certain long-term memory in the system, reaching up to two months in the NFT market, may indicate the presence of multifractality. When we start to magnify a fragment of an ordinary fractal, sooner or later we see a structure resembling the initial object, and always at the same magnification. In the case of multifractals, by contrast, different fragments have to be magnified by different factors before the self-similarity appears.

It is precisely this non-linear nature of the self-similarity that has also been observed in the digital collectors’ market, among other things in minimum prices, in the numbers of transactions per unit of time, and in the intervals between transactions. However, this multifractality was not fully developed and was most clearly revealed in those situations where the greatest fluctuations were observed in the system under study.

“Our research also shows that the price of the cryptocurrency for which collections are sold directly affects the volume they generate. This is an important observation, as cryptocurrency markets are already known to show many signs of statistical maturity,” notes Pawel Szydlo, first author of the article in Chaos: An Interdisciplinary Journal of Nonlinear Science.

The analyses carried out at IFJ PAN lead to the conclusion that, despite its young age and slightly different trading mechanisms, the NFT market is beginning to function in a manner that is statistically similar to established financial markets. This fact seems to indicate the existence of a kind of universalism among financial markets, even of a significantly different nature. However, its closer understanding will require further research.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Polish Academy of Sciences


On Constructing Functions, Part 5

Example 5

A sequence of functions $\{f_n:\mathbb{R}\to\mathbb{R}\}$ which converges to 0 pointwise but does not converge to 0 in $L^1$. Take $f_n = \chi_{(n,n+1)}$, the characteristic function of the interval $(n,n+1)$.

This works because: The sequence tends to 0 pointwise since for a fixed $x\in\mathbb{R}$, you can always find $N\in\mathbb{N}$ so that $f_n(x)=0$ for all $n$ bigger than $N$. (Just choose $N>x$!)

The details: Let $x\in\mathbb{R}$, fix $\epsilon>0$, and choose $N\in\mathbb{N}$ so that $N>x$. Then whenever $n>N$, we have $|f_n(x)-0|=0<\epsilon$.

Of course, $f_n\not\to 0$ in $L^1$ since
$$\int_{\mathbb{R}}|f_n| = \int_{(n,n+1)} f_n = 1\cdot\lambda\big((n,n+1)\big) = 1.$$
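
A quick numerical illustration (not part of the original post), using $f_n=\chi_{(n,n+1)}$ as above: at any fixed point the values are eventually 0, while the integral stays equal to 1.

import numpy as np

def f(n, x):
    # indicator function of the open interval (n, n+1)
    return np.where((x > n) & (x < n + 1), 1.0, 0.0)

x0 = 2.7
print([float(f(n, x0)) for n in range(1, 6)])   # [0.0, 1.0, 0.0, 0.0, 0.0]

grid = np.linspace(0.0, 100.0, 1_000_001)
dx = grid[1] - grid[0]
for n in (1, 10, 50):
    print(float((f(n, grid) * dx).sum()))       # approximately 1 for every n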

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*


What math tells us about social dilemmas

Human coexistence depends on cooperation. Individuals have different motivations and reasons to collaborate, resulting in social dilemmas, such as the well-known prisoner’s dilemma. Scientists from the Chatterjee group at the Institute of Science and Technology Austria (ISTA) now present a new mathematical principle that helps to understand the cooperation of individuals with different characteristics. The results, published in PNAS, can be applied to economics or behavioural studies.

A group of neighbours shares a driveway. Following a heavy snowstorm, the entire driveway is covered in snow, requiring clearance for daily activities. The neighbours have to collaborate. If they all put on their down jackets, grab their snow shovels, and start digging, the road will be free in a very short amount of time. If only one or a few of them take the initiative, the task becomes more time-consuming and labor-intensive. Assuming nobody does it, the driveway will stay covered in snow. How can the neighbours overcome this dilemma and cooperate in their shared interests?

Scientists in the Chatterjee group at the Institute of Science and Technology Austria (ISTA) deal with cooperative questions like that on a regular basis. They use game theory to lay the mathematical foundation for decision-making in such social dilemmas.

The group’s latest publication delves into the interactions between different types of individuals in a public goods game. Their new model, published in PNAS, explores how resources should be allocated for the best overall well-being and how cooperation can be maintained.

The game of public goods

For decades, the public goods game has been a proven method to model social dilemmas. In this setting, participants decide how much of their own resources they wish to contribute for the benefit of the entire group. Most existing studies considered homogeneous individuals, assuming that they do not differ in their motivations and other characteristics.

“In the real world, that’s not always the case,” says Krishnendu Chatterjee. To account for this, Valentin Hübner, a Ph.D. student, Christian Hilbe, and Maria Kleshina, both former members of the Chatterjee group, started modeling settings with diverse individuals.

A recent analysis of social dilemmas among unequals, published in 2019, marked the foundation for their work, which now presents a more general model, even allowing multi-player interaction.

“The public good in our game can be anything, such as environmental protection or combating climate change, to which everybody can contribute,” Hübner explains. The players have different levels of skills. In public goods games, skills typically refer to productivity.

“It’s the ability to contribute to a particular task,” Hübner continues. Resources, technically called endowment or wealth, on the other hand, refer to the actual things that participants contribute to the common good.

In the snowy driveway scenario, the neighbours vary significantly in their available resources and in their abilities to use them. Solving the problem requires them to cooperate. But what role does their inequality play in such a dilemma?

The two sides of inequality

Hübner’s new model provides answers to this question. Intuitively, it proposes that for diverse individuals to sustain cooperation, a more equal distribution of resources is necessary. Surprisingly, more equality does not lead to maximum general welfare. To reach this, the resources should be allocated to more skilled individuals, resulting in a slightly uneven distribution.

“Efficiency benefits from unequal endowment, while robustness always benefits from equal endowment,” says Hübner. Put simply, for accomplishing a task, resources should be distributed almost evenly. Yet, if efficiency is the goal, resources should be in the hands of those more willing to participate—but only to a certain extent.
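
A minimal sketch of the intuition (my own toy calculation, not the authors' model; the synergy factor, productivities and allocations are made-up numbers): with players of different productivity, shifting endowment toward the more productive player raises total output, while an equal split is the more even-handed, robust arrangement.

def group_output(endowments, productivities, synergy=1.8):
    """Total public good produced when everyone contributes fully:
    each contribution is scaled by the player's productivity, summed,
    and multiplied by a synergy factor."""
    return synergy * sum(e * p for e, p in zip(endowments, productivities))

productivities = [1.0, 2.0]        # player 2 is twice as "skilled"
equal_split = [0.5, 0.5]           # one unit of resources, shared equally
skewed_split = [0.2, 0.8]          # more resources for the skilled player

print(group_output(equal_split, productivities))    # 2.7
print(group_output(skewed_split, productivities))   # 3.24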

What is more important—cooperation efficiency or stability? The scientists’ further simulations of learning processes suggest that individuals balance the trade-off between these two things. Whether this is also the case in the real world remains to be seen. Numerous interpersonal nuances also contribute to these dynamics, including aspects like reciprocity, morality, and ethical issues, among others.

Hübner’s model solely focuses on cooperation from a mathematical standpoint. Yet, due to its generality, it can be applied to any social dilemma with diverse individuals, like climate change, for instance. Testing the model in the real world and applying it to society are very interesting experimental directions.

“I’m quite sure that there will be behavioural experiments benefiting from our work in the future,” says Chatterjee. The study could potentially also be interesting for economics, where the new model’s principles can help to better inform economic systems and policy recommendations.

 

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Institute of Science and Technology Austria

 


On Constructing Functions, Part 4

This post is the fourth example in an ongoing list of various sequences of functions which converge to different things in different ways.

Also in this series:

Example 1: converges almost everywhere but not in $L^1$
Example 2: converges uniformly but not in $L^1$
Example 3: converges in $L^1$ but not uniformly
Example 5: converges pointwise but not in $L^1$
Example 6: converges in $L^1$ but does not converge anywhere

Example 4

A sequence of (Lebesgue) integrable functions $f_n:\mathbb{R}\to[0,\infty)$ so that $\{f_n\}$ converges to $f:\mathbb{R}\to[0,\infty)$ uniformly, yet $f$ is not (Lebesgue) integrable.

Our first observation is that “$f$ is not (Lebesgue) integrable” can mean one of two things: either $f$ is not measurable or $\int f=\infty$. The latter tends to be easier to think about, so we’ll do just that. Now what function do you know of such that when you “sum it up” you get infinity? How about something that behaves like the divergent harmonic series? Say, its continuous cousin $f(x)=\frac{1}{x}$? That should work since we know
$$\int_1^{\infty}\frac{1}{x}\,dx=\infty.$$
Now we need to construct a sequence of integrable functions $\{f_n\}$ whose uniform limit is $\frac{1}{x}$ (on $[1,\infty)$). Let’s think simple: think of drawing the graph of $f(x)$ one “integral piece” at a time. In other words, define
$$f_n(x)=\frac{1}{x}\,\chi_{[1,n]}(x), \qquad\text{so that}\qquad f(x)=\frac{1}{x}\,\chi_{[1,\infty)}(x).$$

This works because: It makes sense to define the $f_n$ as $f(x)=\frac{1}{x}$ “chunk by chunk” since this way the convergence is guaranteed to be uniform. Why? Because how far out we need to go in the sequence so that the difference $f(x)-f_n(x)$ is less than $\epsilon$ only depends on how small (or large) $\epsilon$ is. The location of $x$ doesn’t matter!

Also notice we have to define $f_n(x)=0$ for all $x<1$ to avoid the trouble spot $\ln(0)$ in the integral $\int f_n$. This also ensures that the area under each $f_n$ is finite, guaranteeing integrability.

The details: Each $f_n$ is integrable since for a fixed $n$,
$$\int_{\mathbb{R}} f_n=\int_1^n\frac{1}{x}\,dx=\ln(n).$$
To see $f_n\to f$ uniformly, let $\epsilon>0$ and choose $N$ so that $N>1/\epsilon$. Let $x\in\mathbb{R}$. If $x\le 1$, any $n$ will do, so suppose $x>1$ and let $n>N$. If $1<x\le n$, then we have $|f_n(x)-f(x)|=0<\epsilon$. And if $x>n$, then
$$\left|\frac{1}{x}\chi_{[1,\infty)}(x)-\frac{1}{x}\chi_{[1,n]}(x)\right|=\left|\frac{1}{x}-0\right|=\frac{1}{x}<\frac{1}{n}<\frac{1}{N}<\epsilon.$$
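
A quick numerical illustration (not part of the original post), using $f_n=\frac{1}{x}\chi_{[1,n]}$: the sup-norm distance to $f$ is about $\frac{1}{n}$, while $\int f_n=\ln(n)$ grows without bound.

import numpy as np

def f_n(n, x):
    return np.where((x >= 1) & (x <= n), 1.0 / np.maximum(x, 1.0), 0.0)

def f(x):
    return np.where(x >= 1, 1.0 / np.maximum(x, 1.0), 0.0)

x = np.linspace(0.0, 2_000.0, 2_000_001)
dx = x[1] - x[0]
for n in (10, 100, 1000):
    sup_dist = float(np.max(np.abs(f(x) - f_n(n, x))))   # about 1/n
    integral = float((f_n(n, x) * dx).sum())              # about ln(n)
    print(n, sup_dist, integral, np.log(n))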

‍For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*

 


On Constructing Functions, Part 3

This post is the third example in an ongoing list of various sequences of functions which converge to different things in different ways.

Example 3

A sequence of continuous functions $\{f_n:\mathbb{R}\to[0,\infty)\}$ which converges to 0 in the $L^1$ norm, but does not converge to 0 uniformly.

There are four criteria we want our functions to satisfy:

  1. First off is the uniform convergence. Observe that “$\{f_n\}$ does not converge to 0 uniformly” can mean one of three things:
  • converges to 0 pointwise only
  • converges to something other than 0 (pointwise or uniformly)
  • does not converge at all

So it’s up to you to decide which one feels more comfortable to work with. Here we’ll choose the second option.

  2. Next, “$\{f_n\}$ converges to 0 in the $L^1$ norm” means that we want to choose our sequence so that the area under the curve of the $f_n$ gets smaller and smaller as $n\to\infty$.
  3. Further, we also want the $f_n$ to be non-negative (the image of each $f_n$ must lie in $[0,\infty)$); notice this allows us to remove the absolute value sign in the $L^1$ norm: $\int|f_n|\Rightarrow\int f_n$.
  4. Lastly, the functions must be continuous.

A slick* but very simple solution is a sequence of triangles of decreasing area, each with height 1; concretely, take $f_n(x)=\max(0,\,1-n|x|)$.

This works because: At $x=0$, $f_n(x)=1$ for all $n$, so there’s no way it can converge to zero (much less uniformly). In fact we have $f_n\to f$ pointwise where
$$f(x)=\begin{cases}1, & \text{if } x=0,\\ 0, & \text{otherwise.}\end{cases}$$
The area of each triangle is $\frac{1}{n}$, which clearly goes to zero for $n$ large; it’s also clear visually that the area is getting smaller. This guarantees $f_n\to 0$ in the $L^1$ norm. Further, each $f_n$ is non-negative since we’ve defined it to equal zero as soon as the edges of the triangle reach the $x$-axis. And lastly, each $f_n$ is continuous: it is piecewise linear and the pieces agree where they meet.

The details: Let $\epsilon>0$ and $x\in\mathbb{R}$. If $x=0$, then $f_n(x)=1$ for all $n$, and so $f_n(x)\to 1$. Otherwise $x>0$ or $x<0$. If $x>1$, then $f_n(x)=0$ for all $n$. If instead $x\in(0,1]$, choose $N>\frac{1}{x}$. Then whenever $n>N$ we have $nx>1$, so $f_n(x)=0<\epsilon$. The case when $x<0$ follows by a similar argument.

Lastly, $f_n\to 0$ in the $L^1$ norm since, as we mentioned, the areas decrease to 0. Explicitly:
$$\int_{\mathbb{R}}|f_n|=\int_{-1/n}^{0}(1+nx)\,dx+\int_{0}^{1/n}(1-nx)\,dx=\frac{1}{2n}+\frac{1}{2n}=\frac{1}{n}\to 0.$$
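
A quick numerical illustration (not part of the original post), using the triangle functions $f_n(x)=\max(0,\,1-n|x|)$: the peak height stays at 1 (so no uniform convergence to 0), while the area $\frac{1}{n}$ shrinks to 0.

import numpy as np

def f_n(n, x):
    return np.maximum(0.0, 1.0 - n * np.abs(x))

x = np.linspace(-2.0, 2.0, 400_001)
dx = x[1] - x[0]
for n in (1, 10, 100):
    # peak height at x = 0 is always 1, total area is roughly 1/n
    print(n, float(f_n(n, 0.0)), float((f_n(n, x) * dx).sum()))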

‍*I can brag because this particular example came from a friend. My own attempt at a solution was not nearly as intuitive.

Constructing the Tensor Product of Modules

The Basic Idea

Today we talk tensor products. Specifically this post covers the construction of the tensor product between two modules over a ring. But before jumping in, I think now’s a good time to ask, “What are tensor products good for?” Here’s a simple example where such a question might arise:

Suppose you have a vector space $V$ over a field $F$. For concreteness, let’s consider the case when $V$ is the set of all $2\times 2$ matrices with entries in $\mathbb{R}$ and let $F=\mathbb{R}$. In this case we know what “$F$-scalar multiplication” means: if $M\in V$ is a matrix and $c\in\mathbb{R}$, then the new matrix $cM$ makes perfect sense. But what if we want to multiply $M$ by complex scalars too? How can we make sense of something like $(3+4i)M$? That’s precisely what the tensor product is for! We need to create a set of elements of the form
$$(\text{complex number}) \text{ “times” } (\text{matrix})$$
so that the mathematics still makes sense. With a little massaging, this set will turn out to be $\mathbb{C}\otimes_{\mathbb{R}} V$.

So in general, if $F$ is an arbitrary field and $V$ an $F$-vector space, the tensor product answers the question “How can I define scalar multiplication by some larger field which contains $F$?” And of course this holds if we replace the word “field” by “ring” and consider the same scenario with modules.

Now this isn’t the only thing tensor products are good for (far from it!), but I think it’s the most intuitive one since it is readily seen from the definition (which is given below).

So with this motivation in mind, let’s go!

From English to Math

Let $R$ be a ring with 1, let $M$ be a right $R$-module and $N$ a left $R$-module, and suppose $A$ is any abelian group. Our goal is to create an abelian group $M\otimes_R N$, called the tensor product of $M$ and $N$, together with an $R$-balanced map $i:M\times N\to M\otimes_R N$, such that for any $R$-balanced map $\varphi:M\times N\to A$ there is a unique abelian group homomorphism $\Phi:M\otimes_R N\to A$ with $\varphi=\Phi\circ i$, i.e. so that the corresponding triangle commutes.

Notice that the statement above has the same flavor as the universal mapping property of free groups!

Definition: Let $X$ be a set. A group $F$ is said to be a free group on $X$ if there is a function $i:X\to F$ such that for any group $G$ and any set map $\varphi:X\to G$, there exists a unique group homomorphism $\Phi:F\to G$ such that $\varphi=\Phi\circ i$.

The only real difference is that in the free-group definition the maps $i$ and $\varphi$ are just set maps, while here we want ours to be $R$-balanced:

Definition: Let $R$ be a ring with 1. Let $M$ be a right $R$-module, $N$ a left $R$-module, and $A$ an abelian group. A map $\varphi:M\times N\to A$ is called $R$-balanced if for all $m,m_1,m_2\in M$, all $n,n_1,n_2\in N$ and all $r\in R$,
$$\varphi(m_1+m_2,n)=\varphi(m_1,n)+\varphi(m_2,n),$$
$$\varphi(m,n_1+n_2)=\varphi(m,n_1)+\varphi(m,n_2),$$
$$\varphi(mr,n)=\varphi(m,rn).$$
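
A quick concrete instance (not in the original post; assume $R$ is commutative and take $M=N=A=R$): ring multiplication $\varphi(m,n)=mn$ is $R$-balanced, since
$$\varphi(m_1+m_2,n)=(m_1+m_2)n=m_1n+m_2n,\qquad \varphi(m,n_1+n_2)=mn_1+mn_2,\qquad \varphi(mr,n)=(mr)n=m(rn)=\varphi(m,rn).$$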

So how do we actually build such a group? By “replacing” a free group $F$ on $M\times N$ by a certain quotient group $F/H$! (We’ll define $H$ precisely below.)
These observations give us a road map to construct the tensor product. And so we begin:

‍Step 1

Let $F$ be a free abelian group generated by $M\times N$ and let $A$ be an abelian group. Then by definition (of free groups), if $\varphi:M\times N\to A$ is any set map and $M\times N\hookrightarrow F$ is the inclusion, then there is a unique abelian group homomorphism $\Phi:F\to A$ that agrees with $\varphi$ on $M\times N$.

Step 2

The problem is that the inclusion map $M\times N\hookrightarrow F$ is not $R$-balanced! To fix this, we must “modify” the target space $F$ by replacing it with the quotient $F/H$, where $H\le F$ is the subgroup of $F$ generated by elements of the form

  • $(m_1+m_2,n)-(m_1,n)-(m_2,n)$
  • $(m,n_1+n_2)-(m,n_1)-(m,n_2)$
  • $(mr,n)-(m,rn)$

where $m_1,m_2,m\in M$, $n_1,n_2,n\in N$ and $r\in R$. Why elements of this form? Because if we define the map $i:M\times N\to F/H$ by
$$i(m,n)=(m,n)+H,$$
we’ll see that $i$ is indeed $R$-balanced! Let’s check: for instance,
$$i(m_1+m_2,n)-i(m_1,n)-i(m_2,n)=\big((m_1+m_2,n)-(m_1,n)-(m_2,n)\big)+H=0+H,$$
since the representative on the right lies in $H$; the other two conditions follow in exactly the same way.

So, are we done now? Can we really just replace $F$ with $F/H$ and replace the inclusion map with the map $i$, and still retain the existence of a unique homomorphism $\Phi:F/H\to A$? No! Of course not. $F/H$ is not a free group generated by $M\times N$, so the universal property of $F$ doesn’t apply to it verbatim, right?

Not totally. We haven’t actually disturbed any structure!

How can we relate homomorphisms defined on $F$ (which the universal property hands us) to homomorphisms defined on $F/H$ (which is what we actually want)? We’d really like them to amount to the same thing. And we’re in luck, because they basically do!

Step 3

A homomorphism $f:F\to A$ descends to a well-defined homomorphism $F/H\to A$ as long as $H\subseteq\ker(f)$, that is, as long as $f(h)=0$ for all $h\in H$. And notice that this condition, $f(H)=0$, forces the restriction of $f$ to $M\times N$ to be $R$-balanced!

Let’s check: if $f$ kills $H$, then
$$f\big((m_1+m_2,n)\big)-f\big((m_1,n)\big)-f\big((m_2,n)\big)=f\big((m_1+m_2,n)-(m_1,n)-(m_2,n)\big)=0,$$
which is exactly additivity in the first slot; additivity in the second slot and the relation $f\big((mr,n)\big)=f\big((m,rn)\big)$ follow in the same way.

Sooooo… homomorphisms $f:F\to A$ such that $H\subseteq\ker(f)$ are the same as $R$-balanced maps from $M\times N$ to $A$! (Technically, I should say homomorphisms $f$ restricted to $M\times N$.) In other words, we have
$$\{\text{abelian group homomorphisms } F/H\to A\}\;\longleftrightarrow\;\{R\text{-balanced maps } M\times N\to A\}.$$

In conclusion, to say “abelian group homomorphisms from $F/H$ to $A$ are the same as (isomorphic to) $R$-balanced maps from $M\times N$ to $A$” is simply the hand-wavy way of saying:

Whenever $i:M\times N\to F/H$ is the $R$-balanced map above and $\varphi:M\times N\to A$ is any $R$-balanced map into an abelian group $A$, there exists a unique abelian group homomorphism $\Phi:F/H\to A$ such that $\varphi=\Phi\circ i$.

And this is just what we want! The last step is merely the final touch:

Step 4

Define the abelian quotient group $F/H$ to be the tensor product of $M$ and $N$,
$$M\otimes_R N := F/H,$$
whose elements are cosets,
$$m\otimes n := (m,n)+H,$$
where $m\otimes n$ for $m\in M$ and $n\in N$ is referred to as a simple tensor. And there you have it! The tensor product, constructed.
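
As a quick sanity check on the relations built into $H$ (this example is not in the original post), consider $\mathbb{Z}/2\mathbb{Z}\otimes_{\mathbb{Z}}\mathbb{Z}/3\mathbb{Z}$. Every simple tensor vanishes:
$$m\otimes n \;=\; 3(m\otimes n)-2(m\otimes n) \;=\; m\otimes 3n \;-\; 2m\otimes n \;=\; m\otimes 0 \;-\; 0\otimes n \;=\; 0$$
(using additivity in each slot, which also gives $m\otimes 0=0\otimes n=0$), so the whole tensor product is the zero group even though neither factor is zero.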

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*


On Constructing Functions, Part 2

This post is the second example in an ongoing list of various sequences of functions which converge to different things in different ways.

Example 2

A sequence of functions $\{f_n:\mathbb{R}\to\mathbb{R}\}$ which converges to 0 uniformly but does not converge to 0 in $L^1$. Take $f_n = \frac{1}{n}\chi_{(0,n)}$.

This works because: The sequence tends to 0 as $n\to\infty$ since the height of each function tends to 0, while the region where $f_n$ takes on this decreasing height tends towards all of $\mathbb{R}^+$ (namely $(0,n)$ as $n\to\infty$), and $f_n$ is already 0 on $\mathbb{R}^-\cup\{0\}$. The convergence is uniform because the number of times we have to keep “squishing” the rectangles until their height is less than $\epsilon$ does not depend on $x$.

The details: Let $\epsilon>0$, choose $N\in\mathbb{N}$ so that $N>\frac{1}{\epsilon}$, and let $n>N$. Fix $x\in\mathbb{R}$.

  • Case 1 ($x\le 0$ or $x\ge n$): Then $f_n(x)=0$ and so $|f_n(x)-0|=0<\epsilon$.
  • Case 2 ($0<x<n$): Then $f_n(x)=\frac{1}{n}$ and so $|f_n(x)-0|=\frac{1}{n}<\frac{1}{N}<\epsilon$.

Finally, $f_n\not\to 0$ in $L^1$ since
$$\int_{\mathbb{R}}|f_n|=\int_{(0,n)}\frac{1}{n}=\frac{1}{n}\,\lambda\big((0,n)\big)=1.$$
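
A quick numerical illustration (not part of the original post), using $f_n=\frac{1}{n}\chi_{(0,n)}$: the sup norm $\frac{1}{n}$ tends to 0 (uniform convergence), while the integral equals 1 for every $n$.

import numpy as np

def f_n(n, x):
    return np.where((x > 0) & (x < n), 1.0 / n, 0.0)

x = np.linspace(-1.0, 1_000.0, 1_000_001)
dx = x[1] - x[0]
for n in (10, 100, 1000):
    # maximum value 1/n, total area approximately 1
    print(n, float(f_n(n, x).max()), float((f_n(n, x) * dx).sum()))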

Remark: Here’s a question you could ask: wouldn’t $f_n=n\chi_{(0,\frac{1}{n})}$ work here too? Both sequences tend to 0 everywhere and both involve rectangles of area 1. The answer is “kinda.” The problem is that the convergence of $n\chi_{(0,\frac{1}{n})}$ is only pointwise. BUT Egoroff’s Theorem gives us a way to actually “make” it uniform!

On the notation above: For a measurable set $X\subset\mathbb{R}$, denote the set of all Lebesgue integrable functions $f:X\to\mathbb{R}$ by $L^1(X)$. Then a sequence of functions $\{f_n\}$ is said to converge in $L^1$ to a function $f$ if $\lim_{n\to\infty}\int|f_n-f|=0$.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*

 


On Constructing Functions, Part 1

Given a sequence of real-valued functions $\{f_n\}$, the phrase “$f_n$ converges to a function $f$” can mean a few things:

  • $f_n$ converges uniformly
  • $f_n$ converges pointwise
  • $f_n$ converges almost everywhere (a.e.)
  • $f_n$ converges in $L^1$ (the set of Lebesgue integrable functions)
  • and so on…

Other factors come into play if the $f_n$ are required to be continuous, defined on a compact set, integrable, etc. So since I do not have the memory of an elephant (whatever that phrase means…), I’ve decided to keep a list of different sequences that converge (or don’t converge) to different functions in different ways. With each example I’ll also include a little (and hopefully intuitive) explanation for why. Having these sequences close at hand is especially useful when analysing the behavior of certain functions or constructing counterexamples.

The first sequence we’ll look at is one which converges almost everywhere, but does not converge in $L^1$ (the set of Lebesgue integrable functions).

Example 1

A sequence of functions $\{f_n:\mathbb{R}\to\mathbb{R}\}$ which converges to 0 almost everywhere but does not converge to 0 in $L^1$. Take $f_n = n\chi_{[0,\frac{1}{n}]}$.

This works because: Recall that to say $f_n\to 0$ almost everywhere means $f_n\to 0$ pointwise on $\mathbb{R}$ except on a set of measure 0. Here, that set of measure zero is the singleton $\{0\}$: at $x=0$ we have $f_n(x)=n$, which cannot be made less than $\epsilon$ for any $\epsilon>0$. Everywhere else $f_n\to 0$ pointwise. This holds because if $x<0$ or $x>1$ then $f_n(x)=0$ for all $n$, and if $x\in(0,1]$ we can choose $n$ appropriately:

The details: Let $\epsilon>0$ and $x\in(0,1]$, and choose $N\in\mathbb{N}$ so that $N>\frac{1}{x}$. Then whenever $n>N$, we have $n>\frac{1}{x}$, which implies $x>\frac{1}{n}$, and so $f_n(x)=0$. Hence $|f_n(x)-0|=0<\epsilon$.

Further, $f_n\not\to 0$ in $L^1$ since
$$\int_{\mathbb{R}}|f_n|=\int_{[0,\frac{1}{n}]} n = n\,\lambda\big([0,\tfrac{1}{n}]\big)=1.$$
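
A quick numerical illustration (not part of the original post), using $f_n=n\chi_{[0,\frac{1}{n}]}$: any fixed $x>0$ eventually falls outside the support, yet the integral equals 1 for every $n$.

import numpy as np

def f_n(n, x):
    return np.where((x >= 0) & (x <= 1.0 / n), float(n), 0.0)

x0 = 0.3
print([float(f_n(n, x0)) for n in (1, 2, 5, 10)])   # [1.0, 2.0, 0.0, 0.0]

x = np.linspace(-1.0, 2.0, 3_000_001)
dx = x[1] - x[0]
for n in (1, 10, 100):
    print(n, float((f_n(n, x) * dx).sum()))          # approximately 1 for every n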

Remark: Notice that Egoroff’s theorem applies here! We just proved that $f_n\to 0$ pointwise a.e. on $\mathbb{R}$, and Egoroff says that on a bounded subset of $\mathbb{R}$, say $(0,1]$, we can actually get uniform convergence off a set of arbitrarily small measure.

In particular, for each $\epsilon>0$ we are guaranteed the existence of a subset $E\subset(0,1]$ such that $f_n\to 0$ uniformly on $E$ and $\lambda\big((0,1]\smallsetminus E\big)<\epsilon$. In fact, it should be clear that this subset can be taken to be something like $(\frac{\epsilon}{2},1]$ (the “zero region” for large $n$). No matter where $x$ is in $(\frac{\epsilon}{2},1]$, we can always find $n$ large enough, namely all $n$ which satisfy $\frac{1}{n}<\frac{\epsilon}{2}$, so that $f_n(x)=0$, i.e. $f_n\to 0$ uniformly on $(\frac{\epsilon}{2},1]$. And indeed, $\lambda\big((0,1]\smallsetminus(\frac{\epsilon}{2},1]\big)=\epsilon/2<\epsilon$ as claimed.

On the notation above: For a measurable set $X\subset\mathbb{R}$, denote the set of all Lebesgue integrable functions $f:X\to\mathbb{R}$ by $L^1(X)$. Then a sequence of functions $\{f_n\}$ is said to converge in $L^1$ to a function $f$ if $\lim_{n\to\infty}\int|f_n-f|=0$.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*


Make mine a double: Moore’s Law and the future of mathematics

What do iPhones, Twitter, Netflix, cleaner cities, safer cars, state-of-the-art environmental management and modern medical diagnostics have in common? They are all made possible by Moore’s Law.

Moore’s Law stems from a seminal 1965 article by Intel founder Gordon Moore. He wrote:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year … Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least ten years. That means, by 1975, the number of components per integrated circuit for minimum cost will be 65,000.”

Moore noted that in 1965 engineering advances were enabling a doubling in semiconductor density every 12 months, but this rate was later modified to roughly 18 months. Informally, we may think of this as doubling computer performance.
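
A back-of-the-envelope check of what that rate implies (my own arithmetic, not a figure from the article): doubling every 18 months for 45 years is about 30 doublings, i.e. roughly a billion-fold increase.

years = 45
doubling_period_years = 1.5
doublings = years / doubling_period_years
print(doublings)        # 30.0
print(2 ** doublings)   # about 1.07e9, i.e. roughly a billion-fold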

In any event, Moore’s Law has now continued unabated for 45 years, defying several confident predictions it would soon come to a halt, and represents a sustained exponential rate of progress that is without peer in the history of human technology. Here is a graph of Moore’s Law, shown with the transistor count of various computer processors:

Where we’re at with Moore’s Law

At the present time, researchers are struggling to keep Moore’s Law on track. Processor clock rates have stalled, as chip designers have struggled to control energy costs and heat dissipation, but the industry’s response has been straightforward — simply increase the number of processor “cores” on a single chip, together with associated cache memory, so that aggregate performance continues to track or exceed Moore’s Law projections.

The capacity of leading-edge DRAM main memory chips continues to advance apace with Moore’s Law. The current state of the art in computer memory devices is a 3D design, which will be jointly produced by IBM and Micron Technology, according to a December 2011 announcement by IBM representatives.

As things stand, the best bet for the future of Moore’s Law is carbon nanotubes: submicroscopic tubes of carbon atoms that have remarkable properties.

According to a recent New York Times article, Stanford researchers have created prototype electronic devices by first growing billions of carbon nanotubes on a quartz surface, then coating them with an extremely fine layer of gold atoms. They then used a piece of tape (literally!) to pick the gold atoms up and transfer them to a silicon wafer. The researchers believe that commercial devices could be made with these components as early as 2017.

Moore’s Law in science and maths

So what does this mean for researchers in science and mathematics?

Plenty, as it turns out. A scientific laboratory typically uses hundreds of high-precision devices that rely crucially on electronic designs, and with each step of Moore’s Law, these devices become ever cheaper and more powerful. One prominent case is DNA sequencers. When scientists first completed sequencing a human genome in 2001, at a cost of several hundred million US dollars, observers were jubilant at the advances in equipment that had made this possible.

Now, only ten years later, researchers expect to reduce this cost to only US$1,000 within two years and genome sequencing may well become a standard part of medical practice. This astounding improvement is even faster than Moore’s Law!
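
A rough check of that claim (assumed round figures, not from the article: about $300 million in 2001 falling to about $1,000 roughly 12 years later): the cost halves far more often than once every 18 months.

import math

cost_ratio = 300_000_000 / 1_000          # roughly a 300,000-fold reduction
years = 12
halvings = math.log2(cost_ratio)          # about 18.2 halvings
print(years * 12 / halvings)              # about 7.9 months per halving (Moore's Law: ~18)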

Applied mathematicians have benefited from Moore’s Law in the form of scientific supercomputers, which typically employ hundreds of thousands of state-of-the-art components. These systems are used for tasks such as climate modelling, product design and biological structure calculations.

Today, the world’s most powerful system is a Japanese supercomputer that recently ran the industry-standard Linpack benchmark test at more than ten “petaflops,” or, in other words, 10 quadrillion floating-point operations per second.

Below is a graph of the Linpack performance of the world’s leading-edge systems over the time period 1993-2011, courtesy of the website Top 500. Note that over this 18-year period, the performance of the world’s number one system has advanced more than five orders of magnitude. The current number one system is more powerful than the sum of the world’s top 500 supercomputers just four years ago.

Linpack performance over time.

Pure mathematicians have been a relative latecomer to the world of high-performance computing. The present authors well remember the era, just a decade or two ago, when the prevailing opinion in the community was that “real mathematicians don’t compute.”

But thanks to a new generation of mathematical software tools, not to mention the ingenuity of thousands of young, computer-savvy mathematicians worldwide, remarkable progress has been achieved in this arena as well (see our 2011 AMS Notices article on exploratory experimentation in mathematics).

In 1963 Daniel Shanks, who had calculated pi to 100,000 digits, declared that computing one billion digits would be “forever impossible.” Yet this level was reached in 1989. In 1989, famous British physicist Roger Penrose, in the first edition of his best-selling book The Emperor’s New Mind, declared that humankind would likely never know whether a string of ten consecutive sevens occurs in the decimal expansion of pi. Yet this was found just eight years later, in 1997.

Computers are certainly being used for more than just computing and analysing digits of pi. In 2003, the American mathematician Thomas Hales completed a computer-based proof of Kepler’s conjecture, namely the long-hypothesised fact that the simple way the grocer stacks oranges is in fact the optimal packing for equal-diameter spheres. Many other examples could be cited.

Future prospects

So what does the future hold? Assuming that Moore’s Law continues unabated at approximately the same rate as the present, and that obstacles in areas such as power management and system software can be overcome, we will see, by the year 2021, large-scale supercomputers that are 1,000 times more powerful and capacious than today’s state-of-the-art systems — “exaflops” computers (see NAS Report). Applied mathematicians eagerly await these systems for calculations, such as advanced climate models, that cannot be done on today’s systems.

Pure mathematicians will use these systems as well to intuit patterns, compute integrals, search the space of mathematical identities, and solve intricate symbolic equations. If, as one of us discussed in a recent Conversation article, such facilities can be combined with machine intelligence, such as a variation of the hardware and software that enabled an IBM system to defeat the top human contestants in the North American TV game show Jeopardy!, we may see a qualitative advance in mathematical discovery and even theory formation.

It is not a big leap to imagine that within the next ten years tailored and massively more powerful versions of Siri (Apple’s new iPhone assistant) will be an integral part of mathematics, not to mention medicine, law and just about every other part of human life.

Some observers, such as those in the Singularity movement, are even more expansive, predicting a time just a few decades hence when technology will advance so fast that at the present time we cannot possibly conceive or predict the outcome.

Your present authors do not subscribe to such optimistic projections, but even if more conservative predictions are realised, it is clear that the digital future looks very bright indeed. We will likely look back at the present day with the same technological disdain with which we currently view the 1960s.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Jonathan Borwein (Jon), University of Newcastle and David H. Bailey, University of California, Davis

 


Mathematical models may help shed light on body clock disruptions

Researchers are using mathematical models to better understand the effects of disruptions like daylight savings time, working night shifts, jet lag or even late-night phone scrolling on the body’s circadian rhythms.

The University of Waterloo and the University of Oxford researchers have developed a new model to help scientists better understand the resilience of the brain’s master clock: the cluster of neurons in the brain that coordinates the body’s other internal rhythms. They also hope to suggest ways to help improve this resilience in individuals with weak or impaired circadian rhythms. The study, “Can the Clocks Tick Together Despite the Noise? Stochastic Simulations and Analysis,” appears in the SIAM Journal on Applied Dynamical Systems.

Sustained disruptions to circadian rhythm have been linked to diabetes, memory loss, and many other disorders.

“Current society is experiencing a rapid increase in demand for work outside of traditional daylight hours,” said Stéphanie Abo, a Ph.D. student in applied mathematics and the study’s lead author. “This greatly disrupts how we are exposed to light, as well as other habits such as eating and sleeping patterns.”

Humans’ circadian rhythms, or internal clocks, are the roughly 24-hour cycles many body systems follow, usually alternating between wakefulness and rest. Scientists are still working to understand the cluster of neurons known as the suprachiasmatic nucleus (SCN), or master clock.

Using mathematical modeling techniques and differential equations, the team of applied mathematics researchers modeled the SCN as a macroscopic, or big-picture, system comprised of a seemingly infinite number of neurons. They were especially interested in understanding the system’s couplings—the connections between neurons in the SCN that allow it to achieve a shared rhythm.
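
For intuition about what “coupling” between clock neurons means, here is a minimal sketch of noisy coupled phase oscillators (a generic Kuramoto-type toy, not the authors’ actual SCN model; all parameter values are made up): strong coupling keeps the population phase-locked despite noise, while weak coupling lets the shared rhythm wash out.

import numpy as np

def order_parameter(n=100, K=1.0, sigma=0.3, dt=0.01, steps=20_000, seed=0):
    """Simulate n noisy phase oscillators with mean-field coupling K and
    noise strength sigma; return the final synchrony r in [0, 1]."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(1.0, 0.05, size=n)        # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, size=n)  # initial phases
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))          # complex mean field
        r, psi = np.abs(z), np.angle(z)
        drift = omega + K * r * np.sin(psi - theta)
        theta = theta + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
    return float(np.abs(np.mean(np.exp(1j * theta))))

print(order_parameter(K=2.0))    # strong coupling: r close to 1
print(order_parameter(K=0.05))   # weak coupling: r much smaller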

Frequent and sustained disturbances to the body’s circadian rhythms eliminated the shared rhythm, implying a weakening of the signals transmitted between SCN neurons.

Abo said they were surprised to find that “a small enough disruption can actually make the connections between neurons stronger.”

“Mathematical models allow you to manipulate body systems with specificity that cannot be easily or ethically achieved in the body or a petri dish,” Abo said. “This allows us to do research and develop good hypotheses at a lower cost.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of Waterloo

 


Math teachers hold a bias against girls when the teachers think gender equality has been achieved, says study

Math teachers who believe women no longer face discrimination tend to be biased against girls’ ability in math. This is what we found through an experiment we conducted with over 400 elementary and middle school math teachers across the United States. Our findings were published in a peer-reviewed article that appeared in April 2023 in the International Journal of STEM Education.

For our experiment, we asked teachers to evaluate a set of student solutions to math problems. The teachers didn’t know that gender- and race-specific names, such as Tanisha and Connor, had been randomly assigned to the solutions. We did this so that if they evaluated identical student work differently, it would be because of the gender- and race-specific names they saw, not the differences in student work. The idea was to see if the teachers had any unconscious biases.

After the teachers evaluated the student solutions, we asked a series of questions about their beliefs and experiences. We asked if they felt society had achieved gender equality. We asked them whether they felt anxious about doing math. We asked whether they felt students’ ability in math was fixed or could be improved. We also asked teachers to think about their own experience as math students and to report how frequently they experienced feelings of unequal treatment because of their race or gender.

We then investigated if these beliefs and experiences were related to how they evaluated the math ability of students of different genders or racial groups.

Consistent with our prior work, we found that implicit bias against girls arises in ambiguous situations—in this case, when student solutions were not completely correct.

Further, teachers who believed that U.S. society had achieved gender equality tended to rate a student’s ability higher when they saw a male student name than when they saw a female student name attached to the same work.

Teachers’ unconscious gender biases in math classes have been documented repeatedly.

Our study identifies factors that underlie such biases; namely, that biases are stronger among teachers who believe that gender discrimination is not a problem in the United States. Understanding the relationship between teachers’ beliefs and biases can help teacher educators create effective and targeted interventions to remove such biases from classrooms.

Our findings also shed light on potential reasons that males tend to have higher confidence in math and stick with math-intensive college majors even when they’re not high performers.

One big remaining question is how to create targeted interventions to help teachers overcome such biases. Evidence suggests that unconscious biases come into play in situations where stereotypes might emerge. Further, research suggests that these unconscious biases can be suppressed only when people are aware of them and motivated to restrain them.

Since bias may take on different forms in different fields, a one-time, one-size-fits-all anti-bias training may not have a lasting effect. We think it’s worthwhile to investigate if it’s more effective to provide implicit bias training programs that are specific to the areas where bias is revealed.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Yasemin Copur-Gencturk, Ian Thacker and Joseph Cimpian, The Conversation