New research analyses ‘Finnegans Wake’ for novel spacing between punctuation marks

Sequences of consecutive breakpoint distances for “Gates of Paradise” and “Finnegans Wake” on the same scale. Credit: Stanisław Drożdż

Statistical analysis of classic literature has shown that the way punctuation breaks up text obeys certain universal mathematical relationships. James Joyce’s tome “Finnegans Wake,” however, famously breaks the rules of normal prose through its unusual, dreamlike stream of consciousness. New work in chaos theory, published in the journal Chaos, takes a closer look at how Joyce’s challenging novel stands out, mathematically.

Researchers have compared the distribution of punctuation marks in various experimental novels to determine the underlying order of “Finnegans Wake.” By statistically analysing the texts, the team has found that the tome exhibits an unusual but statistically identifiable structure.

“‘Finnegans Wake’ exhibits the type of narrative that makes it possible to continue longer strings of words without the need for punctuation breaks,” said author Stanisław Drożdż. “This may indicate that this type of narrative is less taxing on the human perceptual and respiratory systems or, equivalently, that it resonates better with them.”

The longer a sequence of words runs without a punctuation mark, the higher the probability that a punctuation mark appears next. Such a relationship is described by a Weibull distribution. Weibull distributions apply to anything from human diseases to “The Gates of Paradise,” a Polish novel written almost entirely in a single sentence spanning nearly 40,000 words.

Enter “Finnegans Wake,” which weaves together puns, phrases, and portmanteaus from up to 70 languages into a dreamlike stream of consciousness. The book typifies Joyce’s later works, which are among the only known examples that appear not to adhere to the Weibull distribution in their punctuation.

The team broke down 10 experimental novels by word counts between punctuation marks. These sets of numbers were compiled into a singularity spectrum for each book that described how orderly sentences of different lengths are proportioned. “Finnegans Wake” has a notoriously broad range of sentence lengths, making for a wide spectrum.
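As a rough illustration of the basic measurement this kind of analysis rests on, the sketch below splits a text at punctuation marks, records the word counts between consecutive marks (the breakpoint distances), and fits a Weibull distribution to them with SciPy. The punctuation set and the use of the novel's opening sentence as sample text are my own choices for illustration; this is not the authors' pipeline.

```python
# A minimal sketch (not the study's method): breakpoint distances and a Weibull fit.
import re
from scipy import stats

PUNCT = r"[.,;:!?\u2013\u2014()]"   # assumed set of sentence-breaking marks

def breakpoint_distances(text):
    """Word counts of the chunks between consecutive punctuation marks."""
    chunks = re.split(PUNCT, text)
    return [len(chunk.split()) for chunk in chunks if chunk.strip()]

sample = ("riverrun, past Eve and Adam's, from swerve of shore to bend of bay, "
          "brings us by a commodius vicus of recirculation back to Howth Castle "
          "and Environs.")
distances = breakpoint_distances(sample)
print(distances)

# Fit a two-parameter Weibull (location fixed at 0). With a real corpus, the
# shape parameter describes how the chance of hitting a punctuation mark
# changes as a word sequence grows longer.
shape, loc, scale = stats.weibull_min.fit(distances, floc=0)
print(f"Weibull shape={shape:.2f}, scale={scale:.2f}")
```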

While most punctuation distributions skew toward shorter word sequences, the wide singularity spectrum in “Finnegans Wake” was perfectly symmetrical, meaning sentence length variability follows an orderly curve.

This level of symmetry is a rare feat in the real world, implying a well-organized, complex hierarchical structure that aligns perfectly with a phenomenon known as multifractality, systems represented by fractals within fractals.

“‘Finnegans Wake’ appears to have the unique property that the probability of interrupting a sequence of words with a punctuation character decreases with the length of the sequence,” Drożdż said. “This makes the narrative more flexible to create perfect, long-range correlated cascading patterns that better reflect the functioning of nature.”

Drożdż hopes the work helps large language models better capture long-range correlations in text. The team next looks to apply their work in this domain.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to American Institute of Physics

 


Improved information and communication technology infrastructure leads to better math skills, research suggests

Students who are more digitally skilled also perform better in math. New research from Renae Loh and others at Radboud University shows that in countries with better availability of information and communication technology (ICT) in schools, math performance benefits greatly. It further suggests that improving the ICT environment in schools can reduce inequality in education between countries. The paper is published in European Educational Research Journal today.

For anyone growing up today, ICT skills play a tremendously important role. Today’s youth constantly come into contact with technology throughout their lives, both in work and leisure. Though previous studies have shown the importance of ICT skills in students’ learning outcomes, a new study focuses specifically on their relevance to math and how that differs between countries.

“Both ICT and math rely on structural and logical thinking, which is why ICT skills overlap with and boost math learning. But we were also curious to find out how much of that depends on a country’s ICT environment,” says Renae Loh, primary author of the paper and a sociologist at Radboud University.

Benefits of a strong ICT infrastructure

Loh and her colleagues used data from the 2018 PISA Study and compared 248,720 students aged 15 to 16 across 43 countries. Included in this data is information about the ICT skills of these students. They were asked whether they read new information on digital devices, and whether they would try to solve problems with those devices themselves, among other questions. The more positively students responded to these questions, the more skilled in ICT the researchers judged them to be.

Loh says, “What we found is that students get more educational benefit out of their digital skills in countries with a strong ICT infrastructure in education. This is likely because the more computers and other digital tools are available to them in their studies, the more they were able to put those skills to use, and the more valued these skills were. It is not a negligible difference either.”

“A strong ICT infrastructure in education could boost the math performance benefit students gain from their digital skills by about 60%. Differences in ICT infrastructure in education accounted for 25% of the differences between countries in how much math benefit students gain from their digital skills. It is also a better indicator than, for example, a more general indicator of country wealth, because it is more pinpointed and more actionable.”

Reducing inequality

Especially notable to Loh and her colleagues was the difference that was apparent between countries with a strong ICT infrastructure, and countries without. “It was surprisingly straightforward, in some ways: the higher the computer-to-student ratio in a country, the stronger the math performance. This is consistent with the idea that these skills serve as a learning and signaling resource, at least for math, and students need opportunities to put these resources to use.”

Loh points out that there are limits to the insight offered by the data, however. “Our study doesn’t look at the process of how math is taught in these schools, specifically. Or how the ICT infrastructure is actually being used. Future research might also puzzle over how important math teachers themselves believe ICT skills to be, and if that belief and their subsequent teaching style influences the development of students, too.”

“There is still vast inequality in education around the world,” warns Loh. “And now there’s an added ICT dimension. Regardless of family background, gender, and so on, having limited access to ICT or a lack in digital skills is a disadvantage in schooling. What is clear is that the school environment is important here. More targeted investments in a robust ICT infrastructure in education would help in bridging the educational gap between countries and may also help to address inequalities in digital skills among students in those countries.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Radboud University


Window Patterns

Start with a square piece of paper (like a Post it note), fold and unfold it in half along a mid-line or along a diagonal. Take another identical square, and fold and unfold it the same way. Decide on some way to place the second square on the first, so that the second is somewhat rotated. Use only the edges of the square and the creases that you made to determine the placement. Make your placement precise so that your “rule” can be described exactly in terms of the edges and creases. Repeat this process, placing a third square on top of your second square using exactly the same rule. Repeat until your placing of papers leads you back to the first piece.

The resulting construction might look something like the one shown on the left below. If you take your papers, set them in place with some careful light gluing, and place them on a window, the sunlight passing through the overlapping papers creates a stained-glass effect that shows a variety of shapes.

This sort of construction is a simplified version of what William Gibbs describes in his book “Window Patterns.” In Gibbs’ treatment, the pattern is partially planned in advance, and then the dimensions of the rectangular pieces of paper that make up the pattern are determined using a little trigonometry. This process can be simplified by starting with a more limited range of options for paper dimension and placement. It turns out that a surprising number of window patterns can be created by only using squares, their mid-lines, and their diagonals, and that these patterns invariably have “special triangles” and related regular polygons and star-polygons embedded within them.
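If you would like to experiment on screen before cutting paper, the doodle below iterates a placement rule with matplotlib. The rule used here, rotating each new square 30 degrees about a shared centre, is a hypothetical stand-in for the crease-based rules described above, chosen only because it closes after three squares and produces a twelve-pointed star.

```python
# A rough computational doodle of the "iterate a placement rule" idea.
# This is NOT Gibbs' construction or the paper-folding rule in the text:
# the hypothetical rule is "rotate the next square 30 degrees about the
# shared centre". Rotating a square by 90 degrees returns it to its start,
# so three squares (0, 30, 60 degrees) already close the loop.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

fig, ax = plt.subplots(figsize=(5, 5))
base = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]], dtype=float)

for k in range(3):
    theta = np.radians(30 * k)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    ax.add_patch(Polygon(base @ rot.T, closed=True, alpha=0.2,
                         facecolor="goldenrod", edgecolor="black"))

ax.set_xlim(-1.6, 1.6)
ax.set_ylim(-1.6, 1.6)
ax.set_aspect("equal")
ax.axis("off")
plt.show()   # translucent overlaps give a rough "stained glass" effect
```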

Here are two more “placement rules” and the patterns that they give rise to.

The diagrams were created using Geometer’s Sketchpad – if you construct the rule using translations applied to a constructed square, you can use the iteration feature to create the final pattern. GSP provides a good environment for planning out the patterns prior to constructing them with paper, and building the plans in GSP is enjoyable and instructive as well.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*


Researchers find best routes to self-assembling 3-D shapes

This shows a few of the 2.3 million possible 2-D designs — planar nets — for a truncated octahedron (right column). The question is: Which net is best to make a self-assembling shape at the nanoscale?

Material chemists and engineers would love to figure out how to create self-assembling shells, containers or structures that could be used as tiny drug-carrying containers or to build 3-D sensors and electronic devices.

There have been some successes with simple 3-D shapes such as cubes, but the list of possible starting points that could yield the ideal self-assembly for more complex geometric configurations gets long fast. For example, while there are 11 2-D arrangements for a cube, there are 43,380 for a dodecahedron (12 equal pentagonal faces). A truncated octahedron (14 total faces – six squares and eight hexagons) has 2.3 million possibilities.

“The issue is that one runs into a combinatorial explosion,” said Govind Menon, associate professor of applied mathematics at Brown University. “How do we search efficiently for the best solution within such a large dataset? This is where math can contribute to the problem.”

In a paper published in the Proceedings of the National Academy of Sciences, researchers from Brown and Johns Hopkins University determined the best 2-D arrangements, called planar nets, to create self-folding polyhedra with dimensions of a few hundred microns, the size of a small dust particle. The strength of the analysis lies in the combination of theory and experiment. The team at Brown devised algorithms to cut through the myriad possibilities and identify the best planar nets to yield the self-folding 3-D structures. Researchers at Johns Hopkins then confirmed the nets’ design principles with experiments.

“Using a combination of theory and experiments, we uncovered design principles for optimum nets which self-assemble with high yields,” said David Gracias, associate professor of chemical and biomolecular engineering at Johns Hopkins and a co-corresponding author on the paper. “In doing so, we uncovered striking geometric analogies between natural assembly of proteins and viruses and these polyhedra, which could provide insight into naturally occurring self-assembling processes and is a step toward the development of self-assembly as a viable manufacturing paradigm.”

“This is about creating basic tools in nanotechnology,” said Menon, co-corresponding author on the paper. “It’s important to explore what shapes you can build. The bigger your toolbox, the better off you are.”

While the approach has been used elsewhere to create smaller particles at the nanoscale, the researchers at Brown and Johns Hopkins used larger sizes to better understand the principles that govern self-folding polyhedra.

The researchers sought to figure out how to self-assemble structures that resemble the protein shells viruses use to protect their genetic material. As it turns out, the shells used by many viruses are shaped like dodecahedra (a simplified version of a geodesic dome like the Epcot Center at Disney World). But even a dodecahedron can be cut into 43,380 planar nets. The trick is to find the nets that yield the best self-assembly. Menon, with the help of Brown undergraduate students Margaret Ewing and Andrew “Drew” Kunas, sought to winnow the possibilities. The group built models and developed a computer code to seek out the optimal nets, finding just six that seemed to fit the algorithmic bill.

The students got acquainted with their assignment by playing with a set of children’s toys in various geometric shapes. They progressed quickly into more serious analysis. “We started randomly generating nets, trying to get all of them. It was like going fishing in a lake and trying to count all the species of fish,” said Kunas, whose concentration is in applied mathematics. After tabulating the nets and establishing metrics for the most successful folding maneuvers, “we got lists of nets with the best radius of gyration and vertex connections, discovering which nets would be the best for production for the icosahedron, dodecahedron, and truncated octahedron for the first time.”
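As an illustration of one of the metrics mentioned in the quote above, the sketch below computes a radius of gyration for two of the eleven cube nets from the centroids of their unit-square faces. The coordinates and the exact definition used in the study are my assumptions for illustration only.

```python
# Sketch of one net metric: radius of gyration from face centroids.
import numpy as np

def radius_of_gyration(face_centroids):
    pts = np.asarray(face_centroids, dtype=float)
    center = pts.mean(axis=0)
    return np.sqrt(((pts - center) ** 2).sum(axis=1).mean())

# two of the 11 cube nets, given as centroids of their six unit squares:
cross_net = [(0, 1), (1, 0), (1, 1), (1, 2), (1, 3), (2, 1)]      # the "cross"
staircase_net = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2), (3, 2)]  # the 2-2-2 "staircase"

print(radius_of_gyration(cross_net))      # smaller value: a more compact net
print(radius_of_gyration(staircase_net))  # larger value: a more spread-out net
```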

Gracias and colleagues at Johns Hopkins, who have been working with self-assembling structures for years, tested the configurations from the Brown researchers. The nets are nickel plates with hinges that have been soldered together in various 2-D arrangements. Using the options presented by the Brown researchers, the Johns Hopkins group heated the nets to around 360 degrees Fahrenheit, the point at which surface tension between the solder and the nickel plate causes the hinges to fold upward, rotate and eventually form a polyhedron. “Quite remarkably, just on heating, these planar nets fold up and seal themselves into these complex 3-D geometries with specific fold angles,” Gracias said.

“What’s amazing is we have no control over the sequence of folds, but it still works,” Menon added.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Karolina Grabowska/Pexels


How the 18th-century ‘probability revolution’ fueled the casino gambling craze

The first commercial gambling operations emerged, coincidentally or not, at the same time as the study of mathematical probability in the mid-1600s.

By the early 1700s, commercial gambling operations were widespread in European cities such as London and Paris. But in many of the games that were offered, players faced steep odds.

Then, in 1713, the brothers Johann and Jacob Bernoulli proved their “Golden Theorem,” known now as the law of large numbers or long averages.

But gambling entrepreneurs were slow to embrace this theorem, which showed how it could actually be an advantage for the house to have a smaller edge over a larger one.

In the book “The Gambling Century: Commercial Gaming in Britain from Restoration to Regency,” the author explains how it took government efforts to ban and regulate betting for gambling operators to finally understand just how much money could be made off a minuscule house edge.

The illusion of even odds in games that were the ancestors of roulette and blackjack proved immensely profitable, sparking a “probability revolution” that transformed gambling in Britain and beyond.

A new theorem points to sneaky big profits

The law of large numbers refers to events governed by chance.

When you flip a coin, for example, you have a 50% – or “even money” – chance of getting heads or tails. Were you to flip a coin 10 times, it’s quite possible that heads will turn up seven times and tails three times. But after 100, or 1,000, or 10,000 flips, the ratio of “heads” to “tails” will be closer and closer to the mathematical “mean of probability” – that is, half heads and half tails.
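A few lines of code make this concrete (my own sketch, not from the article): the observed fraction of heads wanders for small samples and settles toward one half as the number of flips grows.

```python
# Illustration of the law of large numbers with simulated coin flips.
import random

random.seed(0)
for flips in (10, 100, 1_000, 10_000):
    heads = sum(random.random() < 0.5 for _ in range(flips))
    print(f"{flips:>6} flips: {heads / flips:.3f} heads")
```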

Mathematicians Johann and Jacob Bernoulli developed what’s known today as the law of large numbers. Oxford Science Archive/Print Collector via Getty Images

This principle was popularized by writers such as Abraham De Moivre, who applied it to games of chance.

De Moivre explained how, over time, someone with even the smallest statistical “edge” would eventually win almost all of the money that was staked.

This is what happens in roulette. The game has 36 numbers, 18 of which are red and 18 of which are black. However, there are also two green house numbers – “0” and “00” – which, if the ball lands on them, means that the house can take everyone’s wager. This gives the house a small edge.

Imagine 10 players with $100 apiece. Half of them bet $10 on red and the other half bet $10 on black. Assuming that the wheel strictly aligns with the mean of probability, the house will break even for 18 of 19 spins. But on the 19th spin, the ball will land on one of the green “house numbers,” allowing the house to collect all the money staked from all bettors.

After 100 spins, the house will have won half of the players’ money. After 200 spins, they’ll have won all of it.

Even with a single house number – the single 0 on the roulette wheels introduced in Monte Carlo by the casino entrepreneur Louis Blanc – the house would win everything after 400 spins.
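The arithmetic above is easy to check with a short simulation (my own illustration, not from the article). It keeps the idealized accounting: red and black bets cancel out on ordinary spins, and the house sweeps the table whenever the ball lands on a green number.

```python
# Ten players each stake $10 on red or black every spin of a double-zero wheel.
import random

def house_take(spins, greens=2, pockets=38, players=10, stake=10, seed=1):
    random.seed(seed)
    total = 0
    for _ in range(spins):
        if random.randrange(pockets) < greens:   # ball lands on 0 or 00
            total += players * stake             # house sweeps the table
        # otherwise red and black bets roughly cancel, as in the article's
        # idealized accounting, so the house neither gains nor loses
    return total

for spins in (100, 200, 400):
    print(f"{spins} spins: house has taken ${house_take(spins)} "
          f"(expected about ${spins * 2 * 100 // 38})")
```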

This eventuality, as De Moivre put it, “will seem almost incredible given the smallness of the odds.”

Hesitating to test the math

As De Moivre anticipated, gamblers and gambling operators were slow to adopt these findings.

De Moivre’s complex mathematical equations were over the heads of gamblers who hadn’t mastered simple arithmetic.

Gambling operators didn’t initially buy into the Golden Theorem, either, seeing it as unproven and therefore risky.

Instead, they played it safe by promoting games with long odds.

One was the Royal Oak Lottery, a game played with a polyhedral die with 32 faces, like a soccer ball. Players could bet on individual numbers or combinations of two or four numbers, giving them, at best, 7-to-1 odds of winning.

Faro was another popular game of chance in which the house, or “bank” as it was then known, gave players the opportunity to defer collecting their winnings for chances at larger payouts at increasingly steep odds.

Faro was a popular game of chance in which players could delay collecting their winnings for the chance to win even bigger sums. Boston Public Library

These games – and others played against a bank – were highly profitable to gambling entrepreneurs, who operated out of taverns, coffeehouses and other similar venues. “Keeping a common gaming house” was illegal, but with the law riddled with loopholes, enforcement was lax and uneven.

Public outcry against the Royal Oak Lottery was such that the Lottery Act of 1699 banned it. A series of laws enacted in the 1730s and 1740s classified faro and other games as illegal lotteries, on the grounds that the odds of winning or losing were not readily apparent to players.

The law of averages put into practice

Early writers on probability had asserted that the “house advantage” did not have to be very large for a gambling operation to profit enormously. The government’s effort to ban games of chance now obliged gaming operators to put the law of long averages into practice.

Further statutes outlawed games of chance played with dice, cards, wheels or any other device featuring “numbers or figures.”

None of these measures deterred gambling operators from the pursuit of profit.

Since this language did not explicitly include letters, the game of EO, standing for “even odd,” was introduced in the mid 1740s, after the last of these gambling statutes was enacted. It was played on a wheel with 40 slots, all but two of which were marked either “E” or “O.” As in roulette, an ivory ball was rolled along the edge of the wheel as it was spun. If the ball landed in one of the two blank “bar holes,” the house would automatically win, similar to the “0” and “00” in roulette.

EO’s defenders could argue that it was not an unlawful lottery because the odds of winning or losing were now readily apparent to players and appeared to be virtually equal. The key, of course, is that the bar holes ensured they weren’t truly equal.

Although this logic might not stand up in court, overburdened law enforcement was happy for a reason to look the other way. EO proliferated; legislation to outlaw it was proposed in 1782 but failed.

In the 19th century, roulette became a big draw at Monte Carlo’s casinos. Hulton Archive/Getty Images

The allure of ‘even money’

Gambling operators may have even realized that evening the odds drew more players, who, in turn, staked more.

After EO appeared in Britain, gambling operations both there and on the continent of Europe introduced “even money” betting options into both new and established games.

For example, the game of biribi, which was popular in France throughout the 18th century, involved players betting on numbers from 1 to 72, which were shown on a betting cloth. Numbered beads would then be drawn from a bag to determine the win.

In one iteration from around 1720, players could bet on individual numbers, on vertical columns of six numbers, or other options that promised large payouts against steeper odds.

By the end of the 18th century, however, one biribi cloth featured even money options: Players could bet on any number between 36 and 70 being drawn, or on any number between 1 and 35. Players could also select red or black numbers, making it a likely inspiration for roulette.

In Britain, the Victorian ethos of morality and respectability eventually won out. Parliament outlawed games of chance played for money in public or private in 1845, restrictions that were not lifted until 1960.

By 1845, however, British gamblers could travel by steamship and train to one of the many European resorts cropping up across the continent, where the probability revolution had transformed casino gambling into the formidable business enterprise it is today.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to The Conversation

 


Triangular Numbers and Euler’s Number Triangle

There is a nice identity stating that a square number can be written as the sum of two consecutive triangular numbers: n^2 = t^2_n + t^2_{n−1}.

Here we are writing t^d_n for the nth triangular number of dimension d (d=2 gives the flat polygonals, d=3 the pyramidal polygonals, etc.).

There is also a nice relationship that connects cubes to polygonal numbers. It turns out that a cube of spheres can be unfolded into a packed-hexagonal pyramid. The “packed hexagonals” or “centered hexagonals” are not quite the usual hexagonal numbers – instead these are hexagons of dots with the gaps filled in. The picture below shows how square numbers fill the gaps of the hexagonals perfectly to form the “packed hexagonals,” and how these in turn can be stacked to form a cube. Here we are using ph^d_n for the packed hexagonals, h^d_n for the hexagonals, s^d_n for the squares, and t^d_n for the triangular numbers.
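These relationships are easy to check numerically. The snippet below uses the standard closed forms for the flat (d = 2) figurate numbers, which are my own additions rather than formulas quoted from the post.

```python
# Checks: n^2 = t_n + t_{n-1}; packed hexagonal = hexagonal + previous square;
# and the first n packed hexagonals stack into a cube, n^3.
def tri(n):        return n * (n + 1) // 2       # t^2_n
def square(n):     return n * n                  # s^2_n
def hexagonal(n):  return n * (2 * n - 1)        # h^2_n
def packed_hex(n): return 3 * n * (n - 1) + 1    # ph^2_n (centered hexagonal)

for n in range(1, 10):
    assert square(n) == tri(n) + tri(n - 1)
    assert packed_hex(n) == hexagonal(n) + square(n - 1)
    assert n ** 3 == sum(packed_hex(k) for k in range(1, n + 1))
print("identities verified for n = 1..9")
```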

Combining this result with the “triangulation” identities we have:

This gives us three nice identities for powers of n:

It turns out that these identities generalize to other positive integer powers of n. Every n^d can be written as a sum of terms t^d_i, where i ranges from n down to n+1−d (for any i less than 1, these terms are zero).

1. Write out the sequence of n^d for at least 2d−2 terms. Take the finite difference of this sequence d−2 times (this reduces the sequence down to “2-dimensional” numbers, allowing us to use the 2-dimensional triangular numbers in our calculations).

2. The first term of the new sequence should be 1. Eliminate the first term by subtracting t^2_n from this sequence. This means that our sum begins with t^d_n, with a coefficient of 1. Ensure that the t^2_n values are subtracted from the corresponding terms of the sequence.

3. Now the sequence has a new first term, which is some number A. Eliminate this term by subtracting A·t^2_{n−1} (the same triangular sequence, shifted by one place) from the sequence; A is the coefficient of t^d_{n−1}.

4. Repeat step 3, eliminating the first remaining term of the sequence each time with a suitably shifted multiple of t^2_n; each multiple provides the coefficient of the next t^d_i.

5. The process ends when all terms of the n^d sequence are eliminated, which happens at the dth step. (A short computational sketch of this procedure is given below.)
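Here is a minimal Python sketch of this procedure. The zero padding and the indexing are my own conventions for making the differences line up; the printed coefficient rows should reproduce Euler's number triangle.

```python
# tri2 is the flat triangular number t^2_n, taken to be 0 for n < 1.
def tri2(n):
    return n * (n + 1) // 2 if n >= 1 else 0

def eulerian_row(d, n_terms=None):
    if n_terms is None:
        n_terms = 2 * d + 2
    # n^d for n = 0, 1, 2, ..., padded with zeros so the d-2 finite
    # differences behave as if n^d were 0 for negative n
    seq = [0] * (d - 2) + [n ** d for n in range(n_terms)]
    for _ in range(d - 2):
        seq = [seq[i + 1] - seq[i] for i in range(len(seq) - 1)]
    # now seq[n] = sum over k of A(d, k) * tri2(n - k)
    coeffs = []
    for k in range(d):
        a = seq[k + 1]                                   # next leading coefficient
        coeffs.append(a)
        seq = [seq[n] - a * tri2(n - k) for n in range(len(seq))]
    return coeffs

for d in range(2, 7):
    print(d, eulerian_row(d))
# rows of Euler's number triangle:
# 2 [1, 1]   3 [1, 4, 1]   4 [1, 11, 11, 1]   5 [1, 26, 66, 26, 1] ...
```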

Carrying out this process for a few more powers of n, we end up with:

In general, we seem to have:

where the coefficients A(i,k) have the nice properties:

The coefficients are naturally analogous to the binomial coefficients, and can be arranged in a triangle like Pascal’s.

These coefficients are known as Eulerian numbers, and the construction above is known as Euler’s Number Triangle (not to be confused with the geometric construction called the Euler Triangle).

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*

 


The Sequence of Primes

As I make my way through Hardy & Wright’s An Introduction to the Theory of Numbers,  I am hoping to work it into my recreational math pursuits – coming up with interesting (but not too heavy) activities that correspond roughly to the material in the text.

The first two chapters are on the sequence of primes. Here’s the activity: obtain a list of primes, import them into Fathom, and construct plots that explore p_n and π(n) and other aspects of the sequence that manifest themselves in the first couple of thousand terms.

In my Fathom experiment, I imported the first 2262 prime numbers.

If you import a sequential list of primes into Fathom (under the attribute prime) and add another attribute n=caseindex, you can create two nice plots. Plot A should have prime as the x axis and n as the y axis. This shows the prime counting function π(x). To this plot you should add the function x/ln(x) and visually compare the two curves. Plot B should have the x and y axes reversed. On this graph, plotting the function y = x*ln(x) shows how closely this approximation for p_n (the nth prime) comes to the actual values.

 

You can add further attributes to look at the distance between primes dist=prime-prev(prime), and also the frequency of twin primes is_twin = (dist=2)or(next(dist)=2).

You can also add attributes to keep a running count of twin_primes, and a running average of the twin_primes. The plot above shows how the ratio of twin primes diminishes as the number of primes increases. The plot at the top of the post suggests the distribution of primes and twin primes (in blue) in the numbers up to the 2262nd prime.
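For readers without Fathom, here is a rough Python translation of the same activity (a sketch; sympy supplies the primes, matplotlib the plots, and 2262 matches the number of primes used above).

```python
import math
import matplotlib.pyplot as plt
from sympy import prime

N = 2262
primes = [prime(i) for i in range(1, N + 1)]   # p_1, ..., p_N
n = list(range(1, N + 1))

# Plot A: n = pi(p_n) against p_n, compared with x / ln(x)
plt.figure()
plt.plot(primes, n, label="pi(x) from the prime list")
plt.plot(primes, [p / math.log(p) for p in primes], label="x / ln(x)")
plt.xlabel("prime"); plt.ylabel("n"); plt.legend()

# Plot B: p_n against n, compared with n * ln(n)
plt.figure()
plt.plot(n, primes, label="p_n")
plt.plot(n[1:], [i * math.log(i) for i in n[1:]], label="n * ln(n)")
plt.xlabel("n"); plt.ylabel("p_n"); plt.legend()

# gaps between consecutive primes and the running fraction of twin primes
dist = [primes[i] - primes[i - 1] for i in range(1, N)]
running, twins = [], 0
for i, d in enumerate(dist, start=1):
    twins += (d == 2)
    running.append(twins / i)

plt.figure()
plt.plot(running)
plt.xlabel("prime index"); plt.ylabel("running fraction of twin-prime gaps")
plt.show()
```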

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*


The Integral Domain Hierarchy, Part 2

In any area of math, it’s always a good idea to keep a few counterexamples in your back pocket. Here are examples/non-examples from some of the different subsets of integral domains.

Z[i√5] is an integral domain which is not a UFD

That Z[i√5] is an integral domain is easy to check (just computation).

  • It’s not a UFD since we can write 6 = 2⋅3 = (1+i√5)(1−i√5) as two distinct factorizations into irreducibles.*

Z[x] is a UFD which is not a PID

We know Z[x] is a UFD because Z is a UFD (recall, a commutative ring R is a UFD iff R[x] is a UFD).

  • The ideal (2,x) = {2f(x)+xg(x) : f(x),g(x) ∈ Z[x]} (polynomials with even constant term) is not principal.**

Z[(1+i√19)/2] is a PID which is not a Euclidean domain

  • This is a PID since it has a Dedekind-Hasse norm (see Dummit and Foote, 3rd ed., §8.2).
  • It is not a Euclidean domain since it has no universal side divisors (ibid.).

Z is a Euclidean domain which is not a field

Z is a Euclidean domain via the absolute value norm (which gives the familiar division algorithm).

  • It is not a field since the only elements which are units are 1 and −1.

(*) Check 2, 3, 1+i√5, and 1−i√5 are indeed irreducible in Z[i√5]:

Write 2 = αβ for α,β ∈ Z[i√5]. Then α = a+ib√5 and N(α) = a²+5b² for some integers a,b. Since 4 = N(2) = N(α)N(β), we must have a²+5b² = 1, 2, or 4. Notice b = 0 must be true (since a²+5b² ∉ {1,2,4} for b ≥ 1 and for any a). Hence either α = a = 1 or 2. If α = 1, then α is a unit. If α = 2, then we must have β = 1 and so β is a unit.

  • Showing 3 is irreducible follows a similar argument.

Write 1+i√5 = αβ with α = a+ib√5, so that N(α) = a²+5b² ∈ {1,2,3,6} since 6 = N(α)N(β). Consider two cases: (case 1) If b = 0, then a² ∈ {1,2,3,6}, which is only true if a² = 1, and so α = a = ±1 is a unit. (case 2) If b > 0, we can only have b² = 1 (since b² > 1 gives a contradiction), and so a²+5 ∈ {1,2,3,6}, which implies a² = 1. Hence α = ±1±i√5 and so N(α) = 6. This implies N(β) = 1 and so β = ±1, which is a unit.

Showing 1−i√5 is irreducible follows a similar argument.
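A few lines of Python can back up the norm computations in this footnote (a sketch of mine, not part of the original post):

```python
# In Z[i*sqrt(5)] the norm of a + b*i*sqrt(5) is a^2 + 5*b^2, and it is multiplicative.
def norm(a, b):
    return a * a + 5 * b * b

# 6 = 2 * 3 = (1 + i*sqrt(5)) * (1 - i*sqrt(5)): both factorizations have norm 36
assert norm(2, 0) * norm(3, 0) == 36
assert norm(1, 1) * norm(1, -1) == 36

# No element has norm 2 or 3 (a small window suffices, since the norm grows
# with |a| and |b|), so 2, 3, and 1 ± i*sqrt(5) cannot split into non-units:
# a proper factor would need norm 2 or 3.
small = {norm(a, b) for a in range(-4, 5) for b in range(-2, 3)}
assert 2 not in small and 3 not in small
print("no elements of norm 2 or 3; the four factors are irreducible")
```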

(**) Check that the ideal (2,x) is not principal in Z[x]:

  • Suppose to the contrary that (2,x) = (f(x)) for some polynomial f(x) ∈ Z[x]. Since 2 ∈ (f(x)), we must have 2 = f(x)p(x) for some p(x) ∈ Z[x]. Hence 0 = deg f(x) + deg p(x), which implies both f(x) and p(x) are constants. In particular, since 2 = ±1⋅±2, we need f(x),p(x) ∈ {±1,±2}. If f(x) = ±1, then (f(x)) = Z[x], which is a contradiction since (f(x)) = (2,x) must be a proper ideal (not every polynomial in Z[x] has even constant term). It follows that f(x) = ±2. But since x ∈ (f(x)) as well, x = 2r(x) for some r(x) ∈ Z[x]. But of course this is impossible for any polynomial r(x) with integer coefficients. Thus (2,x) is not principal.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*

 


The Integral Domain Hierarchy, Part 1

Here is a list of some of the subsets of integral domains, along with the reasoning (a.k.a proofs) of why the bullseye below looks the way it does. Part 2 of this post will include back-pocket examples/non-examples of each.

Integral Domain: a commutative ring with 1 where the product of any two nonzero elements is always nonzero

Unique Factorization Domain (UFD): an integral domain where every nonzero element (which is not a unit) has a unique factorization into irreducibles

Principal Ideal Domain (PID): an integral domain where every ideal is generated by exactly one element

Euclidean Domain: an integral domain R with a norm N and a division algorithm (i.e. there is a norm N so that for every a,b ∈ R with b ≠ 0, there are q,r ∈ R so that a = bq + r with r = 0 or N(r) < N(b))

Field: a commutative ring where every nonzero element has an inverse

Every field is a Euclidean domain because… we can just choose the zero norm: N(r) = 0 for all r ∈ F.

Proof: Let F be a field and define a norm N so that N(r) = 0 for all r ∈ F. Then for any a,b ∈ F with b ≠ 0, we can write a = b(b⁻¹a) + 0.

Every Euclidean domain is a PID because… if I ◃ R is an arbitrary nonzero ideal in the Euclidean domain R, then I = (d), where d ∈ I is an element with the smallest norm among all elements of I. Prove this using the division algorithm on d and some a ∈ I.

Proof: Let R be a Euclidean domain with respect to the norm N and let I ◃ R be an ideal. If I = (0), then I is principal. Otherwise let d ∈ I be a nonzero element such that d has the smallest norm among all elements in I. We claim I = (d). That (d) ⊂ I is clear, so let a ∈ I. Then by the division algorithm, there exist q,r ∈ R so that a = dq + r with r = 0 or N(r) < N(d). Then r = a − dq ∈ I since a,d ∈ I. But by minimality of d, this implies r = 0. Hence a = dq ∈ (d) and so I ⊂ (d).

Every PID is a UFD because… every PID has the ascending chain condition (acc) on its ideals!* So to prove PID ⇒ UFD, just recall that an integral domain R is a UFD if and only if 1) it has the acc on principal ideals** and 2) every irreducible element is also prime.

Proof: Let R be a PID. Then 1) R has the ascending chain condition on principal ideals and 2) every irreducible element is also a prime element. Hence R is a UFD.

Every UFD is an integral domain because… by definition.

Proof: By definition.

*Def: In general, an integral domain R has the acc on its principal ideals if these two equivalent conditions are satisfied:

  1. Every ascending chain I_1 ⊂ I_2 ⊂ ⋯ of principal ideals is stationary (i.e. there is an integer n_0 ≥ 1 such that I_n = I_{n_0} for all n ≥ n_0).
  2. For every nonempty subset X ⊂ R, there is an element m ∈ X such that whenever a ∈ X and (m) ⊂ (a), then (m) = (a).

**To see this, use part 1 of the definition above. If I_1 ⊂ I_2 ⊂ ⋯ is an ascending chain, consider the union I = ⋃_{n=1}^∞ I_n. That guy must be a principal ideal (check!), say I = (m). This implies that m must live in some I_{n_0} for some n_0 ≥ 1, and so I = (m) ⊂ I_{n_0}. But since I is the union, we have, for all n ≥ n_0, (m) = I ⊃ I_n ⊃ I_{n_0} = (m). Voila!

Every field F is a PID

because the only ideals in a field are (0) and F = (1)! And every field is vacuously a UFD since all elements are units. (Recall, R is a UFD if every nonzero, non-invertible element (an element which is not a unit) has a unique factorization into irreducibles.)

In an integral domain, every maximal ideal is also a prime ideal. 

(Proof: Let R be an integral domain and M ◃ R a maximal ideal. Then R/M is a field and hence an integral domain, which implies M ◃ R is a prime ideal.)

But the converse is not true (see the counterexample below). However, the converse is true in a PID because of the added structure!

(Proof: Let R be a PID and (p) ◃ R a prime ideal for some p ∈ R. Then p is a prime – and hence an irreducible – element (prime ⇔ irreducible in PIDs). Since in an integral domain a principal ideal is maximal whenever it is generated by an irreducible element, we conclude (p) is maximal.)

This suggests that if you want to find a counterexample – an integral domain with a prime ideal which is not maximal – try to think of a ring which is not a PID: in Z[x], consider the ideal (p) for a prime integer p. Then (p) is a prime ideal, yet it is not maximal since (p) ⊂ (p,x) ⊂ Z[x].

If F is a field, then F[x] – the ring of polynomials in x with coefficients in F – is a Euclidean domain with the norm N(p(x)) = deg p(x) for p(x) ∈ F[x].

By the integral domain hierarchy above, this implies every ideal in F[x] is of the form (p(x)) (i.e. F[x] is a PID) and every polynomial can be factored uniquely into a product of prime polynomials (just like the integers)! The next bullet gives an “almost converse” statement.

If R[x] is a PID, then R must be a field.

To see this, simply observe that R ⊂ R[x], and so R must be an integral domain (since a subring of an integral domain inherits commutativity and the “no zero divisors” property). Since R[x]/(x) ≅ R, it follows that R[x]/(x) is also an integral domain. This proves that (x) is a prime ideal. But prime implies maximal in a PID! So R[x]/(x) – and therefore R – is actually a field.

  • This is how we know, for example, that Z[x] is not a PID (in the counterexample a few bullets up) – Z is not a field!
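To make the claim that F[x] is a Euclidean domain concrete, here is a small check using sympy's polynomial division over Q; the particular polynomials are arbitrary choices of mine.

```python
from sympy import symbols, div, degree

x = symbols("x")
f = x**4 + 3*x**2 + 1
g = 2*x**2 + x

q, r = div(f, g, x)                            # division algorithm in Q[x]: f = g*q + r
assert (q * g + r - f).expand() == 0
assert r == 0 or degree(r, x) < degree(g, x)   # N(r) < N(g), the Euclidean property
print("q =", q)
print("r =", r)
```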

‍For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*


Study uses topological data analysis to identify voting deserts

Credit: Unsplash/CC0 Public Domain

In past years, elections in the U.S. have been marked by stories of long waiting lines at the voting polls. Add other barriers, like long commutes and inadequate transportation, and voting can become inaccessible. But these voting deserts are difficult to quantify.

In a paper, “Persistent Homology for Resource Coverage: A Case Study of Access to Polling Sites,” published in SIAM Review, SFI External Professor Mason Porter (UCLA) and his students applied topological data analysis, which provides a set of mathematical tools that can quantify shape and structure in data, to the problem of quantifying voting deserts in LA County, Chicago, Atlanta, Jacksonville, New York City, and Salt Lake City.

Using a type of topological data analysis called persistent homology, Porter and his co-authors used estimates of average waiting times and commute times to examine where the voting deserts are located.

Applying persistent homology to a data set can reveal clusters and holes in that data, and it offers a way to measure how long those holes persist. The combination of waiting times and commute times in the data creates a pattern, with holes filling in as time passes.

The longer the hole takes to fill, the more inaccessible voting is to people in that area. “We are basically playing connect-the-dots in a more sophisticated way, trying to fill in what’s there,” says Porter.
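To give a feel for the idea, the toy example below (my own sketch, not the paper's pipeline) samples points around a ring, as if polling sites surrounded an uncovered region, and uses the third-party ripser package to compute persistent homology; the long-lived 1-dimensional feature is the "hole" that takes a long time to fill in.

```python
# Assumes the third-party packages numpy and ripser are installed.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 60)
ring = np.column_stack([np.cos(angles), np.sin(angles)])
ring += rng.normal(scale=0.05, size=ring.shape)        # jitter the "sites"

diagrams = ripser(ring, maxdim=1)["dgms"]
births_deaths = diagrams[1]                            # H1: loops / holes
lifetimes = births_deaths[:, 1] - births_deaths[:, 0]
print("longest-lived H1 feature persists for", lifetimes.max())
# A feature that takes a long time to fill in corresponds to a region that
# stays far from every site, the analogue of a voting desert in the article.
```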

Moving forward, Porter hopes to use this strategy to more accurately determine voting deserts. Identifying voting deserts could help make voting more accessible, but doing so requires better-quality data than what was available to him and his students.

“This is a proof of concept,” Porter said. “We had to make some very severe approximations, in terms of what data we had access to.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Santa Fe Institute