Theorem of Everything: The Secret That Links Numbers and Shapes

For millennia mathematicians have struggled to unify arithmetic and geometry. Now one young genius could have brought them in sight of the ultimate prize.

IF JOEY was Chloe’s age when he was twice as old as Zoe was, how many times older will Zoe be when Chloe is twice as old as Joey is now?

Or try this one for size. Two farmers inherit a square field containing a crop planted in a circle. Without knowing the exact size of the field or crop, or the crop’s position within the field, how can they draw a single line to divide both the crop and field equally?

You’ve either fallen into a cold sweat or you’re sharpening your pencil (if you can’t wait for the answer, you can check the bottom of this page). Either way, although both problems count as “maths” – or “math” if you insist – they are clearly very different. One is arithmetic, which deals with the properties of whole numbers: 1, 2, 3 and so on as far as you can count. It cares about how many separate things there are, but not what they look like or how they behave. The other is geometry, a discipline built on ideas of continuity: of lines, shapes and other objects that can be measured, and the spatial relationships between them.

Mathematicians have long sought to build bridges between these two ancient subjects, and construct something like a “grand unified theory” of their discipline. Just recently, one brilliant young researcher might have brought them decisively closer. His radical new geometrical insights might not only unite mathematics, but also help solve one of the deepest number problems of them all: the riddle of the primes. With the biggest prizes in mathematics, the Fields medals, to be awarded this August, he is beginning to look like a shoo-in.

The ancient Greek philosopher and mathematician Aristotle once wrote, “We cannot… prove geometrical truths by arithmetic.” He left little doubt that he believed geometry couldn’t help with numbers, either. It was hardly a controversial thought for the time. The geometrical proofs of Aristotle’s near-contemporary Euclid, often called the father of geometry, relied not on numbers but on logical axioms extended into proofs by drawing lines and shapes. Numbers existed on an entirely different, more abstract plane, inaccessible to geometers’ tools.

And so it largely remained until, in the 1600s, the Frenchman René Descartes used the techniques of algebra – of equation-solving and the manipulation of abstract symbols – to put Euclid’s geometry on a completely new footing. By introducing the notion that geometrical points, lines and shapes could all be described by numerical coordinates on an underlying grid, he allowed geometers to make use of arithmetic’s toolkit, and solve problems numerically.

This was a moonshot that let us, eventually, do things like send rockets into space or pinpoint positions to needle-sharp accuracy on Earth. But to a pure mathematician it is only a halfway house. A circle, for instance, can be perfectly encapsulated by an algebraic equation. But a circle drawn on graph paper, produced by plotting out the equation’s solutions, would only ever capture a fragment of that truth. Change the system of numbers you use, for example – as a pure mathematician might do – and the equation remains valid, while the drawing may no longer be helpful.

Wind forward to 1940 and another Frenchman was deeply exercised by the divide between geometry and numbers. André Weil was being held as a conscientious objector in a prison just outside Rouen, having refused to enlist in the months preceding the German occupation of France – a lucky break, as it turned out. In a letter to his wife, he wrote: “If it’s only in prison that I work so well, will I have to arrange to spend two or three months locked up every year?”

Weil hoped to find a Rosetta stone between algebra and geometry, a reference work that would allow truths in one field to be translated into the other. While behind bars, he found a fragment.

It had to do with the Riemann hypothesis, a notorious problem concerning how those most fascinating numbers, the primes, are distributed (see below). There had already been hints that the hypothesis might have geometrical parallels. Back in the 1930s, a variant had been proved for objects known as elliptic curves. Instead of trying to work out how prime numbers are distributed, says mathematician Ana Caraiani at Imperial College London, “you can relate it to asking how many points a curve has”.

Weil proved that this Riemann-hypothesis equivalent applied for a range of more complicated curves too. The wall that had stood between the two disciplines since Ancient Greek times finally seemed to be crumbling. “Weil’s proof marks the beginning of the science with the most un-Aristotelian name of arithmetic geometry,” says Michael Harris of Columbia University in New York.

The Riemann Hypothesis: The million-dollar question

The prime numbers are the atoms of the number system, integers indivisible into smaller whole numbers other than one. There are an infinite number of them and there is no discernible pattern to their appearance along the number line. But their frequency can be measured – and the Riemann hypothesis, formulated by Bernhard Riemann in 1859, predicts that this frequency follows a simple rule set out by a mathematical expression now known as the Riemann zeta function.

Since then, the validity of Riemann’s hypothesis has been confirmed numerically for the first 10 trillion non-trivial zeros of the zeta function, but an absolute proof has yet to emerge. As a mark of the problem’s importance, it was included in the list of seven Millennium Prize Problems set by the Clay Mathematics Institute in New Hampshire in 2000. Any mathematician who can tame it stands to win $1 million.

In the post-war years, in the more comfortable setting of the University of Chicago, Weil tried to apply his insight to the broader riddle of the primes, without success. The torch was taken up by Alexander Grothendieck, a mathematician ranked as one of the greatest of the 20th century. In the 1960s, he redefined arithmetic geometry.

Among other innovations, Grothendieck gave the set of whole numbers what he called a “spectrum”, or Spec(Z) for short. The points of this undrawable geometrical entity were intimately connected to the prime numbers. If you could ever work out its overall shape, you might gain insights into the prime numbers’ distribution. You would have built a bridge between arithmetic and geometry that ran straight through the Riemann hypothesis.

The shape Grothendieck was seeking for Spec(Z) was entirely different from any geometrical form we might be familiar with: Euclid’s circles and triangles, or Descartes’s parabolas and ellipses drawn on graph paper. In a Euclidean or Cartesian plane, a point is just a dot on a flat surface, says Harris, “but a Grothendieck point is more like a way of thinking about the plane”. It encompasses all the potential uses to which a plane could be put, such as the possibility of drawing a triangle or an ellipse on its surface, or even wrapping it map-like around a sphere.

If that leaves you lost, you are in good company. Even Grothendieck didn’t manage to work out the geometry of Spec(Z), let alone solve the Riemann hypothesis. That’s where Peter Scholze enters the story.


Born in Dresden in what was then East Germany in 1987, Scholze is currently, at the age of 30, a professor at the University of Bonn. He laid the first bricks for his bridge linking arithmetic and geometry in his PhD dissertation, published in 2012 when he was 24. In it, he introduced an extension of Grothendieck-style geometry, which he termed perfectoid geometry. His construction is built on a system of numbers known as the p-adics that are intimately connected with the prime numbers (see “The p-adics: A different way of doing numbers”). The key point is that in Scholze’s perfectoid geometry, a prime number, represented by its associated p-adics, can be made to behave like a variable in an equation, allowing geometrical methods to be applied in an arithmetical setting.

It’s not easy to explain much more. Scholze’s innovation represents “one of the most difficult notions ever introduced in arithmetic geometry, which has a long tradition of difficult notions”, says Harris. Even the majority of working mathematicians find most of it unintelligible, he adds.

Be that as it may, in the past few years, Scholze and a few initiates have used the approach to solve or clarify many problems in arithmetic geometry, to great acclaim. “He’s really unique as a mathematician,” says Caraiani, who has been collaborating with him. “It’s very exciting to be a mathematician working in the same field.”

This August, the world’s mathematicians are set to gather in Rio de Janeiro, Brazil, for their latest international congress, a jamboree held every four years. A centrepiece of the event is the awarding of the Fields medals. Up to four of these awards are given each time to mathematicians under the age of 40, and this time round there is one name everyone expects to be on the list. “I suspect the only way he can escape getting a Fields medal this year is if the committee decides he’s young enough to wait another four years,” says Marcus du Sautoy at the University of Oxford.

 

Peter Scholze, 30, looks like a shoo-in for mathematics’s highest accolade this summer

With so many grand vistas opening up, the question of Spec(Z) and the Riemann hypothesis almost becomes a sideshow. But Scholze’s new methods have allowed him to study the geometry, in the sense Grothendieck pioneered, that you would see if you examined the curve Spec(Z) under a microscope around the point corresponding to a prime number p. That is still a long way from understanding the curve as a whole, or proving the Riemann hypothesis, but his work has given mathematicians hope that this distant goal might yet be reached. “Even this is a huge breakthrough,” says Caraiani.

Scholze’s perfectoid spaces have enabled bridges to be built in entirely different directions, too. A half-century ago, in 1967, the then 30-year-old Princeton mathematician Robert Langlands wrote a tentative letter to Weil outlining a grand new idea. “If you are willing to read it as pure speculation I would appreciate that,” he wrote. “If not – I am sure you have a waste basket handy.”

In his letter, Langlands suggested that two entirely distinct branches of mathematics, number theory and harmonic analysis, might be related. It contained the seeds of what became known as the Langlands program, a vastly influential series of conjectures some mathematicians have taken to calling a grand unified theory capable of linking the three core mathematical disciplines: arithmetic, geometry and analysis, a broad field that we encounter in school in the form of calculus. Hundreds of mathematicians around the world, including Scholze, are committed to its completion.

The full slate of Langlands conjectures is no more likely than the original Riemann hypothesis to be proved soon. But spectacular discoveries could lie in store: Fermat’s last theorem, which took 350 years to prove before the British mathematician Andrew Wiles finally did so in 1994, represents just one particular consequence of its conjectures. Recently, the French mathematician Laurent Fargues proposed a way to build on Scholze’s work to understand aspects of the Langlands program concerned with p-adics. It is rumoured that a partial solution could appear in time for the Rio meeting.

In March, Langlands won the other great mathematical award, the Abel prize, for his lifetime’s work. “It took a long time for the importance of Langlands’s ideas to be recognised,” says Caraiani, “and they were overdue for a major award.” Scholze seems unlikely to have to wait so long.

The p-adics: A different way of doing numbers

Key to the latest work in unifying arithmetic and geometry are p-adic numbers.

These are an alternative way of representing numbers in terms of any given prime number p. To make a p-adic number from any positive integer, for example, you write that number in base p, and reverse it. So to write 20 in 2-adic form, say, you take its binary, or base-2, representation – 10100 – and write it backwards, 00101. Similarly 20’s 3-adic equivalent is 202, and as a 4-adic it is written 011.

The rules for manipulating p-adics are a little different, too. Most notably, numbers become closer as their difference grows more divisible by whatever p is. In the 5-adic numbers, for example, the equivalents of 11 and 36 are very close because their difference is divisible by 5, whereas the equivalents of 10 and 11 are further apart.
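
Both ideas are easy to try out in code. Here is a minimal Python sketch of the digit-reversal construction described above and of a p-adic distance in which numbers get closer as their difference becomes more divisible by p; the function names are ours, for illustration only:

```python
def p_adic_digits(n, p):
    """Write n in base p and reverse it: the base-p digits of n,
    least-significant first, as in 20 -> 00101 for p = 2."""
    digits = ""
    while n > 0:
        digits += str(n % p)
        n //= p
    return digits or "0"

def p_adic_distance(a, b, p):
    """p-adic distance: p**(-k), where p**k is the largest power
    of p dividing a - b. More divisible means closer."""
    if a == b:
        return 0.0
    diff, k = abs(a - b), 0
    while diff % p == 0:
        diff //= p
        k += 1
    return p ** (-k)

print(p_adic_digits(20, 2))        # 00101
print(p_adic_digits(20, 3))        # 202
print(p_adic_distance(11, 36, 5))  # 0.04 -- 36 - 11 = 25 = 5^2, so close
print(p_adic_distance(10, 11, 5))  # 1.0  -- difference 1, so far apart
```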

For decades after their invention in the 1890s, the p-adics were just a pretty mathematical toy: fun to play with, but of no practical use. But in 1920, the German mathematician Helmut Hasse came across the concept in a pamphlet in a second-hand bookshop, and became fascinated. He realised that the p-adics provided a way of harnessing the unfactorisability of the primes – the fact that they can’t be divided by other numbers – that turned into a shortcut through complicated proofs.

Since then, p-adics have played a pivotal part in the branch of maths called number theory. When Andrew Wiles proved Fermat’s infamous last theorem (that the equation x^n + y^n = z^n has no solutions when x, y and z are positive integers and n is an integer greater than 2) in the early 1990s, practically every step in the proof involved p-adic numbers.

  • Answers: Zoe will be three times as old as she is now. The farmers should draw a line across the field that connects the centre points of the field and the crop.

This article appeared in print under the headline “The shape of numbers”


*Credit for article given to Gilead Amit*


Infinity War: The Ongoing Battle Over The World’s Hardest Maths Proof

Is there an error in there somewhere?

It’s the stuff of Hollywood. Somebody somewhere is surely selling the movie rights to what’s become the biggest spat in maths: a misunderstood genius, a 500-page proof almost nobody can understand and a supporting cast squabbling over what it all means. At stake: nothing less than the future of pure mathematics.

In 2012, Shinichi Mochizuki at Kyoto University in Japan produced a proof of a long-standing problem called the ABC conjecture. Six years later the jury is still out on whether it’s correct. But in a new twist, Peter Scholze at the University of Bonn – who was awarded the Fields Medal, the highest honour in maths, in August – and Jakob Stix at Goethe University Frankfurt – who is an expert in the type of maths used by Mochizuki – claim to have found an error at the heart of Mochizuki’s proof.

Roll credits? Not so fast. The pair’s reputation means that their claim is a serious blow for Mochizuki. And a handful of other mathematicians claim to have lost the thread of the proof at the same point where Scholze and Stix say there is an error. But there is still room for dispute.

a + b = c?

The ABC conjecture was first proposed in the 1980s and concerns a fundamental property of numbers, based around the simple equation a + b = c. For a long time, mathematicians believed that the conjecture was true but nobody had ever been able to prove it.

To tackle the problem, Mochizuki had to invent a fiendish type of maths called Inter-universal Teichmüller (IUT) theory. In an effort to understand IUT better, Scholze and Stix spent a week with Mochizuki in Tokyo in March. By the end of the week, they claim to have found an error.

The alleged flaw comes in Corollary 3.12, which many see as the crux of the proof. This section involves measuring an equivalence between different mathematical objects. In effect, Scholze and Stix claim that Mochizuki changes the length of the measuring stick in the middle of the process.

No proof

“We came to the conclusion that there is no proof,” they write in their report, which was posted online on 20 September.

But Ivan Fesenko at the University of Nottingham, UK, who says he is one of only 15 people around the world who actually understand Mochizuki’s theory, thinks Scholze and Stix are jumping the gun. “They spent much less time than all of us who have been studying this for many years,” says Fesenko.

Mochizuki has tried to help others understand his work, taking part in seminars and answering questions. Mochizuki was even the one who posted Scholze and Stix’s critical report. “We have this paradoxical situation in which the victim has published the report of the villain,” says Fesenko with a laugh. “This is an unprecedented event in mathematics.”

So is the proof wrong or just badly explained? Fesenko thinks that the six-year dispute exposes something rotten at the heart of pure mathematics. These days mathematicians work in very narrow niches, he says. “People just do not understand what the mathematician in the next office to you is doing.”

This means that mathematicians will increasingly have to accept others’ proofs without actually understanding them – something Fesenko describes as a fundamental problem for the future development of mathematics.

This suggests the story of Mochizuki’s proof may forever lack a satisfactory ending – becoming a war between mathematicians that is doomed to spiral into infinity. “My honest answer is that we will never have consensus about it,” says Fesenko.


*Credit for article given to Douglas Heaven*

 


Theoretical study offers proof that one parallel world cannot be extremely different from the other

String theory, a branch of theoretical physics, predicts the existence of parallel worlds (the mirror symmetry prediction). These two worlds (the A-side and the B-side) are supposed to differ in terms of the six-dimensional spaces (A and B) hidden in each world.

However, as these spaces are extremely similar and invisible, we cannot, even in theory, distinguish them from the world that we live in. Considerable research has been conducted on the properties of space A, and it is known that extreme changes (i.e., blowing up) do not occur in it under certain conditions.

Recently, however, it has been discovered that spaces A and B can be transformed in certain ways so that their apparently different objects correspond to each other. But the nature and extent of this transformation are not well understood, and research on the properties of space B has not progressed as far.

In this new study, published in the Asian Journal of Mathematics, researchers mathematically investigated whether the behaviour of space B also has the same properties as that of space A. They transferred a known phenomenon from the A-side to the B-side and proved that blowing up does not occur even in space B under certain conditions.

This achievement provides mathematical proof for one of the previously intuitively expected similarities between the A-side and B-side. Although the researchers made some assumptions to prove this theorem, in the future, they aim to clarify whether the theorem holds even without these assumptions.


Credit of the article given to University of Tsukuba

 


Math professor’s side mirror that eliminates ‘blind spot’ receives US patent

A side-by-side comparison of a standard flat driver’s side mirror with the mirror designed by Dr. R. Andrew Hicks, mathematics professor at Drexel University. With minimal distortion, Hicks’s mirror shows a much wider field of view (the wide area to the left of the silver car seen in the distance, behind the tree, in this image). Hicks’s mirror has a field of view of about 45 degrees, compared to 15 to 17 degrees of view in a flat mirror. Hicks’s mirror received a US patent in May 2012.

A side mirror that eliminates the dangerous “blind spot” for drivers has now received a U.S. patent. The subtly curved mirror, invented by Drexel University mathematics professor Dr. R. Andrew Hicks, dramatically increases the field of view with minimal distortion.

Traditional flat mirrors on the driver’s side of a vehicle give drivers an accurate sense of the distance of cars behind them but have a very narrow field of view. As a result, there is a region of space behind the car, known as the blind spot, that drivers can’t see via either the side or rear-view mirror. It’s not hard to make a curved mirror that gives a wider field of view with no blind spot, but that comes at the cost of visual distortion, making objects appear smaller and farther away.

Hicks’s driver’s side mirror has a field of view of about 45 degrees, compared to 15 to 17 degrees of view in a flat driver’s side mirror. Unlike in simple curved mirrors that can squash the perceived shape of objects and make straight lines appear curved, in Hicks’s mirror the visual distortions of shapes and straight lines are barely detectable.

Hicks, a professor in Drexel’s College of Arts and Sciences, designed his mirror using a mathematical algorithm that precisely controls the angle of light bouncing off of the curving mirror.

“Imagine that the mirror’s surface is made of many smaller mirrors turned to different angles, like a disco ball,” Hicks said. “The algorithm is a set of calculations to manipulate the direction of each face of the metaphorical disco ball so that each ray of light bouncing off the mirror shows the driver a wide, but not-too-distorted, picture of the scene behind him.”

Hicks noted that, in reality, the mirror does not look like a disco ball up close. There are tens of thousands of such calculations to produce a mirror that has a smooth, nonuniform curve.

Hicks first described the method used to develop this mirror in Optics Letters in 2008.

In the United States, regulations dictate that cars coming off of the assembly line must have a flat mirror on the driver’s side. Curved mirrors are allowed for cars’ passenger-side mirrors only if they include the phrase “Objects in mirror are closer than they appear.”

Because of these regulations, Hicks’s mirrors will not be installed on new cars sold in the U.S. any time soon. The mirror may be manufactured and sold as an aftermarket product that drivers and mechanics can install on cars after purchase. Some countries in Europe and Asia do allow slightly curved mirrors on new cars. Hicks has received interest from investors and manufacturers who may pursue opportunities to license and produce the mirror.


Credit of the article given to Drexel University


Mathematicians use network theory to model champion Spanish soccer team’s style

Passing networks for the Netherlands and Spain drawn before the final game, using the passing data and tactical formations of the semi-finals. Image from arXiv:1206.6904v1

For years, sports fanatics have turned to statistics to help them gauge the relative strengths and weaknesses of different teams, though some sports have been more amenable to the process than others. Baseball and American football, for example, seem to have a statistic for every action that occurs on the field of play, with players ranked and rated by their numbers. International football, aka soccer, on the other hand, has generally defied such attempts, owing to there being far fewer discrete events to measure and to the continuity of play. That may change, however, as mathematicians Javier López Peña and Hugo Touchette, of University College London and Queen Mary, University of London, respectively, have applied network theory to the unique style of play of the European Championship 2012 victor, Spain. And as they describe in the paper they have uploaded to the preprint server arXiv, the graphic that results gives some clues as to why the team is considered one of the best of all time.

Anyone who has watched the Spanish team knows that their style of play is different from other teams. So much so it’s been given a name by fans: tiki-taka. It’s all about quick passes and exquisite teamwork. But trying to describe what the team does only leads to superlatives, which don’t really get to the heart of the matter. To help, Peña and Touchette turned to network theory, which makes sense, because soccer is played as a network of teammates working efficiently together.

Unfortunately, on paper, network theory tends to wind up looking like a bunch of hard-to-decipher equations, which wouldn’t help anyone except those who create them. To make their findings accessible to anyone, the two used a simple drawing: players are depicted as nodes, and the lines between the nodes capture their relationships on the team – how much passing goes on between them, how it is done, and to whom.

What shows up in the drawing first is what everyone already knows, namely, that the team passes the ball among its players a lot. More than a lot, actually. In one match during the 2010 World Cup between Spain and the Netherlands, the Spanish players out-passed their opponents 417 to 266. The drawing also highlights the fact that two players on the team are “well connected”, i.e. easy for others to get to, versus just one for the opponent.

The graphic also shows what is known as “betweenness centrality,” which is a way of measuring how much a network relies on a single node to operate at its optimum capacity. In soccer, it measures how much a team relies on an individual player. In this instance, the opponent appears far more vulnerable to disruption if that individual is covered adequately than the Spanish team does. Also implemented in the graphic is the notion of PageRank, developed by Google, which ranks the most popular web pages by their linkages. Applied to soccer, it identifies the player who is passed the ball most often by teammates. With Spain, of course, that was central midfielder Xavi.
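
For readers who want to experiment, here is a minimal sketch of both measures using Python’s networkx library on a toy passing network; the player list and pass counts below are invented for illustration and are not the data from the paper:

```python
import networkx as nx

# Toy passing network: a weighted edge (a, b, w) means player a
# completed w passes to player b. Invented numbers, not the paper's data.
passes = [
    ("Ramos", "Xavi", 25), ("Pique", "Xavi", 20),
    ("Busquets", "Xavi", 30), ("Xavi", "Iniesta", 35),
    ("Xavi", "Alonso", 28), ("Iniesta", "Villa", 18),
    ("Alonso", "Villa", 10), ("Villa", "Xavi", 5),
]
G = nx.DiGraph()
G.add_weighted_edges_from(passes)

# Betweenness centrality: how much the network relies on one player
# to connect everyone else (computed over unweighted shortest paths).
print(nx.betweenness_centrality(G))

# PageRank: players who receive many passes from well-connected
# teammates score highly -- Xavi, in this toy network.
print(nx.pagerank(G, weight="weight"))
```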

In many ways the graphic confirms what most suspect, that Spain wins more because it relies more on precise teamwork rather than the special skills of one or two superstars. In other ways though, it shows that even soccer can be made to offer up statistics if someone looks hard enough.


Credit of the article given to Bob Yirka, Phys.org


Making sports statistics more scientific

Whether it is the sprinter who finished first or the team that scored more points, it’s usually easy to determine who won a sporting event. But finding the statistics that explain why an athlete or team wins is more difficult — and major figures at the intersection of sports and numbers are determined to crack this problem.

Many statistics explain part of the picture, especially in team sports, such as the number of points scored by a point guard, a quarterback’s passing yards, or a slugger’s batting average. But many of these numbers — some of them sacred among sports fans — don’t directly address a player’s contribution to winning. This was a primary topic of discussion last weekend at the Sloan Sports Analytics Conference in Boston.

Organised by students from the MIT Sloan School of Management and sponsored by several sports-related companies, including media outlet ESPN, the conference brought together over 2,200 people to discuss player evaluation and other factors important to the business of sports.

Many of the research presentations and panel discussions described efforts to remove subjective judgments from sports statistics – and to define new statistics that more directly explain a player’s value.

“We have huge piles of statistics now,” said Bill James, Boston Red Sox official and baseball statistics pioneer, at a panel discussion about adding modern statistics to box scores. “What you have to do is reduce it to significant but small concepts,” said James.

New technology and analysis are only now making it possible to learn more about many fundamental events in several sports that are not often addressed by traditional sports statistics.

“We’re going to talk about stats that work and stats that don’t work,” said John Walsh, executive vice president of ESPN, who moderated the box score panel discussion.

The panel, which also included three other experts, cited several examples of statistics that didn’t work: a receiver might drop a pass for one of several reasons — but rarely are drops broken down into categories; an assist in basketball is a judgment call with room for different interpretations; and fielding percentage in baseball only generally describes a defensive player’s ability.

In another session, Greg Moore, the director of baseball products for the sports graphics and visualization company Sportvision, described recent data-collection advances in baseball. When all the company’s systems are fully deployed in Major League Baseball stadiums, they plan to track the trajectory of each pitch thrown, the movement of all the players on the field and the speed of every swing and hit ball. Their systems, already fully installed in some ballparks, will collect over a million data points at every game. Some of this data is publicly available.

The data will make it possible to say not just that a player hit a double or that he hit a hard line drive, but that the ball left the bat at a certain speed and launch angle and a certain number of degrees from the foul line. No scout or official scorer can contaminate those kinds of measures with subjectivity. On the other hand, a string of objective data is not inherently more useful than a flawed statistic, which may contain useful wisdom.

During the box-score panel discussion, Dean Oliver, ESPN’s sports analytics director, said that collecting information this way opens a new frontier.

“It’s an immense amount of data, but you have to know what to do with it,” said Oliver.

The winner of the conference’s research paper competition found one way to make new data useful. Using SportVU, a basketball database collected by the company STATS, a team from the University of Southern California’s computer science department studied rebounding a basketball from its absolute first concepts. The data shows the movement of all the players and the ball, including rebounds, passes and other game events.

The research team showed empirically what was previously accessible only through inference and experience. They were able to show that by the time almost all rebounds have traveled 14 feet from the hoop, they have also dropped below eight feet of elevation — easy reaching distance for a basketball player. The researchers were able to compare shot distance with rebound distance and to show where strategic changes might improve offensive rebounding success.

Rajiv Maheswaran, the researcher who presented the paper, compared the effort to find new insights about sports to astronomy. Once you start looking at the stars, he said, you make discoveries, which lead to new hypotheses and more research.


Credit of the article to Chris Gorski, Inside Science News Service


Is there a typical rate of cultural evolution?

Are cultural evolution rates similar across human societies? The emerging field of Cliodynamics uses mathematical models to study history.

Tobias Wand and colleagues used a data-driven approach to estimate the rates of cultural evolution in 23 geographic areas, using data from Seshat: Global History Databank, which records nine “complexity characteristics” for 370 polities over 10,000 years, ending in the nineteenth century. The work is published in the journal PNAS Nexus.

The complexity characteristics are polity population; extent of polity territory; the size of the largest urban center; hierarchical complexity; the presence of professional soldiers, officers, priests, bureaucrats, and judges; the variety of observable structures; characteristics of writing and recordkeeping; variety of types of texts; and monetary system.

The authors focused on 23 Natural Geographic Areas (NGAs) and restricted their analysis to the central part of each NGA’s timeline so that they could focus on periods of steady, uninterrupted growth rather than moments of invasion or civilization-ending crisis. A universal logistic growth model was fitted to a variable combining all nine complexity characteristics.
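
To make the modelling step concrete, here is a minimal Python sketch of fitting a logistic growth curve with scipy; the time series below is invented for illustration and is not Seshat data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Logistic growth: complexity rises slowly, accelerates, then saturates.
def logistic(t, L, k, t0):
    return L / (1 + np.exp(-k * (t - t0)))

# Hypothetical (years elapsed, combined complexity) points for one NGA.
t = np.array([0, 500, 1000, 1500, 2000, 2500, 3000], dtype=float)
c = np.array([0.05, 0.10, 0.25, 0.50, 0.75, 0.90, 0.95])

(L, k, t0), _ = curve_fit(logistic, t, c, p0=[1.0, 0.005, 1500.0])
print(f"ceiling L = {L:.2f}, growth rate k = {k:.4f}, midpoint t0 = {t0:.0f}")
```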

On average, it takes societies that develop in isolation about 2,500 years to reach high levels of social complexity, according to the authors. Full development of complexity was spread out over multiple cultural or institutional phases in every NGA.

According to the authors, the data suggest that there are common pathways in human social evolution across cultures.


Credit of the article given to PNAS Nexus

 


Factor Lattices

The objects pictured above are interesting structures – they are derived from the prime factorization of a given number n. They can be described in a number of ways – for example, as directed graphs. Because they are nicely structured, they actually form something more special – a lattice. Accordingly, these structures are called factor lattices.
It’s easy to start drawing these by hand following the instructions below.

1. The first node is 1
2. Draw arrows out of this node for each of the prime factors of n.
3. The arrows that you just drew should connect to nodes labelled with the prime factors of n.

Now, for each of the new nodes that you drew do the following:

4. Start from a node x that is not equal to n.
5. Draw arrows out of this node for each of the prime factors of n/x.
6. The arrows that you just drew (one for each prime factor p of n/x) should connect to nodes labelled with the numbers p*x.

7. Now repeat 4,5, and 6 for each new node that you have drawn that is not equal to n.

This process is recursive, and ends when you have the complete lattice. The process is well suited for implementation as a computer program – the images above were created in SAGE, using output from a Java program based on the algorithm above. A small sketch of the same idea appears below.
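
This sketch is not the original Java/SAGE code; it is a direct transcription of the steps above into Python, using sympy’s primefactors (a convenience assumption) to do the factoring:

```python
from sympy import primefactors

def factor_lattice(n):
    """Build the factor lattice of n as a set of directed edges
    (a, b), where b = p*a for some prime factor p of n/a."""
    edges = set()
    frontier = [1]
    while frontier:
        x = frontier.pop()
        if x == n:
            continue  # n is the top of the lattice; no arrows out
        for p in primefactors(n // x):
            edge = (x, p * x)
            if edge not in edges:
                edges.add(edge)
                frontier.append(p * x)
    return edges

# The n = 24 example worked through below:
for a, b in sorted(factor_lattice(24)):
    print(f"{b // a}: {a} -> {b}")
```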

Manually trying out the steps for a number like n = 24 goes something like this. First write out the prime factorization of 24: 24 = (2*2*2)*3 = (2^3)*3. Starting with 1, draw arrows out to 2 and 3. Now looking at each node and following the algorithm, from the 2 you will get arrows out to 4 and 6. From the 3 you will get an arrow out to 6 as well. From 4 you will get arrows out to 8 and 12. From 6 you will get an arrow out to 12 as well. From 8 and from 12 you get arrows out to 24, and you are done.

In general, the algorithm produces a lattice that can be described as follows. Each node is a factor of the given number n. Two nodes are connected by an edge if their prime factorizations differ by a single prime factor. In other words, if a and b are nodes, and p = b/a is prime, then there is an arrow p: a –> b.

It’s a good exercise to make the connections between the lattice structure and the prime factorization of a number n.

1. What does the factor lattice of a prime number look like?
2. If a number is just a power of a prime, what does its lattice look like?
3. If you know the factorization, can you find the number of nodes without drawing the lattice?

The answer to the last question (3) can be expressed as follows: if n = p1^e1 * p2^e2 * … * pk^ek is the prime factorization of n, then the number of nodes is (e1 + 1)(e2 + 1)…(ek + 1) – the familiar formula for the number of divisors of n.

For example, if n = 24 = 2^3 * 3, then the number of nodes will be (3+1)(1+1) = 8.

That these structures can be thought of as “lattices” comes from the fact that you can think of the arrows as providing an ordering of the nodes: a ≤ b whenever there is an arrow from a to b. The number 1 is always the least node in the factor lattice for n, while n itself is the greatest node. The property that actually makes these structures a “lattice” is that any two nodes always have a greatest lower bound and a least upper bound (these are often referred to as the meet and the join).

The Wolfram Demonstrations Project has a nice factor lattice demo that will draw factor lattices for a large number of integers for you. There is also a good Wikipedia entry for lattices in general.


*Credit for article given to dan.mackinnon*

 


The case for ‘math-ish’ thinking

For everyone whose relationship with mathematics is distant or broken, Jo Boaler, a professor at Stanford Graduate School of Education (GSE), has ideas for repairing it. She particularly wants young people to feel comfortable with numbers from the start—to approach the subject with playfulness and curiosity, not anxiety or dread.

“Most people have only ever experienced what I call narrow mathematics—a set of procedures they need to follow, at speed,” Boaler says. “Mathematics should be flexible, conceptual, a place where we play with ideas and make connections. If we open it up and invite more creativity, more diverse thinking, we can completely transform the experience.”

Boaler, the Nomellini and Olivier Professor of Education at the GSE, is the co-founder and faculty director of Youcubed, a Stanford research center that provides resources for math learning that has reached more than 230 million students in over 140 countries. In 2013 Boaler, a former high school math teacher, produced How to Learn Math, the first massive open online course (MOOC) on mathematics education. She leads workshops and leadership summits for teachers and administrators, and her online courses have been taken by over a million users.

In her new book, “Math-ish: Finding Creativity, Diversity, and Meaning in Mathematics,” Boaler argues for a broad, inclusive approach to math education, offering strategies and activities for learners at any age. We spoke with her about why creativity is an important part of mathematics, the impact of representing numbers visually and physically, and how what she calls “ishing” a math problem can help students make better sense of the answer.

What do you mean by ‘math-ish’ thinking?

It’s a way of thinking about numbers in the real world, which are usually imprecise estimates. If someone asks how old you are, how warm it is outside, how long it takes to drive to the airport—these are generally answered with what I call “ish” numbers, and that’s very different from the way we use and learn numbers in school.

In the book I share an example of a multiple-choice question from a nationwide exam where students are asked to estimate the sum of two fractions: 12/13 + 7/8. They’re given four choices for the closest answer: 1, 2, 19, or 21. Each of the fractions in the question is very close to 1, so the answer would be 2—but the most common answer 13-year-olds gave was 19. The second most common was 21.

I’m not surprised, because when students learn fractions, they often don’t learn to think conceptually or to consider the relationship between the numerator and the denominator. They learn rules about creating common denominators and adding or subtracting the numerators, without making sense of the fraction as a whole. But stepping back and judging whether a calculation is reasonable might be the most valuable mathematical skill a person can develop.
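
The exact arithmetic bears the “ish” answer out; a quick Python check (our illustration, not from the book):

```python
from fractions import Fraction

exact = Fraction(12, 13) + Fraction(7, 8)
print(exact, float(exact))  # 187/104, about 1.80 -- the closest choice is 2
```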

But don’t you also risk sending the message that mathematical precision isn’t important?

I’m not saying precision isn’t important. What I’m suggesting is that we ask students to estimate before they calculate, so when they come up with a precise answer, they’ll have a real sense for whether it makes sense. This also helps students learn how to move between big-picture and focused thinking, which are two different but equally important modes of reasoning.

Some people ask me, “Isn’t ‘ishing’ just estimating?” It is, but when we ask students to estimate, they often groan, thinking it’s yet another mathematical method. But when we ask them to “ish” a number, they’re more willing to offer their thinking.

Ishing helps students develop a sense for numbers and shapes. It can help soften the sharp edges in mathematics, making it easier for kids to jump in and engage. It can buffer students against the dangers of perfectionism, which we know can be a damaging mindset. I think we all need a little more ish in our lives.

You also argue that mathematics should be taught in more visual ways. What do you mean by that?

For most people, mathematics is an almost entirely symbolic, numerical experience. Any visuals are usually sterile images in a textbook, showing bisecting angles, or circles divided into slices. But the way we function in life is by developing models of things in our minds. Take a stapler: Knowing what it looks like, what it feels and sounds like, how to interact with it, how it changes things—all of that contributes to our understanding of how it works.

There’s an activity we do with middle-school students where we show them an image of a 4 x 4 x 4 cm cube made up of smaller 1 cm cubes, like a Rubik’s Cube. The larger cube is dipped into a can of blue paint, and we ask the students, if they could take apart the little cubes, how many sides would be painted blue? Sometimes we give the students sugar cubes and have them physically build a larger 4 x 4 x 4 cube. This is an activity that leads into algebraic thinking.
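
As a sketch of the arithmetic behind the activity (our code, not Youcubed’s), here is a brute-force count of painted faces for the 4 x 4 x 4 cube described above:

```python
def painted_face_counts(n):
    """For an n x n x n cube of unit cubes dipped in paint, count
    unit cubes by how many of their faces are painted (0 to 3)."""
    counts = {0: 0, 1: 0, 2: 0, 3: 0}
    for x in range(n):
        for y in range(n):
            for z in range(n):
                # Each coordinate on the boundary exposes one face.
                painted = sum(c in (0, n - 1) for c in (x, y, z))
                counts[painted] += 1
    return counts

print(painted_face_counts(4))
# {0: 8, 1: 24, 2: 24, 3: 8}: interior, face, edge and corner cubes --
# counts that students can rediscover and then generalise algebraically
# as (n-2)^3, 6(n-2)^2, 12(n-2) and 8.
```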

Some years back we were interviewing students a year after they’d done that activity in our summer camp and asked what had stayed with them. One student said, “I’m in geometry class now, and I still remember that sugar cube, what it looked like and felt like.” His class had been asked to estimate the volume of their shoes, and he said he’d imagined his shoes filled with 1 cm sugar cubes in order to solve that question. He had built a mental model of a cube.

When we learn about cubes, most of us don’t get to see and manipulate them. When we learn about square roots, we don’t take squares and look at their diagonals. We just manipulate numbers.

 

I wonder if people consider the physical representations more appropriate for younger kids.

That’s the thing—elementary school teachers are amazing at giving kids those experiences, but it dies out in middle school, and by high school it’s all symbolic. There’s a myth that there’s a hierarchy of sophistication where you start out with visual and physical representations and then build up to the symbolic. But so much of high-level mathematical work now is visual. Here in Silicon Valley, if you look at Tesla engineers, they’re drawing, they’re sketching, they’re building models, and nobody says that’s elementary mathematics.

There’s an example in the book where you’ve asked students how they would calculate 38 x 5 in their heads, and they come up with several different ways of arriving at the same answer. The creativity is fascinating, but wouldn’t it be easier to teach students one standard method?

That narrow, rigid version of mathematics where there’s only one right approach is what most students experience, and it’s a big part of why people have such math trauma. It keeps them from realizing the full range and power of mathematics. When you only have students blindly memorizing math facts, they’re not developing number sense.

They don’t learn how to use numbers flexibly in different situations. It also makes students who think differently believe there’s something wrong with them.


Credit of the article given to Stanford University


Researchers use simulations to tackle finite sphere-packing problem and ‘sausage catastrophe’

Have you ever wondered about the best way to pack a finite number of identical spheres into a shape-shifting flexible container, like a convex hull?

Researchers from the Active Soft Matter Lab at the University of Twente’s TNW Faculty, led by Dr. Hanumantha Rao Vutukuri, along with colleagues at Utrecht University, have investigated this fascinating mathematical sphere-packing problem by combining experiments and computer simulations. Their research has been published in Nature Communications.

An intuitively simple problem concerning the best way to pack a set of spheres has a long history of study dating back to the 17th century. The British sailor Sir Walter Raleigh, for instance, contemplated the issue while trying to find an efficient method for stacking cannonballs on his ship.

Later, Kepler conjectured that the densest packing for an infinite number of spheres would be the face-centered cubic (FCC) crystal structure, akin to the hexagonal arrangement of oranges and apples seen in supermarkets. Remarkably, this hypothesis was only proven in the 21st century.

The ‘sausage catastrophe’

When you have a finite number of spheres, everything gets more complicated; surprisingly, packing the “finite” spheres in a compact cluster does not always yield the densest packing. Mathematicians conjectured decades ago that a linear, sausage-like arrangement provides the best packing – though not for all numbers of spheres.

There’s a peculiar phenomenon at play: The sausage-shaped arrangement is the most efficient packing, but only with up to 55 spheres. Beyond that number, a clustered arrangement becomes the best packing. This abrupt transition is known as the “sausage catastrophe.”

In three-dimensional space, packing up to 55 spheres linearly forms a “sausage” that is denser than any cluster arrangement. However, in four dimensions, this scenario changes dramatically. About 300,000 spheres are needed for the “sausage” to transform into a spherelike cluster.
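
A back-of-the-envelope calculation shows why the linear arrangement is hard to beat for small numbers. The convex hull of n unit spheres in a row is a cylinder capped by two hemispheres, so its packing density is easy to write down; here is a short Python sketch (our illustration, with the cluster side left out, since cluster hulls are much harder to compute):

```python
import math

def sausage_density(n, r=1.0):
    """Fraction of the convex hull filled by n spheres of radius r
    packed in a line: a cylinder of length 2*r*(n - 1) plus two
    hemispherical end caps (one full sphere's worth of volume)."""
    spheres = n * (4 / 3) * math.pi * r**3
    hull = math.pi * r**2 * (2 * r * (n - 1)) + (4 / 3) * math.pi * r**3
    return spheres / hull

for n in (2, 10, 55):
    print(n, round(sausage_density(n), 4))
# Densities fall toward 2/3 as n grows -- below the ~0.7405 density of
# the infinite FCC packing, which is why clusters eventually win out.
```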

Rao was curious about whether this fundamentally intriguing problem could be observed and resolved in the lab using a model system consisting of micron-sized spherical particles (colloids) and giant unilamellar vesicles (GUVs), which serve as flexible containers; both are staple ingredients in Vutukuri’s lab.

“This curiosity led us to explore the finite sphere packing problem through experiments in 3D real space, specifically using colloids in GUVs. By varying the number of particles and the volume of the vesicles, we were able to examine the different particle arrangements inside these vesicles using a confocal microscope.

“We identified stable arrangements for specific combinations of vesicle volume and particle number: 1D (sausage), 2D (plate, with particles in one plane), and 3D (cluster). Notably, we also observed bistability; the configurations alternated between 1D and 2D arrangements or between 2D and 3D structures. However, our experiments were limited to observing a maximum of nine particles, as packing a larger number of particles resulted in the rupture of the vesicles.”

Vutukuri says that they then contacted Dijkstra’s lab at Utrecht University to delve deeper into this problem using simulations. Strikingly, the simulations predicted that packing spheres in a sausage configuration is most efficient for up to 55 spheres. However, when they attempted to pack 56 spheres into a vesicle, they discovered that a compact three-dimensional cluster was the more efficient arrangement.

Remarkably, for 57 spheres, the packing reverted to a sausage configuration. While mathematicians had previously determined that a sausage configuration is the most efficient for 58 and 64 spheres, this study contradicts that, demonstrating that compact clusters are more effective.

The findings show that the “sausage catastrophe,” a phenomenon previously described by mathematicians, is not just a theoretical scenario but can also be observed experimentally.


Credit of the article given to University of Twente