Study debunks myths about gender and math performance

A major study of recent international data on school mathematics performance casts doubt on some common assumptions about gender and math achievement — in particular, the idea that girls and women have less ability due to a difference in biology.

“We tested some recently proposed hypotheses that try to explain a supposed gender gap in math performance and found the data did not support them,” says Janet Mertz, senior author of the study and a professor of oncology at the University of Wisconsin-Madison.

Instead, the Wisconsin researchers linked differences in math performance to social and cultural factors.

The new study, by Mertz and Jonathan Kane, a professor of mathematical and computer sciences at the University of Wisconsin-Whitewater, was published in December 2011 in the Notices of the American Mathematical Society. The study looked at data from 86 countries, which the authors used to test the “greater male variability hypothesis” famously expounded in 2005 by Lawrence Summers, then president of Harvard, as the primary reason for the scarcity of outstanding women mathematicians.

That hypothesis holds that males diverge more from the mean at both ends of the distribution and, hence, are overrepresented among the highest performers. But, using the international data, the Wisconsin authors observed that greater male variation in math achievement is absent in some countries, and in others is driven mostly by boys with low scores, indicating that it relates much more to culture than to biology.
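To make the statistic concrete: the hypothesis is usually assessed with a variance ratio, the variance of boys’ scores divided by the variance of girls’. The sketch below uses invented scores, not the study’s data:

```python
import numpy as np

# Variance ratio (VR) sketch: VR = var(boys) / var(girls).
# VR > 1 means boys' scores are more spread out than girls'.
# All numbers here are invented for illustration.
rng = np.random.default_rng(42)

countries = {
    "Country A": (rng.normal(500, 100, 5000), rng.normal(500, 90, 5000)),
    "Country B": (rng.normal(490, 85, 5000), rng.normal(495, 90, 5000)),
}

for name, (boys, girls) in countries.items():
    vr = boys.var(ddof=1) / girls.var(ddof=1)
    print(f"{name}: variance ratio = {vr:.2f}")

# The study's observation: VR is not uniformly above 1 across the 86
# countries, which is hard to reconcile with a purely biological account.
```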

The new study relied on data from the 2007 Trends in International Mathematics and Science Study and the 2009 Programme for International Student Assessment.

“People have looked at international data sets for many years,” Mertz says. “What has changed is that many more non-Western countries are now participating in these studies, enabling much better cross-cultural analysis.”

The Wisconsin study also debunked the idea proposed by Steven Levitt of “Freakonomics” fame that gender inequity does not hamper girls’ math performance in Muslim countries, where most students attend single-sex schools. Levitt claimed to have disproved a prior conclusion of others that gender inequity limits girls’ mathematics performance. He suggested, instead, that Muslim culture or single-sex classrooms benefit girls’ ability to learn mathematics.

By examining the data in detail, the Wisconsin authors noted other factors at work. “The girls living in some Middle Eastern countries, such as Bahrain and Oman, had, in fact, not scored very well, but their boys had scored even worse, a result found to be unrelated to either Muslim culture or schooling in single-gender classrooms,” says Kane.

He suggests that Bahraini boys may have low average math scores because some attend religious schools whose curricula include little mathematics. Also, some low-performing girls drop out of school, making the tested sample of eighth graders unrepresentative of the whole population.

“For these reasons, we believe it is much more reasonable to attribute differences in math performance primarily to country-specific social factors,” Kane says.

To measure the status of females relative to males within each country, the authors relied on a gender-gap index, which compares the genders in terms of income, education, health and political participation. Relating these indices to math scores, they concluded that math achievement at the low, average and high end for both boys and girls tends to be higher in countries where gender equity is better. In addition, in wealthier countries, women’s participation and salary in the paid labor force were the main factors linked to higher math scores for both genders.
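A toy version of that analysis (the index values and scores below are invented, and the paper’s methodology is more involved):

```python
import numpy as np

# Invented gender-gap index values (higher = more equity) and mean math
# scores for five imaginary countries; illustration only.
gap_index   = np.array([0.59, 0.65, 0.70, 0.75, 0.82])
boys_score  = np.array([455, 470, 490, 505, 520])
girls_score = np.array([440, 462, 488, 506, 524])

print("equity vs boys:  r =", round(np.corrcoef(gap_index, boys_score)[0, 1], 2))
print("equity vs girls: r =", round(np.corrcoef(gap_index, girls_score)[0, 1], 2))
# Positive correlations for both genders mirror the "win-win" conclusion.
```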

“We found that boys — as well as girls — tend to do better in math when raised in countries where females have better equality, and that’s new and important,” says Kane. “It makes sense that when women are well-educated and earn a good income, the math scores of their children of both genders benefit.”

Mertz adds, “Many folks believe gender equity is a win-lose zero-sum game: If females are given more, males end up with less. Our results indicate that, at least for math achievement, gender equity is a win-win situation.”

U.S. students ranked only 31st on the 2009 Programme for International Student Assessment, below most Western and East Asian countries. One proposed solution, creating single-sex classrooms, is not supported by the data. Instead, Mertz and Kane recommend increasing the number of math-certified teachers in middle and high schools, decreasing the number of children living in poverty and ensuring gender equality.

“These changes would help give all children an optimal chance to succeed,” says Mertz. “This is not a matter of biology: None of our findings suggest that an innate biological difference between the sexes is the primary reason for a gender gap in math performance at any level. Rather, these major international studies strongly suggest that the math-gender gap, where it occurs, is due to sociocultural factors that differ among countries, and that these factors can be changed.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of Wisconsin-Madison


Millennium Prize: The Yang-Mills Existence and Mass Gap problem

There’s a contradiction between classical and quantum theories.

One of the outstanding discoveries made in the early part of the last century was that of the quantum behaviour of the physical world. At very short distances, such as the size of an atom and smaller, the world behaves very differently to the “classical” world we are used to.

Typical of the quantum world is so-called wave-particle duality: particles such as electrons behave sometimes as if they are point particles with a definite position, and sometimes as if they are spread out like waves.

This strange behaviour is not just of theoretical interest, since it underpins much of our modern technology. It is fundamental to the behaviour of semiconductors in all our electronic devices, the behaviour of nano-materials, and the current rise of quantum computing.

Quantum theory is fundamental. It must govern not just the very small but also the classical realm. That means physicists and mathematicians have had to develop methods not just for understanding new quantum phenomena, but also for replacing classical theories by their quantum analogues.

This is the process of [quantization](http://en.wikipedia.org/wiki/Quantization_(physics)). When we have a finite number of degrees of freedom, such as for a finite collection of particles, although the quantum behaviour is often counter-intuitive, we have a well-developed mathematical machinery to handle this quantization called quantum mechanics.
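The article keeps the mathematics offstage; for reference, this is the standard recipe (textbook material, not from the original): classical observables become operators, and Poisson brackets become commutators.

```latex
% Canonical quantization of one degree of freedom: the classical
% relation {q, p} = 1 is replaced by the operator commutator
\[
  [\hat{q}, \hat{p}] \;=\; \hat{q}\hat{p} - \hat{p}\hat{q} \;=\; i\hbar .
\]
```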

This is well understood physically and mathematically. But when we move to study the electric and magnetic fields where we have an infinite number of degrees of freedom, the situation is much more complicated. With the development of so-called quantum field theory, a quantum theory for fields, physics has made progress that mathematically we do not completely understand.

What’s the problem?

Many field theories fall into a class called gauge field theories, where a particular collection of symmetries, called the gauge group, acts on the fields and particles. In the case that these symmetries all commute, so-called abelian gauge theories, we have a reasonable understanding of the quantization.

This includes the case of the electromagnetic field, quantum electrodynamics, for which the theory makes impressively accurate predictions.

The first example of a non-abelian theory to arise historically is the theory of the electroweak interaction, which requires a mechanism to make the predicted particles massive, as we observe them in nature. This involves the so-called Higgs boson, which is currently being searched for with the Large Hadron Collider (LHC) at CERN.

The notable feature of this theory for our present discussion is that the Higgs mechanism is classical and carries over to the quantum theory under the quantization process.

The case of interest in the Millennium Problem “Yang-Mills Existence and Mass Gap” is Yang-Mills gauge theory, a non-abelian theory which we expect to describe quarks and the strong force that binds the nucleus and powers the sun. Here we encounter a contradiction between the classical and quantum theories.
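For readers who want the formula behind “non-abelian” (standard notation, not given in the article): the Yang-Mills field strength contains a commutator term that vanishes for abelian theories such as electromagnetism.

```latex
% Yang-Mills field strength and action; A is the gauge field, g the
% coupling constant. The commutator [A_mu, A_nu] vanishes in the
% abelian case, recovering the field strength of electromagnetism.
\[
  F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + g\,[A_\mu, A_\nu],
  \qquad
  S = -\frac{1}{4}\int \operatorname{Tr}\, F_{\mu\nu} F^{\mu\nu}\, \mathrm{d}^4 x .
\]
```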

The classical theory predicts massless particles and long-range forces. The quantum theory has to match the real world with short-range forces and massive particles. Physicists expect various mathematical properties such as the “mass gap” and “asymptotic freedom” to explain the non-existence of massless particles in observations of the strong interactions.

As these properties are not visible in the classical theory and arise only in the quantum theory, understanding them means we need a rigorous approach to “quantum Yang-Mills theory”. Currently we do not have the mathematics to do this, although various approximations and simplifications can be done which suggest the quantum theory has the required properties.

The Millennium Problem seeks to establish by rigorous mathematics the existence of the “mass gap” – that is, the non-existence of massless particles in Yang-Mills theory. The solution of the problem would involve an approach to quantum field theory in four dimensions that is sophisticated enough to explain at least this feature of quantum non-abelian Yang-Mills gauge theory.
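Stated precisely (paraphrasing the Clay Institute’s formulation), the mass gap is a condition on the spectrum of the Hamiltonian H of the quantum theory:

```latex
% A mass gap Delta > 0 means: apart from the vacuum at energy 0,
% there are no states with energy below Delta.
\[
  \operatorname{spec}(H) \;\subseteq\; \{0\} \cup [\Delta, \infty)
  \quad\text{for some } \Delta > 0 .
\]
```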

Doing the maths

Clearly this is of interest to physicists, but why is it of importance to mathematicians? It has become apparent in the last few decades that the tools that physicists have developed for doing quantum field theory, in particular path integrals, make precise predictions about geometry and topology, particularly in low dimensions.

But we don’t know mathematically what a path integral is, except in very simple cases. It is as if we are in a pre-Newtonian world – certain calculations can be done with certain tricks but Newton hasn’t developed calculus for us yet.

Analogously, there are calculations in geometry and topology that can be done non-rigorously using methods developed by physicists in quantum field theory which give the right answers. This suggests that there is a set of powerful techniques waiting to be discovered.

A solution to this Millennium Problem would shed light on what these new techniques are.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Michael Murray*

 


Researchers link patterns seen in spider silk, melodies

Using a new mathematical methodology, researchers at MIT have created a scientifically rigorous analogy between the physical structure of spider silk and the sonic structure of a melody, showing that the structure of each relates to its function in an equivalent way.

The step-by-step comparison begins with the primary building blocks of each item — an amino acid and a sound wave — and moves up to the level of a beta sheet nanocomposite (the secondary structure of a protein consisting of repeated hierarchical patterns) and a musical riff (a repeated pattern of notes or chords). The study explains that structural patterns are directly related to the functional properties of lightweight strength in the spider silk and, in the riff, sonic tension that creates an emotional response in the listener.

While likening spider silk to musical composition may appear to be more novelty than breakthrough, the methodology behind it represents a new approach to comparing research findings from disparate scientific fields. Such analogies could help engineers develop materials that make use of the repeating patterns of simple building blocks found in many biological materials that, like spider silk, are lightweight yet extremely failure-resistant. The work also suggests that engineers may be able to gain new insights into biological systems through the study of the structure-function relationships found in music and other art forms.

The MIT researchers — David Spivak, a postdoc in the Department of Mathematics, Associate Professor Markus Buehler of the Department of Civil and Environmental Engineering (CEE) and CEE graduate student Tristan Giesa — published their findings in the December issue of BioNanoScience.

They created the analogy using ontology logs, or “ologs,” a concept introduced about a year ago by Spivak, who specializes in a branch of mathematics called category theory. Ologs provide an abstract means for categorizing the general properties of a system — be it a material, mathematical concept or phenomenon — and showing inherent relationships between function and structure.

To build the ologs, the researchers used information from Buehler’s previous studies of the nanostructure of spider silk and other biological materials.

“There is mounting evidence that similar patterns of material features at the nanoscale, such as clusters of hydrogen bonds or hierarchical structures, govern the behaviour of materials in the natural environment, yet we couldn’t mathematically show the analogy between different materials,” Buehler says. “The olog lets us compile information about how materials function in a mathematically rigorous way and identify those patterns that are universal to a very broad class of materials. Its potential for engineering the built environment — in the design of new materials, structures or infrastructure — is immense.”

“This work is very exciting because it brings forth an approach founded on category theory to bridge music (and potentially other aspects of the fine arts) to a new field of materiomics,” says Associate Professor of Biomedical Engineering Joyce Wong of Boston University, a biomaterials scientist and engineer, as well as a musician. “This approach is particularly appropriate for the hierarchical design of proteins, as they show in the silk example. What is particularly exciting is the opportunity to reveal new relationships between seemingly disparate fields with the aim of improving materials engineering and design.”

At first glance, an olog may look deceptively simple, much like a corporate organizational chart that shows reporting relationships using directional arrows. But ologs demand scientific rigor to break a system down into its most basic structural building blocks, define the functional properties of the building blocks with respect to one another, show how function emerges through the building blocks’ interactions, and do this in a self-consistent manner. With this structure, two or more systems can be formally compared.
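To make this concrete, here is a deliberately toy sketch in Python (invented labels; a real olog carries precise functional semantics that plain dictionaries cannot express):

```python
# Toy olog sketch: types (boxes) are strings; aspects (arrows) are keyed
# by (source type, label) and point to a target type.

silk_olog = {
    ("an amino acid", "is a building block of"): "a beta-sheet nanocomposite",
    ("a beta-sheet nanocomposite", "confers"): "damage tolerance of a silk fiber",
}

music_olog = {
    ("a sound wave", "is a building block of"): "a riff",
    ("a riff", "creates"): "sonic tension in a melody",
}

# A structure-preserving map (functor-like) pairing each type in the
# silk olog with its counterpart in the music olog.
functor = {
    "an amino acid": "a sound wave",
    "a beta-sheet nanocomposite": "a riff",
    "damage tolerance of a silk fiber": "sonic tension in a melody",
}

def is_consistent(src, dst, fmap):
    """Check that every arrow in src maps to an arrow in dst."""
    mapped = {(fmap[a], fmap[b]) for (a, _), b in src.items()}
    present = {(a, b) for (a, _), b in dst.items()}
    return mapped <= present

print(is_consistent(silk_olog, music_olog, functor))  # True
```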

“The fact that a spider’s thread is robust enough to avoid catastrophic failure even when a defect is present can be explained by the very distinct material makeup of spider-silk fibers,” Giesa says. “It’s exciting to see that music theoreticians observed the same phenomenon in their field, probably without any knowledge of the concept of damage tolerance in materials. Deleting single chords from a harmonic sequence often has only a minor effect on the harmonic quality of the whole sequence.”

“The seemingly incredible gap between spider silk and music is no wider than the gap between the two disparate mathematical fields of geometry — think of triangles and spheres — and algebra, which uses variables and equations,” Spivak says. “Yet category theory’s first success, in the 1940s, was to express a rigorous mathematical analogy between these two domains and use it to prove new theorems about complex geometric shapes by importing existing theorems from algebra. It remains to be seen whether our olog will yield such striking results; however, the foundation for such an inquiry is now in place.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Denise Brehm, Massachusetts Institute of Technology


Danger of Death: Are We Programmed to Miscalculate Risk?

Our best efforts to gauge threats may be counter-productive.

Assessing risk is something everyone must do every day. Yet few are very good at it, and there are significant consequences of the public’s collective inability to accurately assess risk.

As a first and very important example, most people presume, as an indisputable fact, that the past century has been the most violent in all history — two devastating world wars, the Holocaust, the Rwandan genocide, the September 11 attacks and more — and that we live in a highly dangerous time today.

And yet, as Canadian psychologist (now at Harvard) Steven Pinker has exhaustively documented in his new book The Better Angels of Our Nature: Why Violence Has Declined, the opposite is closer to the truth, particularly when normalised by population.

As Pinker himself puts it:

“Believe it or not — and I know most people do not — violence has been in decline over long stretches of time, and we may be living in the most peaceful time in our species’ existence. The decline of violence, to be sure, has not been steady; it has not brought violence down to zero (to put it mildly); and it is not guaranteed to continue.

“But I hope to convince you that it’s a persistent historical development, visible on scales from millennia to years, from the waging of wars and perpetration of genocides to the spanking of children and the treatment of animals.”

How could the public perception be so wrong? The news media is partly to blame — good news doesn’t sell much advertising space. But the problem might go even deeper: we may be psychologically disposed to miscalculate risk, perhaps as an evolutionary response to danger.

One well-known problem is the “conjunction fallacy” — the common predilection to assign greater probability to a more specialised risk. A classic example: many people rate “death in a terrorist attack while travelling abroad” as more likely than “death from any cause while travelling abroad”, even though the former is a special case of the latter.

One indication of our inability to objectively assess risk is the fanatical and often counter-productive measures taken by parents nowadays to protect children. Some 42 years ago, 67% of American children walked or biked to school, but today only 10% do, in part stemming from a handful of highly publicised abduction incidents.

Yet the number of cases of real child abduction by strangers (as opposed to, say, a divorced parent) has dwindled from 200-300 per year in the 1990s to only about 100 per year in the US today.

Even if one assumes all of these children are harmed (which is not true), this is still only about 1/20 the risk of drowning and 1/40 of the risk of a fatal car accident.
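A back-of-envelope sketch of those ratios; the child-population figure is an assumption added here for scale, not from the article:

```python
# Rough annual risk comparison implied by the article's figures.
children   = 60_000_000          # assumed number of US children (for scale)
abductions = 100                 # stranger abductions per year (article)
drownings  = 20 * abductions     # implied by "1/20 the risk of drowning"
car_deaths = 40 * abductions     # implied by "1/40 the risk" of a car death

for label, n in [("stranger abduction", abductions),
                 ("drowning", drownings),
                 ("fatal car accident", car_deaths)]:
    print(f"{label}: roughly 1 in {children // n:,} children per year")
```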

Such considerations may not diminish the tragedy of an individual loss, but they do raise questions of priority in prevention. Governments worldwide often agonise over marginal levels of additives in certain products (Alar in apples in the 1980s and asbestos insulation in well-protected ceilings), while refusing to spend money or legislate for clear social good (smoking in the developing world, gun control, infectious disease control, needle exchange programs and working conditions in coal mines).

One completely absurd example is the recent surge of opposition in the U.S. (supposedly on health concerns) to “smart meters,” which once an hour send usage statistics to the local electric or natural gas utility.

The microwave exposure for these meters, even if you are standing just two feet from a smart meter when it broadcasts its data, is 550 times less than standing in front of an active microwave oven, up to 4,600 times less than holding a walkie-talkie at your ear, and up to 1,100 times less than holding an active cell phone at your ear.

It is even less than sitting in a WiFi cyber cafe using a laptop computer.

A much more serious example is the ongoing hysteria, especially in the UK and the US, over childhood vaccinations. Back in 1998, a study was published in the British medical journal The Lancet claiming that the measles-mumps-rubella (MMR) vaccine may be linked to autism, but other studies showed no such link.

In the meantime, many jumped on the anti-vaccination bandwagon, and several childhood diseases began to reappear, including measles in England and Wales, and whooping cough in California. We should note the rate of autism is probably increasing.

Finally, in January 2011, the British Medical Journal formally declared that the original study, which The Lancet had already retracted in 2010, was not only bad science (which had been recognised for years), but an “elaborate fraud”.

Yet nearly one year later, opposition to vaccination remains strong, and irresponsible politicians such as would-be US president Michele Bachmann cynically (or ignorantly?) milk it.

A related example is the worldwide reaction to the Fukushima reactor accident. This was truly a horrible incident, and we do not wish to detract from the death and environmental devastation that occurred. But we question decisions such as that quickly made by Germany to discontinue and dismantle its nuclear program.

Was this decision made after a sober calculation of relative risk, or simply from populist political pressure? We note this decision inevitably will mean more consumption of fossil fuels, as well as the importation of electricity from France, which is 80% nuclear.

Is this a step forward, or a step backward? We also note that concern about global warming is, if anything, more acute than ever in light of accelerating carbon consumption.

This kind of over-reaction — to which many of us are prey — is exacerbated by cynical and exploitive individuals, such as Bill and Michelle Deagle and Jeff Rense, who profit from such fears by peddling bogus medical products, speaking at conspiracy conventions for hefty fees, and charging for elite information.

This is just one instance of a large, growing and dangerous co-evolution of creationist, climate-denial and other anti-science movements.

How do we protect against such misinformation and misperceptions? The complete answers are complex but several things are clear.

First of all, science education must be augmented to address the assessment of risk. This should be a standard part of high-school mathematics, along with more attention to the information needed to make informed assessments.

Second, the press needs to be significantly more vigilant in critically commenting on dubious claims of public risk by citing literature, consulting real experts, and so on. Ideally, we would see the rise of scientifically trained and certified science journalists.

Third, mathematicians and scientists themselves need to recognise their responsibility to help the public understand risk. Failure to do so, well, poses a serious risk to society.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jonathan Borwein (Jon)*

 


Heads up Kobe Bryant! Research shows that trying for another 3-pointer is a mistake

Basketball fans everywhere recognize the following scenario: Their favourite player scores a three-point shot. A short time later he regains control of the ball. But does the fact that he scored the last time make him more likely to try another three-pointer? Does it change the probability that he will score again?

New research by Dr. Yonatan Loewenstein and graduate student Tal Neiman at the Hebrew University in Jerusalem shatters the myth that a player who scores one or more three-pointers improves his odds of scoring another.

Dr. Loewenstein is at the Edmond and Lily Safra Center for Brain Sciences and the Department of Neurobiology at the Hebrew University.

Appearing in the latest issue of the journal Nature Communications, the report raises doubts about the ability of athletes in particular, and people in general, to predict future success based on past performance.

Loewenstein and Neiman examined more than 200,000 attempted shots from 291 leading players in the National Basketball Association (NBA) in the 2007-2008 and 2008-2009 regular seasons, and more than 15,000 attempted shots by 41 leading players in the Women’s National Basketball Association (WNBA) during the 2008 and 2009 regular seasons.

The researchers studied how scores or misses affected a player’s behaviour later in the game, and found that after a successful three-pointer, players were significantly more likely to attempt another three-pointer.

In other words, a successful three point shot provided players with positive reinforcement to attempt additional three point shots later in the game.

Surprisingly, the researchers discovered the exact opposite of what players and fans tend to believe: players who scored a three-pointer and then attempted another three-pointer were more likely to miss the follow-up shot.

On the other hand, players who missed a previous three-pointer were more likely to score with their next attempt.
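The paper’s statistics are more careful than this, but the core conditional comparison can be sketched as follows (simulated shots, not the NBA data):

```python
import numpy as np

# Minimal sketch (not the authors' code): compare a player's hit rate
# after a made 3-pointer with the rate after a missed one, from a 0/1
# sequence of attempts in chronological order.
rng = np.random.default_rng(0)
shots = rng.binomial(1, 0.36, size=2000)   # simulated attempts, 36% hit rate

after_make = shots[1:][shots[:-1] == 1]
after_miss = shots[1:][shots[:-1] == 0]
print("P(make | previous make):", round(after_make.mean(), 3))
print("P(make | previous miss):", round(after_miss.mean(), 3))

# For independent simulated shots the two rates agree up to noise. The
# study found they differ in real data: after a make, players attempted
# harder shots and were more likely to miss the follow-up.
```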

According to Dr. Loewenstein, “The study shows that despite many years of intense training, even the best basketball players over-generalize from their most recent actions and their outcomes. They assume that even one shot is indicative of future performance, while not taking into account that the situation in which they previously scored is likely to be different than the current one.”

The behaviour of basketball players shows the limitations of learning from reinforcement, especially in a complex environment such as a basketball game.

“Learning from reinforcement may not improve performance, and may even damage it, if it is not based on an accurate model of the world,” explains Dr. Loewenstein. “This affects everyone’s behaviour: brokers make investments according to past market performance and commanders make military moves based on the results of past battles. Awareness of the limitations of this kind of learning can help them improve their decision-making processes — as well as those of basketball players.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Hebrew University of Jerusalem


Researchers find best routes to self-assembling 3-D shapes

This shows a few of the 2.3 million possible 2-D designs — planar nets — for a truncated octahedron. The question is: which net is best to make a self-assembling shape at the nanoscale?

Material chemists and engineers would love to figure out how to create self-assembling shells, containers or structures that could be used as tiny drug-carrying containers or to build 3-D sensors and electronic devices.

There have been some successes with simple 3-D shapes such as cubes, but the list of possible starting points that could yield the ideal self-assembly for more complex geometric configurations gets long fast. For example, while there are 11 2-D arrangements for a cube, there are 43,380 for a dodecahedron (12 equal pentagonal faces), and a truncated octahedron (14 total faces – six squares and eight hexagons) has 2.3 million possibilities.

“The issue is that one runs into a combinatorial explosion,” said Govind Menon, associate professor of applied mathematics at Brown University. “How do we search efficiently for the best solution within such a large dataset? This is where math can contribute to the problem.”

In a paper published in the Proceedings of National Academy of Sciences, researchers from Brown and Johns Hopkins University determined the best 2-D arrangements, called planar nets, to create self-folding polyhedra with dimensions of a few hundred microns, the size of a small dust particle. The strength of the analysis lies in the combination of theory and experiment. The team at Brown devised algorithms to cut through the myriad possibilities and identify the best planar nets to yield the self-folding 3-D structures. Researchers at Johns Hopkins then confirmed the nets’ design principles with experiments.

“Using a combination of theory and experiments, we uncovered design principles for optimum nets which self-assemble with high yields,” said David Gracias, associate professor of chemical and biomolecular engineering at Johns Hopkins and a co-corresponding author on the paper. “In doing so, we uncovered striking geometric analogies between natural assembly of proteins and viruses and these polyhedra, which could provide insight into naturally occurring self-assembling processes and is a step toward the development of self-assembly as a viable manufacturing paradigm.”

“This is about creating basic tools in nanotechnology,” said Menon, co-corresponding author on the paper. “It’s important to explore what shapes you can build. The bigger your toolbox, the better off you are.”

While the approach has been used elsewhere to create smaller particles at the nanoscale, the researchers at Brown and Johns Hopkins used larger sizes to better understand the principles that govern self-folding polyhedra.

The researchers sought to figure out how to self-assemble structures that resemble the protein shells viruses use to protect their genetic material. As it turns out, the shells used by many viruses are shaped like dodecahedra (a simplified version of a geodesic dome like the Epcot Center at Disney World). But even a dodecahedron can be cut into 43,380 planar nets. The trick is to find the nets that yield the best self-assembly. Menon, with the help of Brown undergraduate students Margaret Ewing and Andrew “Drew” Kunas, sought to winnow the possibilities. The group built models and developed a computer code to seek out the optimal nets, finding just six that seemed to fit the algorithmic bill.

The students got acquainted with their assignment by playing with a set of children’s toys in various geometric shapes. They progressed quickly into more serious analysis. “We started randomly generating nets, trying to get all of them. It was like going fishing in a lake and trying to count all the species of fish,” said Kunas, whose concentration is in applied mathematics. After tabulating the nets and establishing metrics for the most successful folding maneuvers, “we got lists of nets with the best radius of gyration and vertex connections, discovering which nets would be the best for production for the icosahedron, dodecahedron, and truncated octahedron for the first time.”
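As a rough illustration of one metric named above (a sketch under assumed conventions, not the published code), the radius of gyration measures how spread out a net is:

```python
import numpy as np

# Radius of gyration of a planar net, treating each face as a unit
# square centred on integer grid coordinates. Smaller values indicate
# a more compact net; compactness was one criterion for shortlisting.
def radius_of_gyration(face_centres):
    pts = np.asarray(face_centres, dtype=float)
    return np.sqrt(((pts - pts.mean(axis=0)) ** 2).sum(axis=1).mean())

# Two of the 11 planar nets of a cube:
cross_net     = [(0, 0), (0, 1), (0, 2), (0, 3), (-1, 1), (1, 1)]  # compact cross
staircase_net = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2), (3, 2)]   # stepped strip

print("cross:    ", round(radius_of_gyration(cross_net), 3))
print("staircase:", round(radius_of_gyration(staircase_net), 3))
# The cross is the more compact of the two, with the smaller value.
```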

Gracias and colleagues at Johns Hopkins, who have been working with self-assembling structures for years, tested the configurations from the Brown researchers. The nets are nickel plates with hinges that have been soldered together in various 2-D arrangements. Using the options presented by the Brown researchers, the Johns Hopkins group heated the nets to around 360 degrees Fahrenheit, the point at which surface tension between the solder and the nickel plate causes the hinges to fold upward, rotate and eventually form a polyhedron. “Quite remarkably, just on heating, these planar nets fold up and seal themselves into these complex 3-D geometries with specific fold angles,” Gracias said.

“What’s amazing is we have no control over the sequence of folds, but it still works,” Menon added.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Karolina Grabowska/Pexels


Millennium Prize: The Poincaré Conjecture

The problem’s been solved … but the sweet treats were declined.

In 1904, French mathematician Henri Poincaré asked a key question about three-dimensional spaces (“manifolds”).

Imagine a piece of rope: first a knot is tied in the rope, then the ends are glued together. This is what mathematicians call a knot. A link is a collection of knots that are tangled together.

It has been observed that DNA, which is coiled up within cells, occurs in closed knotted form.

Complex molecules such as polymers are tangled in knotted forms. There are deep connections between knot theory and ideas in mathematical physics. The outsides of a knot or link in space give important examples of three-dimensional spaces.


Back to Poincaré and his conjecture. He asked if the 3-sphere (which can be formed by either adding a point at infinity to ordinary three-dimensional Euclidean space or by gluing two solid three-dimensional balls together along their boundary 2-spheres) was the only three-dimensional space in which every loop can be continuously shrunk to a point.

Poincaré had introduced important ideas in the structure and classification of surfaces and their higher dimensional analogues (“manifolds”), arising from his work on dynamical systems.

Donuts to go, please

A good way to visualise Poincaré’s conjecture is to examine the boundary of a ball (a two-dimensional sphere) and the boundary of a donut (called a torus). Any loop of string on a 2-sphere can be shrunk to a point while keeping it on the sphere, whereas if a loop goes around the hole in the donut, it cannot be shrunk without leaving the surface of the donut.

Many attempts were made on the Poincaré conjecture, until in 2003 a wonderful solution was announced by a young Russian mathematician, Grigori “Grisha” Perelman.

This is a brief account of the ideas used by Perelman, which built on work of two other outstanding mathematicians, Bill Thurston and Richard Hamilton.

3D spaces

Thurston made enormous strides in our understanding of three-dimensional spaces in the late 1970s. In particular, he realised that essentially all the work that had been done since Poincaré fitted into a single theme.

He observed that known three-dimensional spaces could be divided into pieces in a natural way, so that each piece had a uniform geometry, similar to the flat plane and the round sphere. (To see this geometry on a torus, one must embed it into four-dimensional space!).

Thurston made a bold “geometrisation conjecture” that this should be true for all three-dimensional spaces. He had many brilliant students who further developed his theories, not least by producing powerful computer programs that could test any given space to try to find its geometric structure.

Thurston made spectacular progress on the geometrisation conjecture, which includes the Poincaré conjecture as a special case. The geometrisation conjecture predicts that any three-dimensional space in which every loop shrinks to a point should have a round metric – it would be a 3-sphere and Poincaré’s conjecture would follow.

In 1982, Richard Hamilton published a beautiful paper introducing a new technique in geometric analysis which he called Ricci flow. Hamilton had been looking for analogues of a flow of functions, so that the energy of the function decreases until it reaches a minimum. This type of flow is closely related to the way heat spreads in a material.

Hamilton reasoned that there should be a similar flow for the geometric shape of a space, rather than a function between spaces. He used the Ricci tensor, a key feature of Einstein’s field equations for general relativity, as the driving force for his flow.

He showed that, for three-dimensional spaces where the Ricci curvature is positive, the flow gradually changes the shape until the metric satisfies Thurston’s geometrisation conjecture.

Hamilton attracted many outstanding young mathematicians to work in this area. Ricci flow and other similar flows have become a huge area of research with applications in areas such as moving interfaces, fluid mechanics and computer graphics.


He outlined a marvellous program to use Ricci flow to attack Thurston’s geometrisation conjecture. The idea was to keep evolving the shape of a space under Ricci flow.

Hamilton and his collaborators found the space might form a singularity, where a narrow neck became thinner and thinner until the space splits into two smaller spaces.

Hamilton worked hard to try to fully understand this phenomenon and to allow the pieces to keep evolving under Ricci flow until the geometric structure predicted by Thurston could be found.

Perelman

This is when Perelman burst on to the scene. He had produced some brilliant results at a very young age and was a researcher at the famous Steklov Institute in St Petersburg. Perelman got a Miller fellowship to visit UC Berkeley for three years in the early 1990s.

I met him there around 1992. He then “disappeared” from the mathematical scene for nearly ten years and re-emerged to announce that he had completed Hamilton’s Ricci flow program, in a series of papers he posted on the electronic repository arXiv.

His papers created enormous excitement and within several months a number of groups had started to work through Perelman’s strategy.

Eventually everyone was convinced that Perelman had indeed succeeded and both the geometrisation and Poincaré conjecture had been solved.

Perelman was awarded a Fields medal (the mathematical equivalent of a Nobel prize) and was also offered a million dollars from the Clay Mathematics Institute for solving one of the Millennium Prize problems.

He turned down both these awards, preferring to live a quiet life in St Petersburg. Mathematicians are still finding new ways to use the solution to the geometrisation conjecture, which is one of the outstanding mathematical results of this era.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Hyam Rubinstein*

 


Statistically significant

When the statistician for UC Irvine’s innovative Down syndrome program retired last year, its researchers were left in a bind. The group is studying ways to prevent or delay the onset of Alzheimer’s-type dementia in people with Down syndrome, including examining possible links between seizures and cognitive decline.

“We were mid-study when we found ourselves with no statistician and little budget with which to pay one,” explains program manager Eric Doran.

Statistical analysis for the project was critical and especially difficult. Some of the subjects’ dementia had progressed to the point that they could no longer be tested on performance-based cognitive measures. They couldn’t respond to questions, making it hard for clinicians to evaluate them. But that resulted in missing data. How, then, could the team accurately quantify change over time and see whether seizures might play a role?

Enter Vinh Nguyen, then a doctoral student in statistics at the Donald Bren School of Information & Computer Sciences and now the new head of the UCI Center for Statistical Consulting, which aims to help researchers across campus and Orange County with such challenges. He proposed a model to gauge how quickly people were becoming untestable, instead of how fast they declined. Rather than including test scores – which would have been zero for those who couldn’t be quizzed – Nguyen designed a variable to show when they became unable to respond.
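The article doesn’t name the exact method, but “how quickly people were becoming untestable” is a classic time-to-event setup, in which participants who never become untestable are censored. Here is a minimal sketch with invented data, using the log-rank test from the lifelines library as a stand-in for whatever was actually used:

```python
import numpy as np
from lifelines.statistics import logrank_test

# Invented data: months until a participant became untestable. A 1 in
# the `obs` arrays means the event (becoming untestable) was observed;
# a 0 means the participant was still testable at last follow-up
# (censored), the key wrinkle such models must handle.
rng = np.random.default_rng(1)
t_seizure    = rng.exponential(24, 30)   # with seizures: shorter times
t_no_seizure = rng.exponential(48, 30)   # without seizures
obs_seizure    = rng.binomial(1, 0.8, 30)
obs_no_seizure = rng.binomial(1, 0.8, 30)

result = logrank_test(t_seizure, t_no_seizure,
                      event_observed_A=obs_seizure,
                      event_observed_B=obs_no_seizure)
print("log-rank p-value:", result.p_value)
# A small p-value indicates the seizure group becomes untestable sooner,
# matching the direction of the study's preliminary finding.
```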

“My part of it was to help them find a way to look at patients with and without seizures, to see if those with seizures might have a shorter time before they became untestable,” he says. “That’s what we found.”

Although the findings are preliminary, without his involvement they wouldn’t have been possible. The work resulted in a paper that has been accepted for publication in the Journal of Alzheimer’s Disease. Nguyen, as of October an assistant professor-in-residence of statistics, is a co-author.

“We’re very fortunate to have Vinh’s assistance,” Doran says. “Quite frankly, some of the statistical analysis he’s doing goes well beyond the skill level of even the most seasoned investigators. Vinh was able to pick up where our previous statistician left off, and he was pretty ingenious. His creative look at the data enabled us to complete our analysis.”

Nguyen was glad to help: “I’m excited to be involved in studies that not only advance science but also make a meaningful impact in people’s lives.”

He looks forward to doing more such work through the center, providing state-of-the-art statistical expertise in grant preparation, the design of studies and experiments, and data analysis. The center this spring will offer free statistical consulting for campus researchers via a course taught by Dr. Nguyen. Graduate students in the class will be assigned to projects based on their interests and skills.

“It’s a huge benefit to the university because it’s free, and it’s a huge benefit to the statistics graduate program because it gives our master’s and Ph.D. students a chance to exercise their knowledge and training in real-world applications,” Nguyen says. “Learning how to communicate, how to collaborate with folks outside your field – you can’t just lecture about that. It’s got to be a hands-on experience.”

Colleagues say Nguyen, 26 – whose research interests include survival analysis, robust statistical methods, sequential clinical trials and prediction – was the right choice to run the center.

“It’s a big set of responsibilities for someone so young, but he’s got the ability and maturity level to succeed,” says associate professor of statistics Dan Gillen, who directs statistics research at the Institute for Memory Impairments & Neurological Disorders. It was Gillen who introduced Nguyen, whom he was advising on his doctoral thesis, to the Down syndrome team. “Vinh understands the role of statistics across multiple branches of science, and he’s extremely good at translating a seemingly vague hypothesis into a precise statistical framework.”

A native of Vietnam, Nguyen immigrated to the United States at age 5 and grew up in Garden Grove. A true-blue Anteater, he earned all his degrees at UCI, graduating magna cum laude with a B.S. in mathematics and a B.A. in economics, then obtaining an M.S. and a Ph.D. in statistics. In 2010, he received an Achievement Rewards for College Scientists scholar award, which recognizes UCI’s academically superior doctoral students who exhibit outstanding promise as scientists, researchers and public leaders.

“I feel very fortunate to be here,” Nguyen says. “I’m honoured to be given this opportunity to lead the center and help it grow, and to work in a field and a setting that allow me to apply my knowledge.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Rizza Barnes, University of California, Irvine


Millennium Prize: P vs NP

Deciding whether a statement is true is a computational head-scratcher.

In the 1930s, Alan Turing showed there are basic tasks that are impossible to achieve by algorithmic means. In modern lingo, what he showed was that there can be no general computer program that answers yes or no to the question of whether another computer program will eventually stop when it is run.

The amazing unsolvability of this Halting Problem contains a further perplexing subtlety. While we have no way of finding in advance if a program will halt, there is an obvious way, in principle, to demonstrate that it halts if it is a halting program: run it, wait, and witness it halting!

In other words, Turing showed that, at the broadest level, deciding whether a statement is true is computationally harder than demonstrating that it’s true when it is.

A question of efficiency

Turing’s work was a pivotal moment in the history of computing. Some 80 years later, computing devices have pervaded almost every facet of society. Turing’s original “what is computable?” question has been mostly replaced by the more pertinent, “what is efficiently computable?”

But while Turing’s Halting Problem can be proved impossible in a few magical lines, the boundary between “efficient” and “inefficient” seems far more elusive. P versus NP is the most famous of a huge swathe of unresolved questions to have emerged from this modern take on Turing’s question.

So what is this NP thing?

Roughly speaking, P (standing for “polynomial time”) corresponds to the collection of computational problems that have an efficient solution. It’s only an abstract formulation of “efficient”, but it works fairly well in practice.

The class NP corresponds to the problems for which, when the answer is “yes”, there is an efficient demonstration that the answer is yes (the “N” stands for “nondeterministic”, but the description taken here is more intuitive). P versus NP simply asks if these two classes of computational problems are the same.

It’s just the “deciding versus demonstrating” issue in Turing’s original Halting Problem, but with the added condition of efficiency.

A puzzler

P certainly doesn’t look to be the same as NP. Puzzles are good examples of the general intuition here. Crossword puzzles are popular because it’s a challenge to find the solution, and humans like challenge. But no-one spends their lunchtime checking already completed crosswords: checking someone else’s solution offers nowhere near the same challenge.

Even clearer is Sudoku: again it is a genuine challenge to solve, but checking an existing solution for correctness is so routine it is devoid of entertainment value.

The P=NP possibility is like discovering that the “finding” part of these puzzles is of the same difficulty as the “checking” part. That seems hard to believe, but the truth is we do not know for sure.

This same intuition pervades an enormous array of important computational tasks for which we don’t currently have efficient algorithms. One particularly tantalising feature is that, more often than not, these problems can be shown to be maximally hard among NP problems.

These so-called “NP-complete” problems are test cases for P versus NP: if any one of them has an efficient algorithmic solution then they all do (and efficient checking is no harder than efficient finding).

But if even just one single one can be shown to have no efficient solution, then P does not equal NP (and efficient finding really is, in general, harder than efficient checking).

Here are some classic examples of NP-complete problems.

  • Partition (the dilemma of the alien pick-pockets). On an alien planet, two pick-pockets steal a wallet. To share the proceeds, they must evenly divide the money: can they do it? Standard Earth currencies evolved to have coin values designed to make this task easy, but in general this task is NP-complete. It’s in NP because, if there is an equal division of the coins, this can be easily demonstrated by simply showing the division. (Finding it is the hard part, as the code sketch after this list illustrates!)
  • Timetabling. Finding if a clash-free timetable exists is NP-complete. The problem is in NP because we can efficiently check a correct, clash-free timetable to be clash-free.
  • Travelling Salesman. A travelling salesman must visit each of some number of cities. To save costs, the salesman wants to find the shortest route that passes through all of the cities. For some given target distance “n”, is there a route of length at most “n”? The problem is in NP because a proposed route can be checked efficiently against the target distance; finding one is the hard part.
  • Short proofs. Is there a short proof for your favourite mathematical statement (a Millennium Prize problem perhaps)? With a suitable formulation of “short”, this is NP-complete. It is in NP because checking formal proofs can be done efficiently: the hard part is finding them (at least, we think that’s the hard part!).
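Here is a small sketch of the find-versus-check asymmetry for Partition (illustrative only): verifying a proposed split takes one pass over the coins, while the naive search examines every subset.

```python
from itertools import combinations

def verify(coins, subset):
    """Efficient check: does `subset` get exactly half the total?"""
    return 2 * sum(subset) == sum(coins)

def find_partition(coins):
    """Brute force: try every subset, about 2**len(coins) candidates."""
    for r in range(len(coins) + 1):
        for subset in combinations(coins, r):
            if verify(coins, subset):
                return subset
    return None

coins = [3, 1, 1, 2, 2, 1]
half = find_partition(coins)        # e.g. (3, 2), with total 5 = 10 / 2
print(half, verify(coins, half))    # checking takes a single pass
```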

In every case, we know of no efficient exact algorithm, and the nonexistence of such an algorithm is equivalent to proving P not equal to NP.

So are we close to a solution? It seems the best we know is that we don’t know much! Arguably, the most substantial advances in the P versus NP saga are curiously negative: they mostly show we cannot possibly hope to resolve P as different to NP by familiar techniques.

We know Turing’s approach cannot work. In 2007, Alexander Razborov and Steven Rudich were awarded the Gödel Prize (often touted as the Nobel Prize of computer science) for their work showing that, under widely believed cryptographic assumptions, no “natural proof” can prove P unequal to NP.

Of course, we’ll keep looking!

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Marcel Jackson*

 


Millennium Prize: The Hodge Conjecture

If one grossly divides mathematics into two parts, they would be tools for measuring and tools for recognition.

To use an analogy, tools for measuring are the technologies for collecting data about an object, the process of “taking a blurry photograph”. Tools for recognition deal with the following: if you are given a pile of data or a blurry photograph, how can the object that it came from be recognised from the data?

The Hodge Conjecture – a major unsolved problem in algebraic geometry – deals with recognition.

William Vallance Douglas Hodge was a professor at Cambridge who, in the 1940s, worked on developing a refined version of cohomology – tools for measuring flow and flux across boundaries of surfaces (for example, fluid flow across membranes).

The classical versions of cohomology are used for the understanding of the flow and dispersion of electricity and magnetism (for example, Maxwell’s equations, which describe how electric charges and currents act as origins for electric and magnetic fields). These were refined by Hodge in what is now called the “Hodge decomposition of cohomology”.

Hodge recognised that the actual measurements of flow across regions always contribute to a particular part of the Hodge decomposition, known as the (p,p) part. He conjectured that any time the data displays a contribution to the (p,p) part of the Hodge decomposition, the measurements could have come from a realistic scenario of a system of flux and change across a region.
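In standard notation (a precision the article deliberately avoids): for a smooth complex projective variety X, cohomology decomposes into pieces, and the conjecture concerns the rational classes of type (p,p).

```latex
% Hodge decomposition of the cohomology of a smooth complex projective
% variety X:
\[
  H^{n}(X,\mathbb{C}) \;=\; \bigoplus_{p+q=n} H^{p,q}(X) .
\]
% The Hodge conjecture: every rational class of type (p,p), that is,
% every element of
\[
  H^{2p}(X,\mathbb{Q}) \cap H^{p,p}(X),
\]
% is a rational linear combination of classes of algebraic cycles.
```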

Or, to put this as an analogy, one could say Hodge found a criterion to test for fraudulent data.

If Hodge’s test comes back positive, you can be sure the data is fraudulent. The question in the Hodge conjecture is whether there is any fraudulent data which Hodge’s test will not detect. So far, Hodge’s test seems to work.

But we haven’t understood well enough why it works, and so the possibility is open that there could be a way to circumvent Hodge’s security scheme.

Hodge made his conjecture in 1950, and many of the leaders in the development of geometry have worked on this basic recognition problem. The problem itself has stimulated many other refined techniques for measuring flow, flux and dispersion.

Tate’s 1963 conjecture is another similar recognition question coming out of another measurement technique, the l-adic cohomology developed by Alexander Grothendieck.

The strongest evidence in favour of the Hodge conjecture is a 1995 result of Cattani, Deligne & Kaplan which studies how the Hodge decomposition behaves as a region mutates.

Classical cohomology measurements are not affected by small mutations, but the Hodge decomposition does register mutations. The study of the Hodge decomposition across mutations provides great insight into the patterns in data that must occur in true measurements.

In the 1960s, Grothendieck initiated a powerful theory generalising the usual concept of “region” to include “virtual regions” (the theory of motives), on which one could measure “virtual temperatures” and “virtual magnetic fields”.

In a vague sense, the theory of motives is trying to attack the problem by trying to think like a hacker. The “Standard Conjectures” of Grothendieck are far-reaching generalisations of the Hodge conjecture, which try to explain which virtual regions are indistinguishable from realistic scenarios.

The question in the Hodge conjecture has stimulated the development of revolutionary tools and techniques for measurement and analysis of data across regions. These tools have been, and continue to be, fundamental for modern development.

Imagine trying to build a mobile phone without an understanding of how to measure, analyse and control electricity and magnetism. Alternatively, imagine trying to sustain an environment without a way to measure, analyse and detect the spread of toxins across regions and in waterways.

Of course, the tantalising intrigue around recognition and detection problems makes them thrilling. Great minds are drawn in and produce great advances in an effort to understand what makes it all work.

One might, very reasonably, claim that the longer the Hodge conjecture remains an unsolved problem the more good it will do for humanity, driving more and more refined techniques for measurement and analysis and stimulating the development of better and better methods for recognition of objects from the data.

The Clay Mathematics Institute was wise in pinpointing the Hodge conjecture as a problem that has the capacity to stimulate extensive development of new methods and technologies and including it as one of the Millennium problems.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Arun Ram*