Coin tosses are not 50/50: Researchers find a slight bias

Want to get a slight edge during a coin toss? Check out which side is facing upwards before the coin is flipped – then call that same side.

This tactic will win 50.8 percent of the time, according to researchers who conducted 350,757 coin flips.
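
To see why so many flips were needed, here is a quick back-of-the-envelope sketch (my own, not from the preprint) of the uncertainty around an estimate of 50.8 percent based on roughly 350,000 flips:

```python
# Back-of-the-envelope check (not from the preprint): the standard error of an
# estimated proportion shrinks like 1/sqrt(n), so ~350,000 flips pin down 50.8%
# tightly enough to distinguish it from a fair 50.0%.
import math

n = 350_757      # number of recorded flips
p_hat = 0.508    # observed share of flips landing same-side-up

se = math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"standard error ~ {se:.4f}")                          # about 0.0008
print(f"approximate 95% interval: ({low:.3f}, {high:.3f})")  # excludes 0.500
```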

For the preprint study, which was published on the arXiv database last week and has not yet been peer-reviewed, 48 people tossed coins of 46 different currencies.

They were told to flip the coins with their thumb and catch them in their hand—if the coins fell on a flat surface, that could introduce other factors such as bouncing or spinning.

Frantisek Bartos, of the University of Amsterdam in the Netherlands, told AFP that the work was inspired by 2007 research led by Stanford University mathematician Persi Diaconis—who is also a former magician.

Diaconis’ model proposed that there was a “wobble” and a slight off-axis tilt that occurs when humans flip coins with their thumb, Bartos said.

Because of this bias, they proposed it would land on the side facing upwards when it was flipped 51 percent of the time—almost exactly the same figure borne out by Bartos’ research.

While that may not seem like a significant advantage, Bartos said it was more than the edge that casinos have against “optimal” blackjack players.

It does depend on the technique of the flipper. Some people had almost no bias while others had much more than 50.8 percent, Bartos said.

For people committed to choosing either heads or tails before every toss, there was no bias for either side, the researchers found.

None of the many different coins showed any sign of bias either.

Happily, achieving a fair coin flip is simple: just make sure the person calling heads or tails cannot see which side is facing up before the toss.

‘It’s fun to do stupid stuff’

Bartos first heard of the bias theory while studying Bayesian statistics during his master’s degree and decided to test it on a massive scale.

But there was a problem: he needed people willing to toss a lot of coins.

At first, he tried to persuade his friends to flip coins over the weekend while watching “Lord of the Rings”.

“But nobody was really down for that,” he said.

Eventually Bartos managed to convince some colleagues and students to flip coins whenever possible, during lunch breaks, even while on holiday.

“It will be terrible,” he told them. “But it’s fun to do some stupid stuff from time to time.”

The flippers even held weekend-long events where they tossed coins from 9am to 9pm. A massage gun was deployed to soothe sore shoulders.

Countless decisions have been made by coin tosses throughout human history.

While writing his paper, Bartos visited the British Museum and learned that the Wright brothers used one to determine who would attempt the first plane flight.

Coin tosses have also decided numerous political races, including a tied 2013 mayoral election in the Philippines.

But they are probably most common in the field of sport. During the current Cricket World Cup, coin tosses decide which side gets to choose whether to bat or field first.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Daniel Lawler


How far away is everybody? Climbing the cosmic distance ladder

Let’s talk numbers for a moment.

The moon is approximately 384,000 kilometres away, and the sun is approximately 150 million kilometres away. The mean distance between Earth and the sun is known as the “astronomical unit” (AU). Neptune, the most distant planet, then, is 30 AU from the sun.

The nearest stars to Earth are 1,000 times more distant, roughly 4.3 light-years away (one light-year being the distance that light travels in 365.25 days – just under 10 trillion kilometres).

The Milky Way galaxy consists of some 300 billion stars in a spiral-shaped disk roughly 100,000 light-years across.

The Andromeda Galaxy, which can be seen with many home telescopes, is 2.54 million light years away. There are hundreds of billions of galaxies in the observable universe.

At present, the most distant observed galaxy is some 13.2 billion light-years away, formed not long after the Big Bang, 13.75 billion years ago (plus or minus 0.11 billion years).

The scope of the universe was illustrated by the astrophysicist Geraint Lewis in a recent Conversation article.

He noted that, if the entire Milky Way galaxy was represented by a small coin one centimetre across, the Andromeda Galaxy would be another small coin 25 centimetres away.

Going by this scale, the observable universe would extend for 5 kilometres in every direction, encompassing some 300 billion galaxies.

But how can scientists possibly calculate these enormous distances with any confidence?

Parallax

One technique is known as parallax. If you cover one eye and note the position of a nearby object, compared with more distant objects, the nearby object “moves” when you view it with the other eye. This is parallax.

The same principle is used in astronomy. As Earth travels around the sun, relatively close stars are observed to move slightly, with respect to other fixed stars that are more distant.

Distance measurements can be made in this way for stars up to about 1,000 light-years away.
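
As a rough worked example (my own sketch using the standard small-angle relation; the parallax value for Proxima Centauri is approximate):

```python
# Parallax distance, a worked example: a star's distance d satisfies
# tan(p) = 1 AU / d, where p is the measured parallax angle.
import math

AU_KM = 1.496e8              # one astronomical unit in kilometres
LIGHT_YEAR_KM = 9.461e12     # one light-year in kilometres

parallax_arcsec = 0.7685     # approximate parallax of Proxima Centauri
parallax_rad = math.radians(parallax_arcsec / 3600)

distance_km = AU_KM / math.tan(parallax_rad)
print(f"distance ~ {distance_km / LIGHT_YEAR_KM:.2f} light-years")   # roughly 4.2
```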

Standard candles

For more distant objects such as galaxies, astronomers rely on “standard candles” – bright objects that are known to have a fixed absolute luminosity (brightness).

Since light flux falls off as the square of the distance, astronomers can calculate the distance by comparing the apparent brightness measured on Earth with the known absolute luminosity.
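
As a sketch of the calculation (my own illustration; the luminosity and flux values below are made up for the example), the inverse-square law gives the distance directly:

```python
# Standard-candle distance from the inverse-square law: F = L / (4*pi*d^2), so
# d = sqrt(L / (4*pi*F)). The numbers below are illustrative, not real measurements.
import math

L = 3.0e29     # assumed absolute luminosity of the candle, in watts
F = 2.5e-12    # assumed flux measured at Earth, in watts per square metre

d_metres = math.sqrt(L / (4 * math.pi * F))
d_light_years = d_metres / 9.461e15     # one light-year is about 9.461e15 metres
print(f"distance ~ {d_light_years:,.0f} light-years")
```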

One type of standard candle, which has been used since the 1920s, is Cepheid variable stars.

Distances determined using this scheme are believed accurate to within about 7% for more nearby galaxies, and 15-20% for the most distant galaxies.

Type Ia supernovas

In recent years scientists have used Type Ia supernovas. These occur in a binary star system when a white dwarf star starts to attract matter from a larger companion star.

As the white dwarf gains more and more matter, it eventually undergoes a runaway nuclear explosion that may briefly outshine an entire galaxy.

Because this process can occur only within a very narrow range of total mass, the absolute luminosity of Type Ia supernovas is very predictable. The uncertainty in these measurements is typically 5%.

In August, worldwide attention was focused on a Type Ia supernova that exploded in the Pinwheel Galaxy (known as M101), a beautiful spiral galaxy located just above the handle of the Big Dipper in the Northern Hemisphere. This is the closest supernova to Earth since the 1987 supernova, which was visible in the Southern Hemisphere.

These and other techniques for astronomical measurements, collectively known as the “cosmic distance ladder”, are described in an excellent Wikipedia article. Such multiple schemes lend an additional measure of reliability to these measurements.

In short, distances to astronomical objects have been measured with a high degree of reliability, using calculations that mostly employ only high-school mathematics.

Thus the overall conclusion of a universe consisting of billions of galaxies, most of them many millions or even billions of light-years away, is now considered beyond reasonable doubt.

Right tools for the job

The kind of distances we’re dealing with above do cause consternation for some since, as we peer millions of light-years into space, we are also peering millions of years into the past.

Some creationists, for instance, have theorised that, in about 4,000 BCE, a Creator placed quadrillions of photons in space en route to Earth, with patterns suggestive of supernova explosions and other events millions of years ago.

Needless to say, most observers reject this notion. Kenneth Miller of Brown University commented, “Their [Creationists’] version of God is one who has filled the universe with so much bogus evidence that the tools of science can give us nothing more than a phony version of reality.”

There are plenty of things in the universe to marvel at, and plenty of tools to help us understand them. That should be enough to keep us engaged for now.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Jonathan Borwein (Jon), University of Newcastle and David H. Bailey, University of California, Davis


Hot And Bothered: The Uncertain Mathematics Of Global Warming

Uncertainty exists – but that’s no excuse for a lack of action.

These are painful times for those hoping to see an international consensus and substantive action on global warming.

In the US, Republican presidential front-runner Mitt Romney said in June 2011: “The world is getting warmer” and “humans have contributed” but in October 2011 he backtracked to: “My view is that we don’t know what’s causing climate change on this planet.”

His Republican challenger Rick Santorum added: “We have learned to be sceptical of ‘scientific’ claims, particularly those at war with our common sense” and Rick Perry, who suspended his campaign to become the Republican presidential candidate last month, stated flatly: “It’s all one contrived phony mess that is falling apart under its own weight.”

Meanwhile, the scientific consensus has moved in the opposite direction. In a study published in October 2011, 97% of climate scientists surveyed agreed global temperatures have risen over the past 100 years. Only 5% disagreed that human activity is a significant cause of global warming.

The study concluded in the following way: “We found disagreement over the future effects of climate change, but not over the existence of anthropogenic global warming.

“Indeed, it is possible that the growing public perception of scientific disagreement over the existence of anthropogenic warming, which was stimulated by press accounts of [the UK’s] ‘Climategate’, is actually a misperception of the normal range of disagreements that may persist within a broad scientific consensus.”

More progress has been made in Europe, where the EU has established targets to reduce emissions by 20% (from 1990 levels) by 2020. The UK, which has been beset by similar denial movements, was nonetheless able to establish, as a legally binding target, an 80% reduction by 2050 and is a world leader on abatement.

In Australia, any prospect for consensus was lost when Tony Abbott used opposition to the Labor government’s proposed carbon market to replace Malcolm Turnbull as leader of the Federal Opposition in late 2009.

It used to be possible to hear right-wing politicians in Australia or the USA echo the Democratic congressman Henry Waxman who said last year:

“If my doctor told me I had cancer, I wouldn’t scour the country to find someone to tell me that I don’t need to worry about it.”

But such rationality has largely left the debate in both the US and Oz. In Australia, a reformulated carbon tax policy was enacted in November only after a highly partisan debate.

In Canada, the debate is a tad more balanced. The centre-right Liberal government in British Columbia passed the first carbon tax in North America in 2008, but the governing Federal Conservative party now offers a reliable “anti-Kyoto” partnership with Washington.

Overviews of the evidence for global warming, together with responses to common questions, are available from various sources, including:

  • Seven Answers to Climate Contrarian Nonsense, in Scientific American
  • Climate change: A Guide for the Perplexed, in New Scientist
  • Cooling the Warming Debate: Major New Analysis Confirms That Global Warming Is Real, in Science Daily
  • Remind me again: how does climate change work?, on The Conversation

It should be acknowledged in these analyses that all projections are based on mathematical models with a significant level of uncertainty regarding highly complex and only partially understood systems.

As 2011 Australian Nobel-Prize-winner Brian Schmidt explained while addressing a National Forum on Mathematical Education:

“Climate models have uncertainty and the earth has natural variation … which not only varies year to year, but correlates decade to decade and even century to century. It is really hard to design a figure that shows this in a fair way — our brain cannot deal with the correlations easily.

“But we do have mathematical ways of dealing with this problem. The Australian academy reports currently indicate that the models with the effects of CO₂ are with 90% statistical certainty better at explaining the data than those without.

“Most of us who work with uncertainty know that 90% statistical uncertainty cannot be easily shown within a figure — it is too hard to see …”

“ … Since predicting the exact effects of climate change is not yet possible, we have to live with uncertainty and take the consensus view that warming can cover a wide range of possibilities, and that the view might change as we learn more.”

But uncertainty is no excuse for inaction. The proposed counter-measures (e.g. infrastructure renewal and modernisation, large-scale solar and wind power, better soil remediation and water management, not to mention carbon taxation) are affordable and most can be justified on their own merits, while the worst-case scenario — do nothing while the oceans rise and the climate changes wildly — is unthinkable.

Some in the first world protest that any green energy efforts are dwarfed by expanding energy consumption in China and elsewhere. Sure, China’s future energy needs are prodigious, but China also now leads the world in green energy investment.

By blaming others and focusing the debate on the level of human responsibility for warming and on the accuracy of predictions, the deniers have managed to derail long-term action in favour of short-term economic policies.

Who in the scientific community is promoting the denial of global warming? As it turns out, the leading figures in this movement have ties to conservative research institutes funded mostly by large corporations, and have a history of opposing the scientific consensus on issues such as tobacco and acid rain.

What’s more, those who lead the global warming denial movement – along with creationists, intelligent design writers and the “mathematicians” who flood our email inboxes with claims that pi is rational or other similar nonsense – are operating well outside the established boundaries of peer-reviewed science.

Austrian-born American physicist Fred Singer, arguably the leading figure of the denial movement, has only six peer-reviewed publications in the climate science field, and none since 1997.

After all, when issues such as these are “debated” in any setting other than a peer-reviewed journal or conference, one must ask: “If the author really has a solid argument, why isn’t he or she back in the office furiously writing up this material for submission to a leading journal, thereby assuring worldwide fame and glory, not to mention influence?”

In most cases, those who attempt to grab public attention through other means are themselves aware they are short-circuiting the normal process, and that they do not yet have the sort of solid data and airtight arguments that could withstand the withering scrutiny of scientific peer review.

When they press their views in public to a populace that does not understand how the scientific enterprise operates, they are being disingenuous.

With regards to claims scientists are engaged in a “conspiracy” to hide the “truth” on an issue such as global warming or evolution, one should ask how a secret “conspiracy” could be maintained in a worldwide, multicultural community of hundreds of thousands of competitive researchers.

As Benjamin Franklin wrote in his Poor Richard’s Almanac: “Three can keep a secret, provided two of them are dead.” Or as one of your present authors quipped, tongue-in-cheek, in response to a state legislator who was skeptical of evolution: “You have no idea how humiliating this is to me — there is a secret conspiracy among leading scientists, but no-one deemed me important enough to be included!”

There’s another way to think about such claims: we have tens of thousands of senior scientists in their late fifties or early sixties who have seen their retirement savings decimated by the recent stock market plunge. These are scientists who now wonder if the day will ever come when they are financially well-off enough to do their research without the constant stress and distraction of applying for grants (the majority of which are never funded).

All one of these scientists has to do to garner both worldwide fame and considerable fortune (through book contracts, the lecture circuit and TV deals) is to call a news conference and expose “the truth”. So why isn’t this happening?

The system of peer-reviewed journals and conferences sponsored by major professional societies is the only proper forum for the presentation and debate of new ideas, in any field of science or mathematics.

It has been stunningly successful: errors have been uncovered, fraud has been rooted out and bogus scientific claims (such as the 1903 N-ray claim, the 1989 cold fusion claim, and the more-recent assertion of an autism-vaccination link) have been debunked.

This all occurs with a level of reliability and at a speed that is hard to imagine in other human endeavours. Those who attempt to short-circuit this system are doing potentially irreparable harm to the integrity of the system.

They may enrich themselves or their friends, but they are doing grievous damage to society at large.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Jonathan Borwein (Jon) and David H. Bailey*

 


Pi Color Map

John Sims has created a number of pi-related art works. One of them, the Pi Color Map, can be recreated effectively using TinkerPlots; the version described here uses 2281 digits of pi.

Here are some instructions for creating a Pi Color Map in TinkerPlots.

1. Obtain a listing of the digits of pi – up to a reasonable number. You can get the digits from several sites, including the pi day site.

2. Paste your listing into a text document, and get the digits arranged into a single column. One strategy for doing this is to use the find/replace feature of a word processor to replace each digit with the digit itself plus a line break (e.g. in Word, replace 2 with 2^l, etc.).

3. If you’ve included the decimal point, remove it. For the first line of your document, provide a heading like pi_expansion. This will be your TinkerPlots attribute.

4. Import the text file into TinkerPlots using the File>Import menu.

5. Create a new attribute called digit whose formula is digit=concat("",pi_expansion). This creates a categorical data type that TinkerPlots won’t treat numerically; it is what you will use as your color key. Using the numeric pi_expansion attribute directly would give a spectrum of color, rather than a distinct color for each digit.

6. Create a new attribute called place, whose formula is place=caseIndex. This is what you will order your plot by.

7. Create a new plot and lock the color key on the digit attribute. Select the place attribute and press the Order By button.

8. Change your icon type to small squares, and stack the cases.

You can play with different options to get different effects for your color map.

One nice thing about doing this in TinkerPlots is that you can investigate the data further. The color map plot highlights the apparent randomness of the pi expansion, but you can also create other attributes and plots to investigate things like the running average of the digits, occurrences of consecutive digits, and the overall distribution of the digits (it should be uniform).
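
For readers without TinkerPlots, here is a minimal Python sketch (my own, assuming the mpmath, NumPy and matplotlib libraries are available) that produces a comparable colour map from the first 2281 digits of pi:

```python
# A Python alternative to the TinkerPlots workflow above (my own sketch).
import numpy as np
import matplotlib.pyplot as plt
from mpmath import mp

mp.dps = 2300                                    # compute pi to plenty of digits
digits = str(mp.pi).replace(".", "")[:2281]      # "314159..." without the decimal point

width = 57                                       # digits per row of the colour map
rows = len(digits) // width                      # drop the final partial row
grid = np.array([int(d) for d in digits[:rows * width]]).reshape(rows, width)

plt.imshow(grid, cmap="tab10")                   # a distinct colour for each digit 0-9
plt.axis("off")
plt.title(f"Pi Color Map ({rows * width} digits)")
plt.show()
```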

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*

 


Good at Sudoku? Here’s Some You’ll Never Complete

There’s far more to the popular maths puzzle than putting numbers in a box.

Last month, a team led by Gary McGuire from University College Dublin in Ireland made an announcement: they had proven you can’t have a uniquely solvable Sudoku puzzle with fewer than 17 numbers already filled in.

Unlike most mathematical announcements, this was quickly picked up by the popular scientific media. Within a few days, the new finding had been announced in Nature and other outlets.

So where did this problem come from and why is its resolution interesting?

As you probably know, the aim of a Sudoku puzzle is to complete a partially-filled nine-by-nine grid of numbers. There are some guidelines: the numbers one to nine must appear exactly once each in every row, column and three-by-three sub-grid.
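
Those guidelines are easy to state in code. Here is a small sketch (my own, unrelated to McGuire's software) that checks whether a completed nine-by-nine grid obeys them:

```python
# Check a completed Sudoku grid (a 9x9 list of lists of ints) against the rules:
# every row, column and 3x3 sub-grid must contain the digits 1-9 exactly once.
def is_valid_solution(grid):
    digits = set(range(1, 10))
    rows_ok = all(set(row) == digits for row in grid)
    cols_ok = all({grid[r][c] for r in range(9)} == digits for c in range(9))
    boxes_ok = all(
        {grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)} == digits
        for br in (0, 3, 6)
        for bc in (0, 3, 6)
    )
    return rows_ok and cols_ok and boxes_ok
```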

As with a crossword, a valid Sudoku puzzle must have a unique solution. There’s only one way to go from the initial configuration (with some numbers already filled in) to a completed grid.

Newspapers often grade their puzzles as easy, medium or hard, which will depend on how easy it is at every stage of solving the puzzle to fill in the “next” number. While a puzzle with a huge number of initial clues will usually be easy, it is not necessarily the case that a puzzle with few initial clues is difficult.

Reckon you can complete a 17-clue Sudoku puzzle? (answer below) Gordon Royle

When Sudoku-mania swept the globe in the mid-2000s, many mathematicians, programmers and computer scientists – amateur and professional – started to investigate Sudoku itself. They were less interested in solving individual puzzles, and more focused on asking and answering mathematical and/or computational questions about the entire universe of Sudoku puzzles and solutions.

As a mathematician specialising in the area of combinatorics (which can very loosely be defined as the mathematics of counting configurations and patterns), I was drawn to combinatorial questions about Sudoku.

I was particularly interested in the question of the smallest number of clues possible in a valid puzzle (that is, a puzzle with a unique solution).

In early 2005, I found a handful of 17-clue puzzles on a long-since forgotten Japanese-language website. By slightly altering these initial puzzles, I found a few more, then more, and gradually built up a “library” of 17-clue Sudoku puzzles which I made available online at the time.

Other people started to send me their 17-clue puzzles and I added any new ones to the list until, after a few years, I had collected more than 49,000 different 17-clue Sudoku puzzles.

By this time, new ones were few and far between, and I was convinced we had found almost all of the 17-clue puzzles. I was also convinced there was no 16-clue puzzle. I thought that demonstrating this would either require some new theoretical insight or clever programming combined with massive computational power, or both.

Either way, I thought proving the non-existence of a 16-clue puzzle was likely to be too difficult a challenge.

The key to McGuire’s approach was to tackle the problem indirectly. The number of essentially different completed grids (that is, completely filled-in grids, counted up to symmetry) is astronomical – 5,472,730,538 – and trying to test each of these to see whether any choice of 16 cells from the completed grid forms a valid puzzle is far too time-consuming.

Instead, McGuire and colleagues used a different, indirect approach.

An “unavoidable set” in a completed Sudoku grid is a set of cells whose entries can be rearranged to leave another valid completed Sudoku grid. For a puzzle to be uniquely completable, it must contain at least one clue from every unavoidable set.

For example, suppose a completed grid contains a configuration of ten cells whose entries can be rearranged into a second, equally valid arrangement. Then any valid Sudoku puzzle drawn from that grid must contain at least one of those ten cells as a clue. If it did not, those ten positions could be completed with either arrangement, and so the solution would not be unique.


While finding all the unavoidable sets in a given grid is difficult, it’s only necessary to find enough unavoidable sets to show that no 16 clues can “hit” them all. In the process of resolving this question, McGuire’s team developed new techniques for solving the “hitting set” problem.

It’s a problem that has many other applications – any situation in which a small set of resources must be allocated while still ensuring that all needs are met by at least one of the selected resources (i.e. “hit”) can be modelled as a hitting set problem.
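
Here is a toy sketch (my own, at a tiny made-up scale, and not McGuire's actual software) of the hitting-set question: given a family of unavoidable sets, can some choice of k cells hit them all?

```python
# Brute-force hitting-set check at toy scale (the unavoidable sets below are invented;
# real Sudoku instances are vastly larger and need the clever techniques described above).
from itertools import combinations

def has_hitting_set(unavoidable_sets, cells, k):
    """Is there a k-subset of `cells` that meets every unavoidable set?"""
    for candidate in combinations(cells, k):
        chosen = set(candidate)
        if all(chosen & s for s in unavoidable_sets):
            return True
    return False

unavoidable = [{0, 1}, {2, 3}, {1, 4, 5}]      # three made-up unavoidable sets
cells = sorted(set().union(*unavoidable))

print(has_hitting_set(unavoidable, cells, 1))  # False: no single cell hits all three
print(has_hitting_set(unavoidable, cells, 2))  # True: e.g. cells {1, 2}
```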

Once the theory and software was in place, it was then a matter of running the programs for each of the 5.5 billion completed grids. As you can imagine, this required substantial computing power.

After 7 million core-hours of CPU time on a supercomputer (the equivalent of a single processor core running for 7 million hours, or roughly 800 years) and a year of actual elapsed time, the result was announced a few weeks ago, on New Year’s Day.

So is it correct?

The results of any huge computation should be evaluated with some caution, if not outright suspicion, especially when the answer is simply “no, doesn’t exist”, because there are many possible sources of error.

But in this case, I feel the result is far more likely to be correct than otherwise, and I expect it to be independently verified before too long. In addition, McGuire’s team built on many different ideas, discussions and computer programs that were thrashed out among interested contributors to various online forums devoted to the mathematics of Sudoku. In this respect, many of the basic components of their work have already been thoroughly tested.

Solution to the 17-clue Sudoku puzzle, above. Gordon Royle

And so back to the question: why is the resolution of this problem interesting? And is it important?

Certainly, knowing that the smallest Sudoku puzzles have 17 clues is not in itself important. But the immense popularity of Sudoku meant that this question was popularised in a way that many similar questions have never been, and so it took on a special role as a “challenge question” testing the limits of human knowledge.

The school students to whom I often give outreach talks have no real concept of the limitations of computers and mathematics. In my past talks, these students were almost always astonished to learn that the answer to such a simple question was just not known.

And now, in my future outreach talks, I will describe how online collaboration, theoretical development and significant computational power were combined to solve this problem, and how this process promises to play an increasing role in the future development of mathematics.

 

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Gordon Royle*

 


Wrappers delight: The Easter egg equation you never knew you needed

This Easter season, as you tear open those chocolate eggs, have you ever wondered why they’re snugly wrapped in foil? It turns out the answer lies in the Easter egg equation.

Mathematician Dr. Saul Schleimer, from the University of Warwick, sheds light on the delightful connection between Easter egg wrapping and mathematical curvature.

“When you wrap an egg with foil, there are always wrinkles in the foil. This doesn’t happen when you wrap a box. The reason is that foil has zero Gaussian curvature (a measure of flatness), while an egg has (variable) positive curvature. Perfect wrapping (without wrinkles) requires that the curvatures match,” explains Professor Schleimer.

So, unlike flat surfaces, eggs have variable positive curvature, making them challenging to wrap without creases or distortions. Foil, with its flat surface and zero Gaussian curvature, contrasts sharply with the egg’s curved shape.
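
For the mathematically curious, here is the standard textbook statement behind that remark (my own summary, not part of the original article):

```latex
\[
  K \;=\; \kappa_1 \kappa_2 , \qquad
  K_{\text{flat foil}} = 0 , \qquad
  K_{\text{sphere of radius } R} = \frac{1}{R^{2}} > 0 .
\]
% Gaussian curvature K is the product of the two principal curvatures. Gauss's
% Theorema Egregium says that bending without stretching preserves K, so a flat
% (K = 0) sheet can never exactly match a positively curved (K > 0) egg:
% wrinkles are unavoidable.
```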

Attempting to wrap an egg with paper, which also lacks the required curvature, would result in unsightly wrinkles and a less-than-ideal presentation. Therefore, by using tin foil, we can harmonize the egg’s curvature with the wrapping material, achieving a snug fit without compromising its shape, thus showcasing the delightful intersection of mathematics and Easter traditions.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of Warwick

 


‘Math anxiety’ causes students to disengage, says study

A new Sussex study has revealed that “math anxiety” can lead to disengagement and create significant barriers to learning. According to charity National Numeracy, more than one-third of adults in the U.K. report feeling worried or stressed when faced with math, a condition known as math anxiety.

The new paper, titled "Understanding mathematics anxiety: loss aversion and student engagement" and published in Teaching Mathematics and its Applications, finds that teaching that relies on negative framing, such as punishing students for failure or humiliating them for being disengaged, is more likely to exacerbate math anxiety and disengagement.

The paper says that in order to successfully engage students in math, educators and parents must build a safe environment for trial and error, allow students space to make mistakes, and stop learners from reaching the point where the threat of failure becomes debilitating.

Author Dr. C. Rashaad Shabab, Reader in Economics at the University of Sussex Business School, said, “As the government seeks to implement universal math education throughout higher secondary school, potentially a million more people will be required to study math who might otherwise have chosen not to.

“The results of this study deliver important guiding principles and interventions to educators and parents alike who face the prospect of teaching math to children who might be a little scared of it and so are at heightened risk of developing mathematics anxiety.

“Teachers should tell students to look at math as a puzzle, or a game. If we put a piece of a puzzle in the wrong place, we just pick it up and try again. That’s how math should feel. Students should be told that it’s okay to get it wrong, and in fact that getting it wrong is part of how we learn math. They should be encouraged to track their own improvement over time, rather than comparing their achievements with other classmates.

“All of these interventions basically take the ‘sting’ out of getting it wrong, and it’s the fear of that ‘sting’ that keeps students from engaging. The findings could pave the way for tailored interventions to support students who find themselves overwhelmed by the fear of failure.”

Using behavioural economics, which combines elements of economics and psychology to understand how and why people behave the way they do, the research, from the University of Sussex’s Business School, identifies math anxiety as a reason why even dedicated students can become disengaged. This often results in significant barriers to learning, both for the individual in question and others in the classroom.

The paper goes on to say that modern technology and elements of video game design can help those struggling with mathematics anxiety through a technique called “dynamic difficulty adjustment.” This would allow the development of specialist mathematics education computer programs to match the difficulty of math exercises to the ability of each student. Such a technique, if adopted, would keep the problems simple enough to avoid triggering anxiety, but challenging enough to improve learning.
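
As a toy illustration of that idea (my own sketch, not taken from the study or any particular program), a dynamic difficulty adjuster can be as simple as nudging the level up after correct answers and down after mistakes:

```python
# Toy dynamic difficulty adjustment: keep exercises challenging but below the
# learner's frustration point by moving one step up or down after each answer.
def adjust_difficulty(level, was_correct, step=1, minimum=1, maximum=10):
    """Return the next difficulty level on a 1-10 scale."""
    if was_correct:
        return min(level + step, maximum)
    return max(level - step, minimum)

level = 5
for correct in [True, True, False, True, False, False]:
    level = adjust_difficulty(level, correct)
    print(level)   # 6, 7, 6, 7, 6, 5
```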

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Tom Walters, University of Sussex

 


Counting by tens shows a sophistication of young children’s understanding of number concepts, study finds

Understanding how children learn to count can have profound impacts on the kinds of instructional materials used in the classroom. And the way those materials are designed can shape the strategies children use to learn, according to a new paper led by Concordia researchers.

Writing in the journal School Science and Mathematics, the authors study how young children, mostly in the first grade, used a hundreds table to perform age-appropriate counting tasks. Hundreds tables, as the name suggests, are charts divided into rows and columns of 10, with each square containing a number from one to 100. The researchers discovered that the children who counted left-to-right, top-to-bottom outperformed children who counted left-to-right, bottom-to-top.

In this study, children used the tables on a screen to solve addition problems. One group of children used a top-down table, where the top left corner was marked 1 and the bottom right corner was marked 100. Another group used a bottom-up table, where 1 occupied the bottom left corner and 100 the top right. A third group used a bottom-up table with a visual cue of a cylinder next to it. The cylinder was designed to show the “up-is-more” relation: it filled with water as the numbers increased moving up the table.

“We found that children using the top-down chart used a more sophisticated strategy of counting by 10 and moving vertically, rather than using the more simplistic strategy of counting by one and moving horizontally,” says Vera Wagner. She co-authored the paper with Helena Osana, a professor in the Department of Education in the Faculty of Arts and Science, and Jairo Navarrete-Ulloa of O’Higgins University in Chile.

The authors believe the benefits of the top-down table could be related to the way children learn to read and that they are applying the same approach to base-ten concepts.

“We were working with young children, so reading instruction is likely at the forefront of their attention,” says Wagner, who now teaches elementary students at a Montreal-area school. “The structure of moving in that particular way might be more ingrained.”

The power of spatial configuration

Osana notes that the practice of counting by 10s rather than by ones—which is a more efficient method of arriving at the same answer—is an example of unitizing, in which multiples of one unit form a new unit representing a larger number.

“From a theoretical perspective, the study shows that the spatial configuration of instructional materials can actually support this more sophisticated understanding of numbers and the unitizing aspect that goes along with it,” she says.

While the researchers are not suggesting children will automatically gravitate toward the top-down chart under every circumstance, they do think the study’s results provide educators with a sense of the ways their students process numbers and addition.

“It is important for teachers to be aware of how children are thinking about the tools we are giving them,” says Osana, principal investigator of the Mathematics Teaching and Learning Lab. “We are not saying that teachers have to use the top-down hundreds chart every time, but they should think about the strategies their students are using and why they use them with one particular instructional tool and not another.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Patrick Lejtenyi, Concordia University

 


Viewing Matrices & Probability as Graphs

Today I’d like to share an idea. It’s a very simple idea. It’s not fancy and it’s certainly not new. In fact, I’m sure many of you have thought about it already. But if you haven’t—and even if you have!—I hope you’ll take a few minutes to enjoy it with me. Here’s the idea: a matrix is a graph.

So simple! But we can get a lot of mileage out of it.

To start, I’ll be a little more precise: every matrix corresponds to a weighted bipartite graph. By “graph” I mean a collection of vertices (dots) and edges; by “bipartite” I mean that the dots come in two different types/colors; by “weighted” I mean each edge is labeled with a number.

The graph for a 3×2 matrix M looks like this: three green dots—one for each row of M—and two pink dots—one for each column of M—with an edge between a green dot and a pink dot whenever the corresponding entry in M is non-zero.

For example, there’s an edge between the second green dot and the first pink dot because M21 = 4, the entry in the second row, first column of M, is not zero. Moreover, I’ve labeled that edge with that non-zero number. On the other hand, there is no edge between the first green dot and the second pink dot because M12, the entry in the first row, second column of the matrix, is zero.

Allow me to describe the general set-up a little more explicitly.

Any matrix M is an array of n×m numbers. That’s old news, of course. But such an array can also be viewed as a function M : X × Y → R, where X = {x1, …, xn} is a set of n elements and Y = {y1, …, ym} is a set of m elements. Indeed, if I want to describe the matrix M to you, then I need to tell you what each of its ij-th entries is. In other words, for each pair of indices (i, j), I need to give you a real number Mij. But that’s precisely what a function does! A function M : X × Y → R associates to every pair (xi, yj) (if you like, just drop the letters and think of this as (i, j)) a real number M(xi, yj). So simply write Mij for M(xi, yj).

Et voila. A matrix is a function.
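
Here is a minimal Python sketch (my own, not from the original post) that makes the correspondence concrete by listing the weighted edges of the bipartite graph for a 3×2 matrix. Apart from the entries M21 = 4 and M12 = 0 mentioned above, the matrix values are made up for illustration.

```python
# Matrix <-> weighted bipartite graph correspondence, using plain dictionaries.
# Only M21 = 4 and M12 = 0 come from the text above; the other entries are invented.
import numpy as np

M = np.array([[3, 0],
              [4, 1],
              [0, 2]])   # a 3x2 matrix: 3 row-vertices (green), 2 column-vertices (pink)

def matrix_to_bipartite_graph(M):
    """Return weighted edges {(row_i, col_j): M[i, j]} for every non-zero entry."""
    edges = {}
    n_rows, n_cols = M.shape
    for i in range(n_rows):
        for j in range(n_cols):
            if M[i, j] != 0:
                edges[(f"r{i+1}", f"c{j+1}")] = int(M[i, j])
    return edges

print(matrix_to_bipartite_graph(M))
# {('r1', 'c1'): 3, ('r2', 'c1'): 4, ('r2', 'c2'): 1, ('r3', 'c2'): 2}
```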

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*


Time to abandon null hypothesis significance testing? Moving beyond the default approach

Researchers from Northwestern University, University of Pennsylvania, and University of Colorado published a new Journal of Marketing study that proposes abandoning null hypothesis significance testing (NHST) as the default approach to statistical analysis and reporting.

The study is titled “‘Statistical Significance’ and Statistical Reporting: Moving Beyond Binary” and is authored by Blakeley B. McShane, Eric T. Bradlow, John G. Lynch, Jr., and Robert J. Meyer.

Null hypothesis significance testing (NHST) is the default approach to statistical analysis and reporting in marketing and, more broadly, in the biomedical and social sciences. As practiced, NHST involves

  1. assuming that the intervention under investigation has no effect along with other assumptions,
  2. computing a statistical measure known as a P-value based on these assumptions, and
  3. comparing the computed P-value to the arbitrary threshold value of 0.05.

If the P-value is less than 0.05, the effect is declared “statistically significant,” the assumption of no effect is rejected, and it is concluded that the intervention has an effect in the real world. If the P-value is above 0.05, the effect is declared “statistically nonsignificant,” the assumption of no effect is not rejected, and it is concluded that the intervention has no effect in the real world.
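
For concreteness, here is a minimal sketch (my own, not from the paper) of that three-step recipe, using a two-sample t-test from SciPy and the conventional 0.05 threshold:

```python
# A minimal sketch of the NHST recipe described above (my own illustration; the data
# are simulated and the 0.4 "true effect" is an assumption made for the example).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, 40)      # outcomes without the intervention
treatment = rng.normal(0.4, 1.0, 40)    # outcomes with an assumed true effect of 0.4

# Steps 1-2: compute a P-value under the assumption of no effect.
p_value = stats.ttest_ind(treatment, control).pvalue

# Step 3: compare against the arbitrary 0.05 threshold and issue a binary verdict.
verdict = "statistically significant" if p_value < 0.05 else "statistically nonsignificant"
print(f"p = {p_value:.3f} -> declared {verdict}")
```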

Criticisms of NHST

Despite its default role, NHST has long been criticized by both statisticians and applied researchers, including those within marketing. The most prominent criticisms relate to the dichotomization of results into “statistically significant” and “statistically nonsignificant.”

For example, authors, editors, and reviewers use “statistical (non)significance” as a filter to select which results to publish. Meyer says that “this creates a distorted literature because the effects of published interventions are biased upward in magnitude. It also encourages harmful research practices that yield results that attain so-called statistical significance.”

Lynch adds that “NHST has no basis because no intervention has precisely zero effect in the real world and small P-values and ‘statistical significance’ are guaranteed with sufficient sample sizes. Put differently, there is no need to reject a hypothesis of zero effect when it is already known to be false.”

Perhaps the most widespread abuse of statistics is to ascertain where some statistical measure such as a P-value stands relative to 0.05 and take it as a basis to declare “statistical (non)significance” and to make general and certain conclusions from a single study.

“Single studies are never definitive and thus can never demonstrate an effect or no effect. The aim of studies should be to report results in an unfiltered manner so that they can later be used to make more general conclusions based on cumulative evidence from multiple studies. NHST leads researchers to wrongly make general and certain conclusions and to wrongly filter results,” says Bradlow.

“P-values naturally vary a great deal from study to study,” explains McShane. As an example, a “statistically significant” original study with an observed P-value of p = 0.005 (far below the 0.05 threshold) and a “statistically nonsignificant” replication study with an observed P-value of p = 0.194 (far above the 0.05 threshold) are highly compatible with one another in the sense that the P-value for the difference between them, assuming no true difference, is p = 0.289.

He adds that “however when viewed through the lens of ‘statistical (non)significance,’ these two studies appear categorically different and are thus in contradiction because they are categorized differently.”
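
A small simulation (my own illustration, with an assumed effect size and sample size, not taken from the paper) shows how much P-values vary across exact replications of the very same effect:

```python
# Simulate many exact replications of one study (same true effect, same sample size)
# and look at how widely the resulting P-values are spread. Effect size, noise and
# sample size below are assumptions chosen purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, sd, n = 0.3, 1.0, 50

p_values = np.array([
    stats.ttest_1samp(rng.normal(true_effect, sd, n), 0.0).pvalue
    for _ in range(10_000)
])

print(f"share declared 'significant' (p < 0.05): {np.mean(p_values < 0.05):.2f}")
print(f"5th and 95th percentiles of the P-values: {np.round(np.percentile(p_values, [5, 95]), 3)}")
```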

Recommended changes to statistical analysis

The authors propose a major transition in statistical analysis and reporting. Specifically, they propose abandoning NHST—and the P-value thresholds intrinsic to it—as the default approach to statistical analysis and reporting. Their recommendations are as follows:

  • “Statistical (non)significance” should never be used as a basis to make general and certain conclusions.
  • “Statistical (non)significance” should also never be used as a filter to select which results to publish.
  • Instead, all studies should be published in some form or another.
  • Reporting should focus on quantifying study results via point and interval estimates. All of the values inside conventional interval estimates are at least reasonably compatible with the data given all of the assumptions used to compute them; therefore, it makes no sense to single out a specific value, such as the null value.
  • General conclusions should be made based on the cumulative evidence from multiple studies.
  • Studies need to treat P-values continuously and as just one factor among many—including prior evidence, the plausibility of mechanism, study design, data quality, and others that vary by research domain—that require joint consideration and holistic integration.
  • Researchers must also respect the fact that such conclusions are necessarily tentative and subject to revision as new studies are conducted.

Decisions are seldom necessary in scientific reporting; when they are, they are best left to end-users such as managers and clinicians.

In such cases, they should be made using a decision analysis that integrates the costs, benefits, and probabilities of all possible consequences via a loss function (which typically varies dramatically across stakeholders)—not via arbitrary thresholds applied to statistical summaries such as P-values (“statistical (non)significance”) which, outside of certain specialized applications such as industrial quality control, are insufficient for this purpose.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to American Marketing Association