How Many Knots Exist? A New Computing Trick Is Untangling The Answer

Finding how many knots there are for a given number of string crossings was thought to be an impossibly complex task. Now, algorithm engineering is cracking it – and showing us how to solve other fiendishly intricate maths problems.

IT USED to be one of the most frustrating parts of any journey on public transport. You squeeze your way past the other bodies, sit down and fish your earphones out of your pocket. You didn’t bother to wind up the wires into a neat loop the last time you used them, and so – sigh – you now need to spend the next 5 minutes untangling this knot. Thank goodness for the invention of wireless earbuds.

Knots aren’t just an everyday annoyance, though. They are also a source of endless inspiration for researchers. Take mathematician Benjamin Burton, who is fascinated by one simple question: how many knots are there? “There is something tantalising about problems that you can describe to a 10-year-old, but that mathematicians have not yet solved,” he says.

Taking a census of knots is one of those problems that ought to be impossible to solve because of its complexity. There are so many ways the strings can be crossed and looped that even the fastest computer could never catalogue them all. Yet Burton has been giving it a shot, and along the way he is showing that, with a few clever computational tricks, many maths problems that seem insurmountable might not be.

Knots and science have been, ahem, entangled for quite a while. In the dying decades of the 19th century, scientists were grappling with how to understand atoms. One hypothesis saw them as little vortices of fluid that became stable when knotted. Lord Kelvin, who went on to become the president of the UK’s Royal Society, was the first to suggest that each chemical element corresponded to a different type of knot.

The idea was abandoned after the discovery of the electron, but at the time it seemed vital to understand knots. Physicist Peter Guthrie Tait was the first to make a stab at creating a comprehensive list of them. Cards on the table: there are an infinite number of possible knots, because you can keep on adding extra knotty flourishes forever. The question mathematicians are interested in is more subtle. A defining feature of a knot is its crossing number, the number of times the strings cross. The question is, for a given number of crossings, how many different knots are possible? In 1885, Tait, working with mathematician Thomas Kirkman, considered all the knots with up to and including 10 crossings. Drawing them all by hand, he tabulated 364 configurations.

Try to go further and things quickly get a lot more difficult. As you allow more crossings, the number of possible knots rapidly increases. The last major extension to the knot tables was published in 1998. Mathematicians Jim Hoste, Morwen Thistlethwaite and Jeff Weeks recorded all the knots up to and including 16 crossings – all 1.7 million of them.

Going beyond this has, until recently, been unfeasible. To see why, we need to know a bit more about how mathematicians think about knots (see “When is a knot not a knot?”). Unlike real-world knots that are often pieces of string with loose ends, mathematical knots are closed. Imagine tying a knot in a piece of spaghetti, then melding the free ends.

Stretch, twist, bend

Mathematicians treat knots according to the rules of topology, a branch of geometry. These say that a knot remains fundamentally the same if you stretch, twist or bend the strands – but making cuts or new joins is a no-no. This leads to the concept of a prime knot, a knot that can’t be mathematically broken down into two or more simpler knots.

The job of tabulating knots, then, boils down to comparing elaborate tangles to see if they are really the same prime knot. Don’t underestimate how tricky this can be. For 75 years, two knots with 10 crossings were listed separately in tables. But in 1973, Kenneth Perko, a New York lawyer who had studied maths, realised that if you take the mirror image of one, you can manipulate it to become equivalent to the other. The knots are known as the “Perko pair”.

When dealing with millions of knots, the job of identifying doppelgangers becomes so time-consuming that, in theory, even the fastest supercomputers shouldn’t be able to finish it in a reasonable amount of time. Still, in 2019, Burton, who is based at the University of Queensland in Australia, decided the time was ripe to take a punt.

He knew there is a difference between a knot-sorting algorithm in the abstract and how that algorithm is implemented in computer code. The art of bridging this gap is known as algorithm engineering and Burton thought that with the right tricks, the problem wouldn’t be as hard as it seemed. “Part of what motivated me was creating a showpiece to see if the tools were good enough,” he says.

He began by setting a computer the task of dreaming up all the possible ways that strings can be knotted with up to and including 19 crossings. This is a comparatively simple task and the computer spat out 21 billion knot candidates after just a few days.

The job was then to check each knot was prime and distinct. This involves storing a full topological description of the knots in a computer’s working memory or RAM. A data set of 21 billion knots would require more than 1 terabyte of RAM to store, and computers with that amount are rare. To get around this, Burton made use of a software package he has developed called Regina. It can convert knots into “knot signatures”, strings of letters that capture each tangle’s defining topological properties. The signatures can be stored far more economically than a knot description. Plus, knots that are tangled differently but are really equivalent have the same signature, making it easy to weed out duplicates. Burton managed to whittle down the candidate knots to about 7 billion in a day.
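
In outline, the signature trick is ordinary deduplication by canonical key: equivalent tangles produce identical strings, so a set of already-seen signatures is all the state you need. A minimal Python sketch, with `knot_signature` as a hypothetical stand-in for Regina’s actual signature computation:

```python
def deduplicate(candidates, knot_signature):
    """Weed out duplicates by canonical signature.

    `knot_signature` stands in for a Regina-style signature function:
    differently tangled but equivalent knots map to the same short
    string, and only these cheap strings are kept in memory - never
    the full topological descriptions.
    """
    seen = set()
    for knot in candidates:
        sig = knot_signature(knot)
        if sig not in seen:
            seen.add(sig)
            yield sig
```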

This method wasn’t powerful enough to identify every last duplicate, however. Burton’s next tactic involved calculating what is known as a knot invariant for each candidate knot. Invariants are mathematical objects that capture the essence of a knot – if two knots have different invariants, they are different knots. Invariants are more powerful than signatures, but they are harder to compute: it would have taken Burton’s computer more than a year to calculate these for all the remaining knots. By sorting them into groups and running them on parallel supercomputers, he got through them in days. The pool was down to 370 million.
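
The invariant pass has the same shape: knots with different invariants are provably distinct, so expensive comparisons are only needed within a bucket, and the buckets can be farmed out to parallel machines. A sketch, with `invariant` standing in for whichever invariants Burton computed (the article doesn’t name them):

```python
from collections import defaultdict

def group_by_invariant(knots, invariant):
    """Partition knots by invariant value; only same-bucket knots can
    possibly be duplicates, and each bucket can run on its own node."""
    buckets = defaultdict(list)
    for knot in knots:
        buckets[invariant(knot)].append(knot)
    return [b for b in buckets.values() if len(b) > 1]  # still ambiguous
```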

Burton also had to grapple with non-hyperbolic knots, an especially challenging category. But after several more rounds of sorting, the supposedly impossible became possible. Burton’s census extended the knot tables to cover everything up to and including 19 crossings. The final tally: 352,152,252 knots.

Ian Agol at the University of California, Berkeley, says he is impressed with the calculations. “It will likely be useful to mathematicians searching for knots with specific properties, or who have a knot that they would like to identify,” he says.

He and other mathematicians think that Burton’s work is also an impressive example of algorithm engineering. “This is a growing trend in pure mathematics, where computational methods are being used in more creative ways,” says Hans Boden at McMaster University in Ontario, Canada.

Lorries and cameras

There are plenty of practical problems where algorithm engineering is used to find efficient solutions to hard problems. Some of the most important are in logistics, where optimised routing can save time and huge sums of money. Others include working out how to cover a space with CCTV cameras. In many cases, a perfect solution is impossible to obtain in a reasonable time frame.

This is also the case when it comes to mapping complex evolutionary histories, particularly for plants and bacteria. Algorithms are used to link DNA data in phylogenetic trees, graphs that group closely related species more closely than distantly related ones. However, sometimes genes don’t pass down from parent to offspring, but rather through what is known as horizontal gene transfer from one species to another. Algorithms designed to map these transfers can be very slow.

One algorithm engineering hack to get around this involves parametrising data inputs to make them smaller. For instance, if you fix the number of potential mutations you are considering, the problem can be solved efficiently. Recently, Edwin Jacox, now at the University of California, Santa Cruz, and his colleagues applied this method to cyanobacteria phylogenetic trees. Cyanobacteria played a major role in Earth’s changing biosphere more than 2 billion years ago. The researchers developed a parametrised algorithm that rebuilds the cyanobacteria phylogenetic trees in just 15 seconds.
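
The flavour of a parametrised algorithm is easiest to see on a textbook example rather than on the phylogenetics algorithms themselves. Below is a minimal Python sketch of the classic bounded search for vertex cover: fixing the parameter k caps the search tree at 2^k leaves, however large the graph is – the same trick as fixing the number of potential mutations.

```python
def vertex_cover(edges, k):
    """Fixed-parameter search: is there a set of <= k vertices touching
    every edge? Branch on an uncovered edge; each branch decrements k,
    so the search tree has at most 2**k leaves regardless of graph size."""
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    for pick in (u, v):              # any cover must contain u or v
        remaining = [e for e in edges if pick not in e]
        if vertex_cover(remaining, k - 1):
            return True
    return False

# A triangle plus a pendant edge needs two vertices, not one.
g = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(vertex_cover(g, 1), vertex_cover(g, 2))  # False True
```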

Whether it is reconstructing evolutionary trees or listing knots, it is clear that the toughest computing problems can be made tractable. “With algorithm engineering and heuristics, things that are slow in practice turn out to be remarkably, surprisingly fast,” says Burton. It means that even the trickiest problems don’t have to leave us in an inescapable tangle.

When is a knot not a knot?

Here is a problem that needs unpicking: if you pull at a messy tangle of wires, how do you know if it will just get more tangled or come apart into a simple loop? This is a problem mathematicians call “unknot recognition” and it is one we might soon solve. Unknot recognition fascinates mathematicians and computer scientists alike because it is part of the famous “P vs NP” question.

To see how it works, take the classic travelling salesperson problem, where we must work out the shortest route for a salesperson to visit a number of cities. If an algorithm can solve the problem without its running time blowing up exponentially as more cities are included, the problem is said to be “P”. This means it is solvable in reasonable – or polynomial, in maths speak – amounts of time. Separately, once you have a candidate answer, you need to check it is correct. If answers are easy to check, the problem is classed as “NP”. The big question for mathematicians is whether all NP problems are also P. This is one of the Millennium Prize Problems – answer it conclusively and you will win $1 million.

Unknot recognition dwells in the twilight zone of P vs NP. We already know that this problem is definitely “NP”, or easy to check. If your algorithm can produce an unknotted loop, you can immediately see it has worked. Mathematicians also think it is likely to be P, and we are tantalisingly close to proving it.

In 2014, Benjamin Burton at the University of Queensland in Australia developed an unknotting algorithm that, in practice, runs in polynomial time. His algorithm has held up “for every knot we’ve thrown at it so far”, he says. More recently, Marc Lackenby at the University of Oxford developed an unknot recognition algorithm that is “quasi-P” (which, in maths parlance, means “almost there”). It is unlikely to be converted into executable computer code because it is so complicated, but Lackenby is confident that a simplified version “is going to be genuinely practical”.

Showing that unknot recognition is both P and NP won’t solve the wider Millennium Prize Problem, though it could give us useful pointers. Still, it is an important milestone and mathematicians will be celebrating once we get there.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Larissa Fedunik*


Pascal’s Triangle and Polygonal opsgf’s

The diagram above illustrates a surprising correspondence – if you take the reciprocal of a particular polynomial whose coefficients match the d+1 row of Pascal’s Triangle, you obtain a formal power series whose coefficients are precisely the elements in the dth column of the triangle.
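
In symbols, with rows and columns both numbered from 0, the correspondence reads

$$\frac{1}{(1-x)^{d+1}} = \sum_{n=0}^{\infty} \binom{n+d}{d}\,x^n,$$

so for d = 2 the reciprocal of 1 - 3x + 3x^2 - x^3 unrolls into 1 + 3x + 6x^2 + 10x^3 + …, whose coefficients are exactly the entries 1, 3, 6, 10, … of column 2.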

The case for the third row is shown above (row and column numbering starts at 0 in Pascal’s Triangle), and the case for the fourth row is shown in the diagram below. To actually work out the power series that comes from the reciprocals (well, at least the first few coefficients), the grid method of dividing polynomials works well (I think).
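
If you’d rather compute than divide on paper, the long-division recurrence is a few lines of Python (this sketch assumes the polynomial’s constant term is 1, as it is for all the row polynomials here):

```python
def reciprocal_series(poly, n_terms):
    """First n_terms coefficients of 1/poly as a formal power series.

    poly = [c0, c1, ...] with c0 = 1. Polynomial long division gives
    the recurrence b_0 = 1, b_n = -(c1*b_{n-1} + c2*b_{n-2} + ...).
    """
    b = [1]
    for n in range(1, n_terms):
        s = sum(poly[k] * b[n - k] for k in range(1, min(n, len(poly) - 1) + 1))
        b.append(-s)
    return b

# Row 3 with alternating signs: (1-x)^3 = 1 - 3x + 3x^2 - x^3.
print(reciprocal_series([1, -3, 3, -1], 6))  # [1, 3, 6, 10, 15, 21]
```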

In this way, the rows of the triangle provide all the information required to obtain the columns – a nice correspondence between a finite sequence and an infinite one.

You’ll see this correspondence if you spend time investigating the “ordinary power series generating functions” (opsgf’s) for the higher dimensional triangular numbers. The row polynomials are just (1-x)^(d+1), while the columns correspond to formal power series whose coefficients are the d-dimensional triangular numbers. A great place to learn about these opsgf’s is H. Wilf’s book generatingfunctionology.

You can uncover the general opsgf for higher dimensional triangular numbers by starting with the (1-x)^(-1) rational function, and applying Wilf’s ‘rule 5’ from chapter 2 of generatingfunctionology.
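
Rule 5 is, in effect, the partial-sums rule: multiplying an opsgf by 1/(1-x) replaces its coefficient sequence with its running totals – which is exactly the stacking described above. A quick sketch:

```python
from itertools import accumulate

seq = [1] * 7                       # coefficients of 1/(1-x): 1, 1, 1, ...
for d in range(1, 4):
    seq = list(accumulate(seq))     # one more factor of 1/(1-x)
    print(f"1/(1-x)^{d + 1}:", seq)
# 1/(1-x)^3: [1, 3, 6, 10, 15, 21, 28]  <- triangular numbers
# 1/(1-x)^4: [1, 4, 10, 20, 35, 56, 84] <- pyramidal numbers
```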

Applying a few other rules, you can obtain an opsgf for the k-polygonal numbers (k = 4 gives squares, k = 5 pentagonals, k = 6 hexagonals, etc.).
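
For the record, the opsgf that pops out for the ordinary (2-dimensional) k-polygonal numbers can be written as

$$\sum_{n=1}^{\infty} p_{k,n}\,x^n = \frac{x\,\bigl(1+(k-3)x\bigr)}{(1-x)^3},$$

which you can sanity-check at k = 4: x(1+x)/(1-x)^3 generates 1, 4, 9, 16, …, the squares.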

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*


Study Finds Cooperation Can Still Evolve Even With Limited Payoff Memory

Direct reciprocity facilitates cooperation in repeated social interactions. Traditional models suggest that individuals learn to adopt conditionally cooperative strategies if they have multiple encounters with their partner. However, most existing models make rather strong assumptions about how individuals decide to keep or change their strategies. They assume individuals make these decisions based on a strategy’s average performance. This in turn suggests that individuals would remember their exact payoffs against everyone else.

In a recent study, researchers from the Max Planck Institute for Evolutionary Biology, the School of Data Science and Society, and the Department of Mathematics at the University of North Carolina at Chapel Hill examine the effects of realistic memory constraints. They find that cooperation can evolve even with minimal memory capacities. The research is published in the journal Proceedings of the Royal Society B: Biological Sciences.

Direct reciprocity is based on repeated interactions between two individuals. This concept, often described as “you scratch my back, I’ll scratch yours,” has proven to be a pivotal mechanism in maintaining cooperation within groups or societies.

While models of direct reciprocity have deepened our understanding of cooperation, they frequently make strong assumptions about individuals’ memory and decision-making processes. For example, when strategies are updated through social learning, it is commonly assumed that individuals compare their average payoffs.

This would require them to compute (or remember) their payoffs against everyone else in the population. To understand how more realistic constraints influence direct reciprocity, the current study considers the evolution of conditional behaviours when individuals learn based on more recent experiences.

Two extreme scenarios

This study first compares the classical modeling approach with another extreme approach. In the classical approach, individuals update their strategies based on their expected payoffs, considering every single interaction with each member of the population (perfect memory). Conversely, the opposite extreme is considering only the very last interaction (limited memory).
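
The contrast between the two update rules is easy to sketch. Below is a toy illustration (our own, using a Fermi-style imitation rule, which is a standard choice in this literature – the paper’s exact update rule may differ):

```python
import math
import random

def remembered_payoff(history, memory=None):
    """What a player recalls: the average of all past payoffs (perfect
    memory, memory=None) or of only the `memory` most recent ones."""
    window = history if memory is None else history[-memory:]
    return sum(window) / len(window)

def switches_strategy(own, model, memory=None, beta=5.0):
    """Imitate a role model with a probability that grows with the
    remembered payoff advantage (Fermi update)."""
    diff = remembered_payoff(model, memory) - remembered_payoff(own, memory)
    return random.random() < 1 / (1 + math.exp(-beta * diff))

# Same histories, different memories: the averages are tied (1.5 each),
# but with memory=1 a single last round makes switching near-certain.
own, model = [2, 2, 2, 0], [1, 1, 1, 3]
print(switches_strategy(own, model))            # coin flip: diff = 0
print(switches_strategy(own, model, memory=1))  # near-certain: diff = 3
```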

Comparing these two scenarios shows that individuals with limited payoff memory tend to adopt less generous strategies. They are less forgiving when someone defects against them. Yet, moderate levels of cooperation can still evolve.

Intermediate cases

The study also considers intermediate cases, in which individuals base decisions on their two, three or four most recent experiences. The results show that cooperation rates quickly approach the levels observed under perfect payoff memory.

Overall, this study contributes to a wider literature that explores which kinds of cognitive capacities are required for reciprocal altruism to be feasible. While more memory is always favourable, reciprocal cooperation can already be sustained if individuals have a record of two or three past outcomes.

This work’s results have been derived entirely within a theoretical model. The authors feel that such studies are crucial for making model-informed deductions about reciprocity in natural systems.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Michael Hesse, Max Planck Society


How Spiral Search Patterns And Lateral Thinking Cracked Our Puzzle

Rob Eastaway and Brian Hobbs take over our maths column to reveal who solved their puzzle and won a copy of their New Scientist puzzle book, Headscratchers.

When solving problems in the real world, it is rare that the solution is purely mathematical, but maths is often a key ingredient. The puzzle we set a few weeks ago (New Scientist, 30 September, p 45) embraced this by encouraging readers to come up with ingenious solutions that didn’t have to be exclusively maths-based.

Here is a reminder of the problem: Prince Golightly found himself tied to a chair near the centre of a square room, in the dark, with chained monsters in the four corners and an escape door in the middle of one wall. With him, he had a broom, a dictionary, some duct tape, a kitchen clock and a bucket of water with a russet fish.

John Offord was one of several readers to spot an ambiguity in our wording. Four monsters in each corner? Did this mean 16 monsters? John suggested the dictionary might help the captors brush up on their grammar.

The russet fish was deliberately inserted as a red herring (geddit?), but we loved that some readers found ways to incorporate it, either as a way of distracting the monsters or as a source of valuable protein for a hungry prince. Dave Wilson contrived a delightful monster detector, while Glenn Reid composed a limerick with the solution of turning off the computer game and going to bed.

And so to more practical solutions. Arlo Harding and Ed Schulz both suggested ways of creating a torch by igniting assorted materials with an electric spark from the light cable. But Ben Haller and Chris Armstrong had the cleverest mathematical approach. After locating the light fitting in the room’s centre with the broom, they used duct tape and rope to circle the centre, increasing the radius until they touched the wall at what must be its centre, and then continued circling to each wall till they found the escape door. Meanwhile, the duo of Denise and Emory (aged 11) used Pythagoras’s theorem to confirm the monsters in the corners would be safely beyond reach. They, plus Ben and Chris, win a copy of our New Scientist puzzle book Headscratchers.

It is unlikely you will ever have to escape monsters in this way, but spiral search patterns when visibility is limited are employed in various real-world scenarios: rescuers probing for survivors in avalanches, divers performing underwater searches and detectives examining crime scenes, for example. Some telescopes have automated spiral search algorithms that help locate celestial objects. These patterns allow for thorough searches while ensuring you don’t stray too far from your starting point.

Of course, like all real-world problems, mathematical nous isn’t enough. As our readers have displayed, lateral thinking and the ability to improvise are human skills that help us find the creative solutions an algorithm would miss.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Rob Eastaway and Brian Hobbs*


Here’s why you should (almost) never use a pie chart for your data

Our lives are becoming increasingly data driven. Our phones monitor our time and internet usage, and online surveys discern our opinions and likes. These data harvests are used for telling us how well we’ve slept or what we might like to buy.

Numbers are becoming more important for everyday life, yet people’s numerical skills are falling behind. For example, the percentage of Year 12 schoolchildren in Australia taking higher and intermediate mathematics has been declining for decades.

To help the average person understand big data and numbers, we often use visual summaries, such as pie charts. But while non-numerate folk will avoid numbers, most numerate folk will avoid pie charts. Here’s why.

What is a pie chart?

A pie chart is a circular diagram that represents numerical percentages. The circle is divided into slices, with the size of each slice proportional to the category it represents. It is named because it resembles a sliced pie and can be “served” in many different ways.

An example pie chart below shows Australia’s two-party preferred vote before the last election, with Labor on 55% and the Coalition on 45%. The two near semi-circles show the relatively tight race—this is a useful example of a pie chart.

What’s wrong with pie charts?

Once we have more than two categories, pie charts can easily misrepresent percentages and become hard to read.

The three charts below are a good example—it is very hard to work out which of the five areas is the largest. The pie chart’s circularity means the areas lack a common reference point.

Pie charts also do badly when there are lots of categories. For example, this chart from a study on data sources used for COVID data visualization shows hundreds of categories in one pie.

The tiny slices, lack of clear labeling and the kaleidoscope of colors make interpretation difficult for anyone.

It’s even harder for a color blind person. For example, this is a simulation of what the above chart would look like to a person with deuteranomaly or reduced sensitivity to green light. This is the most common type of color blindness, affecting roughly 4.6% of the population.

It can get even worse if we take pie charts and make them three-dimensional. This can lead to egregious misrepresentations of data.

Below, the yellow, red and green areas are all the same size (one-third), but appear to be different based on the angle and which slice is placed at the bottom of the pie.

So why are pie charts everywhere?

Despite the well known problems with pie charts, they are everywhere. They are in journal articles, Ph.D. theses, political polling, books, newspapers and government reports. They’ve even been used by the Australian Bureau of Statistics.

While statisticians have criticized them for decades, it’s hard to argue with this logic: “If pie charts are so bad, why are there so many of them?”

Possibly they are popular because they are popular, which is a circular argument that suits a pie chart.

What’s a good alternative to pie charts?

There’s a simple fix that can effectively summarize big data in a small space and still allow creative color schemes.

It’s the humble bar chart. Remember the brain-aching pie chart example above with the five categories? Here’s the same example using bars—we can now instantly see which category is the largest.
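
To see the contrast yourself, here is a minimal matplotlib sketch (the five shares are illustrative numbers, not data from the article):

```python
import matplotlib.pyplot as plt

labels = ["A", "B", "C", "D", "E"]
shares = [22, 21, 20, 19, 18]   # five close-together percentages

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.pie(shares, labels=labels)       # which slice is largest?
ax1.set_title("Pie: hard to rank")
ax2.bar(labels, shares)              # ranking is instant
ax2.set_title("Bar: largest at a glance")
plt.tight_layout()
plt.show()
```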

Linear bars are easier on the eye than the non-linear segments of a pie chart. But beware the temptation to make a humble bar chart look more interesting by adding a 3D effect. As you already saw, 3D charts distort perception and make it harder to find a reference point.

Below is a standard bar chart and a 3D alternative of the number of voters in the 1992 US presidential election split by family income (from under US$15,000 to over US$75,000). Using the 3D version, can you tell the number of voters for each candidate in the highest income category? Not easily.

Is it ever okay to use a pie chart?

We’ve shown some of the worst examples of pie charts to make a point. Pie charts can be okay when there are just a few categories and the percentages are dissimilar, for example with one large and one small category.

Overall, it is best to use pie charts sparingly, especially when there is a more “digestible” alternative—the bar chart.

Whenever we see pie charts, we think one of two things: their creators don’t know what they’re doing, or they know what they are doing and are deliberately trying to mislead.

A graphical summary aims to easily and quickly communicate the data. If you feel the need to spruce it up, you’re likely reducing understanding without meaning to do so.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Adrian Barnett and Victor Oguoma, The Conversation


Cutting cake (and eating it too) – the sticky maths of fair division

I work on the mathematics of sharing resources, which has led me to consider emotions such as envy, behaviour such as risk-taking and the best way to cut a cake.

Like, I suspect, many women, my wife enjoys eating dessert but not ordering it. I therefore dutifully order what I think she’ll like, cut it in half and invite her to choose a piece.

This is a sure-fire recipe for marital accord. Indeed, many mathematicians, economists, political scientists and others have studied this protocol and would agree. The protocol is known as the “cut-and-choose” procedure. I cut. You choose.

Cut-and-choose

Cut-and-choose is not limited to the dining table – it dates back to antiquity. It appears nearly 3,000 years ago in Hesiod’s poem Theogony, where Prometheus divides a cow and Zeus selects the part he prefers.

In more recent times, cut-and-choose has been enshrined in the UN’s 1982 Convention on the Law of the Sea, where it was proposed as a mechanism to resolve disputes when dividing the seabed for mining.

To study the division of cake, cows and the seabed in a more formal way, various mathematical models have been developed. As with all models, these need to make a number of simplifying assumptions.

One typical assumption is that the people employing the cut-and-choose method are risk-averse. They won’t adopt a risky strategy that may give them less cake than a more conservative strategy.

With such assumptions in place, we can then prove what properties cake cutting procedures have and don’t have. For instance, cut-and-choose is envy free.

You won’t envy the cake I have, otherwise you would have taken this piece. And I won’t envy the piece you have, as the only risk-averse strategy is for me to cut the cake into two parts that I value equally.

On the other hand, the cutting of the cake is not totally equitable since the player who chooses can get cake that has more than half the total value for them.
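
The protocol itself is only a few lines. Here is a minimal sketch, with each player’s valuation modelled as a function on sub-intervals of a unit cake; the bisection search for the cut point is our own implementation choice:

```python
def cut_and_choose(v1, v2, tol=1e-9):
    """Cutter (valuation v1) finds a cut splitting the cake [0, 1] into
    two pieces of equal value to them; chooser (v2) takes the piece they
    value more. Each valuation maps an interval (a, b) to a value and
    should total 1 over (0, 1)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:            # bisection: v1 of [0, x] grows with x
        mid = (lo + hi) / 2
        if v1(0, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    cut = (lo + hi) / 2
    left, right = (0, cut), (cut, 1)
    chooser = left if v2(*left) >= v2(*right) else right
    cutter = right if chooser == left else left
    return cutter, chooser

# Cutter values the cake uniformly; chooser prefers the right-hand side.
uniform = lambda a, b: b - a
right_heavy = lambda a, b: b**2 - a**2   # density 2x, total 1
print(cut_and_choose(uniform, right_heavy))  # cutter keeps left half
```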

With two players, it’s hard to do better than cut-and-choose. But I should record that my wife argues with me about this.

She believes it favours the second player since the first player inevitably can’t divide the cake perfectly and the second player can capitalise on this. This is the sort of assumption ignored in our mathematical models.

My wife might prefer the moving-knife procedure which doesn’t favour either player. A knife is moved over the cake, and either player calls “cut” when they are happy with the slice.

Again, this will divide the cake in such a way that neither player will envy the other (else they would have called “cut” themselves).

Three’s a crowd

Unfortunately, moving beyond two players increases the complexity of cutting cake significantly.

With two players, we needed just one cut to get to an envy free state. With three players, a complex series of five cuts of the cake might be needed. Of course, only two cuts are needed to get three slices.

The other three cuts are needed to remove any envy. And with four players, the problem explodes in our face.

An infinite number of cuts may be required to get to a situation where no one envies another’s cake. I’m sure there’s some moral here about too many cake cutters spoiling the dessert.

There are many interesting extensions of the problem. One such extension is to indivisible goods.

Suppose you have a bag of toys to divide between two children. How do you divide them fairly? As a twin myself, I know that the best solution is to ensure you buy two of everything.

It’s much more difficult when your great aunt gives you one Zhu Zhu pet, one Bratz doll and three Silly Bandz bracelets to share.

Online

More recently, I have been studying a version of the problem applicable to online settings. In such problems, not all players may be available all of the time. Consider, for instance, allocating time on a large telescope.

Astronomers will have different preferences for when to use the telescope depending on what objects are visible, the position of the sun, etcetera. How do we design a web-based reservation system that lets astronomers choose observation times in a way that is fair to all?

We don’t want to insist all astronomers log in at the same time to decide an allocation. And we might have to start allocating time on the telescope now, before everyone has expressed their preferences. We can view this as a cake-cutting problem where the cake is made up of the time slots for observations.

The online nature of such cake-cutting problems poses some interesting new challenges.

How can we ensure that late-arriving players don’t envy cake already given to earlier players? The bad news is that we cannot now achieve even a simple property like envy freeness.

No procedure can guarantee situations where players don’t envy one another. But more relaxed properties are possible, such as not envying cake allocated whilst you are participating in the cutting of the cake.

Ham sandwich

There’s a brilliantly named piece of mathematics due to Arthur H. Stone and John Tukey: the Ham Sandwich Theorem, which proves we can always cut a three-layered cake perfectly with a single cut.

Suppose we have three objects. Let’s call them “the top slice of bread”, “the ham filling” and “the bottom slice of bread”. Or if you prefer “the top layer” of the cake, “the middle layer” and “the bottom layer”.

The ham sandwich theorem proves a single slice can always perfectly bisect the three objects. Actually, the ham sandwich theorem works in any number of dimensions: any n objects in n-dimensional space can be simultaneously bisected by a single (n − 1) dimensional hyperplane.

So, in the case of the three-layered cake, n = 3, and the three-layered cake can be bisected (or cut) using a single, two-dimensional “hyperplane”. Such as, say, a knife.

Who would have thought that cutting cake would lead to higher dimensions of mathematics by way of a ham sandwich?

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Toby Walsh


Higher Polygonal Numbers and Pascal’s Triangle

The third diagonal column in Pascal’s Triangle (r = 2 in the usual way of labeling and numbering) consists of the triangular numbers (1, 3, 6, 10, …) – numbers that can be arranged in 2-dimensional triangular patterns. The fourth column of Pascal’s triangle gives us triangular-based pyramidal numbers (1, 4, 10, 20, …), built by stacking the triangular numbers. The columns further out give “higher dimensional” triangular numbers that arise from stacking the triangular numbers from the previous dimension.

It is not by coincidence that the triangular and higher-dimensional triangular numbers appear in Pascal’s Triangle. If you think about layering of polygonal numbers in terms of equations, you get

$$p^d_{k,n} = \sum_{i=1}^{n} p^{d-1}_{k,i}$$

In the above equation, p^d_(k,n) is the nth k-polygonal number of dimension d. Triangular numbers are the 3-polygonal numbers of dimension 2, square numbers are the 4-polygonal numbers of dimension 2, and “square based pyramidal numbers” would be denoted as p^3_(4,n).

From the sum above, you can obtain this equation:

$$p^d_{k,n} = p^d_{k,n-1} + p^{d-1}_{k,n}$$

Which looks very much like the Pascal Identity C(n,r) = C(n-1,r-1) + C(n-1,r), except for some translation of the variables. To be precise, if we consider the case where k=3 and use r = d and n‘ = n+d-1 we can translate the triangular numbers into the appropriate positions in Pascal’s Triangle.

Along with the definitions for the end columns, the Pascal Identity allows us to generate the whole triangle. This suggests the following strategy for calculating the higher k-Polygonal numbers: create a modified Pascal’s Triangle whose first column is equal to k-2 (instead of 1), and whose last column is equal to 1 (as usual). This modified Pascal’s Triangle is generated using these initial values and the usual Pascal Identity.

Here is an example with k=5, which sets the first column values equal to 3 (except for the top value, which we keep as 1) and yields the pentagonal numbers (column 3) and the higher pentagonal numbers.
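
A minimal sketch of that construction in Python (our code, following the rule just described):

```python
def modified_pascal(k, n_rows):
    """Modified Pascal's Triangle for k-polygonal numbers: apex 1, first
    column k-2, last column 1, interior entries by the Pascal identity."""
    rows = [[1]]
    for n in range(1, n_rows):
        prev = rows[-1]
        row = [k - 2]
        row += [prev[j - 1] + prev[j] for j in range(1, n)]
        row.append(1)
        rows.append(row)
    return rows

for row in modified_pascal(5, 7):
    print(row)
# The third diagonal column reads 1, 5, 12, 22, 35, ... - the pentagonal
# numbers - and the columns after it give the higher pentagonal numbers.
```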

The formula for these modified Pascal Triangles is given by this equation:

$$T(n,r) = \binom{n}{r} + (k-3)\binom{n-1}{r}$$

If we apply the change of variables mentioned above, we can obtain this general formula for the higher polygonal numbers in terms of combinations:

$$p^d_{k,n} = \binom{n+d-1}{d} + (k-3)\binom{n+d-2}{d}$$

This formula illustrates how polygonal numbers are built out of triangular numbers. It says that the nth d-dimensional k-polygonal number is equal to the nth d-dimensional triangular number, plus (k-3) copies of the (n-1)th d-dimensional triangular number. This is a little easier to understand when you forget about the higher dimensions and look at the regular 2-dimensional polygonal numbers.

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to dan.mackinnon*


Algebraic Elements Are Like Limit Points!

When you hear the word closure, what do you think of? I think of wholeness – you know, tying loose ends, wrapping things up, filling in the missing parts. This same idea is behind the mathematician’s notion of closure, as in the phrase “taking the closure” of a set. Intuitively this just means adding in any missing pieces so that the result is complete, whole. For instance, the circle on the left is open because it’s missing its boundary. But when we take its closure and include the boundary, we say the circle is closed.

As another example, consider the set of all real numbers strictly between 0 and 1, i.e. the open interval (0,1). Notice that we can get arbitrarily close to 0, but we can’t quite reach it. In some sense, we feel that 0 might as well be included in the set, right? I mean, come on, 0.0000000000000000000000000000000000000001 is basically 0, right? So by not considering 0 as an element in our set, we feel like something’s missing. The same goes for 1.

We say an element is a limit point of a given set if that element is “close” to the set, and we say the set’s closure is the set together with its limit points. (So 0 and 1 are both limit points of (0,1) and its closure is [0,1].) It turns out the word closure is also used in algebra, specifically the algebraic closure of a field, but there it has a completely different definition which has to do with roots of polynomials, called algebraic elements. Now why would mathematicians use the same word to describe two seemingly different things? The purpose of today’s post is to make the observation that they’re not so different after all! This may be somewhat obvious, but it wasn’t until after a recent conversation with a friend that I saw the connection:

 

‍algebraic elements of a field

are like

limit points of a sequence!

(Note: I’m not claiming any theorems here, this is just a student’s simple observation.)
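
One concrete instance of the slogan: $\sqrt{2}$ lies outside the field $\mathbb{Q}$, yet it is pinned down by $\mathbb{Q}$ because it satisfies a polynomial with rational coefficients:

$$\sqrt{2} \notin \mathbb{Q}, \qquad \text{yet} \qquad x = \sqrt{2} \ \text{ satisfies } \ x^2 - 2 = 0, \quad x^2 - 2 \in \mathbb{Q}[x].$$

Just as 0 sits outside (0,1) but is approached arbitrarily closely by its elements, $\sqrt{2}$ is “close” to $\mathbb{Q}$ in this algebraic sense – and the algebraic closure of $\mathbb{Q}$ is what you get by adding in all such just-out-of-reach elements at once.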

 

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Tai-Danae Bradley*


Wire-Cut Forensic Examinations Currently Too Unreliable For Court, New Study Says

A research article published June 10 in the Proceedings of the National Academy of Sciences highlights the importance of careful application of high-tech forensic science to avoid wrongful convictions.

In a study with implications for an array of forensic examinations that rely on “vast databases and efficient algorithms,” researchers found the odds of a false match significantly increase when examiners make millions of comparisons in a quest to match wires found at a crime scene with the tools allegedly used to cut them.

The rate of mistaken identifications could be as high as one in 10 or more, concluded the researchers, who are affiliated with the Center for Statistics and Applications in Forensic Evidence (CSAFE), based in Ames, Iowa.

“It is somewhat of a counterintuition,” said co-author Susan VanderPlas, an assistant professor of statistics at the University of Nebraska-Lincoln. “You are more likely to find the right match—but you’re also more likely to find the wrong match.”

VanderPlas worked as a research professor at CSAFE before moving to Nebraska in 2020. Co-authors of the study, “Hidden Multiple Comparisons Increase Forensic Error Rates,” were Heike Hofmann and Alicia Carriquiry, both affiliated with CSAFE and Iowa State University’s Department of Statistics.

Wire cuts and tool marks are used frequently as evidence in robberies, bombings, and other crimes. In the case of wire cuts, tiny striations on the cut ends of a wire may be matched to one of many available tools in a toolbox or garage. Comparing the evidence to more tools increases the chances that similar striations may be found on unrelated tools, resulting in a false accusation and conviction.

Wire-cutting evidence has been at issue in at least two cases that garnered national attention, including one where the accused was linked to a bombing based on a small piece of wire, a tiny fraction of an inch in diameter, that was matched to a tool found among the suspect’s belongings.

“Wire-cutting evidence is used in court and, based on our findings, it shouldn’t be—at least not without presenting additional information about how many comparisons were made,” VanderPlas said.

Wire-cutting evidence is evaluated by comparing the striations found on the cut end of a piece of wire against the cutting blades of tools suspected to have been used in the crime. In a manual test, the examiner slides the end of the wire along a cut made in another piece of material by the same tool to see where the patterns of striations match.

An automated process uses a comparison microscope and pattern-matching algorithms to find possible matches pixel by pixel.

This can result in thousands upon thousands of individual comparisons, depending upon the length of the cutting blade, diameter of the wire, and even the number of tools checked.

For example, VanderPlas said she and her husband tallied the various tin snips, wire cutters, pliers and similar tools stored in their garage and came up with a total of 7 meters in blade length.

Examiners may not even be aware of the number of comparisons they are making as they search for a matching pattern, because those comparisons are hidden in the algorithms.

“This often-ignored issue increases the false discovery rate, and can contribute to the erosion of public trust in the justice system through conviction of innocent individuals,” the study authors wrote.

Forensic examiners typically testify based upon subjective rules about how much similarity is required to make an identification, the study explained. The researchers could not obtain error rate studies for wire-cut examinations and used published error rates for ballistics examinations to estimate possible false discovery rates for wire-cut examinations.
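
The arithmetic behind that concern is straightforward to sketch (a toy calculation with an assumed per-comparison error rate, not a figure from the study):

```python
# Toy illustration: a tiny false-match probability per comparison
# compounds over many hidden comparisons into a large overall risk.
p_single = 1e-6                        # assumed rate for one comparison
for n in (1_000, 100_000, 10_000_000):
    p_any = 1 - (1 - p_single) ** n    # P(at least one false match)
    print(f"{n:>10,} comparisons -> {p_any:.4f}")
```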

Before wire-cut examinations are used as evidence in court, the researchers recommended that:

  • Examiners report the overall length or area of materials used in the examination process, including blade length and wire diameter. This would enable examination-wide error rates to be calculated.
  • Studies be conducted to assess both false discovery and false elimination error rates when examiners are making difficult comparisons. Studies should link the length and area of comparison to error rates.
  • The number of items searched, comparisons made and results returned should be reported when a database is used at any stage of the forensic evidence evaluation process.

The VanderPlas article joins other reports calling for improvements in forensic science in America. The National Academies Press, which publishes the reports of the National Academies of Sciences, Engineering and Medicine, issued the landmark 2009 report “Strengthening Forensic Science in the United States: A Path Forward.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to University of Nebraska-Lincoln


Students understand calculus better when the lessons are active

College students learn more calculus in an active learning course in which students solve problems during class than in a traditional lecture-based course. That’s according to a peer-reviewed study my colleagues and I published in Science. We also found that college students better understood complex calculus concepts and earned better grades in the active learning course.

The findings held across racial and ethnic groups, genders and college majors, and for both first-time college and transfer students—thus, promoting success for all students. Students in the active learning course had an associated 11% higher pass rate.

If you apply that rate to the 300,000 students currently taking calculus each year in the U.S., it could mean an additional 33,000 passing their class.

Our experimental trial ran over three semesters—fall 2018 through fall 2019—and involved 811 undergraduate students at a public university that has been designated as a Hispanic-serving institution. The study evaluated the impact of an engagement-focused active learning calculus teaching method by randomly placing students into either a traditional lecture-based class or the active learning calculus class.

The active learning intervention promoted development of calculus understanding during class, with students working through exercises designed to build calculus knowledge and with faculty monitoring and guiding the process.

This differs from the lecture setting where students passively listen to the instructor and develop their understanding outside of class, often on their own.

An active learning approach allows students to work together to solve problems and explain ideas to each other. Active learning is about understanding the “why” behind a subject versus merely trying to memorize it.

Along the way, students experiment with their ideas, learn from their mistakes and ultimately make sense of calculus. In this way, they replicate the practices of mathematicians, including making and testing educated guesses, sense-making and explaining their reasoning to colleagues. Faculty are a critical part of the process. They guide the process through probing questions, demonstrating mathematical strategies, monitoring group progress and adapting pace and activities to foster student learning.

Florida International University made a short video to accompany a research paper on how active learning improves outcomes for calculus students.

Why it matters

Calculus is a foundational discipline for science, technology, engineering and mathematics, as it provides the skills for designing systems as well as for studying and predicting change.

But historically it’s been a barrier that has ended the opportunity for many students to achieve their goal of a STEM career. Only 40% of undergraduate students intending to earn a STEM degree complete their degree, and calculus plays a role in that loss. The reasons vary depending on the student. Failing calculus can be a final straw for some.

And it is particularly concerning for historically underrepresented groups. The odds of female students leaving a STEM major after calculus are 1.5 times higher than they are for men. And Hispanic and Black students have a 50% higher failure rate than white students in calculus. These losses deprive the individual students of STEM aspirations, career dreams and financial security. And it deprives society of their potentially innovative contributions to solving challenging problems, such as climate resilience, energy independence, infrastructure and more.

What still isn’t known

A vexing challenge in calculus instruction—and across the STEM disciplines—is broad adoption of active learning strategies that work. We started this research to provide compelling evidence to show that this model works and to drive further change. The next step is addressing the barriers, including lack of time, questions about effectiveness and institutional policies that don’t provide an incentive for faculty to bring active learning to their classrooms.

A crucial next step is improving the evidence-based instructional change strategies that will promote adoption of active learning instruction in the classroom.

What’s next

Our latest results are motivating our team to further delve into the underlying instructional strategies that drive student understanding in calculus. We’re also looking for opportunities to replicate the experiment at a variety of institutions, including high schools, which will provide more insight into how to expand adoption across the nation.

We hope that this paper increases the rate of change of all faculty adopting active learning in their classrooms.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Laird Kramer, The Conversation