Time for a change. Scholars say the calendar needs a serious overhaul!

Researchers at The Johns Hopkins University have discovered a way to make time stand still — at least when it comes to the yearly calendar.

Using computer programs and mathematical formulas, Richard Conn Henry, an astrophysicist in the Krieger School of Arts and Sciences, and Steve H. Hanke, an applied economist in the Whiting School of Engineering, have created a new calendar in which each 12-month period is identical to the one that came before, and remains that way in perpetuity.

Under the Hanke-Henry Permanent Calendar, for instance, if Christmas fell on a Sunday in 2012 (and it would), it would also fall on a Sunday in 2013, 2014 and beyond. In addition, under the new calendar, the rhyme “30 days hath September, April, June and November,” would no longer apply, because September would have 31 days, as would March, June and December. All the rest would have 30. (Try creating a rhyme using that.)
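
To see how the arithmetic works out, here is a minimal Python sketch of a weekday lookup under the proposed calendar. The 30/30/31 month lengths come straight from the description above; the anchor of 1 January falling on a Sunday is an assumption made purely for illustration, chosen because it reproduces the Christmas-on-a-Sunday example.

    # Weekday lookup under the Hanke-Henry Permanent Calendar (illustrative sketch).
    # Month lengths follow the 30/30/31 quarterly pattern; the assumption that
    # 1 January falls on a Sunday is made for illustration only.

    MONTH_LENGTHS = [30, 30, 31] * 4  # Jan..Dec; every quarter is 30 + 30 + 31 = 91 days
    WEEKDAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
                "Thursday", "Friday", "Saturday"]

    def weekday(month, day):
        """Return the permanent weekday of a given date (months numbered 1-12)."""
        day_of_year = sum(MONTH_LENGTHS[:month - 1]) + day
        return WEEKDAYS[(day_of_year - 1) % 7]

    print(weekday(12, 25))  # Christmas -> 'Sunday', every single year
    print(weekday(9, 31))   # September now has a 31st day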

“Our plan offers a stable calendar that is absolutely identical from year to year and which allows the permanent, rational planning of annual activities, from school to work holidays,” says Henry, who is also director of the Maryland Space Grant Consortium. “Think about how much time and effort are expended each year in redesigning the calendar of every single organization in the world and it becomes obvious that our calendar would make life much simpler and would have noteworthy benefits.”

Among the practical advantages would be the convenience afforded by birthdays and holidays (as well as work holidays) falling on the same day of the week every year. But the economic benefits are even more profound, according to Hanke, an expert in international economics, including monetary policy.

“Our calendar would simplify financial calculations and eliminate what we call the ‘rip off’ factor,” explains Hanke. “When determining how much interest accrues on mortgages, bonds, forward rate agreements, swaps and others, day counts are required. Our current calendar is full of anomalies that have led to the establishment of a wide range of conventions that attempt to simplify interest calculations. Our proposed permanent calendar has a predictable 91-day quarterly pattern of two months of 30 days and a third month of 31 days, which does away with the need for artificial day count conventions.”
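
To get a feel for what Hanke means, here is a rough sketch (not the authors' own calculation) comparing simple quarterly interest accrual under the varying quarter lengths of the Gregorian calendar with the fixed 91-day quarters of the proposed calendar. The principal and interest rate are invented for illustration.

    # Why fixed quarter lengths simplify day counts (illustrative figures only).

    principal = 1_000_000.0
    annual_rate = 0.04

    # Gregorian quarters vary in length (here, a non-leap year: 90, 91, 92, 92 days),
    # which is why conventions such as 30/360 or actual/365 exist.
    for days in [90, 91, 92, 92]:
        print(days, round(principal * annual_rate * days / 365, 2))

    # Under the proposed calendar every quarter is exactly 91 days of a 364-day year,
    # so each quarter accrues an identical, predictable amount.
    print(91, round(principal * annual_rate * 91 / 364, 2))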

According to Hanke and Henry, their calendar is an improvement on the dozens of rival reform calendars proffered by individuals and institutions over the last century.

“Attempts at reform have failed in the past because all of the major ones have involved breaking the seven-day cycle of the week, which is not acceptable to many people because it violates the Fourth Commandment about keeping the Sabbath Day,” Henry explains. “Our version never breaks that cycle.”

Henry posits that his team’s version is far more convenient, sensible and easier to use than the current Gregorian calendar, which has been in place for four centuries – ever since 1582, when Pope Gregory altered a calendar that was instituted in 46 BC by Julius Caesar.

In an effort to bring Caesar’s calendar into sync with the seasons, the pope’s team removed 10 days from the calendar in October, so that Oct. 4 was followed immediately by Oct. 15. This adjustment was necessary in order to deal with the same knotty problem that makes designing an effective and practical new calendar such a challenge: the fact that each Earth year is 365.2422 days long.

Hanke and Henry deal with those extra “pieces” of days by dropping leap years entirely in favour of an extra week added at the end of December every five or six years. This brings the calendar in sync with the seasonal changes as the Earth circles the sun.
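
Here is a toy sketch of that bookkeeping. It is not the authors' actual leap-week rule, just an illustration of how a shortfall of roughly 1.24 days per year builds up to a full week every five or six years.

    # Toy illustration of leap-week accumulation (not the authors' actual rule).

    TROPICAL_YEAR = 365.2422   # days in one orbit of the sun
    CALENDAR_YEAR = 364        # 52 weeks exactly

    drift = 0.0
    for year in range(1, 29):
        drift += TROPICAL_YEAR - CALENDAR_YEAR   # ~1.24 days of shortfall per year
        if drift >= 7:
            drift -= 7
            print(f"year {year}: insert a leap week (remaining drift {drift:.2f} days)")
    # Prints leap weeks at years 6, 12, 17 and 23 -- every five or six years.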

In addition to advocating the adoption of this new calendar, Hanke and Henry encourage the abolition of world time zones and the adoption of “Universal Time” (formerly known as Greenwich Mean Time) in order to synchronize dates and times worldwide, streamlining international business.

“One time throughout the world, one date throughout the world,” they write in a January 2012 Global Asia article about their proposals. “Business meetings, sports schedules and school calendars would be identical every year. Today’s cacophony of time zones, daylight savings times and calendar fluctuations, year after year, would be over. The economy — that’s all of us — would receive a permanent ‘harmonization’ dividend.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Johns Hopkins University


Everything you need to know about statistics (but were afraid to ask)

Does the thought of p-values and regressions make you break out in a cold sweat? Never fear – read on for answers to some of those burning statistical questions that keep you up 87.9% of the night.

  • What are my hypotheses?

There are two types of hypothesis you need to get your head around: null and alternative. The null hypothesis always states the status quo: there is no difference between two populations, there is no effect of adding fertiliser, there is no relationship between weather and growth rates.

Basically, nothing interesting is happening. Generally, scientists conduct an experiment seeking to disprove the null hypothesis. We build up evidence, through data collection, against the null, and if the evidence is sufficient we can say with a degree of probability that the null hypothesis is not true.

We then accept the alternative hypothesis. This hypothesis states the opposite of the null: there is a difference, there is an effect, there is a relationship.
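
As a concrete illustration, here is a minimal sketch of stating and testing the fertiliser hypotheses with a two-sample t-test (one of many tests you could choose). The growth figures below are invented.

    # H0: fertiliser has no effect on growth; H1: it does. Data are invented.
    from scipy import stats

    control    = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]   # growth (cm) without fertiliser
    fertilised = [4.9, 5.1, 4.6, 5.3, 4.8, 5.0]   # growth (cm) with fertiliser

    t_stat, p_value = stats.ttest_ind(control, fertilised)
    print(t_stat, p_value)   # a small p-value is evidence against the null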

  • What’s so special about 5%?

One of the most common numbers you stumble across in statistics is alpha = 0.05 (or in some fields 0.01 or 0.10). Alpha denotes the fixed significance level for a given hypothesis test. Before starting any statistical analyses, along with stating hypotheses, you choose a significance level you’re testing at.

This states the threshold at which you are prepared to accept the possibility of a Type I Error – otherwise known as a false positive – rejecting a null hypothesis that is actually true.

  • Type what error?

Most often we are concerned primarily with reducing the chance of a Type I Error over its counterpart (Type II Error – accepting a false null hypothesis). It all depends on what the impact of either error will be.

Take a pharmaceutical company testing a new drug; if the drug actually doesn’t work (a true null hypothesis) then rejecting this null and asserting that the drug does work could have huge repercussions – particularly if patients are given this drug over one that actually does work. The pharmaceutical company would be concerned primarily with reducing the likelihood of a Type I Error.

Sometimes, a Type II Error could be more important. Environmental testing is one such example; if the effect of toxins on water quality is examined, and in truth the null hypothesis is false (that is, the presence of toxins does affect water quality) a Type II Error would mean accepting a false null hypothesis, and concluding there is no effect of toxins.

The downstream issues could be dire if toxin levels are allowed to remain high and people using that water suffer health effects.

  • What is a p-value, really?

Because p-values are thrown about in science like confetti, it’s important to understand what they do and don’t mean. A p-value expresses the probability of getting a given result from a hypothesis test, or a more extreme result, if the null hypothesis were true.

Given we are trying to reject the null hypothesis, what this tells us is the odds of getting our experimental data if the null hypothesis is correct. If the odds are sufficiently low we feel confident in rejecting the null and accepting the alternative hypothesis.

What is sufficiently low? As mentioned above, the typical fixed significance level is 0.05. So if the probability portrayed by the p-value is less than 5% you reject the null hypothesis. But a fixed significance level can be deceiving: if 5% is significant, why is 6% not?

It pays to remember that such probabilities are continuous, and any given significance level is arbitrary. In other words, don’t throw your data away simply because you get a p-value of 6-10%.
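
To make that concrete, here is a small sketch of where a p-value comes from: it is the area in the tails of the null distribution beyond the observed test statistic. The test statistic and degrees of freedom below are invented for illustration.

    # A p-value is the probability, under the null distribution, of a result
    # at least as extreme as the one observed. Numbers are invented.
    from scipy import stats

    t_observed = 2.1   # an illustrative t statistic
    df = 10            # degrees of freedom for the test

    # two-sided p-value: the area in both tails of the null t distribution
    p_value = 2 * stats.t.sf(abs(t_observed), df)
    print(p_value)     # ~0.06 here: above 0.05, yet barely different evidence from 0.049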

  • How much replication do I have?

This is probably the biggest issue when it comes to experimental design, in which the focus is on ensuring the right type of data, in large enough quantities, is available to answer given questions as clearly and efficiently as possible.

Pseudoreplication refers to the over-inflation of degrees of freedom (a mathematical restriction put in place when we calculate a parameter – e.g. a mean – from a sample). How would this work in practice?

Say you’re researching cholesterol levels by taking blood from 20 male participants.

Each male is tested twice, giving 40 test results. But the level of replication is not 40, it’s actually only 20 – a requisite for replication is that each replicate is independent of all others. In this case, two blood tests from the same person are intricately linked.

If you were to analyse the data with a sample size of 40, you would be committing the sin of pseudoreplication: inflating your degrees of freedom (which incidentally helps to create a significant test result). Thus, if you start an experiment understanding the concept of independent replication, you can avoid this pitfall.
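
Here is a minimal sketch of the cholesterol example, with invented numbers, showing one common remedy: collapse the repeated measurements to a single independent value per person before analysing.

    # Two blood tests per person are not independent, so average them first.
    # The readings are randomly generated for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    readings = rng.normal(5.2, 0.6, size=(20, 2))   # 20 men x 2 repeat tests each

    wrong_n = readings.size              # 40 "replicates" -- pseudoreplication
    per_person = readings.mean(axis=1)   # one independent value per participant
    print(wrong_n, per_person.shape[0])  # 40 versus the true sample size of 20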

  • How do I know what analysis to do?

There is a key piece of prior knowledge that will help you determine how to analyse your data: what kind of variable are you dealing with? The two most common types of variable are:

  • Continuous variables. These can take any value. Were you to measure the time until a reaction was complete, the results might be 30 seconds, two minutes and 13 seconds, or three minutes and 50 seconds.
  • Categorical variables. These fit into – you guessed it – categories. For instance, you might have three different field sites, or four brands of fertiliser. All continuous variables can be converted into categorical variables.

With the above example we could categorise the results into less than one minute, one to three minutes, and greater than three minutes. Categorical variables cannot be converted back to continuous variables, so it’s generally best to record data as “continuous” where possible to give yourself more options for analysis.
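
As a quick sketch, here is how the reaction-time example might be collapsed into those categories (the values are invented):

    # Collapsing a continuous variable (reaction time, in seconds) into categories.
    times = [30, 133, 230, 45, 178, 62]   # invented reaction times in seconds

    def categorise(t):
        if t < 60:
            return "less than one minute"
        if t <= 180:
            return "one to three minutes"
        return "greater than three minutes"

    print([categorise(t) for t in times])
    # The reverse is impossible: "one to three minutes" cannot be turned back into 133 seconds.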

Deciding which to use between the two main types of analysis is easy once you know what variables you have:

ANOVA (Analysis of Variance) is used to compare a continuous variable across the levels of a categorical variable – for instance, plant growth in centimetres across different fertiliser treatments.

Linear Regression is used when comparing two continuous variables – for instance, time versus growth in centimetres.

Though there are many analysis tools available, ANOVA and linear regression will get you a long way in looking at your data. So if you can start by working out what variables you have, it’s an easy second step to choose the relevant analysis.
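
Here is a minimal sketch of both analyses using SciPy, with invented data. Treat it as a starting point rather than a complete workflow; a real analysis should also check the assumptions behind each test.

    from scipy import stats

    # ANOVA: does plant growth (continuous) differ between fertiliser brands (categorical)?
    brand_a = [5.1, 4.8, 5.4, 5.0]
    brand_b = [6.2, 6.0, 5.8, 6.4]
    brand_c = [4.2, 4.5, 4.1, 4.4]
    f_stat, p_anova = stats.f_oneway(brand_a, brand_b, brand_c)

    # Linear regression: how does growth (continuous) change with time (continuous)?
    days   = [1, 2, 3, 4, 5, 6]
    growth = [1.1, 2.0, 2.8, 4.1, 4.9, 6.2]
    fit = stats.linregress(days, growth)

    print(p_anova, fit.slope, fit.pvalue)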

Ok, so perhaps that’s not everything you need to know about statistics, but it’s a start. Go forth and analyse!

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Sarah-Jane O’Connor, University of Canterbury


Paddlefish sensors tuned to detect signals from zooplankton prey

In 1997, scientists at the Center for Neurodynamics at the University of Missouri – St. Louis demonstrated that special sensors covering the elongated snout of paddlefish are electroreceptors that help the fish detect prey by responding to the weak voltage gradients that swimming zooplankton create in the surrounding water.

Now some of the same researchers have found that the electroreceptors contain oscillators, which generate rhythmical firing of electrosensory neurons.

The oscillators allow the electroreceptors to create a dynamical code to most effectively respond to electrical signals emitted naturally by zooplankton.

The results are presented in a paper appearing in the AIP’s journal Chaos.

To test the response of paddlefish electroreceptors to different stimuli, the researchers recorded signals from electrosensory neurons of live fish, while applying weak electric fields to the water in the form of computer-generated artificial stimuli or signals obtained previously from swimming zooplankton.

The team then analysed the power contained in different frequency ranges for the noisy input signals and the corresponding electroreceptor responses and compared the two.
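
For readers curious what such a comparison looks like in practice, here is an illustrative sketch (not the authors' analysis code): it estimates the power spectra of a synthetic noisy stimulus and a simulated response, then measures how strongly the two agree at each frequency. The sampling rate and signals are invented.

    # Illustrative spectral comparison with synthetic signals (not the study's code).
    import numpy as np
    from scipy import signal

    fs = 1000.0                                  # sampling rate in Hz (assumed)
    rng = np.random.default_rng(1)
    t = np.arange(0, 10, 1 / fs)

    stimulus = rng.normal(size=t.size)           # stand-in for a noisy electrical stimulus
    # a crude "receptor": smooth the stimulus and add sensor noise
    response = np.convolve(stimulus, np.ones(25) / 25, mode="same") + 0.3 * rng.normal(size=t.size)

    f_in, psd_in = signal.welch(stimulus, fs=fs)     # power versus frequency for the input
    f_out, psd_out = signal.welch(response, fs=fs)   # ...and for the response
    f_c, coh = signal.coherence(stimulus, response, fs=fs)

    print(f_out[np.argmax(psd_out)])   # frequency where the response carries most power
    print(coh.max())                   # peak agreement between stimulus and response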

In addition to finding that the paddlefish sensors best encode the signals emitted by zooplankton, the team also found that as the strength of the input signal was raised, the firing of the fish’s sensory neurons transitioned from a steady beat to a noisy pattern of intermittent bursts.

This bursting pattern became synchronized across different groups of electroreceptors, increasing the likelihood of the signal reaching higher-order neurons.

This provides a plausible mechanism to explain how reliable information about the nearness of prey is transferred to the fish’s brain, the researchers write.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to American Institute of Physics


Are pigeons as smart as primates? You can count on it

We humans have long been interested in defining the abilities that set us apart from other species. Along with capabilities such as language, the ability to recognise and manipulate numbers (“numerical competence”) has long been seen as a hallmark of human cognition.

In reality, a number of animal species are numerically competent and according to new research from psychologists at the University of Otago in New Zealand, the humble pigeon could be one such species.

Damian Scarf, Harlene Hayne and Michael Colombo found that pigeons possess far greater numerical abilities than was previously thought, actually putting them on par with primates.

More on pigeons in a moment, but first: why would non-human animals even need to be numerically competent? Would they encounter numerical problems in day-to-day life?

In fact, there are many reports indicating that number is an important factor in the way many species behave.

Brown cowbirds are nest parasites – they lay their eggs in the nests of “host” species; species that are then landed with the job of raising a young cowbird.

Cowbirds are sensitive to the number of eggs in the host nest, preferring to lay in nests with three host eggs rather than one. This presumably ensures the host parent is close to the end of laying a complete clutch and will begin incubating shortly after the parasite egg has been added.

Crows identify individuals by the number of caw sounds in their vocalisations, while lionesses appear to evaluate the risk of approaching intruder lions based on how many individuals they hear roaring.

But numerical competence is about more than an ability to count. In fact, it’s three distinct abilities:

  • the “cardinal” aspect: the ability to evaluate quantity (eg. counting the number of eggs already in a nest)
  • the “ordinal” aspect: the ability to put an arbitrary collection of items in their correct order or rank (eg. ordering a list of animals based on the number of legs they have, or ordering the letters of the alphabet)
  • the “symbolic” aspect: the ability to symbolically represent a given numerical quantity (eg. the number “3” or the word “three” are symbols that represent the quantity 3).

We know that humans are capable of all three aspects of numerical competence, but what about other animals?

For a start, we already know that the cowbird, lion and crow possess the cardinal aspect of numerical competency – they are all able to count. Pigeons possess the cardinal aspect too (as was reported as early as 1941) as do several other vertebrate and invertebrate species.

And in 1998, Elizabeth Brannon and Herbert Terrace showed that rhesus monkeys have the ability to order arrays of objects according to the number of items contained within these arrays. After learning to order sets of one, two and three items, the monkeys were able to order any three sets containing from one to nine items.

This discovery represented a clear progression in complexity, since ranking according to numerical quantity is an abstract ability – the ordinal aspect.

The new research by Scarf, Hayne and Colombo – which was published in Science in late December – has pushed our understanding of numerical abilities in the animal kingdom even further.

So what did they do?

Well, first they trained pigeons to peck three “stimulus arrays” – collections of objects on a touch screen. These arrays contained one, two or three objects and to receive a reward, the pigeon had to peck the arrays in order – the array with one object first, the array with two objects second, the array with three objects third.

Once this basic requirement was learned, the pigeons were presented with different object sets – one set containing arrays with one to three objects, and sets containing up to nine objects.

Having been presented with these novel object sets, the pigeons were once again required to peck the sets in ascending order. Pigeons solved the task successfully, even though they had never been trained with arrays containing more than three items.

In fact, they performed on par with rhesus monkeys, demonstrating that both pigeons and monkeys are able to identify and order the numbers from one to nine. This is significant because it shows these complex numerical abilities are not confined to the primates (and that pigeons are smarter than many people think!)

So, if non-human animals possess the cardinal and ordinal aspects of numerical competency, that means it’s the symbolic representation of numbers that makes humans unique, right?

As it turns out, no.

It’s been shown that red wood ants (Formica polyctena) can not only count up to several tens (20, 30 etc.), but can also communicate this numerical information to their brethren.

It would seem, therefore, that not even the symbolic representation of numerical information is specific to humans.

Of course, we still have much more to discover and understand within this fascinating field of research. In the meantime, you might want to think twice before dismissing pigeons as “stupid birds”.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to David Guez, University of Newcastle and Andrea S. Griffin, University of Newcastle

 


Calls for a posthumous pardon … but who was Alan Turing?

You may have read the British Government is being petitioned to grant a posthumous pardon to one of the world’s greatest mathematicians and most successful codebreakers, Alan Turing. You may also have read that Turing was convicted of gross indecency in 1952 and died tragically two years later.

But who, exactly, was he?

Born in London in 1912, Turing helped lay the foundations of the “information age” we live in.

He did his first degree at King’s College, Cambridge, and then became a Fellow there. His first big contribution was his development of a mathematical model of computation in 1936. This became known as the Turing Machine.

It was not the first time a computer had been envisaged: that distinction belonged to Charles Babbage, a 19th century mathematician who designed a computer based on mechanical technology and built parts of it (some of which may be seen at the Science Museum in London or Powerhouse Museum in Sydney, for example).

But Babbage’s design was necessarily complicated, as he aimed for a working device using specific technology. Turing’s design was independent of any particular technology and was not intended to be built.

It was very simple and would be very inefficient and impractical as a device for doing real computations. But its simplicity meant it could be used to do mathematical reasoning about computation.
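
To give a flavour of just how simple the model is, here is a minimal sketch of a Turing-machine simulator in Python. The machine encoded below merely flips every bit on its tape and halts at the first blank; it is a toy example, not one of Turing's own machines.

    # A toy Turing-machine simulator: a table of rules, a tape, and a read/write head.

    def run(tape, rules, state="start", blank="_", max_steps=1000):
        cells = dict(enumerate(tape))          # sparse tape indexed by position
        head = 0
        for _ in range(max_steps):
            symbol = cells.get(head, blank)
            if (state, symbol) not in rules:   # no applicable rule: the machine halts
                break
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    flip_bits = {                              # (state, symbol) -> (write, move, next state)
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
    }
    print(run("10110", flip_bits))             # -> "01001"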

Turing used his abstract machines to investigate what kinds of things could be computed. He found some tasks which, although perfectly well defined and mathematically precise, are uncomputable. The first of these is known as the halting problem, which asks, for any given computation, whether it will ever stop. Turing showed that this was uncomputable: there is no systematic method that always gives the right answer.

So, if you have ever wanted a program that can run on your laptop and test all your other software to determine which of them might cause your laptop to “hang” or get stuck in a never-ending loop, the bad news is such a comprehensive testing program cannot be written.
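
The flavour of Turing's argument can be sketched in a few lines. Suppose, for the sake of contradiction, that someone handed you a correct halts(program, data) function; the hypothetical program below would defeat it. Note that halts here is a placeholder that cannot actually be implemented; the sketch exists only to show the contradiction.

    # Sketch of the halting-problem argument. The function halts() is hypothetical:
    # we assume it exists and always answers correctly, then derive a contradiction.

    def halts(program, data):
        raise NotImplementedError("no total, always-correct halting tester can exist")

    def paradox(program):
        if halts(program, program):   # would this program halt when fed itself?
            while True:               # ...then deliberately loop forever
                pass
        # ...otherwise halt immediately

    # Does paradox(paradox) halt? If halts() answered yes, paradox loops forever;
    # if it answered no, paradox halts. Either answer makes halts() wrong,
    # so no such function can exist.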

Uncomputability is not confined to questions about the behaviour of computer programs. Since Turing’s work, many problems in mainstream mathematics have been found to be uncomputable. For example, the Russian mathematician and computer scientist, Yuri Matiyasevich, showed in 1970 that determining if a polynomial equation with several variables has a solution consisting only of whole numbers is also an uncomputable problem.

Turing machines have been used to define measures of the efficiency of computations. They underpin formal statements of the P vs NP problem, one of the Millennium Prize problems.

Another important feature of Turing’s model is its capacity to treat programs as data. This means the programs that tell computers what to do can themselves, after being represented in symbolic form, be given as input to other programs. Turing Machines that can take any program as input, and run that program on some input data, are called Universal Turing Machines.

These are really conceptual precursors of today’s computers, which are stored-program computers, in that they can treat programs as data in this sense. The oldest surviving intact computer in the world, in this most complete sense of the term, is CSIRAC at Melbourne Museum.

It seems a mathematical model of computation was an idea whose time had come. In 1936, the year of Turing’s result, another model of computation was published by Alonzo Church of Princeton University. Although Turing and Church took quite different routes, they ended up at the same place, in that the two models give exactly the same notion of computability.

In other words, the classification of tasks into computable and uncomputable is independent of which of these two models is used.

Other models of computation have been proposed, but mostly they seem to lead to the same view of what is and is not computable. The Church-Turing Thesis states that this class of computable functions does indeed capture exactly those things which can be computed in principle (say by a human with unlimited time, paper and ink, who works methodically and makes no mistakes).

It implies Turing Machines give a faithful mathematical model of computation. This is not a formal mathematical result, but rather a working assumption which is now widely accepted.

Turing went to Princeton and completed his PhD under Church, returning to Britain in 1938.

Early in the Second World War, Turing joined the British codebreaking operation at Bletchley Park, north-west of London. He became one of its most valuable assets. He was known by the nickname “Prof” and was described by colleague Jack Good as “a deep rather than a fast thinker”.

At the time, Germany was using an encryption device known as Enigma for much of its communications. This was widely regarded as completely secure. The British had already obtained an Enigma machine, from the Poles, and building on their work, Turing and colleague Gordon Welchman worked out how the Enigma-encrypted messages collected by the British could be decrypted.

Turing designed a machine called the Bombe, named after a Polish ice cream, which worked by testing large numbers of combinations of Enigma machine configurations, in order to help decrypt secret messages. These messages yielded information of incalculable value to the British. Winston Churchill described the Bletchley Park codebreakers as “geese that laid the golden eggs but never cackled”.

In 1945, after the war, Turing joined the National Physical Laboratory (NPL), where he wrote a report on how to construct an electronic computer, this time a general-purpose one unlike the machines dedicated to cryptanalysis which he helped to design at Bletchley Park.

This report led to the construction of an early computer (Pilot ACE) at NPL in 1950. By then, Turing had already moved on to Manchester University, where he worked on the first general-purpose stored-program computer in the world, the Manchester “Baby”.

In their early days, computers were often called “electronic brains”. Turing began to consider whether a computer could be programmed to simulate human intelligence, a question that helped to initiate the field of artificial intelligence and remains a major research challenge today.

A fundamental issue in such research is: how do you know if you have succeeded? What test can you apply to a program to determine if it has intelligence? Turing proposed that a program be deemed intelligent if, in its interaction with a human, the human is unable to detect whether he or she is communicating with another human or a computer program. (The test requires a controlled setting, for example where all communication with the human tester is by typed text.)

His paper on this topic – Computing Machinery and Intelligence – was published in 1950. The artificial intelligence community holds regular competitions to see how good researchers’ programs are at the Turing test.

The honours Turing received during his lifetime included an OBE in 1945 and becoming a Fellow of the Royal Society in 1951.

His wartime contributions remained secret throughout his life and for many years afterwards.

In 1952 he was arrested for homosexuality, which was illegal in Britain at the time. Turing was found guilty and required to undergo “treatment” with drugs. This conviction also meant he lost his security clearance.

In 1954 he ingested some cyanide, probably via an apple, and died. An inquest classified his death as suicide, and this is generally accepted today. But some at the time, including his mother, contended his death was an accidental consequence of poor handling of chemicals during some experiments he was conducting at home in his spare time.

The irony of Turing losing his security clearance – after the advantage his work had given Britain in the war, in extraordinary secrecy – is clear.

The magnitude of what was done to him has become increasingly plain over time, helped by greater availability of information about the work at Bletchley Park and changing social attitudes to homosexuality.

Next year, 2012, will be the centenary of Turing’s birth – with events planned globally to celebrate the man and his contribution. As this year approached, a movement developed to recognise Turing’s contribution and atone for what was done to him. In 2009, British Prime Minister, Gordon Brown, responding to a petition, issued a formal apology on behalf of the British government for the way Turing was treated.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Graham Farr, Monash University