The faster-than-fast Fourier transform

The Fourier transform is one of the most fundamental concepts in the information sciences. It’s a method for representing an irregular signal — such as the voltage fluctuations in the wire that connects an MP3 player to a loudspeaker — as a combination of pure frequencies. It’s universal in signal processing, but it can also be used to compress image and audio files, solve differential equations and price stock options, among other things.

The reason the Fourier transform is so prevalent is an algorithm called the fast Fourier transform (FFT), devised in the mid-1960s, which made it practical to calculate Fourier transforms on the fly. Ever since the FFT was proposed, however, people have wondered whether an even faster algorithm could be found.

At the Association for Computing Machinery’s Symposium on Discrete Algorithms (SODA) this week, a group of MIT researchers will present a new algorithm that, in a large range of practically important cases, improves on the fast Fourier transform. Under some circumstances, the improvement can be dramatic — a tenfold increase in speed. The new algorithm could be particularly useful for image compression, enabling, say, smartphones to wirelessly transmit large video files without draining their batteries or consuming their monthly bandwidth allotments.

Like the FFT, the new algorithm works on digital signals. A digital signal is just a series of numbers — discrete samples of an analog signal, such as the sound of a musical instrument. The FFT takes a digital signal containing a certain number of samples and expresses it as the weighted sum of an equivalent number of frequencies.

“Weighted” means that some of those frequencies count more toward the total than others. Indeed, many of the frequencies may have such low weights that they can be safely disregarded. That’s why the Fourier transform is useful for compression. An eight-by-eight block of pixels can be thought of as a 64-sample signal, and thus as the sum of 64 different frequencies. But as the researchers point out in their new paper, empirical studies show that on average, 57 of those frequencies can be discarded with minimal loss of image quality.
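
That idea can be made concrete with a small numerical sketch. The following Python snippet (my illustration, not the researchers' code, and one-dimensional rather than an eight-by-eight pixel block) keeps only the 7 heaviest of 64 Fourier coefficients and shows how little of the signal is lost:

    import numpy as np

    # A 64-sample "signal": one strong tone plus a little noise.
    n = 64
    t = np.arange(n)
    signal = np.sin(2 * np.pi * 5 * t / n) + 0.05 * np.random.randn(n)

    # Transform to the frequency domain.
    coeffs = np.fft.fft(signal)

    # Keep only the 7 heaviest frequencies; zero out the other 57.
    k = 7
    keep = np.argsort(np.abs(coeffs))[-k:]
    compressed = np.zeros_like(coeffs)
    compressed[keep] = coeffs[keep]

    # Reconstruct and measure how little was lost.
    approx = np.fft.ifft(compressed).real
    error = np.linalg.norm(signal - approx) / np.linalg.norm(signal)
    print(f"relative error keeping {k} of {n} coefficients: {error:.3f}")

For a signal dominated by a few tones, the relative error stays small even though 57 of the 64 coefficients have been thrown away.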

Heavyweight division

Signals whose Fourier transforms include a relatively small number of heavily weighted frequencies are called “sparse.” The new algorithm determines the weights of a signal’s most heavily weighted frequencies; the sparser the signal, the greater the speedup the algorithm provides. Indeed, if the signal is sparse enough, the algorithm can simply sample it randomly rather than reading it in its entirety.

“In nature, most of the normal signals are sparse,” says Dina Katabi, one of the developers of the new algorithm. Consider, for instance, a recording of a piece of chamber music: The composite signal consists of only a few instruments each playing only one note at a time. A recording, on the other hand, of all possible instruments each playing all possible notes at once wouldn’t be sparse — but neither would it be a signal that anyone cares about.

The new algorithm — which associate professor Katabi and professor Piotr Indyk, both of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), developed together with their students Eric Price and Haitham Hassanieh — relies on two key ideas. The first is to divide a signal into narrower slices of bandwidth, sized so that a slice will generally contain only one frequency with a heavy weight.

In signal processing, the basic tool for isolating particular frequencies is a filter. But filters tend to have blurry boundaries: One range of frequencies will pass through the filter more or less intact; frequencies just outside that range will be somewhat attenuated; frequencies outside that range will be attenuated still more; and so on, until you reach the frequencies that are filtered out almost perfectly.

If it so happens that the one frequency with a heavy weight is at the edge of the filter, however, it could end up so attenuated that it can’t be identified. So the researchers’ first contribution was to find a computationally efficient way to combine filters so that they overlap, ensuring that no frequencies inside the target range will be unduly attenuated, but that the boundaries between slices of spectrum are still fairly sharp.
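
The blurriness is easy to demonstrate. The sketch below (my own Python illustration, not the researchers' filter construction) compares the frequency response of a plain rectangular window with a smoother Hann window; the rectangular window leaks far more energy into neighbouring frequencies:

    import numpy as np

    # Frequency responses of two 32-tap windows, zero-padded for resolution.
    n, pad = 32, 1024
    boxcar = np.ones(n)
    hann = np.hanning(n)

    def response_db(window):
        spectrum = np.abs(np.fft.rfft(window, pad))
        # Small epsilon avoids log10(0) at the response nulls.
        return 20 * np.log10(spectrum / spectrum.max() + 1e-12)

    # Attenuation three and a half "natural" bins from the passband centre.
    offset = int(3.5 * pad / n)
    print(f"boxcar leakage: {response_db(boxcar)[offset]:6.1f} dB")
    print(f"hann leakage:   {response_db(hann)[offset]:6.1f} dB")

Sharpening one part of the response generally costs something elsewhere, which is why an efficient way of combining overlapping filters is a genuine contribution.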

Zeroing in

Once they’ve isolated a slice of spectrum, however, the researchers still have to identify the most heavily weighted frequency in that slice. In the SODA paper, they do this by repeatedly cutting the slice of spectrum into smaller pieces and keeping only those in which most of the signal power is concentrated. But in an as-yet-unpublished paper, they describe a much more efficient technique, which borrows a signal-processing strategy from 4G cellular networks. Frequencies are generally represented as up-and-down squiggles, but they can also be thought of as oscillations; by sampling the same slice of bandwidth at different times, the researchers can determine where the dominant frequency is in its oscillatory cycle.
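
The article doesn't spell out the details, but the underlying trick, recovering a tone's frequency from how far its phase advances between two sample times, can be sketched as follows (everything here, including the sampling rate, is an assumption for illustration):

    import numpy as np

    fs = 1000.0                  # assumed sampling rate in Hz
    true_freq = 37.0             # the dominant tone we want to locate
    dt = 1.0 / fs

    # The same tone observed at two instants separated by dt.
    t1 = 0.2
    x1 = np.exp(2j * np.pi * true_freq * t1)
    x2 = np.exp(2j * np.pi * true_freq * (t1 + dt))

    # The phase advance over dt tells us where the tone is in its
    # oscillatory cycle, and hence its frequency (unambiguous up to fs).
    phase_step = np.angle(x2 * np.conj(x1))
    estimate = phase_step / (2 * np.pi * dt)
    print(f"estimated frequency: {estimate:.1f} Hz")   # ~37.0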

Two University of Michigan researchers — Anna Gilbert, a professor of mathematics, and Martin Strauss, an associate professor of mathematics and of electrical engineering and computer science — had previously proposed an algorithm that improved on the FFT for very sparse signals. “Some of the previous work, including my own with Anna Gilbert and so on, would improve upon the fast Fourier transform algorithm, but only if the sparsity k” — the number of heavily weighted frequencies — “was considerably smaller than the input size n,” Strauss says. The MIT researchers’ algorithm, however, “greatly expands the number of circumstances where one can beat the traditional FFT,” Strauss says. “Even if that number k is starting to get close to n — to all of them being important — this algorithm still gives some improvement over FFT.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Larry Hardesty, Massachusetts Institute of Technology

 


Are Pigeons as Smart as Primates? You can Count on It

The humble pigeon mightn’t look smart, but it’s no bird-brain.

We humans have long been interested in defining the abilities that set us apart from other species. Along with capabilities such as language, the ability to recognise and manipulate numbers (“numerical competence”) has long been seen as a hallmark of human cognition.

In reality, a number of animal species are numerically competent and, according to new research from psychologists at the University of Otago in New Zealand, the humble pigeon could be one such species.

Damian Scarf, Harlene Hayne and Michael Colombo found that pigeons possess far greater numerical abilities than was previously thought, actually putting them on par with primates.

More on pigeons in a moment, but first: why would non-human animals even need to be numerically competent? Would they encounter numerical problems in day-to-day life?

In fact, there are many reports indicating that number is an important factor in the way many species behave.

Brown cowbirds are nest parasites – they lay their eggs in the nests of “host” species; species that are then landed with the job of raising a young cowbird.

 

Cowbirds are sensitive to the number of eggs in the host nest, preferring to lay in nests with three host eggs rather than one. This presumably ensures the host parent is close to the end of laying a complete clutch and will begin incubating shortly after the parasite egg has been added.

Crows identify individuals by the number of caw sounds in their vocalisations, while lionesses appear to evaluate the risk of approaching intruder lions based on how many individuals they hear roaring.

But numerical competence is about more than an ability to count. In fact, it’s three distinct abilities:

  • the “cardinal” aspect: the ability to evaluate quantity (eg. counting the number of eggs already in a nest)
  • the “ordinal” aspect: the ability to put an arbitrary collection of items in their correct order or rank (eg. ordering a list of animals based on the number of legs they have, or ordering the letters of the alphabet)
  • the “symbolic” aspect: the ability to symbolically represent a given numerical quantity (eg. the number “3” or the word “three” are symbols that represent the quantity 3).

We know that humans are capable of all three aspects of numerical competence, but what about other animals?

For a start, we already know that the cowbird, lion and crow possess the cardinal aspect of numerical competency – they are all able to count. Pigeons possess the cardinal aspect too (as was reported as early as 1941) as do several other vertebrate and invertebrate species.

And in 1998, Elizabeth Brannon and Herbert Terrace showed that rhesus monkeys have the ability to order arrays of objects according to the number of items contained within these arrays. After learning to order sets of one, two and three items, the monkeys were able to order any three sets containing from one to nine items.

This discovery represented a clear progression in complexity, since ranking according to numerical quantity is an abstract ability – the ordinal aspect.

The new research by Scarf, Hayne and Colombo – which was published in Science in late December – has pushed our understanding of numerical abilities in the animal kingdom even further.

So what did they do?

Well, first they trained pigeons to peck three “stimulus arrays” – collections of objects on a touch screen. These arrays contained one, two or three objects and to receive a reward, the pigeon had to peck the arrays in order – the array with one object first, the array with two objects second, the array with three objects third.

Once this basic requirement was learned, the pigeons were presented with different object sets – one set containing arrays with one to three objects, and sets containing up to nine objects.

Having been presented with these novel object sets, the pigeons were once again required to peck the sets in ascending order. Pigeons solved the task successfully, even though they had never been trained with arrays containing more than three items.

A pigeon taking part in the University of Otago experiment.

In fact, they performed on par with rhesus monkeys, demonstrating that both pigeons and monkeys are able to identify and order the numbers from one to nine. This is significant because it shows these complex numerical abilities are not confined to the primates (and that pigeons are smarter than many people think!).

So if non-human animals possess the cardinal and ordinal aspects of numerical competency, that means it’s the symbolic representation of numbers that makes humans unique, right?

As it turns out, no.

It’s been shown that red wood ants (Formica polyctena) can not only count up to several tens (20, 30 etc.), but can also communicate this numerical information to their brethren.

It would seem, therefore, that not even the symbolic representation of numerical information is specific to humans.

Of course, we still have much more to discover and understand within this fascinating field of research. In the meantime, you might want to think twice before dismissing pigeons as “stupid birds”.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to David Guez, University of Newcastle and Andrea S. Griffin, University of Newcastle


Time for a change. Scholars say the calendar needs a serious overhaul!

Researchers at The Johns Hopkins University have discovered a way to make time stand still — at least when it comes to the yearly calendar.

Using computer programs and mathematical formulas, Richard Conn Henry, an astrophysicist in the Krieger School of Arts and Sciences, and Steve H. Hanke, an applied economist in the Whiting School of Engineering, have created a new calendar in which each new 12-month period is identical to the one which came before, and remains that way from one year to the next in perpetuity.

Under the Hanke-Henry Permanent Calendar, for instance, if Christmas fell on a Sunday in 2012 (and it would), it would also fall on a Sunday in 2013, 2014 and beyond. In addition, under the new calendar, the rhyme “30 days hath September, April, June and November,” would no longer apply, because September would have 31 days, as would March, June and December. All the rest would have 30. (Try creating a rhyme using that.)

“Our plan offers a stable calendar that is absolutely identical from year to year and which allows the permanent, rational planning of annual activities, from school to work holidays,” says Henry, who is also director of the Maryland Space Grant Consortium. “Think about how much time and effort are expended each year in redesigning the calendar of every single organization in the world and it becomes obvious that our calendar would make life much simpler and would have noteworthy benefits.”

Among the practical advantages would be the convenience afforded by birthdays and holidays (as well as work holidays) falling on the same day of the week every year. But the economic benefits are even more profound, according to Hanke, an expert in international economics, including monetary policy.

“Our calendar would simplify financial calculations and eliminate what we call the ‘rip off’ factor,” explains Hanke. “When determining how much interest accrues on mortgages, bonds, forward rate agreements, swaps and others, day counts are required. Our current calendar is full of anomalies that have led to the establishment of a wide range of conventions that attempt to simplify interest calculations. Our proposed permanent calendar has a predictable 91-day quarterly pattern of two months of 30 days and a third month of 31 days, which does away with the need for artificial day count conventions.”

According to Hanke and Henry, their calendar is an improvement on the dozens of rival reform calendars proffered by individuals and institutions over the last century.

“Attempts at reform have failed in the past because all of the major ones have involved breaking the seven-day cycle of the week, which is not acceptable to many people because it violates the Fourth Commandment about keeping the Sabbath Day,” Henry explains. “Our version never breaks that cycle.”

Henry posits that his team’s version is far more convenient, sensible and easier to use than the current Gregorian calendar, which has been in place for four centuries – ever since 1582, when Pope Gregory altered a calendar that was instituted in 46 BC by Julius Caesar.

In an effort to bring Caesar’s calendar into sync with the seasons, the pope’s team removed 10 days from the calendar in October, so that Oct. 4 was followed immediately by Oct. 15. This adjustment was necessary in order to deal with the same knotty problem that makes designing an effective and practical new calendar such a challenge: the fact that each Earth year is 365.2422 days long.

Hanke and Henry deal with those extra “pieces” of days by dropping leap years entirely in favour of an extra week added at the end of December every five or six years. This brings the calendar in sync with the seasonal changes as the Earth circles the sun.
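
A back-of-envelope check, using the 365.2422-day year quoted above, shows why the leap week arrives roughly every five or six years (the arithmetic below is my own, not the authors'):

    # Hanke-Henry year: four 91-day quarters of 30 + 30 + 31 days.
    calendar_year = 4 * (30 + 30 + 31)               # 364 days
    tropical_year = 365.2422

    drift_per_year = tropical_year - calendar_year   # ~1.24 days per year
    years_per_leap_week = 7 / drift_per_year         # ~5.6 years

    print(f"calendar year: {calendar_year} days")
    print(f"drift: {drift_per_year:.4f} days per year")
    print(f"a full week accumulates every {years_per_leap_week:.1f} years")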

In addition to advocating the adoption of this new calendar, Hanke and Henry encourage the abolition of world time zones and the adoption of “Universal Time” (formerly known as Greenwich Mean Time) in order to synchronize dates and times worldwide, streamlining international business.

“One time throughout the world, one date throughout the world,” they write in a January 2012 Global Asia article about their proposals. “Business meetings, sports schedules and school calendars would be identical every year. Today’s cacophony of time zones, daylight savings times and calendar fluctuations, year after year, would be over. The economy — that’s all of us — would receive a permanent ‘harmonization’ dividend.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Johns Hopkins University


Everything you need to know about statistics (but were afraid to ask)

Does the thought of p-values and regressions make you break out in a cold sweat? Never fear – read on for answers to some of those burning statistical questions that keep you up 87.9% of the night.

  • What are my hypotheses?

There are two types of hypothesis you need to get your head around: null and alternative. The null hypothesis always states the status quo: there is no difference between two populations, there is no effect of adding fertiliser, there is no relationship between weather and growth rates.

Basically, nothing interesting is happening. Generally, scientists conduct an experiment seeking to disprove the null hypothesis. We build up evidence, through data collection, against the null, and if the evidence is sufficient we can say with a degree of probability that the null hypothesis is not true.

We then accept the alternative hypothesis. This hypothesis states the opposite of the null: there is a difference, there is an effect, there is a relationship.

  • What’s so special about 5%?

One of the most common numbers you stumble across in statistics is alpha = 0.05 (or in some fields 0.01 or 0.10). Alpha denotes the fixed significance level for a given hypothesis test. Before starting any statistical analyses, along with stating hypotheses, you choose a significance level you’re testing at.

This states the threshold at which you are prepared to accept the possibility of a Type I Error – otherwise known as a false positive – rejecting a null hypothesis that is actually true.
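
A quick simulation makes the threshold tangible. In the sketch below (illustrative only, using scipy's standard two-sample t-test), the null hypothesis is true by construction, yet roughly 5% of experiments reject it anyway:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    alpha, trials = 0.05, 10_000
    false_positives = 0

    for _ in range(trials):
        # Two samples drawn from the SAME population: the null is true.
        a = rng.normal(loc=0.0, scale=1.0, size=30)
        b = rng.normal(loc=0.0, scale=1.0, size=30)
        if stats.ttest_ind(a, b).pvalue < alpha:
            false_positives += 1   # a Type I Error (false positive)

    print(f"false positive rate: {false_positives / trials:.3f}")   # ~0.05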

  • Type what error?

Most often we are concerned primarily with reducing the chance of a Type I Error over its counterpart (Type II Error – accepting a false null hypothesis). It all depends on what the impact of either error will be.

Take a pharmaceutical company testing a new drug; if the drug actually doesn’t work (a true null hypothesis) then rejecting this null and asserting that the drug does work could have huge repercussions – particularly if patients are given this drug over one that actually does work. The pharmaceutical company would be concerned primarily with reducing the likelihood of a Type I Error.

Sometimes, a Type II Error could be more important. Environmental testing is one such example: if the effect of toxins on water quality is examined, and in truth the null hypothesis is false (that is, the presence of toxins does affect water quality), a Type II Error would mean accepting that false null hypothesis and concluding there is no effect of toxins.

The downstream issues could be dire if toxin levels are allowed to remain high and there is some health effect on people using that water.

  • What is a p-value, really?

Because p-values are thrown about in science like confetti, it’s important to understand what they do and don’t mean. A p-value expresses the probability of getting a given result from a hypothesis test, or a more extreme result, if the null hypothesis were true.

Given we are trying to reject the null hypothesis, what this tells us is the odds of getting our experimental data if the null hypothesis is correct. If the odds are sufficiently low we feel confident in rejecting the null and accepting the alternative hypothesis.

What is sufficiently low? As mentioned above, the typical fixed significance level is 0.05. So if the probability portrayed by the p-value is less than 5% you reject the null hypothesis. But a fixed significance level can be deceiving: if 5% is significant, why is 6% not?

It pays to remember that such probabilities are continuous, and any given significance level is arbitrary. In other words, don’t throw your data away simply because you get a p-value of 6-10%.
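
As a concrete example (simulated fertiliser data, not from any real study), here is how a p-value falls out of a simple two-sample comparison:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Plant growth in cm: control vs fertilised (simulated).
    control = rng.normal(loc=10.0, scale=2.0, size=25)
    fertilised = rng.normal(loc=11.5, scale=2.0, size=25)

    result = stats.ttest_ind(control, fertilised)
    print(f"p-value: {result.pvalue:.3f}")

    # The p-value is the chance of data at least this extreme IF
    # fertiliser had no effect; compare it with the chosen alpha.
    print("reject null" if result.pvalue < 0.05 else "fail to reject null")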

  • How much replication do I have?

This is probably the biggest issue when it comes to experimental design, in which the focus is on ensuring the right type of data, in large enough quantities, is available to answer given questions as clearly and efficiently as possible.

Pseudoreplication refers to the over-inflation of degrees of freedom (a mathematical restriction put in place when we calculate a parameter – e.g. a mean – from a sample). How would this work in practice?

Say you’re researching cholesterol levels by taking blood from 20 male participants.

Each male is tested twice, giving 40 test results. But the level of replication is not 40, it’s actually only 20 – a requisite for replication is that each replicate is independent of all others. In this case, two blood tests from the same person are intricately linked.

If you were to analyse the data with a sample size of 40, you would be committing the sin of pseudoreplication: inflating your degrees of freedom (which incidentally helps to create a significant test result). Thus, if you start an experiment understanding the concept of independent replication, you can avoid this pitfall.
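
In code, the fix is simply to collapse linked measurements before analysis. A sketch with invented cholesterol numbers:

    import numpy as np

    rng = np.random.default_rng(7)

    # 20 participants, each giving two (correlated) blood tests.
    n_participants = 20
    true_levels = rng.normal(loc=5.2, scale=0.6, size=n_participants)
    tests = true_levels[:, None] + rng.normal(scale=0.1, size=(n_participants, 2))

    # WRONG: treating all 40 tests as independent inflates the sample size.
    pseudo_n = tests.size                  # 40

    # RIGHT: average each participant's two tests, then analyse 20 values.
    per_person = tests.mean(axis=1)
    true_n = per_person.size               # 20

    print(f"pseudoreplicated n = {pseudo_n}, correct n = {true_n}")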

  • How do I know what analysis to do?

There is a key piece of prior knowledge that will help you determine how to analyse your data: what kind of variable are you dealing with? The two most common types of variable are:

  • Continuous variables. These can take any value. Were you to measure the time until a reaction was complete, the results might be 30 seconds, two minutes and 13 seconds, or three minutes and 50 seconds.
  • Categorical variables. These fit into – you guessed it – categories. For instance, you might have three different field sites, or four brands of fertiliser. All continuous variables can be converted into categorical variables.

With the above example we could categorise the results into less than one minute, one to three minutes, and greater than three minutes. Categorical variables cannot be converted back to continuous variables, so it’s generally best to record data as “continuous” where possible to give yourself more options for analysis.

Deciding which to use between the two main types of analysis is easy once you know what variables you have:

ANOVA (Analysis of Variance) is used to compare a continuous variable across the levels of a categorical variable – for instance, plant growth in centimetres across fertiliser treatments.

Linear Regression is used when comparing two continuous variables – for instance, time versus growth in centimetres.

Though there are many analysis tools available, ANOVA and linear regression will get you a long way in looking at your data. So if you can start by working out what variables you have, it’s an easy second step to choose the relevant analysis.
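
A sketch of both choices using scipy (the data and variable names are invented for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # ANOVA: categorical predictor (fertiliser brand) vs continuous growth.
    brand_a = rng.normal(10.0, 2.0, 20)
    brand_b = rng.normal(11.0, 2.0, 20)
    brand_c = rng.normal(12.0, 2.0, 20)
    anova = stats.f_oneway(brand_a, brand_b, brand_c)
    print(f"ANOVA p-value: {anova.pvalue:.3f}")

    # Linear regression: continuous predictor (time) vs continuous growth.
    time = np.arange(20, dtype=float)
    growth = 0.5 * time + rng.normal(scale=1.0, size=20)
    fit = stats.linregress(time, growth)
    print(f"slope {fit.slope:.2f} cm per unit time, p-value {fit.pvalue:.3g}")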

Ok, so perhaps that’s not everything you need to know about statistics, but it’s a start. Go forth and analyse!

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Sarah-Jane O’Connor, University of Canterbury


Paddlefish sensors tuned to detect signals from zooplankton prey

In 1997, scientists at the Center for Neurodynamics at the University of Missouri – St. Louis demonstrated that special sensors covering the elongated snout of paddlefish are electroreceptors that help the fish detect prey by responding to the weak voltage gradients that swimming zooplankton create in the surrounding water.

Now some of the same researchers have found that the electroreceptors contain oscillators, which generate rhythmical firing of electrosensory neurons.

The oscillators allow the electroreceptors to create a dynamical code to most effectively respond to electrical signals emitted naturally by zooplankton.

The results are presented in a paper appearing in the AIP’s journal Chaos.

To test the response of paddlefish electroreceptors to different stimuli, the researchers recorded signals from electrosensory neurons of live fish, while applying weak electric fields to the water in the form of computer-generated artificial stimuli or signals obtained previously from swimming zooplankton.

The team then analysed the power contained in different frequency ranges for the noisy input signals and the corresponding electroreceptor responses and compared the two.
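
The paper's own analysis pipeline isn't reproduced here, but the general approach – comparing where the power sits in a noisy stimulus and in the response – can be sketched with scipy's Welch estimator (the synthetic signals and parameters below are my assumptions):

    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(0)
    fs = 500.0                           # assumed sampling rate in Hz
    t = np.arange(0, 10, 1 / fs)

    # Synthetic stimulus: a 10 Hz tone buried in noise.
    stimulus = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

    # Synthetic "receptor response": the stimulus passed through, plus noise.
    response = 0.8 * stimulus + 0.3 * rng.standard_normal(t.size)

    # Estimate and compare the power spectra of input and output.
    f_in, p_in = welch(stimulus, fs=fs, nperseg=1024)
    f_out, p_out = welch(response, fs=fs, nperseg=1024)
    print(f"stimulus power peaks at {f_in[np.argmax(p_in)]:.1f} Hz")
    print(f"response power peaks at {f_out[np.argmax(p_out)]:.1f} Hz")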

In addition to finding that the paddlefish sensors best encode the signals emitted by zooplankton, the team also found that as the strength of the input signal was raised, the firing of the fish’s sensory neurons transitioned from a steady beat to a noisy pattern of intermittent bursts.

This bursting pattern became synchronized across different groups of electroreceptors, increasing the likelihood of the signal reaching higher-order neurons.

This provides a plausible mechanism to explain how reliable information about the nearness of prey is transferred to the fish’s brain, the researchers write.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to American Institute of Physics


Calls for a posthumous pardon … but who was Alan Turing?

You may have read the British Government is being petitioned to grant a posthumous pardon to one of the world’s greatest mathematicians and most successful codebreakers, Alan Turing. You may also have read that Turing was convicted of gross indecency in 1952 and died tragically two years later.

But who, exactly, was he?

Born in London in 1912, Turing helped lay the foundations of the “information age” we live in.

He did his first degree at King’s College, Cambridge, and then became a Fellow there. His first big contribution was his development of a mathematical model of computation in 1936. This became known as the Turing Machine.

It was not the first time a computer had been envisaged: that distinction belonged to Charles Babbage, a 19th century mathematician who designed a computer based on mechanical technology and built parts of it (some of which may be seen at the Science Museum in London or Powerhouse Museum in Sydney, for example).

But Babbage’s design was necessarily complicated, as he aimed for a working device using specific technology. Turing’s design was independent of any particular technology and was not intended to be built.

It was very simple and would be very inefficient and impractical as a device for doing real computations. But its simplicity meant it could be used to do mathematical reasoning about computation.

Turing used his abstract machines to investigate what kinds of things could be computed. He found some tasks which, although perfectly well defined and mathematically precise, are uncomputable. The first of these is known as the halting problem, which asks, for any given computation, whether it will ever stop. Turing showed that this was uncomputable: there is no systematic method that always gives the right answer.

So, if you have ever wanted a program that can run on your laptop and test all your other software to determine which of them might cause your laptop to “hang” or get stuck in a never-ending loop, the bad news is such a comprehensive testing program cannot be written.
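
The standard way to see why is a self-referential trap. Suppose someone handed you a working halts() checker; the sketch below (in Python, purely illustrative) builds a program that defeats it:

    def halts(program, data):
        """Hypothetical oracle: True iff program(data) eventually stops.
        Turing proved no such function can exist for all inputs."""
        raise NotImplementedError

    def contrary(program):
        # Do the opposite of whatever the oracle predicts about
        # running `program` on its own source.
        if halts(program, program):
            while True:       # predicted to halt, so loop forever
                pass
        return "halted"       # predicted to loop, so halt immediately

    # Feed `contrary` to itself: if halts() says it halts, it loops;
    # if halts() says it loops, it halts. Either answer is wrong,
    # so no correct halts() can exist.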

Uncomputability is not confined to questions about the behaviour of computer programs. Since Turing’s work, many problems in mainstream mathematics have been found to be uncomputable. For example, the Russian mathematician and computer scientist, Yuri Matiyasevich, showed in 1970 that determining if a polynomial equation with several variables has a solution consisting only of whole numbers is also an uncomputable problem.

Turing machines have been used to define measures of the efficiency of computations. They underpin formal statements of the P vs NP problem, one of the Millennium Prize problems.

Another important feature of Turing’s model is its capacity to treat programs as data. This means the programs that tell computers what to do can themselves, after being represented in symbolic form, be given as input to other programs. Turing Machines that can take any program as input, and run that program on some input data, are called Universal Turing Machines.
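
Any interpreted language makes this programs-as-data idea visible. A trivial Python illustration:

    # A program represented in symbolic form: here, a plain string.
    source = "print(sum(range(10)))"

    # A "universal" interpreter takes that program-as-data and runs it.
    exec(source)   # prints 45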

These are really conceptual precursors of today’s computers, which are stored-program computers, in that they can treat programs as data in this sense. The oldest surviving intact computer in the world, in this most complete sense of the term, is CSIRAC at Melbourne Museum.

It seems a mathematical model of computation was an idea whose time had come. In 1936, the year of Turing’s result, another model of computation was published by Alonzo Church of Princeton University. Although Turing and Church took quite different routes, they ended up at the same place, in that the two models give exactly the same notion of computability.

In other words, the classification of tasks into computable and uncomputable is independent of which of these two models is used.

Other models of computation have been proposed, but mostly they seem to lead to the same view of what is and is not computable. The Church-Turing Thesis states that this class of computable functions does indeed capture exactly those things which can be computed in principle (say by a human with unlimited time, paper and ink, who works methodically and makes no mistakes).

It implies Turing Machines give a faithful mathematical model of computation. This is not a formal mathematical result, but rather a working assumption which is now widely accepted.

Turing went to Princeton and completed his PhD under Church, returning to Britain in 1938.

Early in the Second World War, Turing joined the British codebreaking operation at Bletchley Park, north-west of London. He became one of its most valuable assets. He was known by the nickname “Prof” and was described by colleague Jack Good as “a deep rather than a fast thinker”.

At the time, Germany was using an encryption device known as Enigma for much of its communications. This was widely regarded as completely secure. The British had already obtained an Enigma machine, from the Poles, and building on their work, Turing and colleague Gordon Welchman worked out how the Enigma-encrypted messages collected by the British could be decrypted.

Turing designed a machine called the Bombe, named after a Polish ice cream, which worked by testing large numbers of combinations of Enigma machine configurations, in order to help decrypt secret messages. These messages yielded information of incalculable value to the British. Winston Churchill described the Bletchley Park codebreakers as “geese that laid the golden eggs but never cackled”.

In 1945, after the war, Turing joined the National Physical Laboratory (NPL), where he wrote a report on how to construct an electronic computer, this time a general-purpose one unlike the machines dedicated to cryptanalysis which he helped to design at Bletchley Park.

This report led to the construction of an early computer (Pilot ACE) at NPL in 1950. By then, Turing had already moved on to Manchester University, where he worked on the first general-purpose stored-program computer in the world, the Manchester “Baby”.

In their early days, computers were often called “electronic brains”. Turing began to consider whether a computer could be programmed to simulate human intelligence, which remains a major research challenge today and helped to initiate the field of artificial intelligence.

A fundamental issue in such research is: how do you know if you have succeeded? What test can you apply to a program to determine if it has intelligence? Turing proposed that a program be deemed intelligent if, in its interaction with a human, the human is unable to detect whether he or she is communicating with another human or a computer program. (The test requires a controlled setting, for example where all communication with the human tester is by typed text.)

His paper on this topic – Computing Machinery and Intelligence – was published in 1950. The artificial intelligence community holds regular competitions to see how good researchers’ programs are at the Turing test.

The honours Turing received during his lifetime included an OBE in 1945 and becoming a Fellow of the Royal Society in 1951.

His wartime contributions remained secret throughout his life and for many years afterwards.

In 1952 he was arrested for homosexuality, which was illegal in Britain at the time. Turing was found guilty and required to undergo “treatment” with drugs. This conviction also meant he lost his security clearance.

In 1954 he ingested some cyanide, probably via an apple, and died. An inquest classified his death as suicide, and this is generally accepted today. But some at the time, including his mother, contended his death was an accidental consequence of poor handling of chemicals during some experiments he was conducting at home in his spare time.

The irony of Turing losing his security clearance – after the advantage his work had given Britain in the war, in extraordinary secrecy – is clear.

The magnitude of what was done to him has become increasingly plain over time, helped by greater availability of information about the work at Bletchley Park and changing social attitudes to homosexuality.

Next year, 2012, will be the centenary of Turing’s birth – with events planned globally to celebrate the man and his contribution. As this year approached, a movement developed to recognise Turing’s contribution and atone for what was done to him. In 2009, British Prime Minister, Gordon Brown, responding to a petition, issued a formal apology on behalf of the British government for the way Turing was treated.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Graham Farr, Monash University


Protecting confidential data with math

Statistical databases (SDBs) are collections of data that are used to gather and analyse information from a variety of sources. The data may be derived from sales transactions, customer files, voter registrations, medical records, employee rosters, product inventories, or other compilations of facts and figures.

Because database security requires multiple processes and controls, it presents huge security challenges to organizations. With the computerization of databases in healthcare, forensics, telecommunications, and other fields, ensuring this kind of security has become increasingly important.

In a paper published Thursday in the SIAM Journal on Discrete Mathematics, authors Rudolf Ahlswede and Harout Aydinian analyse a security-control model for statistical databases.

“Providing privacy and confidentiality in SDBs is not a new issue,” Aydinian points out. “Privacy interests have evolved from the very first census in the United States. Recorded protests until the mid-20th century reflect constitutional issues resulting from the requirement for U.S. residents to provide sensitive personal information. Questions on census forms about diseases, mortgage values, and other items have raised many concerns.”

While such databases are very helpful in aggregating data, there is a risk that confidential information about an individual’s record may be deliberately compromised. “Since such data sets also contain sensitive information, such as the disease of an individual, or the salary of an employee, it is necessary to provide security against the disclosure of confidential information,” says Aydinian. “Even in cases where a user has no direct access to sensitive information, sometimes confidential data about an individual can be inferred by correlating enough statistics.”

Typically, statistical databases are designed to only accept queries that involve specific statistical functions (such as sum, average, count, min, max, etc.). However, the use of these queries may render databases susceptible to compromise. For instance, it may be possible to infer information about specific individuals by putting together data from a sequence of statistical queries, using prior knowledge of an individual, or through collusion among users.

An SDB is considered secure if no protected data can be inferred from available queries. “In the literature, many scenarios of compromise and inference control methods have been proposed to protect SDBs,” Aydinian says. “However, to date no one security control method is capable of completely preventing compromise.”

Query restriction is one of several general approaches used for security control. A “query request” retrieves a subset of data from a database that meets a set of conditions. In query restriction, the kind and amount of data that can be retrieved by such queries is limited – for example, the size of the returned data set, or the amount of overlap between the data returned by different queries.

In one type of query restriction method, only certain sums of individual records (called “SUM queries”) that meet a minimum specified size or number, and satisfy a specified set of conditions, are available to users.

Aydinian explains with an example. “Consider a company with a large number of employees. Suppose that for each member of the company, the sex, age, rank, length of employment, salary etc. is recorded. The salaries of individual employees are confidential. Suppose that only SUM queries are allowed, i.e. the sum of the salaries of the specified people is returned. Then one might pose the query: What is the sum of salaries for males, above 50, and during the last 10 years?”
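
To make the risk concrete, here is a toy sketch (names and salaries invented) of the kind of inference such restrictions are designed to prevent: two individually harmless SUM queries whose difference exposes one employee's salary.

    # Toy employee table; individual salaries are confidential.
    salaries = {"alice": 82_000, "bob": 75_000, "carol": 91_000,
                "dan": 68_000, "erin": 77_000}

    def sum_query(names):
        """Answer a SUM query over the named employees."""
        return sum(salaries[n] for n in names)

    # Two overlapping queries, each aggregating several people ...
    q1 = sum_query(["alice", "bob", "carol", "dan", "erin"])
    q2 = sum_query(["alice", "bob", "carol", "dan"])

    # ... whose difference pins down a single protected value.
    print(f"erin's salary: {q1 - q2}")   # 77000

Minimum query sizes and overlap limits, of the kind analysed in the paper, are aimed at exactly this sort of differencing attack.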

The task addressed in the paper is to provide an optimal collection of SUM queries that prevents compromise of confidential information – individual salaries, for instance. A natural solution is to maximize the number of available SUM queries. The authors obtain tight bounds for the maximum number of such queries that return subsets of data without compromising groups of entries.

“Future work in the query-restriction approach includes evaluation of new security-control mechanisms, which are easy to implement and guarantee absolute security,” says Aydinian. “At the same time, it is desirable that these methods satisfy other criteria like richness of available queries, consistency, cost etc. It also seems promising to develop methods combining different security control mechanisms.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Society for Industrial and Applied Mathematics


Digital alchemy: Sir Isaac Newton’s papers now online

The notebooks of Sir Isaac Newton, who was famously reported to have suffered a (scientifically) earth-shaking blow to the head from an apple, are being scanned and published online by the University of Cambridge.

Newton, a Biblical numerologist when he wasn’t developing calculus or building the first reflecting telescope, founded classical mechanics with Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), which was first published in 1687. In the book that made his name, Newton set out his three laws of motion and his theory of universal gravitation (prompted, or so goes the apocryphal tale, by pondering what force sent the fruit plummeting straight down onto his head).

Newton studied and later held the Lucasian Chair of Mathematics at Cambridge, which was given numerous manuscripts of his in 1872 and has since bought more. The online publication has started with Newton’s mathematical works of the 1660s and more papers will become available over coming months.

A philosopher of science at Flinders University, George Couvalis, said that Newton’s gravitational experiments – which largely corrected ancient observations of gravity – were sparked by his interest in magic and magnetism. “The idea that things might naturally attract one another is an idea that he got from magical ideas. He adapted it across to mathematical theory because it was a mystical theory,” Dr Couvalis said.

It was important to remember that scientists of Newton’s era did not have what we would consider a modern sceptical outlook and – with the exception of the “exceptional” Galileo Galilei – instead held a fusion of views that we would consider deeply irrational, Dr Couvalis said.

“It was certainly far more common in the 17th and 18th centuries for scientists to be interested in magical beliefs and alchemical beliefs and religious beliefs. Johannes Kepler, for example, had all kinds of strange views about the music of the spheres, Copernicus had strange views about the sacredness of the sun, and Newton famously had views about the mysterious numerical meanings of Biblical passages and about alchemical material,” Dr Couvalis said.

Scientists of the period saw their work as touching on many occult fields of interest. Among them was Robert Boyle, a founder of modern chemistry, who had “an interest in doing experimental research on magical mirrors, which to us would sound bizarre but at the time it was thought to be a possibility,” said Dr Couvalis, who added that Boyle pulled back from some experiments for religious reasons. “He thought it might get him in touch with demons.”

Demonology may have fallen out of favour amongst scientists, but “the view that we’re getting everything right would be a serious mistake,” Dr Couvalis said. “To some degree science is always in the sway of the time it’s in; this is now the standard view of philosophers and historians.”

“Newton’s mechanics is in certain respects pretty much right, but in other respects it was shown by Einstein and others to be wildly wrong. By about 1900 we had people saying to their graduate students ‘You should give up physics because it’s all been done,’ but Einstein managed to show that it was wildly wrong in certain respects,” Dr Couvalis said.

The ideal of the scientific method is never met, and our beliefs and discoveries will likely one day be seen as flawed but perhaps useful stepping stones in the continuum of science, Dr Couvalis said. “People make mistakes, people have a lot of trouble leaving assumptions behind, and our tests are never rigorous enough to be absolutely certain that we’re getting things right. Future experimental studies and the sheer empirical facts will show us to be wrong in many ways that we can’t anticipate.”

“We work with what we have because we just don’t know anything better at the moment. It might turn out that Einstein’s special and general theories of relativity are wrong in some deep-seated way. It might turn out that some of our theories of the universe are wrong. It’s starting to look in biology as if neo-Darwinism isn’t completely right, so where will that go – I don’t know. Research will determine the direction. That doesn’t mean that we’re going to go back to being creationists – that view has been thoroughly debunked. Imre Lakatos wrote in the 1970s there are no good scientific theories, there’s only the best rotten theory we have.”

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Matthew Thompson, The Conversation