Implications of no-free-lunch theorems

In the 18th century, the philosopher David Hume observed that induction, inferring the future from what has happened in the past, can never be guaranteed to be reliable. In 1997, SFI Professor David Wolpert, together with his colleague Bill Macready, made Hume’s observation mathematically precise, showing that no inference algorithm (such as a machine-learning or genetic algorithm) can be consistently better than any other across every possible real-world situation.

Over the next decade, the pair proved a series of results that came to be known as the “no-free-lunch” theorems. These show that one algorithm can indeed be somewhat better than another in most circumstances, but only at the cost of being far worse in the remaining ones.
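To make the flavor of these results concrete, here is a minimal sketch in Python (an illustration of the general idea, not Wolpert and Macready’s construction; the tiny search space and the two searcher functions are assumptions made up for this example). It enumerates every possible objective function on a three-point space and shows that a fixed-order searcher and an adaptive searcher find, on average, solutions of exactly the same quality.

```python
from itertools import product

# Toy illustration of the no-free-lunch idea: average performance over *every*
# possible objective function on a tiny search space. Purely illustrative.
POINTS = [0, 1, 2]   # the search space
VALUES = [0, 1]      # possible objective values

def fixed_order_search(f, m):
    """Probe points 0, 1, 2 in a fixed order; return the best value after m probes."""
    return max(f[x] for x in POINTS[:m])

def adaptive_search(f, m):
    """Probe point 1 first, then pick the next point based on what was observed."""
    order = [1]
    order.append(0 if f[1] == 1 else 2)
    order.append(2 if order[1] == 0 else 0)
    return max(f[x] for x in order[:m])

# Enumerate all 2**3 = 8 objective functions f: {0, 1, 2} -> {0, 1}.
all_functions = [dict(zip(POINTS, vals)) for vals in product(VALUES, repeat=len(POINTS))]

for m in (1, 2, 3):
    avg_fixed = sum(fixed_order_search(f, m) for f in all_functions) / len(all_functions)
    avg_adaptive = sum(adaptive_search(f, m) for f in all_functions) / len(all_functions)
    print(f"after {m} probe(s): fixed-order = {avg_fixed:.3f}, adaptive = {avg_adaptive:.3f}")
```

Both strategies report identical averages (0.500, 0.750, 0.875): whatever the adaptive searcher gains on some functions, it gives back on the rest.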

These theorems have been extremely controversial since their inception because they punctured the claims of many researchers that the algorithms they had developed were superior to others. Amid this controversy, in 2019 the philosopher Gerhard Schurz wrote a book wrestling with the implications of Hume’s and Wolpert’s work.

A special issue of the Journal for General Philosophy of Science, published in March 2023, is devoted to Schurz’s book. It includes an article by Wolpert himself, in which he reviews the “no-free-lunch” theorems and points out that there are also many “free-lunch” theorems.

He states that the meta-induction algorithms that Schurz advocates as a “solution to Hume’s problem” are simply examples of such a free lunch, based on correlations among the generalization errors of induction algorithms. Wolpert concludes that the prior Schurz advocates, which is uniform over bit frequencies rather than over bit patterns, is contradicted by thousands of experiments in statistical physics and by the great success of the maximum entropy procedure in inductive inference.
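The contrast between those two priors can be made concrete with a short sketch (an illustration of the standard definitions, not code from Wolpert’s article; the string length n = 10 is an arbitrary assumption). A prior uniform over bit patterns gives every n-bit string the same probability, so the fraction of 1s concentrates near one half; a prior uniform over bit frequencies gives every possible count of 1s the same weight.

```python
from math import comb

n = 10  # illustrative string length (arbitrary choice for this sketch)

# Prior uniform over bit *patterns*: each of the 2**n strings has probability
# 2**-n, so the number of 1s follows a Binomial(n, 1/2) distribution.
pattern_uniform = [comb(n, k) / 2**n for k in range(n + 1)]

# Prior uniform over bit *frequencies*: each possible count of 1s (0 through n)
# gets probability 1/(n + 1), no matter how many strings realize that count.
frequency_uniform = [1 / (n + 1)] * (n + 1)

print(" k   pattern-uniform   frequency-uniform")
for k in range(n + 1):
    print(f"{k:2d}   {pattern_uniform[k]:.4f}            {frequency_uniform[k]:.4f}")
```

With no constraints imposed, the pattern-uniform prior coincides with the maximum-entropy distribution over strings; the frequency-uniform prior does not.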


Article credit: Santa Fe Institute