Mathematicians Calculate 42-Digit Number After Decades Of Trying

Dedekind numbers describe the number of ways sets of logical operations can be combined, and are fiendishly difficult to calculate, with only eight known since 1991 – and now mathematicians have calculated the ninth in the series.

The ninth Dedekind number was calculated using the Noctua 2 supercomputer at Paderborn University in Germany

A 42-digit-long number that mathematicians have been hunting for decades, owing to the sheer difficulty of calculating it, has suddenly been found by two separate groups at the same time. This ninth Dedekind number, as it is known, may be the last in the sequence that is feasible to discover.

Dedekind numbers describe the number of ways a set of logical operations can be combined. For sets of just two or three elements, the total number is easy to calculate by hand, but for larger sets it rapidly becomes impossible because the number grows so quickly, at what is known as a double exponential speed.

“You’ve got two to the power two to the power n, as a very rough estimate of the complexity of this system,” says Patrick de Causmaecker at KU Leuven in Belgium. “If you want to find the Dedekind numbers, that is the kind of magnitude of counting that you will have to face.”
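Concretely, the nth Dedekind number counts the monotone Boolean functions of n inputs (functions where flipping an input from 0 to 1 can never flip the output from 1 to 0), so a naive search has to sift through roughly 2^(2^n) candidates. The Python sketch below is nothing like either team's optimised approach and is only feasible up to about n = 4, but it makes the double-exponential growth visible:

```python
# Minimal brute-force sketch (not the researchers' method): count the monotone
# Boolean functions of n inputs, which is exactly the nth Dedekind number.
# The candidate space has size 2**(2**n), so this is only feasible for n <= 4.
from itertools import product

def dedekind(n):
    points = list(product([0, 1], repeat=n))            # the 2**n input points
    # all pairs (i, j) with points[i] <= points[j] componentwise
    leq_pairs = [(i, j) for i, p in enumerate(points)
                        for j, q in enumerate(points)
                        if all(a <= b for a, b in zip(p, q))]
    count = 0
    for values in product([0, 1], repeat=len(points)):   # each candidate function
        # monotone means f(p) <= f(q) whenever p <= q
        if all(values[i] <= values[j] for i, j in leq_pairs):
            count += 1
    return count

for n in range(5):
    print(n, dedekind(n))   # 2, 3, 6, 20, 168
```

Already at n = 4 the loop has to examine 65,536 candidate functions; at n = 9 the naive search space exceeds 10^150, which is why both teams instead decomposed the problem using smaller Dedekind numbers.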

The challenge of calculating higher Dedekind numbers has attracted researchers in many disciplines, from pure mathematicians to computer scientists, over the years. “It’s an old, famous problem and, because it’s hard to crack, it’s interesting,” says Christian Jäkel at Dresden University of Technology in Germany.

In 1991, mathematician Doug Wiedemann found the eighth Dedekind number using 200 hours of number crunching on the Cray-2 supercomputer, one of the most powerful machines at the time. No one could do any better, until now.

After working on the problem on and off for six years, Jäkel published his calculation for the ninth Dedekind number in early April. Coincidentally, De Causmaecker and Lennart van Hirtum, also at KU Leuven, published their work three days later, having produced the same result. Neither group had been aware of the other's work. “I was shocked, I didn’t know about their work. I thought it would take at least 10 years or whatever to recompute it,” says Jäkel.

The resulting number is 286,386,577,668,298,411,128,469,151,667,598,498,812,366, which is 42 digits long.

Jäkel’s calculation took 28 days on eight graphics processing units (GPUs). To reduce the number of calculations required, he multiplied together elements from the much smaller fifth Dedekind number.

De Causmaecker and van Hirtum instead used a processor called a field-programmable gate array (FPGA) for their work. Unlike a CPU or a GPU, these can perform many different kinds of interrelated calculations at the same time. “In an FPGA, everything is always happening all at once,” says van Hirtum. “You can compare it to a car assembly line.”

Like Jäkel, the team used elements from a smaller Dedekind number, in their case the sixth, but this still required 5.5 quadrillion operations and more than four months of computing time using the Noctua 2 supercomputer at Paderborn University, says van Hirtum.

People are divided on whether another Dedekind number will ever be found. “The tenth Dedekind number will be in the realm of 10 to the power of 82, which puts you at the number of atoms in the visible universe, so you can imagine you need something big in technical advancement that also grows exponentially,” says Jäkel.

Van Hirtum also thinks the amount of computing power becomes impractical for the next number: it would take trillions more computations, which would mean capturing the power output of the entire sun. “This jump in complexity remains absolutely astronomical,” he says.

De Causmaecker, however, is more optimistic, as he thinks new ways of calculating could bring that requirement down. “The combination of exponential growth of computing power, and the power of the mathematical algorithms, will go together and maybe in 20 or 30 years we can compute [Dedekind number] 10.”

For more such insights, log into www.international-maths-challenge.com.

*Credit for article given to Alex Wilkins*


Shedding light on complex mathematical group theories

EU researchers contributed important knowledge to the field of modular representation theory in the form of proofs and pioneering analyses.

Modular representation theory studies linear actions of finite groups, that is, groups with a finite number of elements.

A discussion of finite groups requires the definition of several associated terms. A representation of a given finite group can be reduced modulo a prime number to obtain a modular representation of the group (roughly, breaking the whole down into a sum of its parts).

Mathematically, an irreducible (simple) module of a finite group has only two submodules, the module itself and zero; an indecomposable module is one that cannot be decomposed as a direct sum of two non-zero submodules. Vertices and sources are mathematical entities associated with indecomposable modules.
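To make the distinction concrete, here is a standard textbook toy example (not taken from the project's own work): the two-element group C_2 acting on a two-dimensional space by swapping coordinates. Over the rational numbers the representation splits into two one-dimensional pieces, but after reduction modulo the prime 2 it no longer splits, giving a module that is indecomposable yet not irreducible.

```latex
% Toy illustration (standard textbook example, not a project result):
% the swap representation of C_2 = {1, g} and its reduction modulo 2.
\[
  g \longmapsto \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
\]
% Over \mathbb{Q} the module decomposes into two one-dimensional summands:
\[
  \mathbb{Q}^2 = \langle (1,1) \rangle \oplus \langle (1,-1) \rangle .
\]
% Over \mathbb{F}_2 the vectors (1,1) and (1,-1) coincide, so the only proper
% non-zero submodule is \langle (1,1) \rangle: the module is indecomposable
% but not irreducible.
\[
  \mathbb{F}_2^{\,2} \supsetneq \langle (1,1) \rangle \supsetneq 0 .
\]
```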

While modular representation theory has evolved tremendously, many issues remain to be addressed. In particular, modules of symmetric groups (the groups of all permutations of a finite set) are an active area of interest.

European researchers, supported by funding from the ‘Vertices of simple modules for the symmetric and related finite groups’ (D07.SYMGPS.OX) project, sought to develop fast algorithms for computing the vertices and sources of indecomposable modules, and to study the Auslander-Reiten quiver, which forms part of a presentation of the category of all representations.

Investigators first analysed two-modular Specht modules and the position of Specht modules in the Auslander-Reiten quiver, obtaining important definitive results.

In addition, the team produced ground-breaking proofs regarding the Lie module of the symmetric group, shedding light on a topic of mathematics until now clouded in mystery.

Furthermore, the Feit conjecture was proved and innovative results were obtained regarding vertices of simple modules of symmetric groups.

Overall, the project team provided pioneering work and definitive results and proofs regarding symmetric groups and related finite groups that promise to significantly advance the mathematical field of modular representation theory.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to CORDIS

 


Researchers find classical musical compositions adhere to power law

A team of researchers, led by Daniel Levitin of McGill University, has found, after analysing over two thousand pieces of classical music spanning four hundred years of history, that virtually all of them follow a one-over-f (1/f) power law distribution. He and his team have published the results of their work in the Proceedings of the National Academy of Sciences.

One-over-f equations describe the relative frequency of things that happen over time and can be used to describe such naturally occurring events as annual river flooding or the beating of a human heart. They have been used to describe the way pitch is used in music as well, but until now, no one had thought to test whether they could also describe the rhythm of the music.

To find out whether this is the case, Levitin and his team analysed close to 2000 pieces of classical music from a wide group of noted composers, measuring note length line by line. In doing so, they found that virtually every piece studied conformed to the power law. They also found that by adding another variable to the equation, called beta, which describes how predictable a given piece is compared with other pieces, they could solve for beta and obtain a unique number for each composer.
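As a rough illustration of the general approach, a 1/f^beta fit can be obtained by regressing the log power spectrum of a duration sequence against log frequency; the slope gives beta. The sketch below uses synthetic, randomly drawn note lengths purely for illustration; it is not the study's actual pipeline, and real scores would first have to be converted into duration sequences.

```python
# Minimal sketch of fitting the spectral exponent beta in a 1/f**beta power law.
# The note durations here are synthetic stand-ins, not data from the study.
import numpy as np

rng = np.random.default_rng(0)
durations = rng.choice([0.25, 0.5, 1.0, 2.0], size=4096,
                       p=[0.5, 0.25, 0.15, 0.1])   # hypothetical note lengths (beats)

# Power spectrum of the mean-removed duration sequence.
spectrum = np.abs(np.fft.rfft(durations - durations.mean())) ** 2
freqs = np.fft.rfftfreq(len(durations))

# Fit log(power) = -beta * log(f) + c over the non-zero frequencies.
mask = freqs > 0
slope, _ = np.polyfit(np.log(freqs[mask]), np.log(spectrum[mask]), 1)
beta = -slope
print(f"estimated beta: {beta:.2f}")   # near 0 for uncorrelated data like this;
                                       # a 1/f-like signal would give beta near 1
```

A serious analysis would also bin or window the spectrum before fitting; the single regression here is only meant to show where the beta value comes from.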

After looking at the results as a whole, they found that works written by some classical composers were far more predictable than others, and that certain genres in general were more predictable than others too. Beethoven was the most predictable of the group studied, while Mozart was the least. Symphonies are generally far more predictable than ragtimes, with other types falling somewhere in between. In solving for beta, the team discovered that they had inadvertently developed a means of calculating a composer’s unique individual rhythm signature. Speaking with the university news group at McGill, Levitin said, “this was one of the most unanticipated and exciting findings of our research.”

Another interesting aspect of the research is that because the patterns are based on the power law, the music the team studied shares the same sorts of patterns as fractals: the rhythmic element that occurs second most often appears only half as often as the most common one, the third just a third as often, and so forth. Thus, it is not difficult to imagine music forming fractal patterns that are unique to individual composers.

For more such insights, log into our website https://international-maths-challenge.com

Credit of the article given to Bob Yirka, Phys.org