Strange numbers found in particle collisions

An unexpected connection has emerged between the results of physics experiments and an important, seemingly unrelated set of numbers in pure mathematics.

At the Large Hadron Collider in Geneva, physicists shoot protons around a 17-mile track and smash them together at nearly the speed of light. It’s one of the most finely tuned scientific experiments in the world, but when trying to make sense of the quantum debris, physicists begin with a strikingly simple tool called a Feynman diagram that’s not that different from how a child would depict the situation.

Feynman diagrams were devised by Richard Feynman in the 1940s. They feature lines representing elementary particles that converge at a vertex (which represents a collision) and then diverge from there to represent the pieces that emerge from the crash. Those lines either shoot off alone or converge again. The chain of collisions can be as long as a physicist dares to consider.

To that schematic physicists then add numbers, for the mass, momentum and direction of the particles involved. Then they begin a laborious accounting procedure — integrate these, add that, square this. The final result is a single number, called a Feynman probability, which quantifies the chance that the particle collision will play out as sketched.
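
To give a flavor of that accounting (a schematic textbook example, not a calculation from any particular LHC analysis), the simplest diagram with a closed loop, the one-loop "bubble" in which a particle of momentum p briefly splits into two virtual particles of mass m that then recombine, corresponds to an integral over the undetermined loop momentum k:

$$ I(p) = \int \frac{d^4 k}{(2\pi)^4} \, \frac{1}{\left(k^2 - m^2\right)\left((p-k)^2 - m^2\right)}. $$

Each internal line of the diagram contributes one factor in the denominator, and (setting aside technicalities such as renormalization) evaluating integrals of this shape, diagram by diagram, is the laborious part of the procedure.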

“In some sense Feynman invented this diagram to encode complicated math as a bookkeeping device,” said Sergei Gukov, a theoretical physicist and mathematician at the California Institute of Technology.

Feynman diagrams have served physics well over the years, but they have limitations. One is strictly procedural. Physicists are pursuing increasingly high-energy particle collisions that require greater precision of measurement — and as the precision goes up, so does the intricacy of the Feynman diagrams that need to be calculated to generate a prediction.

The second limitation is of a more fundamental nature. Feynman diagrams are based on the assumption that the more potential collisions and sub-collisions physicists account for, the more accurate their numerical predictions will be. This process of calculation, known as perturbative expansion, works very well for particle collisions of electrons, where the weak and electromagnetic forces dominate. It works less well for high-energy collisions, like collisions between protons, where the strong nuclear force prevails. In these cases, accounting for a wider range of collisions — by drawing ever more elaborate Feynman diagrams — can actually lead physicists astray.
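
Schematically (a standard textbook picture, simplified here), the quantity being predicted is organized as a power series in the coupling strength α of the relevant force, with each coefficient collecting the values of all diagrams of a given complexity:

$$ A(\alpha) = c_0 + c_1 \alpha + c_2 \alpha^2 + c_3 \alpha^3 + \cdots $$

For electromagnetic processes α is about 1/137, so each additional order of diagrams sharpens the answer. For the strong force α is far larger, and because the coefficients c_n are believed to grow roughly like n!, the series is only asymptotic: past a certain order, adding more diagrams degrades the prediction.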

“We know for a fact that at some point it begins to diverge” from real-world physics, said Francis Brown, a mathematician at the University of Oxford. “What’s not known is how to estimate at what point one should stop calculating diagrams.”

Yet there is reason for optimism. Over the last decade physicists and mathematicians have been exploring a surprising correspondence that has the potential to breathe new life into the venerable Feynman diagram and generate far-reaching insights in both fields. It has to do with the strange fact that the values calculated from Feynman diagrams seem to exactly match some of the most important numbers that crop up in a branch of mathematics known as algebraic geometry. These values are called “periods of motives,” and there’s no obvious reason why the same numbers should appear in both settings. Indeed, it’s as strange as it would be if every time you measured a cup of rice, you observed that the number of grains was prime.

“There is a connection from nature to algebraic geometry and periods, and with hindsight, it’s not a coincidence,” said Dirk Kreimer, a physicist at Humboldt University in Berlin.

Now mathematicians and physicists are working together to unravel the coincidence. For mathematicians, physics has called to their attention a special class of numbers that they’d like to understand: Is there a hidden structure to these periods that occur in physics? What special properties might this class of numbers have? For physicists, the reward of that kind of mathematical understanding would be a new degree of foresight when it comes to anticipating how events will play out in the messy quantum world.

Today periods are one of the most abstract subjects of mathematics, but they started out as a more concrete concern. In the early 17th century scientists such as Galileo Galilei were interested in figuring out how to calculate the length of time a pendulum takes to complete a swing. They realized that the calculation boiled down to taking the integral — a kind of infinite sum — of a function that combined information about the pendulum’s length and angle of release. Around the same time, Johannes Kepler used similar calculations to establish the time that a planet takes to travel around the sun. They called these quantities “periods,” and established them as among the most important measurements that can be made about motion.
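
In modern notation (a standard result, stated here for concreteness), the exact time for a full swing of a pendulum of length L released from angle θ₀ is

$$ T = 4\sqrt{\frac{L}{g}} \int_0^{\pi/2} \frac{d\theta}{\sqrt{1 - k^2 \sin^2\theta}}, \qquad k = \sin\frac{\theta_0}{2}, $$

where g is the gravitational acceleration. The integral, a complete elliptic integral, cannot be evaluated in elementary closed form; its value is precisely the sort of number that came to be called a period.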

Over the course of the 18th and 19th centuries, mathematicians became interested in studying periods generally — not just as they related to pendulums or planets, but as a class of numbers generated by integrating polynomial functions like x² + 2x – 6 and 3x³ – 4x² – 2x + 6. For more than a century, luminaries like Carl Friedrich Gauss and Leonhard Euler explored the universe of periods and found that it contained many features that pointed to some underlying order. In a sense, the field of algebraic geometry — which studies the geometric forms of polynomial equations — developed in the 20th century as a means for pursuing that hidden structure.
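
The modern definition (formulated by Maxim Kontsevich and Don Zagier, and compressed here to its simplest form) makes the class concrete: a period is any number obtained by integrating an algebraic function over a region carved out by polynomial inequalities with rational coefficients. Two familiar examples:

$$ \log 2 = \int_1^2 \frac{dx}{x}, \qquad \frac{\pi^2}{6} = \int_{0<x<y<1} \frac{dx\,dy}{(1-x)\,y}. $$

Every algebraic number is a period, and the periods as a whole form a countable class; numbers such as e are conjectured not to belong to it.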

This effort advanced rapidly in the 1960s. By that time mathematicians had done what they often do: They translated relatively concrete objects like equations into more abstract ones, which they hoped would allow them to identify relationships that were not initially apparent.

This process first involved looking at the geometric objects (known as algebraic varieties) defined by the solutions to classes of polynomial functions, rather than looking at the functions themselves. Next, mathematicians tried to understand the basic properties of those geometric objects. To do that they developed what are known as cohomology theories — ways of identifying structural aspects of the geometric objects that were the same regardless of the particular polynomial equation used to generate the objects.

By the 1960s, cohomology theories had proliferated to the point of distraction — singular cohomology, de Rham cohomology, étale cohomology and so on. Everyone, it seemed, had a different view of the most important features of algebraic varieties.

It was in this cluttered landscape that the pioneering mathematician Alexander Grothendieck, who died in 2014, realized that all cohomology theories were different versions of the same thing.

“What Grothendieck observed is that, in the case of an algebraic variety, no matter how you compute these different cohomology theories, you always somehow find the same answer,” Brown said.

That same answer — the unique thing at the center of all these cohomology theories — was what Grothendieck called a “motive.” “In music it means a recurring theme. For Grothendieck a motive was something which is coming again and again in different forms, but it’s really the same,” said Pierre Cartier, a mathematician at the Institute of Advanced Scientific Studies outside Paris and a former colleague of Grothendieck’s.

Motives are in a sense the fundamental building blocks of polynomial equations, in the same way that prime factors are the elemental pieces of larger numbers. Motives also have their own data associated with them. Just as you can break matter into elements and specify characteristics of each element — its atomic number and atomic weight and so forth — mathematicians ascribe essential measurements to a motive. The most important of these measurements are the motive’s periods. And if the period of a motive arising in one system of polynomial equations is the same as the period of a motive arising in a different system, you know the motives are the same.

“Once you know the periods, which are specific numbers, that’s almost the same as knowing the motive itself,” said Minhyong Kim, a mathematician at Oxford.

One direct way to see how the same period can show up in unexpected contexts is with pi, “the most famous example of getting a period,” Cartier said. Pi shows up in many guises in geometry: in the integral of the function that defines the one-dimensional circle, in the integral of the function that defines the two-dimensional circle, and in the integral of the function that defines the sphere.
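
Concretely (a standard illustration, filled in here rather than quoted from Cartier), the same number drops out of integrals attached to circles and spheres of successive dimensions:

$$ \pi = \int_{-1}^{1} \frac{dx}{\sqrt{1-x^2}} \quad \text{(arc length of the unit semicircle)}, $$

$$ \pi = \iint_{x^2+y^2 \le 1} dx\,dy \quad \text{(area of the unit disk)}, $$

$$ \frac{4\pi}{3} = \iiint_{x^2+y^2+z^2 \le 1} dx\,dy\,dz \quad \text{(volume of the unit ball)}. $$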

That this same value would recur in such seemingly different-looking integrals was likely mysterious to ancient thinkers. “The modern explanation is that the sphere and the solid circle have the same motive and therefore have to have essentially the same period,” according to Brown.

Among other things, a full understanding of this correspondence would deepen the already provocative relationship between fundamental geometric constructions from two very different contexts: motives, the objects that mathematicians devised 50 years ago to understand the solutions to polynomial equations, and Feynman diagrams, the schematic representation of how particle collisions play out. Every Feynman diagram has a motive attached to it, but what exactly the structure of a motive is saying about the structure of its related diagram remains anyone’s guess.

Read more: Quanta Magazine