Producing quark-gluon matter from higher-energy nuclear collisions will give insight into its fluidlike properties.
Figure 1: Characterizing the quark-gluon matter produced in heavy-ion collisions. CMS measured the average energy per particle emerging perpendicular to the line of collision (red point). The new measurement indicates that this transverse energy density increases more rapidly with collision energy (dotted curve) than measurements at lower energies had suggested (straight line).
For several years, researchers have known that the quark-gluon matter created in the collision of relativistic nuclei has an exceptionally low viscosity. Now, the latest experiments exploring this state of matter are being reported in Physical Review Letters by the collaboration running the Compact Muon Solenoid (CMS) detector at the Large Hadron Collider in Geneva. The team was able to study the quark-gluon matter at higher collision energies and for longer times than earlier experiments could. As a result, their data will lead to more accurate calculations of the viscosity and expansion of the strongly interacting matter and may help researchers better understand the structure of gluons in high-energy particles.
One of the main goals of high-energy nuclear physics is to break apart protons, neutrons, and heavier nuclei into their constituent quarks and gluons and study the state of matter believed to exist at the time of the big bang. But creating deconfined quarks and gluons, which are bound by the strong force, requires either very high temperatures or very high energies. At the Relativistic Heavy Ion Collider (RHIC), these conditions are created by smashing ultrarelativistic gold nuclei together, while collisions between lead nuclei are studied at the higher-energy Large Hadron Collider (LHC). Like other detector experiments, CMS measures the energies and momenta of the particles created in such collisions and uses this information to figure out the state of matter created shortly after the collision.
Earlier experimental findings at RHIC indicated that the matter produced in heavy-ion collisions expands like a fluid with very little dissipation. In fact, the dissipation approaches a fundamental lower limit for fluids, suggesting that the deconfined quark-gluon matter could be one of the most “perfect” fluids ever studied in the lab (see 26 October 2010 Trends). The RHIC researchers reached this conclusion by studying the momentum anisotropy of particles produced in heavy-ion collisions. Typically, the geometry of the colliding nuclei determines the anisotropy, but how this spatial anisotropy is converted to momentum-space anisotropy depends on the shear viscosity: the smaller the shear viscosity, the more efficient this conversion is. The RHIC experiments showed that the anisotropies are quite large, suggesting that the ratio of the shear viscosity to entropy density, η/s, in the quark-gluon matter is very small.
Heavy-ion collision experiments at the LHC are now aimed at understanding how the properties of quark-gluon matter, in particular η/s, depend on the temperature or energy density, and whether there are significant effects from the relatively small size of the system produced in heavy-ion collisions. The energy density of the quark-gluon matter increases with the center-of-mass energy of the accelerated ions, while the size of the system is determined by the degree to which the ions strike each other head on (the centrality of the collision). To characterize the geometry of the system produced in the heavy-ion collisions, researchers measure the rapidity, y = (1/2) ln[(E + c pz)/(E − c pz)], of all of the post-collision particles. Here, E and pz are the particle's energy and its momentum along the beam axis. Roughly speaking, the rapidity describes how fast particles are moving in the longitudinal direction (along the path of the collision) and is a useful way to describe the longitudinal distribution of the matter.
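As an illustration (not part of the CMS analysis itself), the rapidity definition above is simple to evaluate; the sketch below works in natural units, with c = 1 and E and pz in the same units:

```python
import math

def rapidity(E, pz):
    """Longitudinal rapidity y = (1/2) ln[(E + pz)/(E - pz)],
    in natural units (c = 1), with E and pz in the same units (e.g., GeV)."""
    return 0.5 * math.log((E + pz) / (E - pz))

# A particle with E = 10 GeV and pz = 6 GeV:
# y = (1/2) ln(16/4) = ln 2
print(rapidity(10.0, 6.0))  # ≈ 0.693
```

A particle emitted perpendicular to the beam has pz ≈ 0 and hence y ≈ 0, which is why "midrapidity" particles are the ones carrying the transverse energy discussed next.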
The particular quantity that CMS measures in the new experiments is the transverse energy density—the energy per particle of those particles that were ejected perpendicular to the line of collision. This quantity depends on two unknowns—the initial state of the collision (the moment right after the two nuclei collide) and the viscosity of the quark-gluon matter as it expands to the final state that CMS actually measures—and may therefore tell us something new about both.
One of the uncertainties in trying to quantify η/s from heavy-ion collisions is an incomplete theoretical picture of the initial state. When a proton or nucleus is accelerated to high energies, the number of gluons inside the particle increases. This occurs because the particle has more energy with which to generate new gluons, but eventually, there are so many gluons that they start to interact and recombine with one another. At some point, gluon creation and recombination balance out, the number of gluons saturates, and the interior of the particle is said to form a “color glass condensate.” Such a system can be described classically because it has a high occupation number of gluons.
When two heavy ions collide, the gluon fields in the two nuclei start to interact with each other. In a short window of time (up to about 0.2 fermi/c, where 1 fermi/c ≈ 3.3×10⁻²⁴ seconds), quantum fluctuations are small and the gluons can still be described classically—a state called a glasma. In fact, it is the classical nature of the gluon fields at early times that makes the theoretical treatment of this highly nonlinear and nonequilibrium system possible. An energy scale called the gluon saturation scale characterizes both the color glass condensate and the glasma. The gluon saturation scale, Qs, is ∼1–3 GeV and depends on the energy of the collisions and the rapidity, and this dependence is relatively straightforward to calculate [5, 6]. For this reason, models based on the idea of gluon saturation have been successful at characterizing these quantities in heavy-ion collisions at RHIC as well as at the LHC [5, 6]. But there is still no consensus that the gluon saturation picture is applicable when Qs is a relatively low value, as it is in the RHIC and LHC experiments.
CMS’s experiments show (Fig. 1) that the transverse energy density rises more rapidly with the collision energy than the lower-energy RHIC experiments had indicated. The result is not completely surprising if gluon saturation is the right theory for the initial state: The saturation scale depends nonlinearly on the center-of-mass energy because the gluon density isn’t linear with energy. Therefore the gluon saturation picture can, in principle, provide a natural explanation of the rapid rise of the transverse energy density seen in Fig. 1. Proton-lead collisions planned at the LHC will provide a more direct test of the gluon saturation picture. In the long run, the ultimate test will come from colliding electrons with heavy ions. (New facilities for such experiments, such as the Electron-Ion Collider in the US or the Large Hadron Electron Collider at CERN are currently being proposed.)
The new experiments are also important because they allow a more accurate measurement of the viscosity of deconfined quark-gluon matter. To relate the initial-state transverse energy density, which can be calculated in the framework of gluon saturation, to the final state that CMS actually measures, one has to smoothly match the calculation in the glasma framework to viscous hydrodynamics. The first attempts to do this studied the anisotropy of the flow of particles . Based on the new CMS measurement, similar calculations should be able to determine the centrality and rapidity dependence of the transverse energy in the final state, thus providing additional constraints on the hydrodynamic models and the value of η/s that are largely independent from those coming from anisotropic flow. CMS has provided the data, so now we’ll have to sit tight and wait for the calculations.
About the Author: Péter Petreczky
Péter Petreczky received his Ph.D. in 1999 from Eötvös University in Budapest. From 1999 to 2002 he was a postdoc at the University of Bielefeld in Germany. He joined Brookhaven National Laboratory in 2002 as a Distinguished Goldhaber Fellow. He became a tenured scientist at BNL in 2010. Between 2003 and 2005 he was also a RIKEN-BNL Fellow. His research interests include thermal field theory, lattice QCD, and heavy ion phenomenology.
New in Physics
1. S. Chatrchyan et al. (CMS Collaboration), “Measurement of the Pseudorapidity and Centrality Dependence of the Transverse Energy Density in Pb-Pb Collisions at √sNN = 2.76 TeV,” Phys. Rev. Lett. 109, 152303 (2012).
2. Hunting the Quark Gluon Plasma: Results from the First 3 Years at RHIC, Assessments by the Experimental Collaborations, April 18, 2005, http://www.bnl.gov/bnlweb/pubaf/pr/docs/Hunting-the-QGP.pdf.
3. L. D. McLerran and R. Venugopalan, “Computing Quark and Gluon Distribution Functions for Very Large Nuclei,” Phys. Rev. D 49, 2233 (1994); “Gluon Distribution Functions for Very Large Nuclei at Small Transverse Momentum,” 49, 3352 (1994).
4. T. Lappi and L. McLerran, “Some Features of the Glasma,” Nucl. Phys. A 772, 200 (2006).
5. D. Kharzeev and M. Nardi, “Hadron Production in Nuclear Collisions at RHIC and High-Density QCD,” Phys. Lett. B 507, 121 (2001).
6. A. Dumitru, D. E. Kharzeev, E. M. Levin, and Y. Nara, “Gluon Saturation in pA Collisions at Energies Available at the CERN Large Hadron Collider: Predictions for Hadron Multiplicities,” Phys. Rev. C 85, 044920 (2012).
7. B. Schenke, P. Tribedy, and R. Venugopalan, “Fluctuating Glasma Initial Conditions and Flow in Heavy Ion Collisions,” Phys. Rev. Lett. 108, 252301 (2012); “Event-by-Event Gluon Multiplicity, Energy Density and Eccentricities at RHIC and LHC,” arXiv:1206.6805 (hep-ph); C. Gale, S. Jeon, B. Schenke, P. Tribedy, and R. Venugopalan, “Event-by-Event Anisotropic Flow in Heavy-Ion Collisions from Combined Yang-Mills and Viscous Fluid Dynamics,” arXiv:1209.6330 (nucl-th).
Viewpoint: Mind the (Quantum) Context
Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, I-20133 Milan, Italy and CNISM, UdR Milano Statale, I-20133 Milan, Italy
Centre for Atomic, Molecular and Optical Physics, School of Mathematics and Physics, Queen’s University Belfast, BT7 1NN Belfast, United Kingdom
October 8, 2012
Physics 5, 113 (2012)
A new optical experiment provides further proof that quantum mechanics is not hiding some classical framework beneath its veneer of context-dependent observations.
State-Independent Experimental Test of Quantum Contextuality in an Indivisible System
C. Zu, Y.-X. Wang, D.-L. Deng, X.-Y. Chang, K. Liu, P.-Y. Hou, H.-X. Yang, and L.-M. Duan
Phys. Rev. Lett. 109, 150401 (2012)
Published October 8, 2012
APS/Alan Stonebraker; Einstein drawing by Jutta Waloschek
Figure 1: In the quantum version of “Who am I,” the celebrity hidden in the envelope is not predetermined. This is because the answer to a specific question will depend on the context of the other questions being asked. If the player Alice switches the order of her questions or chooses a different set of questions, then it may not end up being Albert Einstein in the envelope.
The contrast with classical expectations is one of the most intriguing features of quantum mechanics. It can be illustrated by playing a classic “guessing game” with quantum rules. In this case, the answer to one question will depend on the other questions being asked and the order chosen. This dependence on the context does not conform to our everyday experience, but we may have to live with it, as theorists have shown that we can’t just assume that there exists a hidden answer sheet that is unaffected by the choice of questions. However, designing a generic test of quantum contextuality in the lab has so far proven difficult. Writing in Physical Review Letters, Chong Zu of Tsinghua University in Beijing, China, and his colleagues have shown that context does matter in a three-level photon system generated by a simple but elegant linear-optics setup.
In the “Who am I” game, a closed envelope containing the name or picture of a celebrity is given to a player (say Alice). The celebrity’s identity is known to all the other players but not to Alice, who has a certain (agreed upon) number of questions to ask the others in order to find out the name in her envelope. Needless to say, the identity of the celebrity is there in the envelope, regardless of whether Alice is able to guess it or not. Moreover, it can’t magically change should Alice decide to ask if the hidden character “is a singer” before wondering whether or not he “was born in Duluth.” The name of the celebrity is a predetermined truth, independent of the questions that Alice decides to ask and the order she decides to ask them. It remains such even if the envelope is torn apart so that Alice will never see the answer. In a nutshell, it is noncontextual. This simple example shows that, in our daily experience, we consider the outcomes of our observations as simply the “revelation” of pre-existing elements of physical reality, which are there regardless of the specific context in which our observations are made.
Let us now transport the very same game to the bizarre (i.e., not familiar to our senses) quantum world. The questions Alice asks her companions are now embodied by measurements performed on the state of a system, whose outcomes are ruled by the probabilistic nature of the quantum formalism (see Fig. 1). Quantum observables do not have predefined values, and certain operators do not commute with each other, so Alice now has to mind the order of her interrogations. In quantum “Who am I,” the name in the envelope could change if Alice asks about the celebrity’s occupation before asking where he or she was born. But the range of questions matters as well: the answer to “male or female?” may depend on whether or not Alice decides to ask about the celebrity’s hair color. In terms of an experiment, the outcome of an observation depends critically on the assignment of a set of mutually compatible (i.e., commuting) observables. In sum, quantum theory minds the context within which observations are performed. Even more strikingly, this contextuality is a general feature of all quantum states, not just a select subset.
Attempts to demystify the contextual character of the quantum framework have led to classically inspired models containing “hidden variables,” which are well-defined properties of a system that are inaccessible to the observer. According to these theories, the outcome of a measurement of an observable A is fully predetermined by a hidden variable λ, regardless of which other compatible observable is measured with A. Such noncontextual hidden variable models have been challenged by the work of Bell and of Kochen and Specker. The latter developed a no-go theorem (widely known as the KS theorem), which proves the logical contradiction inherent in any attempt to interpret the outcome of a quantum measurement of A as resulting from a classical (i.e., commutative) joint probability distribution.
The original KS argument pinpoints the incompatibility between the classical structure of hidden variable theories and quantum mechanics in a striking way, but it is experimentally quite unfriendly: the KS “guessing game” involves 117 different projectors, which means an experimentalist would need to perform a very large number of observations on a quantum system. Subsequent work by Peres and Mermin greatly simplified the mathematical formulation of the KS theorem, condensing it into an inequality that drives a wedge between classical noncontextuality and quantum contextuality. The outcomes of a specific set of observations are arranged together in a mathematical expression and compared to a bound set by classical hidden variable theories. If the measured value exceeds this bound, the inequality is said to be violated. The upshot of such a violation is that hidden variables cannot be at work, and physicists have to content themselves with the nonintuitive consequences of quantum mechanics. The discriminating power of a KS-like inequality is similar to that of the well-known Bell inequality, which arose from Bell’s seminal work on nonlocality. One distinct feature of the KS-like argument, however, is that quantum contextuality manifests itself regardless of the form of the quantum states under investigation. Such state-independent contextuality (SIC) means that all quantum states, not just specially prepared ones, violate our classical intuition. This is not the case for Bell-like inequalities, which are violated only by high-quality entangled states.
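The Peres-Mermin simplification can be checked directly. The sketch below (an illustration of the general idea, not the specific inequality tested by Zu et al.) builds the familiar 3×3 “magic square” of two-qubit Pauli observables and verifies the operator identities that defeat any noncontextual ±1 assignment:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Peres-Mermin "magic square": nine two-qubit observables; the three
# operators within each row, and within each column, mutually commute.
square = [[np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
          [np.kron(I2, Y), np.kron(Y, I2), np.kron(Y, Y)],
          [np.kron(X, Y),  np.kron(Y, X),  np.kron(Z, Z)]]

# Every row product is +identity ...
for row in square:
    assert np.allclose(row[0] @ row[1] @ row[2], np.eye(4))

# ... while the column products are +identity, +identity, and -identity.
for c, sign in zip(range(3), (1, 1, -1)):
    col = square[0][c] @ square[1][c] @ square[2][c]
    assert np.allclose(col, sign * np.eye(4))

print("magic square verified")
```

No assignment of fixed ±1 values to the nine observables can make all three row products equal +1 and the column products equal +1, +1, −1, because each value enters once per row and once per column, so the six products must multiply to +1 classically; quantum mechanics nevertheless satisfies all six operator identities for every state, reaching 6 against a classical bound of 4.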
One requirement for most versions of the KS theorem is that the quantum system has at least three perfectly distinguishable configurations (put another way, the Hilbert space must have dimension three or more). Creating a single higher-dimensional system and implementing different, controlled observations on it are experimentally challenging tasks. However, driven by strong interest from the community studying the foundations of quantum mechanics, many experimental groups have attempted in the last decade to demonstrate KS-like inequalities with various systems, such as neutrons, photons, and nuclear spins. For instance, a very recent example uses a composite system consisting of two trapped two-level ions, achieving the higher dimensionality required by SIC from two lower-dimensional systems. Another group, using a linear optics experiment, was able to create a single indivisible quantum system for a KS-like demonstration, but the falsification of noncontextuality required a special initial state.
This leaves room for an experimental demonstration of SIC along the lines of the original formulation of the no-go theorem, i.e., with a single high-dimensional system and without special initial states. Accepting this challenge, Zu and his colleagues designed their SIC test around a single photonic three-level system (a qutrit). The researchers encoded this qutrit by splitting a single photon with polarizing beam splitters, so that three possible paths (or levels) are available to the photon. For their observations, the team followed the recipe put forward by Yu and Oh, proven by Cabello to be the most economical in terms of the number of observables. This set of “Who am I” questions consists of 13 observables and 24 correlations between compatible pairs of these observables, which Zu et al. measured using two cascaded Mach-Zehnder interferometers.
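The Yu-Oh construction is compact enough to verify numerically. The sketch below is our reconstruction from the published 13 rays (sign and ordering conventions may differ from the experiment's); it counts the 24 compatible pairs and checks that the Yu-Oh combination of observables is proportional to the identity, so every qutrit state gives the same quantum value, 25/3, above the classical bound of 8:

```python
import numpy as np
from itertools import combinations

# The 13 Yu-Oh rays (unnormalized), following Yu and Oh, PRL 108, 030402.
rays = [(1, 0, 0), (0, 1, 0), (0, 0, 1),                # z-type
        (0, 1, 1), (0, 1, -1), (1, 0, 1), (1, 0, -1),
        (1, 1, 0), (1, -1, 0),                          # y-type
        (1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]  # h-type
vecs = [np.array(r, dtype=float) / np.linalg.norm(r) for r in rays]

# Compatible (commuting) observables correspond to orthogonal rays.
edges = [(i, j) for i, j in combinations(range(13), 2)
         if abs(vecs[i] @ vecs[j]) < 1e-9]
print(len(edges))  # 24 compatible pairs, matching the 24 measured correlations

# Dichotomic observables A_i = I - 2|v_i><v_i|, with eigenvalues +1, +1, -1.
A = [np.eye(3) - 2.0 * np.outer(v, v) for v in vecs]

# Yu-Oh combination: sum_i A_i - (1/4) sum over compatible pairs {A_i, A_j}.
S = sum(A)
for i, j in edges:
    S = S - 0.25 * (A[i] @ A[j] + A[j] @ A[i])

# S equals (25/3) x identity, so *every* qutrit state yields 25/3,
# exceeding the noncontextual bound of 8.
print(np.allclose(S, (25.0 / 3.0) * np.eye(3)))  # True
```

Because the operator is proportional to the identity, the quantum value 25/3 ≈ 8.33 is the same for every state, which is precisely the state independence at issue; the margin over the classical bound of 8 is the roughly 4% gap that made the experiment so demanding.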
The experimental setup and techniques are simple, elegant, and quite well controlled. The research group had little room for error, since the discrepancy between the classical bound (or inequality) imposed by the Yu-Oh formulation of SIC and the value predicted by quantum mechanics is tiny (only about 4%). Despite unavoidable experimental uncertainties, the measurements by Zu et al. violated the classical bound (thus ruling out noncontextuality) by about 5 standard deviations, irrespective of the prepared system’s state. This statistically significant result sets a very good benchmark for experimental SIC with a single quantum system.
Have we thus seen the first unquestionable demonstration of genuinely state-independent contextuality of quantum mechanics? It is a reasonably close call, but not all the controversies that surround quantum-foundations experiments on contextuality are ruled out by Zu and his colleagues. In particular, the effect of inefficiencies should not be overlooked. Photon losses induce no-click events at the detectors even when a single photon has been successfully heralded (and a qutrit state thus encoded). These missed detections affect the correlations entering the Yu-Oh inequalities. To accommodate this, the authors invoked the concept of fair sampling, which says that a long series of individual experimental runs should result in a satisfactorily representative sample of the true joint probability distribution being assessed. Zu et al. therefore discarded any no-click event from their data acquisition. As reasonable as this might sound, it embodies an extra assumption that was included neither in the original KS argument nor in the formulation provided by Yu and Oh. Future experimental endeavors should strive to remove this extra ingredient, so that noncontextual theories have no experimental loopholes to hide behind.
PS: the celebrity that Alice had to guess in the classical “Who am I” round was Bob Dylan!
C. Zu, Y-X. Wang, D-L. Deng, X-Y. Chang, K. Liu, P-Y. Hou, H-X. Yang, and L-M. Duan, “State-Independent Experimental Test of Quantum Contextuality in an Indivisible System,” Phys. Rev. Lett. 109, 150401 (2012).
J. S. Bell, “On the Problem of Hidden Variables in Quantum Mechanics,” Rev. Mod. Phys. 38, 447 (1966).
E. Specker, “Die Logik Nicht Gleichzeitig Entscheidbarer Aussagen” (The Logic of Not Simultaneously Decidable Propositions), Dialectica 14, 239 (1960); S. Kochen and E. P. Specker, “The Problem of Hidden Variables in Quantum Mechanics,” J. Math. Mech. 17, 59 (1967).
A. Peres, “Incompatible Results of Quantum Measurements,” Phys. Lett. A 151, 107 (1990).
N. D. Mermin, “Simple Unified Form for the Major No-Hidden-Variables Theorems,” Phys. Rev. Lett. 65, 3373 (1990).
J. S. Bell, “On the Einstein Podolsky Rosen Paradox,” Physics 1, 195 (1964).
M. Michler, H. Weinfurter, and M. Zukowski, “Experiments towards Falsification of Noncontextual Hidden Variable Theories,” Phys. Rev. Lett. 84, 5457 (2000); E. Amselem, M. Radmark, M. Bourennane, and A. Cabello, “State-Independent Quantum Contextuality with Single Photons,” 103, 160405 (2009); Y.-F. Huang et al., “Experimental Test of the Kochen-Specker Theorem with Single Photons,” 90, 250401 (2003); H. Bartosik et al., “Experimental Test of Quantum Contextuality in Neutron Interferometry,” 103, 040403 (2009); O. Moussa, C. A. Ryan, D. G. Cory, and R. Laflamme, “Testing Contextuality on Quantum Ensembles with One Clean Qubit,” 104, 160501 (2010).
G. Kirchmair, F. Zähringer, R. Gerritsma, M. Kleinmann, O. Gühne, A. Cabello, R. Blatt, and C. F. Roos, “State-Independent Experimental Test of Quantum Contextuality,” Nature 460, 494 (2009).
R. Lapkiewicz, P. Li, C. Schaeff, N. K. Langford, S. Ramelow, M. Wieśniak, and A. Zeilinger, “Experimental Non-Classicality of an Indivisible Quantum System,” Nature 474, 490 (2011).
S. X. Yu and C. H. Oh, “State-Independent Proof of Kochen-Specker Theorem with 13 Rays,” Phys. Rev. Lett. 108, 030402 (2012); arXiv:1112.5149.
About the Author: Matteo Paris
Matteo Paris leads the Applied Quantum Mechanics group at the University of Milan, Italy, where theoretical and experimental research is conducted in the fields of Quantum Optics, Quantum Technology, Quantum Mechanics, and Open Quantum Systems. Matteo has given pioneering contributions to the development of quantum tomography for states and operations and to continuous variable quantum information. His current interests concern quantum estimation and the study of correlated systems at the boundary of the quantum and classical worlds. Matteo received his Ph.D. in 1995 from the University of Pavia and joined the University of Milan in 2004.
About the Author: Mauro Paternostro
Mauro Paternostro received his Ph.D. from Queen’s University Belfast (QUB) for his work on theoretical quantum information processing. After research fellowships at the Institute for Quantum Optics and Quantum Information, University of Vienna, and at QUB, he was awarded an EPSRC Career Acceleration Fellowship in 2008 and appointed Lecturer at QUB, where he is currently a Reader. He co-leads the Quantum Technology group at QUB, where he works on the foundations of quantum mechanics and quantum information processing. He has a strong interest in, and has contributed significantly to, quantum optomechanics and the coherent manipulation of open mesoscopic quantum systems. Honors include a Leverhulme Trust Early Career Fellowship and an Alexander von Humboldt Fellowship for experienced researchers.