Einstein, Podolsky, Rosen, and Bell

Notes by Benjamin W.

In this lecture two theorems were treated: the EPR theorem, due to Einstein, Podolsky and Rosen (1935), and the Bell theorem (1964), a logical continuation of the EPR theorem. The EPR theorem states that if all predictions of quantum mechanics are correct and if physical reality can be described in a local framework, then quantum mechanics is necessarily incomplete: there are elements of reality in nature that are not represented in this theory.

EPR (Einstein, Podolsky and Rosen, 1935)

Consider a source that emits pairs of particles. Two assumptions are made by EPR, namely separability and locality.

The first assumption of separability is that at the time of separation, each system (particle) has its own reality. EPR assume that each system retains its own identity, which is characterized by a real physical state, although each system is also strictly correlated with the other. In the 1935 paper, the correlations deal with the momentum and position variables of two particles. Following Bohm, the theorem was also formulated in terms of spin variables: the total spin of two particles is assumed to be in a singlet (S = 0) state.

The second assumption is that of locality. This assumes that no real change can take place in one system as a direct result of a measurement on the other system. EPR justify this assumption: at the time of measurement the two systems no longer interact. It should be noted that locality does not assume that nothing at all in one system can be directly disturbed by a remote measurement on the other system. (As an example: the information about the other system changes instantly after a measurement is made on one system.) Locality simply excludes the possibility that a remote measurement may directly disturb or alter what is considered real in relation to one system, a reality that guarantees separability. Based on these two assumptions, EPR conclude that each system can have definite values (elements of reality) for both position and momentum simultaneously (or for any spin components, in Bohm’s formulation).

The key idea here is that an experimental result that is known from the outset (with 100% probability) can only be the consequence of a physical quantity that already exists. The fundamental conviction of EPR is therefore that the regions of space contain their own elements of reality. These elements would evolve in time, after the emission of the particles, in a local manner. It follows from the EPR theorem that measurement results are well-defined functions of these variables. Thus nowhere would a random, non-deterministic process take place.

A further element of the EPR argument concerns completeness. If the description of systems by state vectors were complete, then values of quantities that can be predicted with certainty should be derivable from a state vector for the system or from a state vector for a composite system of which the system is a part.
It follows from the EPR theorem that separate systems have definite position and momentum values at the same time. The quantum mechanical description of the system by means of state vectors is therefore incomplete.

Bell (1964)

Some 30 years later, Bell again looked at the elements of reality and relied on locally realistic considerations to show that quantum mechanics cannot be supplemented in any possible way without altering experimental predictions, at least in certain cases. Bell considered correlations between measurement results for systems of two particles in separate laboratories, where the measurements of the particles differ by locally defined angles of spin projection axes. In this Gedanken experiment, he showed that correlations measured in different runs of an EPRB experiment satisfy a set of conditions.

These considerations are encoded in the Bell inequalities, which apply to all measurements that provide random results, whatever the mechanism that generates correlations between spins. Thus, any theoretical model that remains within the framework of local realism must lead to predictions that satisfy the Bell inequalities. Here, realism is a necessary assumption, since the concept of an element of reality introduced by EPR is used in the proof. The proof is based on locality because it excludes the possibility that the measured result A depends on the measurement setting b in the other laboratory and that, conversely, B depends on a. It was therefore expected that any reasonable physical theory would produce predictions consistent with the Bell inequalities. The opposite turned out to be the case: violations of the Bell inequalities were confirmed experimentally, and a number of measurements established these violations with high accuracy. The conclusion is that the predictions of quantum mechanics are correct, even where they violate the Bell inequalities.

Example

Bell’s inequality for an ensemble of objects with the following set of properties:

  • female or male (w or m)
  • drives a car or not (a or -a)
  • speaks German or not (d or -d)

Assignment of properties:

  • n(w,a) is the number of women who drive a car
  • n(a,-d) is the number of persons who drive a car and do not speak German

Then the following inequality applies:

(1)      n(w,a) \le n(w,d) + n(a,-d)

Meaning: The number of women driving a car is less than or equal to the number of women who speak German added to the number of persons driving a car who do not speak German.
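This counting inequality can be verified exhaustively: every person falls into one of eight property combinations, and for each combination the indicator of (w, a) never exceeds the sum of the indicators of (w, d) and (a, -d); summing over any population then gives (1). A minimal sketch in Python:

```python
from itertools import product

# Check the inequality n(w,a) <= n(w,d) + n(a,-d) member by member:
# a woman who drives either speaks German (counted in n(w,d)) or does
# not (counted in n(a,-d)), so the indicator inequality holds in all
# eight combinations of the three yes/no properties.
for w, a, d in product([False, True], repeat=3):
    lhs = int(w and a)
    rhs = int(w and d) + int(a and not d)
    assert lhs <= rhs
print("inequality holds for all 8 property combinations")
```

Summing these per-person inequalities over an arbitrary ensemble reproduces (1), whatever the actual counts are.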

We now consider a pair of photons and consider the polarizer settings:

\alpha and \beta can take values among 0°, 30° and 60°

The number of measurements where the orientation combination (\alpha,\beta) was present and where both photons passed through is, for one of the entangled Bell states,

(2)      n( \alpha, \beta) = n_0 \cos^2( \alpha - \beta)

Note the perfect correlation when \alpha = \beta. Number of measurements where the left photon passed and the right photon was absorbed:

(3)      n( \alpha, \bar{\beta}) = n_0 \sin^2( \alpha - \beta)

(To avoid confusion, the event “the photon did not pass the polarizer with setting \beta” is denoted \bar{\beta} rather than -\beta.)

The key assumption is now: the properties “first photon passes polarizer with setting \alpha” and “second photon passes polarizer with setting \beta” can be considered in the same way as the properties “is woman” or “speaks German” as above. We suppose they are “elements of reality” in the sense of EPR. The application of Bell’s inequality (1) then gives

n(\alpha,\beta ) \leq n(\alpha,\gamma) + n(\beta,\bar{\gamma})

Insert equations (2) and (3) and divide by n_0:

\cos^2(\alpha - \beta) \leq \cos^2(\alpha-\gamma) + \sin^2(\beta-\gamma)

For \alpha = 0^\circ, \beta = 30^\circ and \gamma = 60^\circ, it follows:

\cos^2(30^\circ) \le \cos^2(60^\circ) + \sin^2(30^\circ) \qquad \text{or} \qquad \frac{3}{4} \leq\frac{1}{4}+\frac{1}{4} 

Since this is wrong, Bell’s inequality is violated by the predictions of quantum mechanics.
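The violation is easy to reproduce numerically. A small sketch (the helper function names are mine; angles in degrees):

```python
import math

def n(delta_deg):
    """Both photons pass, eq. (2), in units of n0."""
    return math.cos(math.radians(delta_deg)) ** 2

def n_bar(delta_deg):
    """Left photon passes, right one is absorbed, eq. (3), in units of n0."""
    return math.sin(math.radians(delta_deg)) ** 2

alpha, beta, gamma = 0.0, 30.0, 60.0
lhs = n(alpha - beta)                           # cos^2(30 deg) = 3/4
rhs = n(alpha - gamma) + n_bar(beta - gamma)    # cos^2(60 deg) + sin^2(30 deg) = 1/2
print(lhs <= rhs)  # False: the quantum prediction violates Bell's inequality
```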

Literature

Claude Cohen-Tannoudji, Bernard Diu, and Franck Laloë (2020), Quantum Mechanics III: Fermions, Bosons, Photons, Correlations, and Entanglement, chap. XXI: Quantum entanglement, measurements, Bell’s inequalities, Wiley-VCH.

A. Einstein, B. Podolsky, and N. Rosen (1935), “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?”, Physical Review 47, 777–80.

Franz Embacher (2000), EPR-Paradoxon und Bellsche Ungleichung, Universität Wien, online at https://homepage.univie.ac.at/franz.embacher/Quantentheorie/EPR.

Collapse

A brief essay on a controversial concept in quantum physics

There is a spectre haunting quantum physics, and it is called the “collapse of the wave function”. It has attracted so many discussions that I prefer, to get the Weltbild right, to go back to a concrete experimental scheme. One needs some special equipment to perform an experiment where the “collapse” really comes into play. In “standard experiments” like the double slit with a very low flux of particles (Feynman: “the first mystery of quantum mechanics”), there is no collapse, just dots that appear one by one on the screen. Quantum physics is silent about the position of each dot and of the next one to appear – it only gives their probability distribution once we specify the entire experimental setup: what are the source properties, which slits are open, and where the particles are detected. Conversely, by accumulating the fringe pattern from this collection of dots on the screen, the experimenter is accessing a probability distribution by the standard procedure of measuring “frequencies” (Häufigkeiten) and extrapolating to an infinite number of detections. It does not make sense to say: “the particle collapses onto a dot on the screen” because there is no further experiment done after that.
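This dot-by-dot buildup is easy to mimic numerically. The following toy model (my own sketch, with an arbitrary cos²-shaped density standing in for the fringe pattern) draws individual detection positions at random; no single dot shows any fringes, only the accumulated histogram does:

```python
import math
import random

random.seed(42)

def fringe_density(x):
    """Toy probability density on the screen: fringes with unit period."""
    return (1 + math.cos(2 * math.pi * x)) / 2  # values in [0, 1]

def sample_dot():
    """One detection event, drawn by rejection sampling on [0, 4)."""
    while True:
        x = random.uniform(0.0, 4.0)
        if random.random() < fringe_density(x):
            return x

# Single dots are unpredictable; only their accumulated frequencies
# reveal the interference pattern.
counts = [0] * 10
for _ in range(50_000):
    phase = sample_dot() % 1.0        # fold onto one fringe period
    counts[int(phase * 10)] += 1

print(counts)  # large near the bright fringe (phase ~ 0), small near 0.5
```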

Let us take the viewpoint that perhaps the true sense of the collapse is this: once a first detection has been performed, we are able to perform further experiments. After we know the result of the first detection, the predictions of further outcomes are changed because “the state of the particle has changed”. But look at it carefully: this is done in a statistical sense, by selecting events after the first detection.

There is a famous historic anecdote here from the birth years of quantum physics: W. Heisenberg contemplating the “paths” of electrons or other particles in front of a bubble chamber (see the account of it in J. Ferrari’s book).

Particle “paths” in a bubble chamber. Image taken from thebubblechamber.com

Heisenberg had just discovered the uncertainty relations and learned that “there is no sharp particle position”. But the bubbles in the chamber align nicely into orbits of charged particles. The orbits are typically circular or spiral-like because a magnetic field is applied to measure the charge and velocity of the particle. Heisenberg may have contemplated this spectacle for a long time, eventually sighing: “Now I understand.” (Well, I am conjecturing, a quote like that has been reported from N. Bohr in a similar situation.) Heisenberg realised that the appearance of the first bubble is like a detection of the particle’s position. Within some wide window of uncertainty, however: the bubble is manifestly a large, macroscopic object whose size is way above the particle’s de Broglie wavelength. So this position measurement is still compatible with a fairly well-defined momentum of the particle. The scattering of the high-energy particle off the vapor molecules in the chamber has probably not changed its momentum a lot. By extrapolating from this first position information, one can infer the probability density of the following bubble — and quantum mechanics tells us that this density will trace out the classical path, just because the first measurement was relatively imprecise, no big change in momentum, hence the most probable next bubble will simply lie ahead. (And here one can also use that the particles come from a beam with a relatively well-defined direction.) The calculation is related to that of a correlation function: the “collapse” corresponds to the way the first observation constrains the wave function (more like a wave packet than a plane wave, for example).

Personally, although the quantum mechanics of the bubble chamber is certainly an interesting topic, I would consider other experiments more spectacular. The reason is probably that in the bubble chamber, all “detections” (bubbles) are random events. In the example that follows, the experimenter is free to choose her setup after the first measurement.

I want to talk about one of S. Haroche’s experiments: an atom flies through a cavity where it interacts with photons and one may make a measurement of the cavity field. Let us call this first cavity A (the left one in the picture, say).

Beam of atoms crossing two cavities.
A beam of atoms (magenta circles) is crossing two cavities (gray shapes with blue-green illustration of the electromagnetic field distribution). Taken from the web site www.lkb.upmc.fr/cqed

The atom is still available for further experiments (in cavity B on the right, say); that’s a key difference. If the photon number in A has increased, for example, we may infer that the atom has deposited an energy quantum, i.e. has made a transition to a lower level. With this “increase of knowledge”, quantum mechanics can update its predictions for the outcomes of experiment B. The “collapse” is again very much related to a correlation between two events: “photon number has changed in cavity A” and “atom is behaving like this or that in experiment B”. Note that this viewpoint is very close to what is called the “epistemic interpretation” of the wave function: it encodes what the experimenter knows about the system (from the way it has been prepared). Here, we adjoin the information from the first measurement. (And indeed, many experimental preparation schemes are based on measurements and selection of those events that conform with the specifications of the source.)

The “weird” aspects of quantum physics become apparent in two variations on this scheme:

1

Imagine that one did not detect an increase in the photon number. Still, something has been learned about the state of the atom. Such an event would be compatible with the “retrodiction” that the atom has been in its ground state, where photon emission is impossible. (In the simulation procedure of “Monte Carlo wave functions”, this “non-detection” leads to a (continuous) change in the atom’s wave function where the probability of the atom being in the ground state is increased. But one may argue that this simulation is too beautiful to provide a true Bild of the “real state of affairs”.) But of course, this information is only valuable in the right context. One has to be sure that an atom has been launched and that it has effectively crossed the cavity at a place where it could have emitted a photon if it had been in an excited state. How can one be sure about this in an operational way? Just repeat the experiment and observe the frequency of photon emission events. Many weird schemes related to null measurements become quite trivial when this specific “post-selection” is re-instated.
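How a non-detection changes the predictions can be made quantitative in a simple Bayesian toy model (my own construction, with a hypothetical detection efficiency eta): if an excited atom would produce a click with probability eta, then each observed “no click” shifts the odds towards the ground state.

```python
def update_on_no_click(p_excited, eta):
    """Posterior probability that the atom is excited, given that no
    photon was detected. eta = probability that an excited atom would
    have produced a click (hypothetical efficiency)."""
    p_no_click = p_excited * (1 - eta) + (1 - p_excited)
    return p_excited * (1 - eta) / p_no_click

p = 0.5  # prior: excited or ground state equally likely
for step in range(3):
    p = update_on_no_click(p, eta=0.3)
    print(step, round(p, 3))  # probability of "excited" decreases step by step
```

This mirrors the continuous drift of the Monte Carlo wave function towards the ground state under repeated non-detections.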

2

An even weirder scheme is to “erase” the knowledge that has been gained in the first measurement. This is not easy because one has to intervene before the measurement result has been “amplified” to a detectable signal. But it can be done in the case of the cavity whose state can be manipulated, right after the passage of the atom. One can do this in such a way that the cavity state does not disclose any longer any information about the prior state of the atom. This has been popularised as the “quantum eraser”. [No I won’t offer you an image here, the web is full of strange claims on that keyword.] One observes that typical quantum interference features are restored that depend on the atom being in a superposition state – a state that would be precisely destroyed by the “collapse” after the first measurement.

But now quantum physics at its best kicks in: this kind of interference is demonstrated after accumulating many measurements, where an interference phase is scanned by adjusting a macroscopic element in the setup. After collecting the data, the plot of the frequencies vs phase tells us: “interference has happened”. Not in any of the individual runs, but only in the overall picture.

Lecture 3: Schrödinger and Reality

03 Nov 2016 [notes by Johannes P. and Lukas H., expanded by C. Henkel]

— Follow-up to previous lecture

A sample of the scientific photographs by Berenice Abbott is in the post for lecture two. These black-and-white pictures were made at the end of the 1950s when she worked at MIT. They impressively show experiments and were used in high-school textbooks. The picture on double-slit interference with water waves is reprinted in the book by Gerry and Bruno, but without crediting B. Abbott…

A contemporary example of the ongoing discussion on quantum mechanics is provided by the International School of Physics “Enrico Fermi”, which held a course on the foundations of quantum theory in July 2016.

On the topic of interference from C60 (fullerene) molecules: another paper by M. Arndt’s group, discussing the influence of temperature on such systems, was sketched. The fullerenes were heated so that they emit thermal (infrared) photons, at shorter wavelengths the higher the temperature. With higher temperature, the number of ionized (and therefore detected) particles first went up, while the interference pattern was kept. Even higher temperatures lead to thermal ionisation and fragmentation, reducing the signal. The key observation is that at intermediate temperatures, the typical wavelengths emitted by the fullerene molecules are not small enough to distinguish between the two paths — the interference remains visible. At higher temperatures, the emission spectrum contains shorter wavelengths, and welcher Weg information becomes available — the interference gets washed out.

Discussion of Papers by Schrödinger (1935) and Zurek (2003)

Two papers were given to the students to read: Erwin Schrödinger’s “Die gegenwärtige Situation in der Quantenmechanik” (The Present Situation in Quantum Mechanics, Naturwissenschaften 1935) and Wojciech H. Zurek’s “Decoherence and the transition from quantum to classical — revisited” (Physics Today 1991 and arXiv 2003).

See the literature list and especially the pages on Schrödinger’s paper and on decoherence for further information.

Schrödinger (1935)

In the first paragraph of his paper, Schrödinger points out the relation of reality and our model of it. The model is an abstract construction that focuses on the important aspects and, in doing so, ignores particularities of the system. It simplifies the reality. This approximation is arbitrarily set by humans, as he states: “Without arbitrariness there is no model.”

This simplification is done to achieve an exact way to calculate, and herein lies a difficulty of quantum mechanics. The model gives less information about the (final) state of a system at given initial conditions than, e.g., classical mechanics does.

In a later section the famous Schrödinger cat experiment is described. A very low-dose radioactive material is used as a quantum system, which produces on average one radiation particle per hour. The cat is placed in a sealed box and killed via a mechanism when the radioactive decay happens and is detected. This simple setup transfers the uncertainty of the quantum world up to a macroscopic level where the cat clearly is. The wave function becomes a superposition:

|\Psi\rangle = \alpha|\mathrm{dead}\rangle + \beta|\mathrm{alive}\rangle

If you now check after one hour or so whether or not the cat is still alive, you get only one definite outcome. In the words of the Copenhagen interpretation: the wave function collapses into one of the states composing it, “dead” or “alive” (but not both). A single measurement does not prove interference. To do so, the experiment must be modified in such a way as to measure a quantity that is sensitive to the superposition. It then has to be repeated many times with identical initial conditions. The many outcomes taken together produce an interference pattern (bright and dark fringes) that can be interpreted as a wave phenomenon.
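The role of repetition can be illustrated with a toy simulation (my own sketch, with arbitrarily chosen amplitudes): each run of the “experiment” yields one definite outcome, and only the relative frequencies over many identically prepared runs reproduce the Born-rule probabilities |α|² and |β|².

```python
import math
import random

random.seed(0)

# Hypothetical amplitudes for |Psi> = alpha|dead> + beta|alive> (normalized).
alpha = 1 / math.sqrt(3)
beta = math.sqrt(2 / 3)
p_dead = alpha ** 2  # Born rule: probability of the outcome "dead"

# Each measurement gives one definite outcome; repeat with identical preparation.
runs = 100_000
n_dead = sum(1 for _ in range(runs) if random.random() < p_dead)

# The relative frequency approaches |alpha|^2 = 1/3 for many runs.
print(n_dead / runs)
```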

The Quantum Zeno effect, named after the ancient paradox of Zeno, uses the collapse of the wave function to “lock” a system in a certain state by measuring it frequently, despite competing dynamics such as radioactive decay or magnetic fields driving transitions. This means that if you look “frequently enough” at Schrödinger’s cat, it is immortal.
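A minimal numerical sketch of the Zeno effect (assuming an idealised two-level system that would be rotated out of its initial state, interrupted by n projective measurements):

```python
import math

def survival_probability(total_angle, n_measurements):
    """Probability of still finding the system in its initial state after
    n projective measurements, equally spaced during a rotation of the
    state vector by total_angle (radians)."""
    per_step = math.cos(total_angle / n_measurements) ** 2
    return per_step ** n_measurements

theta = math.pi / 2  # unobserved evolution would fully empty the initial state
for n in (1, 5, 50, 500):
    print(n, survival_probability(theta, n))
# the survival probability approaches 1 as the measurements become more frequent
```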

Zurek (1991/2003)

drawing of the border between quantum and classical

Drawing by W. H. Zurek from the 1991 paper in Physics Today, re-posted by xts on Physics Forums

Zurek compares classical physics to quantum mechanics via experiments done in different domains of physics. An example is given by a gravitational wave detector, which has to cope with the indeterminate force on a mirror arising from the momentum transfer of photons. He also states the need for an amplifier to measure quantum effects because these usually occur on tiny scales.

But this border between the macroscopic and microscopic is itself undefined (picture above). Often it is a matter of the necessary precision which leads to the definition of this line and suggests the simplifications we make. The “many-worlds interpretation” goes so far as to push it back into the consciousness of the observer, which is notably an “unpleasant location to do physics” (Zurek). A more detailed discussion of Zurek’s paper and the concept of “decoherence” that he promoted can be found elsewhere on this blog.

Copenhagen Interpretation

The Copenhagen Interpretation was discussed with emphasis on three key points:

  • Detection: Detection of a quantum state often needs amplification and is irreversible, because it collapses the wave function, thus erasing the information contained in the superposition of states beforehand. Born’s rule gives the probability of each outcome as |Ψ|² (in a suitable basis related to the measured observable) or |⟨out|Ψ⟩|² (using the state |out⟩ for the measurement outcome).
  • Evolution: The evolution through space and time is given by the wave function Ψ(r,t), solution to the time-dependent Schrödinger equation.
  • Preparation: Ψ(r,0) “encodes” the instructions how to prepare the system, which is important for repeated measurements that accumulate statistical data. More precisely, Ψ(r,0) describes a statistical ensemble of identically prepared systems on which many (independent) measurements can be made. With respect to this point, other interpretations depart by assigning the wave function to a single run of an experiment.

What does this tell us about the “reality behind” the quantum-mechanical state?
Schrödinger mentions that “indeterminate” or “fuzzy” states in quantum mechanics should not be seen like an out-of-focus photograph of reality, but rather like an accurate picture of a fog-like phenomenon (i.e., one that does not contain sharp features). The wavy features of quantum phenomena (related to superposition and interference) cannot be verified from a single particle detection, however. The wave pattern we “see”, e.g. in the double-slit experiment, is a continuous reconstruction (or “reduction”) obtained from the statistics of individual data.

Textbook Knowledge

Question: Look through your favorite quantum mechanics textbooks and make a list with answers to the following questions:

  • what is the meaning of the “state vector” (or wave function)?
  • what is the meaning of “measurement” (i.e., what makes this process different so that the Schrödinger equation does not apply)?

Franz Schwabl,
Quantenmechanik – Eine Einführung, 7th edition 2007 (pp. 13-15, 39, 376, 390, 392)

Particles and systems can be described by a wave function. From the wave function there follows a probability distribution, which gives, for example, the probability of finding the particle at position x at time t. Since this description is statistical, quantum mechanics is non-deterministic. The wave function still exhibits causality, since its evolution in time follows a deterministic differential equation, the Schrödinger equation.

The wave function in general contains superpositions of pure states, so even non-macroscopic systems can carry multiple pure states at the same time. When we measure such a system, it can turn up in different states. But for an ideal measurement (i.e., the system is not interacting with any environment other than the measurement apparatus), all future measurements (of the same observable) should reproduce the same result; it shouldn’t change uncontrollably. This can be seen as the act of measuring changing the wave function: if the system was in a superposition of states before the measurement, it gets reduced to one pure state afterwards.

(Erik T.)

Wolfgang Nolting,
Grundkurs Theoretische Physik 5/1 Quantenmechanik – Grundlagen, 8th edition (pp. 6, 82, 94ff, 134, 189–91):

The state of a system is described by a wave function. It is a solution of the Schrödinger equation, but is not itself a measurable particle property. The wave function has a statistical character: in contrast to classical mechanics, it gives only probabilities. It is not possible to measure the wave function directly, but it determines, for example, the probability density of finding a particle at time t at a certain position. The statistical character is responsible for the uncertainty principle and for wave-packet spreading.

For a measurement we need 1) a system, 2) an apparatus and 3) an observer. In contrast to classical physics (and thermodynamics), where the interaction between 1) and 2) is often neglected, in quantum mechanics this interaction cannot be neglected. If physical processes are so small that ℏ cannot be treated as negligibly small, we find quantum phenomena in which any measurement disturbs the system massively and changes its state. A certain apparatus determines/measures a certain observable and changes the system in a specific way. Another apparatus, measuring another observable, changes the system in another way. Therefore the order of measurements, which apparatus is used first and which second, matters. The observables are assigned to operators, which do not commute — order matters.

In principle, it is not possible to measure non-commuting operators, for example momentum and position, exactly and simultaneously. Thus it makes no sense to talk about exact momentum and position. Quantum mechanics can answer the following type of questions: which results are possible? What is the probability that a possible value is measured? The theory only yields a probability distribution. Experimentally, we obtain it by performing many measurements on the same system or by measuring a large number of identically prepared single systems.

(David F.)

Pierre Meystre and Murray Sargent III,
Elements of Quantum Optics, Third edition (p. 48):

“According to the postulates of quantum mechanics, the best possible knowledge about a quantum mechanical system is given by its wave function Ψ(r,t). Although Ψ(r,t) itself has no direct physical meaning, it allows us to calculate the expectation values of all observables of interest. This is due to the fact that the quantity Ψ(r,t)*Ψ(r,t) d3r is the probability of finding the system in the volume element d3r. Since the system described by Ψ(r,t) is assumed to exist, its probability of being somewhere has to equal 1.”

(Gino W.)

Walter Greiner,
Quantenmechanik Einführung, 6., überarbeitete und erweiterte Auflage 2005 (pp. 457–58, 464):

Quantum mechanics is an indeterministic theory. This means that there exist physical measurements, whose results are not uniquely determined by the state of the observed system before the measurement took place (as far as such a state is observable). In the case of a wave function, which is not an eigenstate of the observable’s corresponding operator, it is not possible to predict the exact outcome of the measurement. Only the probability of measuring one of the possible outcomes can be presented.

To avoid the problem of indeterminism it was suggested, that particles in the same state only seem to be equal, but actually possess additional properties that determine the outcome of further experiments. Concepts based on this idea have become known as theories of hidden variables. However, Bell has proved a theorem which implies that results of such concepts stand in contradiction to the principles of quantum mechanics.

(Gino W.)

Lecture 2: Double Slit Diffraction

27 Oct 2017 [notes by Gino W., Christian M., and D. Mayer, expanded by C. Henkel]

— Remarks on Literature —

De Broglie: Nouvelles perspectives en microphysique

De Broglie was a key figure in building quantum physics: he proposed that a particle like an electron should also have wave aspects (the de Broglie wavelength), by analogy to a wave like the electromagnetic field that has particle aspects (photons). In the second half of his life, de Broglie developed an interpretation of quantum physics where the particle and the wave picture apply at the same time: in the so-called “pilot wave theory”, a pointlike particle appears as a kind of singularity embedded in a wave. The motion of the particle is governed by the wave.

Paul Arthur Schilpp: Albert Einstein als Philosoph und Naturforscher

Summary of Albert Einstein’s most important works, including his criticism of quantum theory and the discussion about it with other physicists.

Bernard d’Espagnat: A la recherche du réel

During the process of measuring a quantum mechanical system, the system does not reveal all its information. This kind of behaviour has led d’Espagnat to the concept of a “veiled reality”.

Roland Omnès: Comprendre la mécanique quantique

Tries to explain how we should understand quantum mechanics, especially with regard to the concept of “consistent histories”. This is an attempt to delineate in a precise way to what extent one can describe quantum mechanics with classical ideas and logical arguments.

Shimon Malin: Nature loves to hide

A book that takes up the idea of Dirac that “Nature makes a choice” when an experiment gives a result (quite often an unpredictable one for which quantum mechanics only provides a probability). Building on this process of choice and human experiences of a mystical kind, Malin develops the idea that Nature is “alive”.

Jérôme Ferrari: Le principe

A novel about the life of Heisenberg that presents the uncertainty principle from a philosophical point of view. One scene in particular captures remarkably well Heisenberg’s situation at the moment of re-founding our vision of the microscopic world. Heisenberg stands in front of a “bubble chamber” and sees particle tracks, while he is in the process of convincing himself that particles (material points) do not exist. The solution comes to him when he realises that the tracks are in fact sequences of separate events (the triggering of bubbles that form). It is the observer who connects them, in the manner of the trajectory of a material point. In fact, the bubbles do not allow the quantum corpuscle to be localised precisely. This reconciles the tracks with the Heisenberg inequality (the uncertainty principle), which forbids knowing both position and velocity precisely.

“Trajectories” of “particles” in a bubble chamber, taken from a post by Anna V at physics.stackexchange.com

Double-Slit Diffraction – a typical quantum Experiment

A double-slit experiment consists of a wave source (e.g. a source for water waves or a laser), a screen (e.g. a CCD-panel for a light source) and a barrier with two slits in between. Switching on the wave source, an interference pattern will occur on the screen.

"Interference Pattern", Berenice Abbott, MIT Cambridge MA 1958-61

“Interference Pattern”, Berenice Abbott, taken from blog Brie Encounter

This only occurs, however, if a coherent source is used. The word “coherent” is not a sharply defined term here. It is often described as the property of waves that they can interfere. If the light is not coherent, the result on the screen will just be the sum of the intensities of the two waves that originate at the slits. In between incoherent and coherent, waves can also be partially coherent. This results in the sum of the two wave intensities combined with an interference pattern or, equivalently, in interference with a smaller contrast between bright and dark fringes.
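The relation between partial coherence and fringe contrast can be written down explicitly. A short sketch, using the standard two-beam interference formula with the degree of coherence |γ| between 0 and 1 (the function names are mine):

```python
import math

def intensity(phi, i1, i2, gamma):
    """Two-beam interference: I = I1 + I2 + 2*sqrt(I1*I2)*gamma*cos(phi),
    where gamma (0 <= gamma <= 1) is the degree of coherence."""
    return i1 + i2 + 2 * math.sqrt(i1 * i2) * gamma * math.cos(phi)

def visibility(i1, i2, gamma):
    """Fringe contrast (I_max - I_min) / (I_max + I_min)."""
    i_max = intensity(0.0, i1, i2, gamma)
    i_min = intensity(math.pi, i1, i2, gamma)
    return (i_max - i_min) / (i_max + i_min)

# incoherent, partially coherent, fully coherent beams of equal intensity:
for g in (0.0, 0.5, 1.0):
    print(g, visibility(1.0, 1.0, g))  # visibility equals gamma here
```

For equal intensities the visibility is simply |γ|: zero for incoherent light, one for full coherence, and in between for partially coherent light.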

If one wants to observe interference, the coherence length has to be large enough compared to the distance between the two slits. For natural sunlight, the question arises whether it is possible to achieve interference at all. Because its coherence length is only a few micrometers, the distance between the two slits would have to be very small.

Additionally, the angular distribution of the source is important: it should be as small as possible. This is why it is possible to observe interference effects with starlight: stars can be viewed as point-like light sources. Conversely, it is possible to measure the angular diameter of stars by exploiting the loss of coherence as two slits are moved apart (Michelson stellar interferometer).

If the experiment is conducted with low-intensity light, wave-particle duality can be observed. The low-intensity limit can be understood as an experiment with single photons, which interfere with themselves after passing through the slits. The screen (CCD panel) records the individual detection events. Summed over many events, this shows the same interference pattern observed with high-intensity light. This setup (with electrons) has been voted the "most beautiful experiment in physics".
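The accumulation of single detection events into a fringe pattern can be mimicked with a toy simulation. The cos² probability density and all numbers below are illustrative assumptions, not a model of a specific experiment:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Toy model: each "photon" is detected at a position x drawn from the
# double-slit probability density p(x) ~ cos^2(pi x), via rejection sampling.
def detect(n):
    events = []
    while len(events) < n:
        x = rng.uniform(-2, 2)
        if rng.uniform(0, 1) < np.cos(np.pi * x) ** 2:
            events.append(x)
    return np.array(events)

many = detect(50000)   # one detection event at a time, summed up

# Counts near the central bright fringe vs. near the first dark fringe:
bright = np.sum(np.abs(many) < 0.25)
dark = np.sum(np.abs(many - 0.5) < 0.25)
print(bright > 3 * dark)   # True: the fringes emerge from single events
```

With only a few dozen events the screen looks like random specks; the interference pattern only becomes visible in the histogram of many events, exactly as in the single-photon experiments described above.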

Besides looking at light interference, it is also possible to demonstrate wave-particle duality with electrons or even with relatively large molecules. In 1999, M. Arndt et al. showed interference with the fullerenes C60 and C70 using a material absorption grating [1]. They reproduced the results two years later using a light grating [2]. With a so-called Kapitza-Dirac-Talbot-Lau (KDTL) interference experiment, the group of M. Arndt demonstrated the quantum wave nature of even larger molecules, compounds composed of up to 430 atoms [3]. Recent experiments of the group have exceeded this number again [4,5].

Question

I have read that double-slit diffraction illustrates the wave-particle dualism in the following way: if one slit is closed, the interference pattern vanishes and the outcome corresponds to what one would expect from particle-like behaviour. If both slits are open, interference happens and shows wave-like behaviour.

How would you reconcile this picture with the fact that in both cases, "individual particles" are detected? And that at a single slit, diffraction (a wave phenomenon) also happens, just slightly less spectacularly?

References

[1] M. Arndt et al., Wave–particle duality of C60 molecules, Nature 401, 680–682 (1999)

[2] O. Nairz et al., Diffraction of complex molecules by structures made of light, Phys. Rev. Lett. 87, 160401 (2001)

[3] S. Gerlich et al., Quantum interference of large organic molecules, Nature Commun. 2, 263 (2011)

[4] Th. Juffmann et al., Real-time single-molecule imaging of quantum interference, Nature Nanotech. 7, 297 (2012)

[5] S. Eibenberger et al., Matter–wave interference of particles selected from a molecular library with masses exceeding 10 000 amu, Phys. Chem. Chem. Phys. 15, 14696–14700 (2013)

Lerntagebuch

[student’s learning journal by Matthias L.]

For centuries, the experiment played a central role in the world view of physics. As different as the experiments were, they all had in common that they delivered a measured value, which could be made plausible with the help of theoretical tools (e.g. models). Some measurement results confirmed models, some refuted them, and yet others extended them. Over the years, physicists seemed to grow more and more accustomed to understanding measured values, that is, to bringing them into agreement with models and theories. The apple falling from the tree poses a problem that can be described unambiguously and offers no occasion for interpretation, apart perhaps from the pain of the observer who gets the apple on the head. Some physicists assumed that physics had already been completely explored, which roughly resembles the state of mind of a pensioner taking a nap in his garden. He, too, might not like it if a kindergarten were to be built diagonally across the street. Far too loud, far too much bustle, and anyway it is surely not good for the environment, no, no! Physics suffered a similar fate at the beginning of the 20th century, when the classical world view was no longer able to make sense of the double-slit experiment or the Stern-Gerlach experiment (lecture of 17 Nov 2016).

The point-mass model, which the pensioner had cherished for years, now gave way to a bustling kindergarten, with barely imaginable processes and states, which I shall call quantum mechanics. Much about it resembles small children: it is still relatively young, lively, full of life, and has a great future ahead of it. Which future exactly remains uncertain; one can only state it in terms of probabilities, which makes a good transition to a powerful instrument of quantum mechanics, namely the wave function. Only with it does it become possible to describe concepts such as wave-particle duality, even though this breaks with the established classical notions of physics. Quantum mechanics is the attempt to describe situations to which our senses have no access. This led Feynman to the statement that one had better avoid the question of the randomness of quantum mechanics. Against this background, it seems more reasonable to follow the motto of "shut up and calculate" (17 Nov 2016), which shall be illustrated with the example of the double-slit experiment.

Electrons and other quantum particles behave differently from what we would expect of objects familiar to us (e.g. an apple). They not only possess particle properties (ouch!), but in addition behave like a wave. If an electron beam is directed at a screen with two narrow, parallel slits, the electrons pass through them and leave well-localised spots on the photographic plate behind it. If the particle model applied, a little heap would form behind each of the two slits, depending on which slit the electrons flew through. But this is not the case. In the described setup, electrons leave a structured pattern consisting of fringes of varying intensity, just as one knows from the interference of water waves. In classical physics, this wave-particle duality was what the Oscar ceremony long was for Leonardo DiCaprio. Only the quantum mechanical approach of Heisenberg and Schrödinger provided, with new tools, less contradictory statements. One defines a quantum mechanical system with an observable that has a sharply defined value only at the moment of measurement (collapse of the wave function). Before that, the observable can exist as a superposition of possible states. Applied to the double-slit experiment, the interference pattern is explained as an image of the wave function when the exact path of the particles is not unambiguous. The wave function of the system is then the sum of the wave functions of the two subsystems, i.e. the two slits, which leads to the superposition. If one now measures the position of the electrons by closing one slit, the wave function collapses onto one of its components and no interference pattern appears any longer.

In this context, the lecture made first contact with publications by Zurek describing decoherence (01 Dec 2016). This phenomenon describes the loss of the ability to interfere, caused by the exchange of information and energy between the system and its environment. Zurek describes quantum-chaotic systems, for which it is helpful first of all to define them (think of the kindergarten). I also came into contact with the cat of a certain Mr Schrödinger, whose fate, for reasons of animal welfare, I would not bring up at any school located closer than 10 km to an organic supermarket.

Later in the lecture, the EPR experiment and the Bell inequalities were presented (12 Jan 2017). In 1935, Einstein and his collaborators Podolsky and Rosen put forward a thought experiment that turned the statements of quantum mechanics against the theory itself. They used three premises (locality, reality and completeness), which are fulfilled by the classical theory. After Bell, in 1964, showed with his inequality a way to test the predictions of quantum theory, it became possible to show that quantum theory can be regarded as a complete theory, at the price of being "nonlocal".

Subsequently, quantum entanglement was discussed, one of the strangest phenomena of quantum physics (19 Jan 2017). According to standard quantum theory, particles do not possess a definite state; one can only assign them relative probabilities of being in one state or the other. Only when a measurement takes place are the dice suddenly cast, and the particle assumes one or the other state according to the probabilities. Things become even more abstract when two particles interact with each other: their individual probabilities are then no longer independent of one another, they are entangled.

A further session was devoted to a widespread interpretation of quantum mechanics (26 Jan 2017), namely the "many-worlds interpretation" of Everett. The prevailing doctrine in Everett's time was the Copenhagen interpretation. Its essential feature is the collapse of the wave function. There are two possibilities for the time evolution of a system: it can either proceed continuously, obeying the Schrödinger equation, or discontinuously via a measurement. As already mentioned, the wave function then collapses into an eigenstate of the measurement operator. Here it is necessary to divide the total system under consideration into a classical and a quantum mechanical domain, for only when a measurement result can be described in classical terms can it be regarded as an unambiguous event.

This lets us return, on a conciliatory note, to our pensioner and his garden. However much his need for rest is strained by the noise of the kindergarten next to his property, there may also be the occasional opportunity to pick up his grandchild from it and spend a pleasant afternoon, e.g. planting an apple tree. For all its complexity, quantum mechanics cannot do without classical terms and concepts, since our everyday life and the concrete experimental findings are shaped by them.

Lecture 1: A Little of Quantum Philosophy

20 Oct 2016 [notes by Johannes P. H. and Martin S., expanded by C. Henkel]

— Goals of the Course —

  • introduce the interpretations and their key ideas,
  • provide an overall picture.

How to understand “Interpretation”:

A physical theory provides a connection between a mathematical model (formalism) and the physical world. An interpretation tries to formulate in ordinary words what the theory (the formalism) "means" and what it tells us about "reality". In the lecture, we shall try to sketch the corresponding world view (Weltbild). We shall encounter a strange aspect of quantum physics:
there are things that are difficult even to formulate (J. Bell uses the word "unspeakable"), and things that are not even "thinkable" (we shall see what this may mean).
Many interpretations of quantum mechanics come with an elaborate mathematical formalism: we shall try to illustrate their world view and connect the "elements of reality" (a concept taken from A. Einstein's work) to the mathematical concepts. This may also require creating new ways of speaking about physical reality.

Background and Methods

The lecture should be understood, in many places, as amateur philosophy. Let us mention the old philosophical problem of whether "the world exists" independently of our sensory impressions, or whether all our impressions are just the product of our own mind. A scientist who is an amateur philosopher will probably recognise herself in the viewpoint of "intersubjective positivism" (or better, "realism"):

  • Through the communication between persons about impressions related to an object, the object enters our reality. This does not necessarily mean that the object does not exist without being talked about (which would be an extreme point of view), but if we talk about it, we recognise it in our reality. The word "intersubjective" is meant to emphasise the role of communication in this approach.
  • An independent reality may not exist or may not be directly accessible to us, but as we exchange information about our observations, we are led to conclude that there are things in reality that we can agree upon; we may infer that they "exist".
  • It may be that two interpretations are equivalent to one another with respect to experimental predictions. But that does not mean that the world views behind these interpretations are the same.

Mathematics as a language:

  • The lecture tries not to develop too much formalism.
  • But it turns out that a careful understanding of the mathematical concepts reveals also a meaningful physical understanding.
  • In this way, one can connect the elements of the formalism to “physical objects” of the “outside world”.

Notes

The “Quantum Technologies” flagship is a strategic investment of the European Union with 1 billion (1 Milliarde) euros over the next 5–10 years. The promises of the flagship are related to communication, computing, precision measurements, and simulations, which can be improved by using quantum technologies.

We shall often hear that the results of certain measurements in quantum physics are random and can be predicted only in a statistical sense. But it seems difficult to certify that a sequence of results is random in quantum theory. The mathematical theory of random numbers can help us to understand how this may be done. (Topic not covered in the course.) A server of quantum random numbers can be found at the web site qrng.anu.edu.au. If you want to buy a quantum device that generates random numbers, try www.qutools.com.
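As a hint of how one checks a sequence of results for randomness in practice, here is a minimal sketch of the simplest statistical check, the frequency (monobit) test from the NIST statistical test suite. Note the hedge built into such tests: they can reject a sequence as non-random, but passing them does not certify a quantum origin. The bit sequences below are illustrative stand-ins, not output of a quantum device:

```python
import math
import random

def monobit_pvalue(bits):
    """Frequency (monobit) test: p-value for the hypothesis that a
    sequence of 0s and 1s is balanced, as for an ideal random source."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)       # +1 for a 1, -1 for a 0
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))      # two-sided Gaussian tail

random.seed(0)
good = [random.getrandbits(1) for _ in range(10000)]   # pseudo-random bits
bad = [1] * 10000                                      # constant, non-random

print(monobit_pvalue(bad) < 0.01)    # True: the constant sequence is rejected
print(monobit_pvalue(good))          # typically well above 0.01
```

Note that the `good` sequence here is only pseudo-random; the test cannot tell it apart from genuinely quantum random bits, which is exactly the certification difficulty mentioned above.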

Quotation

„Die ganzen Zahlen hat der liebe Gott gemacht, alles andere ist Menschenwerk.“ (“God made the integers, everything else is the work of man.”)

Leopold Kronecker, 7 December 1823 — 29 December 1891, was a German mathematician who worked on number theory, algebra and logic.

Decoherent etymologies

The word "decoherence" is a typical physicists' invention. One could also have chosen "classification", although that word was already occupied with another meaning. "To decohere" means "to become less and less coherent" or "to lose coherence". In the quantum physics context (and probably only in this one), the word "coherence" is related to typical quantum features like interference and superposition: indeed, the features that make quantum objects behave in a "non-classical" way. (Avoid thinking of "incoherent" behaviour here.) During the process of decoherence (see details in this post), these features "get lost" and the system behaves in a "more classical" way. From a naive observer's perspective, a "more classical" behaviour could also be called "more coherent" in the sense of "more intelligible", a possibility that adds to the potential confusion. If we see in decoherence a process that "classifies" a system (makes it more classical), we get to the alternative wording "classification".

Decoherence

[notes by Erik T.]

The following lecture is based on W. H. Zurek, “Decoherence and the transition from quantum to classical – revisited”, arXiv:quant-ph/0306072 (June 2003), an updated version of the article in Physics Today 44, 36-44 (Oct 1991).

The border between the classical and the quantum worlds. Drawing by W. H. Zurek, re-blogged in “Reading in the dark” (St. Hilda’s underground book club).

Zurek compares classical physics to quantum mechanics via the measurements made in both. An example is given by a gravitational wave detector, which is sensitive to the indeterminate force on a mirror from the momentum transfer of photons. In quantum mechanics, one needs an amplifier to measure quantum effects because they usually occur on tiny scales: this is what N. Bohr asks for when crossing the borderline.

But this border between the macroscopic and the microscopic is itself undefined. Often it is the required precision that determines where the line is drawn; it is also related to the simplifications we make. The many-worlds theory goes so far as to push it back into the consciousness of the observer: to quote Zurek, this is notably an "unpleasant location to do physics".

Decoherence is the loss of the ability to interfere. It happens because of the exchange of information and energy between a system and its environment. Zurek especially considers quantum-chaotic systems, so first of all we need to define what a chaotic system is. A chaotic system is characterised by its strong (exponential) dependence on initial conditions and on perturbations. An example of this is the "baker's map" illustrated below: an area gets stretched and compressed and then folded back together. After a few iterations, this leads to a complete disruption of the initial structures.

Visualization of the baker’s map, inspired from a figure in S. Strogatz, Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering (Westview Press 1994).
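The stretch-cut-stack action of the baker's map can also be sketched numerically. The blob of initial points and the iteration count below are illustrative choices:

```python
import numpy as np

# One step of the baker's map on the unit square: stretch x by 2,
# compress y by 2, cut at x = 1/2, and stack the right half on top.
def baker(points):
    x, y = points
    x_new = (2 * x) % 1.0
    y_new = y / 2 + np.where(x < 0.5, 0.0, 0.5)
    return np.array([x_new, y_new])

# Start with a small blob of points and iterate: nearby points separate
# exponentially and the initial structure is shredded into thin layers.
rng = np.random.default_rng(seed=0)
pts = rng.uniform(0.4, 0.45, size=(2, 1000))   # concentrated initial area
for _ in range(10):
    pts = baker(pts)

# After ~10 iterations the x coordinates cover the whole unit interval:
print(pts[0].min() < 0.1 and pts[0].max() > 0.9)   # True
```

The width of the blob in x doubles (modulo 1) at each step, so an initial spread of 0.05 covers the whole interval after a handful of iterations; this is the strong dependence on initial conditions described above.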

For a quantum system this means that an initial distribution in phase space gets folded and stretched such that after a certain time it exhibits fine structures. The Wigner function W(x,p) then changes its sign on small scales, which is a signature of interference. Now Zurek argues that friction (coupling to an environment) destroys these structures, so that the system can no longer show interference.

Note: this way of thinking about quantum chaos cannot be fully compared to classical mechanics, because chaos does not exist here in the usual sense. If two wave functions overlap, they will always overlap, since unitary (Hamiltonian) time evolution conserves such geometric properties.

Assume a is the distance between the maxima in the phase space. Then the period of the fine structure is proportional to \Delta p = \hbar/a and

\displaystyle W(x \sim a, p) \sim \cos \left( \frac{p}{\Delta p} \right) = \cos \left(\frac{pa}{\hbar}\right)

We can now look at the friction and the resulting diffusion by using the differential equation for Brownian motion. It has the same structure as the equation for heat diffusion:

\displaystyle \frac{\partial W}{\partial t} = D \frac{\partial^2 W}{\partial p^2} + ...

So by plugging in W we get:

\displaystyle \frac{\partial W}{\partial t} = -\frac{D a^2}{\hbar^2} \cos(\frac{p}{\Delta p}) = -\frac{D a^2}{\hbar^2} W

and this is easily solved as W(t) \sim e^{-t/\tau}, where \tau = \frac{\hbar^2}{D a^2}. This is a typical solution for a system under the influence of friction, and \tau gives a characteristic timescale or, equivalently, 1/\tau a rate. We will later see how big this scale really is for a macroscopic system, but first we need an estimate for D. This is provided by the Einstein relation between friction and diffusion. Friction leads to a loss of kinetic energy, so for a friction coefficient \gamma we have

\displaystyle \frac{d}{d t} \frac{p^2}{2 M} = - \gamma p \frac{p}{M},

where - \gamma p = \dot{p}. But at the same time the kinetic energy increases because of the diffusion, so

\displaystyle \frac{d}{d t} p^2 = D

For the system to be at equilibrium, both processes have to be equal, so

\displaystyle \gamma \frac{p^2}{M} = \frac{D}{2M}

But in equilibrium, we also have

\displaystyle \frac{p^2}{2 M} = \frac{k_B T}{2} = \frac{D}{4 \gamma M}

where T is the temperature. This leads to

\displaystyle \tau = \frac{\hbar^2}{2 \gamma M k_B T a^2}

and for \gamma \sim 10^{-3}\,{\rm s}^{-1}, M \sim 10^{-3}\,{\rm kg} and a \sim 10^{-6}\,{\rm m}, so for a small macroscopic object at room temperature, we get \tau \sim 10^{-29}\,{\rm s}. This timescale is so extremely short that we cannot even begin to think about measuring it at the moment.
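The estimate can be checked by plugging numbers into \tau = \hbar^2/(2\gamma M k_B T a^2). The room temperature T = 300 K used here is an assumption, not stated explicitly in the notes:

```python
import math

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K

gamma = 1e-3    # friction rate (1/s), value from the notes
M = 1e-3        # mass (kg)
a = 1e-6        # extent of the superposition (m)
T = 300.0       # temperature (K); room temperature is assumed here

tau = hbar**2 / (2 * gamma * M * kB * T * a**2)
print(f"{tau:.1e} s")   # 1.3e-30 s
```

With these values the estimate comes out near 10^{-30} s, consistent to within an order of magnitude (depending on prefactors and the assumed temperature) with the 10^{-29} s quoted above.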

Schrödinger’s cat and Wigner’s infinite regression

Let's take a step back and look at Schrödinger's cat again, where a nuclear decay determines the state of the cat. The atom is in a superposition of decayed and not decayed; the Geiger counter is then also in a superposition, namely of detecting an event and not detecting one, and so on. Now where do we stop this chain of superpositions: at the cat? At the observer? Or even at the Universe? This is Wigner's infinite regression. It seems obvious that a good theory needs to break this line of reasoning somewhere, so it has to properly define the border between micro and macro.

Example: Schrödinger’s cat

The cat is seemingly in a superposition of states |{\rm cat}\rangle = |{\rm dead}\rangle + |{\rm alive}\rangle. Strictly speaking, this state is wrong, since the cat is a composite system. But if we assume we could write the state that way, then decoherence leads to an increase in entropy, because it removes the superposition (passage from a pure state to a statistical mixture).

Example: oscillator at temperature T

Hamiltonian

\displaystyle H = \frac{p^2}{2 M} + k\frac{x^2}{2},

with spring constant k and oscillator frequency

\displaystyle \omega = \sqrt{\frac{k}{M}},

Wigner function of thermal equilibrium state

\displaystyle W_T(x, p) = \frac{1}{Z} \exp\left[-\frac{2 H}{\hbar \omega \coth \left(\frac{\hbar \omega}{2 k_B T}\right)} \right]

So here we have an uncertainty product

\displaystyle \Delta x \Delta p = \frac{ \hbar }{ 2 } \coth \left( \frac{\hbar \omega}{2 k_B T} \right) \geq \frac{ \hbar }{ 2 },

larger than the minimum allowed by the Heisenberg uncertainty principle.
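For illustration, the coth factor in the uncertainty product can be evaluated in the two limits; the 1 THz oscillator frequency and the two temperatures are arbitrary example values:

```python
import math

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K

def uncertainty_product(omega, T):
    """Delta x * Delta p = (hbar/2) * coth(hbar*omega / (2*kB*T))
    for a harmonic oscillator in thermal equilibrium."""
    xi = hbar * omega / (2 * kB * T)
    return (hbar / 2) / math.tanh(xi)      # coth(xi) = 1/tanh(xi)

omega = 2 * math.pi * 1e12   # a 1 THz oscillator (illustrative value)

cold = uncertainty_product(omega, 0.1)     # kB*T << hbar*omega: ground state
hot = uncertainty_product(omega, 3000.0)   # kB*T >> hbar*omega: thermal state

print(abs(cold / (hbar / 2) - 1) < 1e-6)   # True: minimum uncertainty
print(hot > 10 * hbar / 2)                 # True: thermally broadened
```

At low temperature the coth factor tends to 1 and the oscillator sits at the Heisenberg minimum; at high temperature it grows as 2k_BT/(\hbar\omega), reproducing the classical thermal spread.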

Welcome,

This site provides an

  • incomplete,
  • personal,
  • provisional

selection of interpretations of Quantum Mechanics. It collects lecture notes taken by students who agreed to their publication on the web. The site is open for discussion and comments, but its maintainers reserve the right to filter its contents. For discussion of esoteric quantum shwantum, please look elsewhere.

Provisional motto: Sorry, we won’t …

… shut up and calculate!