Consistent Histories: Questions and Answers

What is the relationship of consistent histories and standard quantum
mechanics (as found in textbooks)?

Consistent histories is
standard quantum mechanics presented in a coherent fashion with the ambiguities
and lack of clarity found in the usual textbook presentations replaced with a
clear set of logical rules and principles for reasoning about a quantum system.
It is, in brief, "Copenhagen done right." There is, to be sure, more that can
be said; see the following questions.

What is the role of measurements
in standard quantum mechanics and in consistent histories?

In textbook quantum theory measurements are used to introduce
probabilities into the theory, and this feature is the source of many
conceptual difficulties and paradoxes. In particular, it gives the misleading
impression that one cannot apply statistical ideas to quantum processes in the
absence of measuring devices, e.g., to the decay of unstable particles in the
center of the sun, or in interstellar space. In addition, a great deal of
fruitless effort has been expended in attempts to resolve the quantum
"measurement problem" that arises when one wants to treat the measuring
apparatus itself as a quantum system.
By contrast, in the consistent histories approach probabilities are
introduced as part of the axiomatic foundations of quantum theory, with no
necessary connection with measurements. Quantum dynamical processes are
inherently stochastic, and the probabilities can be calculated using a
generalization of the rule originally introduced by Born. Because it does not
employ measurement as a fundamental principle, the consistent histories
approach allows one to analyze, from a fully quantum-mechanical perspective,
what actually goes on in a physical measurement process. For example, one can
show that a properly constructed measuring apparatus will reveal a property
that the measured system had before the measurement took place, even though the
system may well lose this property during the measurement process. The probabilities calculated for
measurement outcomes (pointer positions) are identical to those obtained by the
usual rules found in textbooks. What is different is that by employing
suitable families of histories one can show that measurements actually measure
something that is there, rather than producing a mysterious collapse of a wave
function.
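The agreement with the textbook rule can be illustrated in a toy model. The sketch below (the amplitudes and the CNOT-style system-pointer coupling are illustrative assumptions, not taken from the text above) shows that the probability of a pointer position equals the usual Born-rule weight of the corresponding system state:

```python
import numpy as np

# System spin in a superposition; the textbook Born rule assigns
# probabilities |c0|^2 and |c1|^2 to the two z-basis outcomes.
c0, c1 = 0.6, 0.8
psi_sys = np.array([c0, c1], dtype=complex)

# Model the apparatus as a second qubit ("pointer") starting in |0>,
# with a CNOT-style unitary that copies the z basis into the pointer.
pointer0 = np.array([1, 0], dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

psi_total = CNOT @ np.kron(psi_sys, pointer0)

# Probability that the pointer reads "0": project onto pointer state |0>.
P_pointer0 = np.kron(np.eye(2), np.outer(pointer0, pointer0))
p0 = np.real(psi_total.conj() @ P_pointer0 @ psi_total)
print(round(p0, 6))  # 0.36, equal to |c0|^2
```

Because the pointer simply copies the z basis, in the framework where S_z is definite the outcome reflects a property the spin already possessed before the interaction.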

What is wave function collapse?

Wave function collapse (or reduction) was introduced by von Neumann [1] as a separate mode of time evolution for a quantum system,
quite distinct from the unitary time evolution implied by Schrödinger's
equation. The concept leads to a number of conceptual difficulties, and is one
of the sources of the widespread (but incorrect) notion that there are
superluminal influences in the quantum world.
From the consistent histories perspective, wave function collapse is a
mathematical procedure for calculating certain kinds of conditional
probabilities that can be calculated by alternative methods, and thus has
nothing to do with any physical process. That is, "collapse" is something
which takes place in the theorist's notebook, not in the experimentalist's
laboratory. Consequently, there is no conflict between quantum mechanics and
relativity theory. (Also see the discussion of
nonlocality.)

How does consistent histories resolve the Schrödinger cat paradox?

As Schrödinger showed, the unitary time evolution
that results from solving his equation will often give rise to a superposition
of two states representing quite different states of affairs: in the case he
discussed this was a superposition of a live and a dead cat. It is very
hard to provide any sensible physical interpretation for such a state, and
consistent historians do not attempt to do so. Instead they note that unitary
time evolution is only one of many possible quantum descriptions of this
situation, and there is another perfectly valid stochastic description in
which with some probability the cat is alive and with some probability it is
dead. In addition, the unitary and the stochastic description are carried out
using two separate, incompatible frameworks, so they cannot be combined; to do
so would be like saying that a spin-half particle has both an x and a z
component of angular momentum equal to +1/2.

How is the EPR paradox handled in consistent histories?

Einstein, Podolsky, and Rosen (EPR) in a celebrated paper
[2] showed that by measuring a property of a system A located far
away from another system B one can, under suitable conditions, infer something
about the system B. By itself the possibility of such an indirect measurement
is not at all surprising, as one can see from the following example. Colored
slips of paper, one red and one green, are placed in two opaque envelopes,
which are then mailed to scientists in Atlanta and Boston. The scientist who
opens the envelope in Atlanta and finds a red slip of paper can immediately
infer, given the experimental protocol, the color of the slip of paper
contained in the envelope in Boston, whether or not it has already been opened.
There is nothing peculiar going on, and in particular there is no mysterious
influence of one "measurement" on the other slip of paper. The quantum
mechanical situation considered by EPR is more complicated than indicated by
this example in that one has the possibility of measuring more than one
property of system A and also considering more than one property of system B.
However, when one does a proper analysis [3], the
conclusion is just the same as in the "classical" case of the colored slips of
paper.
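For measurements along the same axis, the singlet state of two spin-half particles reproduces the envelope correlations exactly. A minimal numerical sketch (the singlet state and the variable names are illustrative assumptions):

```python
import numpy as np

z_p = np.array([1, 0], dtype=complex)
z_m = np.array([0, 1], dtype=complex)

# Singlet state of two spin-half particles: perfectly anticorrelated
# z components, analogous to the red/green slips in the two envelopes.
psi = (np.kron(z_p, z_m) - np.kron(z_m, z_p)) / np.sqrt(2)

def joint_prob(a, b):
    """Born probability that A finds state a and B finds state b."""
    amp = np.kron(a, b).conj() @ psi
    return abs(amp) ** 2

print(round(joint_prob(z_p, z_p), 6))  # 0.0: never both "up"
print(round(joint_prob(z_p, z_m), 6))  # 0.5: A finding "up" implies B "down"
```

Finding "up" in Atlanta thus licenses an immediate inference about Boston, just as opening one envelope does, with no influence traveling between the two.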
The analysis carried out by EPR was based on the theory of quantum
measurements available at that time, and their conclusion, that there was
something unsatisfactory about the way in which quantum mechanics had been
formulated, was basically correct, even if they themselves were unable to come
up with a better version.

Is quantum mechanics nonlocal?

This depends on what one means by "nonlocal." Two separated quantum
systems A and B can be in an entangled state that lacks any classical analog.
However, it is better to think of this as a nonclassical rather than as a
nonlocal state, since doing something to system A cannot have any influence on
system B as long as the two are sufficiently far apart. In particular, quantum
theory gives no support to the notion that the world is infested by mysterious
long-range influences that propagate faster than the speed of light. Claims
to the contrary are based upon inconsistent or inadequate formulations of
quantum principles, typically with reference to measurements. (Also see
measurements,
Einstein-Podolsky-Rosen.)

What are frameworks, and what is the single-framework rule?

A framework or consistent family is a set of mutually exclusive
possibilities to which one can assign probabilities according to the rules of
quantum theory. It is the analog of a sample space in ordinary probability
theory. For example, the two possibilities +1/2 and -1/2 for the z component
S_{z} of spin angular momentum of a spin-half particle are mutually
exclusive and form a quantum framework. On the other hand, +1/2 for the z
component and +1/2 for the x component S_{x} are incompatible
in that it is meaningless to combine S_{z}=+1/2 and
S_{x}=+1/2 in a single quantum description. The reason is that there
is nothing in the quantum Hilbert space which can correspond to this
combination, and assuming that S_{z}=+1/2 AND S_{x}=+1/2 makes
sense leads to logical difficulties [4]. Values for
S_{z} and S_{x} are not mutually exclusive in the sense that if
one is true the other is necessarily false. Instead, they are noncomparable:
trying to construct logical relationships between them does not make sense.
Since they are not mutually exclusive, S_{z} and S_{x} cannot
appear in the same framework. While S_{z} and S_{x} can be
described by means of separate frameworks, these two frameworks are
incompatible. One cannot combine a quantum description based upon some
framework with a description based upon an incompatible framework, because the
result would be meaningless (i.e., quantum theory can assign it no meaning).
This is known as the single-framework rule.
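The Hilbert-space obstruction behind this rule can be checked directly: the projectors representing S_z = +1/2 and S_x = +1/2 do not commute, and their product is not a projector, so no subspace of the Hilbert space represents the conjunction. A minimal sketch, in units with hbar = 1 (a convenience assumption):

```python
import numpy as np

# Projectors onto S_z = +1/2 and S_x = +1/2 for a spin-half particle
# (units with hbar = 1, so S = sigma / 2).
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
P_z = (np.eye(2) + sigma_z) / 2
P_x = (np.eye(2) + sigma_x) / 2

# The two projectors do not commute...
commute = np.allclose(P_z @ P_x, P_x @ P_z)

# ...and their product is not idempotent, hence not a projector, so
# "S_z = +1/2 AND S_x = +1/2" corresponds to nothing in the Hilbert space.
product = P_z @ P_x
idempotent = np.allclose(product @ product, product)

print(commute, idempotent)  # False False
```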
Frameworks are used for families of histories as well as for
quantum states at a single time. In this case, not only must the different
possibilities be mutually exclusive, but they must satisfy consistency
conditions in order that quantum probabilities can be assigned in a
consistent way. Frameworks of histories that cannot be combined in a way that
satisfies the consistency conditions are by definition incompatible, and once
again the single-framework rule states that descriptions using incompatible
frameworks cannot be combined.
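The consistency conditions themselves can be checked numerically. The sketch below uses the Gell-Mann and Hartle decoherence functional D(a,b) = Tr[C_a rho C_b†]; the two-time spin-half example, trivial dynamics, and choice of bases and initial state are illustrative assumptions. One family of histories satisfies the conditions and one does not:

```python
import itertools
import numpy as np

z_p = np.array([1, 0], dtype=complex)
z_m = np.array([0, 1], dtype=complex)
x_p = (z_p + z_m) / np.sqrt(2)
x_m = (z_p - z_m) / np.sqrt(2)

def proj(v):
    return np.outer(v, v.conj())

def chain_op(history):
    """Chain operator C = P(t_n) ... P(t_1); trivial dynamics (U = 1) assumed."""
    C = np.eye(2, dtype=complex)
    for P in history:            # projectors listed in time order
        C = P @ C
    return C

def decoherence(ha, hb, rho):
    """Decoherence functional D(a,b) = Tr[C_a rho C_b†]."""
    return np.trace(chain_op(ha) @ rho @ chain_op(hb).conj().T)

def is_consistent(family, rho):
    """Consistency: D(a,b) vanishes for every pair of distinct histories."""
    return all(abs(decoherence(a, b, rho)) < 1e-12
               for a, b in itertools.combinations(family, 2))

rho = proj(x_p)                  # initial state |x+>
z_basis = [proj(z_p), proj(z_m)]
x_basis = [proj(x_p), proj(x_m)]

# Two-time families: a projector at t1 followed by a projector at t2.
family_zz = [(P1, P2) for P1 in z_basis for P2 in z_basis]
family_zx = [(P1, P2) for P1 in z_basis for P2 in x_basis]

print(is_consistent(family_zz, rho))  # True:  probabilities can be assigned
print(is_consistent(family_zx, rho))  # False: interference spoils consistency
```

In the second family the z-basis alternatives at t1 interfere in the final x-basis projectors, so quantum theory assigns that family no probabilities at all.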

Given two or more frameworks, which one provides the
correct description of a quantum system?

There is no fundamental principle of quantum mechanics, no law of
nature, that singles out one framework as the only possibility for a "correct"
description. Consider a classical spinning body and a description, X, which
assigns some value to the x component of its angular momentum L_{x}.
Let Z be a second description that assigns a value to the z component of
angular momentum L_{z}. Can one say that X rather than Z is the
correct description? Clearly this would be silly. Instead one thinks of X and
Z as part of a total description of the angular momentum that includes both of
them. The case of a quantum spin-half particle is similar: it makes no sense
to say that a description assigning a value to S_{x} is the correct
description rather than a description that assigns a value to S_{z}.
However, unlike the classical case, there is no total description that includes
both S_{x} and S_{z}, for these two frameworks are incompatible
and cannot be combined, due to the mathematical properties of the quantum
Hilbert space.
In practice, physicists choose among the various possible quantum
descriptions depending upon the question they want to answer. In the case of
Schrödinger's cat there is a framework which is
useful for answering (at least in a probabilistic sense) the question of
whether the cat is dead or alive, and this framework is incompatible with the
one corresponding to unitary time evolution according to Schrödinger's
equation, which leads to a superposition state. Either description is a valid
one from the perspective of fundamental quantum theory. On the other hand,
they are not at all the same in terms of the sorts of questions that they allow
one to address. If one employs the framework based on unitary time evolution,
the question of whether the cat is dead or alive is not meaningful, since it is
like asking for the value of S_{z} when S_{x}=+1/2 for a
spin-half particle. Indeed, it does not even make sense to talk about a cat,
since those properties that we normally associate with a cat (small furry
animal with four legs and a tail, etc.) are incompatible with the description
based on the superposition state.
It is worth emphasizing that as long as one is considering a
particular physical question, such as the probability that the cat will be dead
or alive, the multiplicity of possible frameworks causes no problem, for they
will all give the same answer (for a given experimental situation, initial
conditions, etc.) to the same question.

Hasn't the consistent histories approach been shown to be
inconsistent?

Various claims have been made to the effect that the consistent
histories formulation of quantum theory leads to contradictions or to similar
logical difficulties. For a discussion of some of these see
[4] and [5]. If examined with care, such claims usually
amount to a refusal by the critic to accept the
single-framework rule as a legitimate part of the consistent histories
approach. When this rule is ignored or replaced by something else, the altered
interpretation is then shown to be inconsistent. The situation is similar to
what would happen if someone who accepted the Newtonian idea of absolute
time were to insist that this be part of special relativity, and
then claim that the latter is inconsistent.
It is always possible that some fatal flaw in the consistent
histories program (including the single-framework rule) has been overlooked.
But none has been discovered up to now, and the failure of both proponents
and critics to find any logical inconsistency suggests that the current
formulation is fairly robust.

What is the difference between "consistent histories" and
"decoherent histories"?

They are two names for the same thing. The term "consistent
histories" was introduced by Griffiths in 1984, and was also used by
Omnès. The term "decoherent histories" was introduced by Gell-Mann and
Hartle in 1990. Certain minor differences in the earlier formulations have by
now largely disappeared. For example, both Griffiths and Omnès now use
a consistency (or decoherence) condition first formulated by Gell-Mann and
Hartle.

Where is a good place to learn the essentials of the consistent
histories approach?

For a list of articles and books, click here.
References
[1]
J. von Neumann, Mathematical Foundations of Quantum
Mechanics, (Princeton University Press, 1955).
[2]
A. Einstein, B. Podolsky and N. Rosen, "Can quantummechanical
description of physical reality be considered complete?," Phys. Rev. 47
(1935) 777.
[3]
R. B. Griffiths, "Correlations in separated quantum systems: a
consistent history analysis of the EPR problem," Am. J. Phys. 55 (1987) 11.
Also see Chs. 23 and 24 of Consistent Quantum Theory.
[4]
R. B. Griffiths, "Choice of consistent family, and quantum
incompatibility," Phys. Rev. A 57 (1998) 1604,
quant-ph/9708028.
[5]
R. B. Griffiths, "Consistent quantum realism: A reply to Bassi and
Ghirardi," J. Stat. Phys. 99 (2000) 1409,
quant-ph/0001093.