
Re: [Phys-l] paper: "The quantum state cannot be interpreted statistically"



Let me emphasize that I have not worked my way through the paper:
I don't fully understand it, and have not yet made a decent first
pass at trying to understand it. Therefore what I am about to say
must be considered hypothetical. My plan is to re-read the paper,
testing these hypotheses.

On 11/19/2011 04:13 AM, Derek McKenzie wrote:
> I am unable to understand early statements in the paper like "If the
> quantum state is a physical property of the system". It is not
> standard terminology to refer to a 'state' as a 'property', so I have
> no idea what this statement even means. In fact, I'm really not sure
> what they mean by 'property of a system' to begin with.
>
> Given that the entire thesis of the paper is directed at such
> foundational concepts I am very frustrated that I can't work out what
> they mean by the concepts they are using.

That's a good, valid criticism IMHO.

In particular, I reckon the title "The quantum state cannot be
interpreted statistically" is absurd, because /everything/ can be
interpreted in statistical terms. The most that can be said is
that some statistics are more robust than others. For example,
when somebody says there is a 100% chance of rain tomorrow, they
are using statistical ideas and statistical terminology.

All the key results in the paper are expressed in statistical
terms.

I conjecture that one of the key ideas that is used but not stated
in clear conventional terms is the idea of /sufficient statistic/.
http://en.wikipedia.org/wiki/Sufficient_statistic

In particular, this applies to the "coin" analogy at the top of
page 2 in the paper. Knowing the outcome of the previous toss
is not a sufficient statistic for predicting the outcome of the
next toss ... or (!) even for knowing the /probabilities/ of the
various possible outcomes of the next toss.

In contrast, in the usual model of a coin toss, there is a single
parameter P that allows us to model the coin as a Bernoulli
process, and this P is a sufficient statistic. Knowing this P
makes the coin toss (under the usual assumptions) a zero-order
Markov process.
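
To make this concrete, here is a little Python sketch (mine, not
anything from the paper) of the usual model. The numbers P = 0.6
and n = 10000 are arbitrary choices, just for illustration.

  import random

  random.seed(42)                  # for reproducibility
  P = 0.6                          # true bias; arbitrary, for illustration
  n = 10000
  tosses = [1 if random.random() < P else 0 for _ in range(n)]

  # The head-count is a sufficient statistic: the maximum-likelihood
  # estimate of P depends on the data only through this one number.
  heads = sum(tosses)
  print("estimated P:", heads / n)

  # Shuffling the sequence changes nothing; the /order/ of the tosses
  # carries no extra information (zero-order Markov process).
  random.shuffle(tosses)
  print("after shuffling:", sum(tosses) / n)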

Tangential remark: If we relax the assumptions, we can build
a peculiar flipping machine that flips the coin approximately
180 degrees every time ... perhaps using the technique some
cooks use to flip pancakes without a spatula. This produces a
first-order Markov process, where each outcome is anticorrelated
with the previous outcome.
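
Here is a sketch of that machine, with a parameter q standing in
for "approximately 180 degrees every time"; q = 0.9 is a made-up
number.

  import random

  random.seed(42)
  q = 0.9          # chance the machine lands the coin on the opposite face
  n = 10000
  x = 1            # initial face: 1 = heads, 0 = tails
  outcomes = []
  for _ in range(n):
      if random.random() < q:
          x = 1 - x        # the pancake flip
      outcomes.append(x)

  # Lag-1 autocorrelation; a negative value means successive tosses
  # are anticorrelated.
  m = sum(outcomes) / n
  cov = sum((u - m) * (v - m) for u, v in zip(outcomes, outcomes[1:])) / (n - 1)
  var = sum((u - m) ** 2 for u in outcomes) / n
  print("lag-1 autocorrelation:", cov / var)   # near 1 - 2q = -0.8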

You can imagine even more peculiar coin tosses, for example
if the machine damages the coin a little bit each time, such
that additional variables are needed to describe the state of
the system.
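
As a toy model of that (entirely my own invention, not anything in
the paper), let the bias drift a little with each toss. Then neither
the previous outcome nor the original P is a sufficient statistic;
you also need to know the hidden wear variable.

  import random

  random.seed(42)
  P = 0.5            # initial bias
  wear = 0.0001      # hidden variable: bias drift per toss (a made-up number)
  outcomes = []
  for _ in range(5000):
      outcomes.append(1 if random.random() < P else 0)
      P = min(1.0, P + wear)     # each toss damages the coin a little

  # The overall head-count no longer tells the whole story: the early
  # tosses and the late tosses come from different distributions.
  half = len(outcomes) // 2
  print("first half: ", sum(outcomes[:half]) / half)
  print("second half:", sum(outcomes[half:]) / half)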

Returning to the main topic: Here are some of the key issues
concerning the foundations of quantum mechanics:
1) The tension between:
1a) relativistic causality, and
1b) entangled states, which require correlations to be
established, including correlations between events that
are separated by a spacelike interval. This does not
require /information/ to be transmitted across spacelike
intervals, so it does not /directly/ trample our ideas
of relativistic causality, if those ideas are sufficiently
carefully stated in terms of information.

This item (1) sometimes goes by the name of "collapse of
the wavefunction" but that is just a name, not an explanation.
IMHO it is a rather ugly name. One is also reminded of
Einstein's complaint about "spooky action at a distance".

2) The possible role of /hidden variables/ as a way to
reconcile QM with our intuition.

We can apply these ideas as follows: IMHO the best way to
summarize the famous Bell inequalities is to say that Bell
disproved a wide class of hidden-variable theories, including
the most plausible "intuitive" hidden-variable theories.
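
To make the contrast concrete, here is a sketch comparing the CHSH
combination of correlations for (a) one representative local
hidden-variable model and (b) the quantum singlet state. This is
standard textbook material, not anything specific to the Pusey et al.
paper; the particular hidden-variable model (each pair of particles
carries a shared random angle) is just one member of the class that
Bell ruled out.

  import math, random

  random.seed(42)
  a, a2, b, b2 = 0.0, math.pi/2, math.pi/4, 3*math.pi/4   # detector settings

  def E_hidden(x, y, trials=200000):
      # Local hidden-variable model: each pair carries a shared random
      # angle lam; each detector output (+1 or -1) is determined locally
      # by its own setting and lam.
      total = 0
      for _ in range(trials):
          lam = random.uniform(0, 2 * math.pi)
          A = 1 if math.cos(x - lam) > 0 else -1
          B = -1 if math.cos(y - lam) > 0 else 1
          total += A * B
      return total / trials

  def E_quantum(x, y):
      # Standard quantum prediction for the spin singlet state.
      return -math.cos(x - y)

  for name, E in [("hidden-variable", E_hidden), ("quantum        ", E_quantum)]:
      S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
      print(name, "CHSH |S| =", round(abs(S), 3))

  # Every local hidden-variable model obeys |S| <= 2 (the CHSH bound);
  # the singlet state gives |S| = 2*sqrt(2) =~ 2.828 at these settings.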

I hypothesize that what Pusey et al. have done is to disprove
another class of hidden-variable theories.

I'm not sure it will ever be possible to disprove /all/ hidden-
variable theories, because theorists are too ingenious at
coming up with new ones. However, efforts in this direction
are not wasted, because at some point the hidden-variable
theories become so complex and non-intuitive that they are
not serious competitors to plain vanilla quantum mechanics.

=========

To summarize my hypothesis and my suggestion for how to proceed:
(a) compare the paper to our current understanding of Bell's
inequalities, and (b) recast the paper using standard terminology,
such as hidden variable, Markov process, sufficient statistic,
et cetera.