
Re: [Phys-l] Basic statistics



Regarding Brian's comments:

...
I would refer everyone on this thread to the wonderful article "From
Laplace to Supernova SN 1987A: Bayesian Inference in Astrophysics"
by Tom Loredo, located at
http://www.astro.cornell.edu/staff/loredo/bayes/tjl.html

From my personal experience trying to learn basic statistics, I
always got hung up on the notion of a population, and of the
standard deviation of the mean. I found the Bayesian approach to
be more intuitive, easier to apply to real data, and more
mathematically sound (there is a great article by E.T. Jaynes at
http://bayes.wustl.edu/etj/articles/confidence.pdf where he outlines
several pathologies in standard stats).

I, too, want to second Brian's endorsement of the Bayesian approach
to probability theory--especially as interpreted by Jaynes and his
school, with its maximum entropy procedure for determining
probability distributions from incomplete data.
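
For anyone who hasn't seen the maximum entropy procedure in action,
here is a minimal textbook sketch (my own illustration, not taken
from any particular paper): suppose all we know about a discrete
variable taking values E_i is its expectation <E>. Maximizing the
entropy

    H = - \sum_i p_i \ln p_i

subject to \sum_i p_i = 1 and \sum_i p_i E_i = <E> (via Lagrange
multipliers) yields the exponential assignment

    p_i = exp(-\lambda E_i) / Z,   Z = \sum_i exp(-\lambda E_i),

with \lambda fixed by the constraint on <E>. The incomplete data
(here, just a mean) determine the distribution, and nothing else is
smuggled in.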

Bottom line: there is no population in the Bayesian approach.

True.

Probability is a measure of one's state of knowledge, not a property
of the system.

Whoa, there. It's true that this approach does not interpret the
fundamental meaning of probabilities as the asymptotic relative
frequencies of particular outcomes for infinite ensembles of copies
of the system (or random process) at hand. But a Bayesian
interpretation of probability is not really based on anything as
subjective as the state of knowledge of any particular person's
mind. Such a characterization is really a straw man that opponents
of that approach tend to level at it. Rather, this interpretation of
probability is just as objectively defined as the relative frequency
approach. In the Bayesian approach, probability is a measure of the
degree of confidence that an ideal, perfectly rational mind would
have about the state (or outcome) of the system (or random process),
given only all the objectively available data extant for that
system/process on which to base such an assignment. The Bayesian
approach is about the procedure for most rationally assigning
degrees of confidence to the possible outcomes of some random
process when armed with just all the objectively available data.
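
For those who want the formula, the updating rule that implements
this assignment is just Bayes' theorem. Writing H for a hypothesis
about the system, D for the data, and I for the background
information,

    p(H|D,I) = p(D|H,I) p(H|I) / p(D|I),

where p(H|I) is the degree of confidence warranted before the data
arrive and p(D|H,I) is the likelihood of the data under H.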

In doing so, all of the strained attempts at creating a fictitious
population out of measurements vanish (such as, say, analyzing
measurements of the mass of the moon by imagining many hypothetical
universes of "identical" measurements). One instead is quantifying
one's state of knowledge.

Again, it's not the subjective state of the actual knowledge of an
actual less-than-completely-rational mind that is relevant. Rather,
it is better thought of as the state of knowledge of an 'ideal',
perfectly rational mind that is supplied with *only* *all* the
objectively available data about the situation, a situation for
which only partial information is extant.
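
To make that concrete, here is a minimal sketch (my own hypothetical
numbers, with an assumed known noise level and a flat prior for the
true value) of how such a repeated-measurement problem is handled
without inventing any population:

  import numpy as np

  # Hypothetical repeated measurements of some quantity (arbitrary
  # units), with an assumed known noise level per measurement.
  measurements = np.array([7.348, 7.352, 7.345, 7.350, 7.349])
  sigma = 0.004  # assumed standard deviation of a single measurement

  # With a Gaussian likelihood and a flat prior for the true value mu,
  # the posterior for mu is Gaussian with these parameters:
  n = len(measurements)
  posterior_mean = measurements.mean()
  posterior_std = sigma / np.sqrt(n)

  print(f"mu = {posterior_mean:.4f} +/- {posterior_std:.4f}")

The output is read as a degree of confidence about mu given just
those five numbers, not as a statement about imaginary repetitions
of the experiment.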

In almost all easy cases, the Bayesian approach yields the *exact
same* numerical result as the standard approach. The interpretation
is a lot easier, and a lot easier to communicate to students.


Brian Blais
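
It is also easy to see why the numbers coincide in the simple cases:
with a flat prior the posterior is proportional to the likelihood,
so for N Gaussian measurements with known sigma the posterior for
the true value mu peaks at the sample mean xbar with width
sigma/sqrt(N)--numerically the same xbar +/- sigma/sqrt(N) that the
standard treatment reports as the estimate and its standard error.
Only the interpretation of those numbers differs.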

David Bowman