
Re: [Phys-l] Symbol for uncertainty



On 05/10/2011 07:19 PM, James McLean wrote:

* Many at my college use capital-delta-x (for variable x). I prefer to
avoid that because the same symbol is used for "change in x".
* Some like to use sigma-sub-x. But the way I read things, that really
should be reserved specifically for standard deviation. Some
uncertainties aren't standard deviations.
* NIST recommends u(x), but I don't particularly like that because it
looks like a functional relationship.

The key words there are "variable" and "functional". I recommend
using each of them in a narrower, more formal sense:

There is a rather deep point to be made here:
*) We can write x_i to denote the ith particular observation (aka outcome).
Each such x_i is a number.
*) Each x_i is drawn from some underlying probability distribution X.
This X is absolutely not a number. It's more like a function.

We must clearly distinguish between the distribution X and a particular
observation x_i drawn from X. It is conventional but very unhelpful
to speak in terms of the "random variable" x when we really should be
talking about the distribution X.

In simple cases we can say that the probability of finding a given
observation x_i near some point x is X(x)dx where this x is a plain
old bona-fide dummy variable (not some weird "random" variable).
We can also write X(b)db, which means the same thing, using b as
the new dummy variable.
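To make this concrete, here is a small sketch (my own illustration, not from the original post) using the pair-of-dice example. For a discrete distribution the analogue of X(x)dx is a probability mass function, and the argument really is an ordinary dummy variable: X evaluated at 5 is the same whether we call the argument x, b, or anything else.

```python
from fractions import Fraction
from collections import Counter

# Discrete analogue of X(x)dx: the pmf of the total shown by a pair of dice.
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
X = {total: Fraction(n, 36) for total, n in counts.items()}

# The argument is a bona-fide dummy variable: X[5] and X[b] with b = 5
# denote the same value. Nothing "random" about it.
b = 5
assert X[5] == X[b] == Fraction(4, 36)

# The pmf sums to 1, as any probability distribution must.
assert sum(X.values()) == 1
```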

So, we agree that it doesn't make sense to talk about u(x_i) because
x_i is just a number, and has no uncertainty. If I roll a pair of
dice and observe five dots, then x_i=5 with no uncertainty. Meanwhile
we can talk about the distribution X and the width thereof. Uncertainty
is a property of X as a whole, not of any particular observation drawn
from X. Therefore we say u is a functional of X and denote it u[X] ...
using square brackets to emphasize that it is a functional (not a mere
function). This is consistent with writing m[X] to represent the mean
of the distribution X.
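The distinction can be sketched in a few lines of code (my illustration, with hypothetical names roll_pair, m_of_X, u_of_X): each observation x_i is a plain number with no uncertainty, while m[X] and u[X] are functionals of the distribution as a whole, here estimated from a large sample.

```python
import random
import statistics

random.seed(0)

def roll_pair():
    """One observation x_i: the total shown by a pair of dice."""
    return random.randint(1, 6) + random.randint(1, 6)

x_i = roll_pair()   # a plain number; it has no uncertainty attached

# Approximate the distribution X by a large sample of observations.
sample = [roll_pair() for _ in range(100_000)]

m_of_X = statistics.mean(sample)   # m[X], a functional of the distribution
u_of_X = statistics.stdev(sample)  # one possible u[X] (here, the std deviation)

print(f"x_i = {x_i} (exact, no uncertainty)")
print(f"m[X] = {m_of_X:.2f}, u[X] = {u_of_X:.2f}")
```

For a pair of fair dice, m[X] comes out near 7 and the standard deviation near 2.4; note that neither number is a property of any single roll.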

Another option is to write X as a subscript, i.e. m_X ± u_X.

Another option is simply to write A ± B and then explain that A is
the nominal value of the distribution X while B is the corresponding
uncertainty.

The one thing that is not a viable option is to use x as a shorthand
for the distribution X. I am quite aware that there are entire books
on the subject of "random variables" that use this notation, but it
is truly a terrible notation. It is just begging to be misunderstood.
It is doubly-especially unsuitable for use in an introductory course.

The conventional notation for probability such as P(x|y) and P(y|x)
is bad enough already. The "random variable" business just makes it
worse.

If you want to think about probability without driving yourself crazy,
I strongly recommend the set-theoretic approach. This allows you to
make sense of the subject, without dragging in the really heavy
artillery i.e. the measure-theoretic approach. A good reference for
this is Tom Apostol _Calculus_ volume II.

Tangential remark: I highly recommend Apostol's _Calculus_ for many
reasons. http://www.av8n.com/physics/apostol.htm

So let the masses speak!

Is the weight of the evidence proportional to the mass?