
Re: [Phys-l] How does one combine uncertainty in multiple sampling?



On 11/14/2010 12:03 PM, Folkerts, Timothy J wrote:

This sounds like a binomial distribution situation,

That only works if the samples are independent.

With two (or more) polls, you could find the total number of responses
and the weighted average for p, and from those compute the new
uncertainty. The total uncertainty will be larger than for either poll
separately, but the relative uncertainty will be smaller.

Again: That only works if the samples are independent.
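The pooling recipe quoted above can be sketched numerically. This is a minimal illustration with hypothetical poll counts (the specific numbers are invented for the example); it pools independent polls by summing counts and recomputes the binomial standard error:

```python
import math

def pool_polls(polls):
    """Pool independent polls, each given as (n_respondents, n_yes),
    into one estimate of p with its binomial sampling error."""
    n = sum(n_i for n_i, _ in polls)
    k = sum(k_i for _, k_i in polls)
    p = k / n
    se = math.sqrt(p * (1 - p) / n)   # standard error of a proportion
    return p, se

# Two hypothetical polls: 520/1000 and 1090/2000 answered "yes".
p1, se1 = pool_polls([(1000, 520)])
p2, se2 = pool_polls([(2000, 1090)])
p, se = pool_polls([(1000, 520), (2000, 1090)])
# The pooled sampling error is smaller than either poll's alone --
# but only because the samples are assumed independent.
```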

The pollsters habitually report the "statistical margin of error"
aka "sampling error" which is a _lower bound_ on the actual overall
uncertainty.

If respondents are systematically deceiving the pollsters, increasing
the size of the sample will *not* give you a relative uncertainty
that goes down like 1/sqrt(N). It might not go down at all.
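The point about systematic deception can be made concrete with invented numbers. Suppose (hypothetically) the true support is 0.50 but a fixed misreporting bias makes every poll read 0.45; then the sampling term shrinks like 1/sqrt(N) while the total error floors at the bias:

```python
import math

# Hypothetical: true support p_true = 0.50, but systematic
# misreporting makes every poll read p_obs = 0.45.
p_true, p_obs = 0.50, 0.45
bias = abs(p_obs - p_true)

totals = []
for n in (100, 10_000, 1_000_000):
    sampling = math.sqrt(p_obs * (1 - p_obs) / n)   # ~ 1/sqrt(N)
    total = math.sqrt(sampling**2 + bias**2)        # floored by the bias
    totals.append(total)
# No matter how large N gets, total never drops below 0.05.
```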


On 11/13/2010 11:44 PM, Bernard Cleyet wrote:
a newsletter that claims one can't have a combined error less than the lowest individual one.

That's absurd in the other direction. I assume they are talking
about the relative error, i.e. the percentage error. Nobody in their
right mind talks about absolute error in this context without being
ultra-specific about it.

In any case, if there are no systematic errors, i.e. if the sampling
error is the dominant contribution to the uncertainty, then the
relative uncertainty goes down like 1/sqrt(N) ... as TJF pointed out.

If we are drawing independent samples from a given distribution, a
large-sample mean is a better estimator of the mean of the underlying
distribution than a small-sample mean. This has been known and widely
used for hundreds of years. Maybe thousands.
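The 1/sqrt(N) scaling is easy to check by Monte Carlo. This sketch (hypothetical setup, using uniform(0,1) draws) measures the empirical spread of the sample mean at two sample sizes; quadrupling N should roughly halve the spread:

```python
import math
import random

random.seed(42)

def spread_of_mean(n, trials=2000):
    """Empirical standard deviation of the mean of n uniform(0,1) draws."""
    means = [sum(random.random() for _ in range(n)) / n
             for _ in range(trials)]
    mu = sum(means) / trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / trials)

s100 = spread_of_mean(100)
s400 = spread_of_mean(400)
# s100 / s400 should come out close to sqrt(400/100) = 2.
```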

=============

Bottom line: Not everything averages out ... but a lot of things do.