
Re: [Phys-L] uncertainty on the uncertainty




On 2017/Jul/06, at 14:34, brian whatcott <betwys1@sbcglobal.net> wrote:

On 07/06/2017 04:02 AM, Peter Schoch wrote:
3. I do expect them to then compare their average ± the standard error
to an accepted value (if possible) or to analyze the results; i.e., what if
the error is artificially low? What could have caused that? Is the sample
size unreasonably small?
^^^^^^^^^^^^^^^^^^^
Not sure where I got this from, many years ago; possibly a chemistry tutor: when you have no idea how many samples would constitute a reasonable number, take at least 18 samples :-)

Brian W.



From TI’s book “Calculating Better Decisions” [1], p. 6-10, under “What happens when I don’t have 30 samples?”: "…It turns out as the number of samples goes below 30, the “normal” curve can no longer be “accurately” [2] used to describe the distribution of the sample means."
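A quick illustration of that rule of thumb (the degrees-of-freedom values chosen here are my own, and the critical values are standard two-sided 95% table entries): below about 30 samples, the Student-t critical value is noticeably wider than the normal (z) value, so an interval built with z = 1.96 would be too narrow.

```python
# Two-sided 95% Student-t critical values, keyed by degrees of freedom.
t_95 = {4: 2.776, 9: 2.262, 29: 2.045, 99: 1.984}
z_95 = 1.960  # two-sided 95% normal critical value

for df, t in sorted(t_95.items()):
    # How much wider a t-based interval is than a z-based one.
    print(f"df={df:3d}: t={t:.3f}, inflation over z = {t / z_95:.1%}")
```

By df = 29 the t value is within about 4% of z, which is roughly why 30 is the traditional cutoff.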

A later book from TI [3] says: “For samples …. (n>30), sigma Xbar can be considered equal to …. the sample standard deviation / sqrt n.”


[1] Supplied with my ancient calculator. (1977)
[2] my quotes.
[3] 1981


bc thinks waiting for JD is better than using an internet search engine, or bothering to peruse his much more advanced books.

p.s. The first book does include an exercise finding all the possible samples of two (n=2) from a collection of five grades.
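That exercise is easy to re-create (the five grades below are invented, not the book's): enumerate every sample of two, and check that the mean of the sample means equals the population mean, which is the point of the sampling-distribution demonstration.

```python
import itertools
import statistics

grades = [60, 70, 75, 85, 90]  # hypothetical grades

# All C(5,2) = 10 possible samples of size two.
samples = list(itertools.combinations(grades, 2))
sample_means = [statistics.mean(s) for s in samples]

print(len(samples), "samples")
print("mean of sample means:", statistics.mean(sample_means))
print("population mean:     ", statistics.mean(grades))
```

The two means come out identical, since the sample mean is an unbiased estimator of the population mean.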