In his Phys-L post of 16 Apr 2005 under the above title, Jack Uretsky
(2005) wrote:
"I don't understand. If you can't control the variables, how can you get
statistically "valid" results?
My experimentalist friends (in particle physics) now divide
uncertainties into a "statistical" part and a "systematic" part.
These two parts must be combined somehow (how to combine is still IMO
an unsolved problem) in order to arrive at a meaningful statement of
uncertainty. When several experiments measure the same quantity,
then a comparison of quoted uncertainties gives one an intuitive
feeling for the uncertainty of our knowledge of the quantity being
measured.
I have seen nothing in the field of measuring teaching techniques that
tells me that we have much insight into how to make "statistically valid"
comparisons."
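Although Uretsky regards the combination of the two parts as an unsolved problem, a common working convention in particle physics is to add the statistical and systematic parts in quadrature when quoting a single total uncertainty. A minimal sketch (the function name and the numbers are illustrative, not from the post):

```python
import math

def combine_in_quadrature(stat, syst):
    """Combine a statistical and a systematic uncertainty in quadrature:
    sqrt(stat^2 + syst^2). This is one common convention, not a settled
    answer to the combination problem discussed above."""
    return math.sqrt(stat**2 + syst**2)

# Hypothetical measurement with stat = 0.03 and syst = 0.04 (same units):
total = combine_in_quadrature(0.03, 0.04)
print(f"total uncertainty = {total:.2f}")  # 0.05
```

Quadrature addition implicitly assumes the two parts are independent and roughly Gaussian; when the systematic part is a hard bound rather than a standard deviation, some authors instead quote the two parts separately, e.g. 9.81 ± 0.03 (stat) ± 0.04 (syst).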