The problem is that I have used some standard jargon from the standard
educational literature and alluded to some results from that same
literature.
Effect size is generally defined as the change in the mean on an evaluation
divided by the standard deviation of the curve. In the educational
literature effect sizes are less than 1.0, and a curriculum which achieves
anything over 0.5 is usually considered to be very effective. Effect size
would be used to compare the effects of two different curricula, and it
would be calculated for the difference between the curricula.
Obviously the effect size is not a valid comparison tool when the students
come in with a statistically zero score, or a score that could be produced
by random guessing.
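The definition above can be sketched in a few lines of Python. The text does not pin down which standard deviation is meant, so this sketch uses the pre-test standard deviation; the function name and the scores are invented for illustration only:

```python
import statistics

def effect_size(pre_scores, post_scores):
    """Effect size as described above: the change in the mean on an
    evaluation, divided by the standard deviation of the (pre-test) curve."""
    gain = statistics.mean(post_scores) - statistics.mean(pre_scores)
    return gain / statistics.stdev(pre_scores)

# Hypothetical scores on some test, before and after instruction.
pre = [5, 8, 12, 9, 15, 13, 7, 11, 16, 10]
post = [9, 8, 14, 12, 15, 16, 9, 13, 16, 12]
print(round(effect_size(pre, post), 2))  # → 0.51
```

With these made-up numbers the effect size comes out just over 0.5, the threshold the literature cited above treats as very effective.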
Most readers of this literature are familiar with effect size, so it will
convey meaning. Whether or not this is the best way to compare curricula
can certainly be questioned, but it is currently the method often used.
The initial curve (Lawson test) in JCST (figure 2) essentially looks like a
normal distribution. The final one is also similar, but moved over by about
1 SD. I am judging this by the curve. The result is that the number of
students who would be classified as concrete is dramatically reduced. I
have found for the Lawson test that when one looks at individual student
scores they do not all move up, but rather each student moves a different
amount, with some making dramatic gains and others none at all. The curve
in JCST unfortunately moves so far to the right that the right-hand tail is
cut off by saturation on the test.
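The saturation described above can be illustrated with a toy simulation (the score range, the pre-test distribution, and the gain distribution are all invented): shift a roughly normal pre-test curve up by about one standard deviation, cap every score at the test maximum, and the right-hand tail piles up at the ceiling.

```python
import random
import statistics

random.seed(1)
MAX_SCORE = 24  # hypothetical test maximum

# Pre-test: roughly normal, mean 12, SD 4 (invented numbers), clipped to range.
pre = [min(MAX_SCORE, max(0, round(random.gauss(12, 4)))) for _ in range(500)]
sd = statistics.stdev(pre)

# Each student gains a different amount (mean gain ~ 1 SD, some gain nothing
# or lose ground), but the test saturates: nobody can score above MAX_SCORE.
post = [min(MAX_SCORE, s + round(random.gauss(sd, sd))) for s in pre]

at_ceiling = sum(1 for s in post if s == MAX_SCORE)
print(f"students at ceiling: pre={pre.count(MAX_SCORE)}, post={at_ceiling}")
```

The post-test mean rises by roughly one SD, but a visible fraction of the class is now stacked at the maximum score, exactly the cut-off right tail described above.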
The best way to see this is to look at the same article. To communicate
effectively about an article it is helpful if both people have read it.
Whether or not the data exactly follow the theory you outlined is indeed
problematic, but effect-size analysis is routinely used when talking
about test results for students.
Effect size is used to compare student treatments in a fairly standard manner. A given distribution may
not be exactly normal, and I don't think that the idea that the ideal
distribution is the same for each student has any meaning. Students
generally fall in the same place on the curve and do not fluctuate over the
curve substantially when retested, if you have a well-designed test.
Students do not behave like gas molecules.
One can look at the error on the mean to see if the gain comparisons are
significant. If one has a fairly large number of students the error on the
mean will be quite small.
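The point about class size can be made concrete with the standard error of the mean, SEM = SD / sqrt(N); the class size and scores below are invented:

```python
import math
import random
import statistics

random.seed(2)
# Hypothetical class of 100 students with roughly normal scores.
scores = [random.gauss(60, 15) for _ in range(100)]

sd = statistics.stdev(scores)
sem = sd / math.sqrt(len(scores))  # standard error of the mean

print(f"SD = {sd:.1f}, SEM = {sem:.2f}")
```

With N = 100 the error on the mean is a tenth of the standard deviation, so a gain of the order of one SD is many standard errors and easily significant.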
I am a bit puzzled about how you would ask for the probability that they
came from the same unknown distribution. Student test scores generally rise
rather than fall after instruction.
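One concrete way to pose the "same unknown distribution" question is a permutation test, which makes no assumption about what the distribution is: pool the two samples, reshuffle the group labels many times, and ask how often a random split produces a mean difference as large as the one observed. All numbers below are invented for illustration:

```python
import random
import statistics

random.seed(3)

# Hypothetical pre- and post-instruction scores; scores rise on average.
pre = [random.gauss(10, 3) for _ in range(30)]
post = [random.gauss(14, 3) for _ in range(30)]

observed = statistics.mean(post) - statistics.mean(pre)
pooled = pre + post

# Shuffle the labels: how often does a random split of the pooled scores
# show a mean difference at least as large as the observed one?
trials = 2000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[30:]) - statistics.mean(pooled[:30])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"observed gain = {observed:.2f}, p = {p_value:.3f}")
```

A small p-value says the two samples are very unlikely to have come from one common distribution, which is the expected outcome when scores rise after instruction rather than fall.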