
Re: [Phys-l] WHY VALUE ADDED TESTING IS A BUST WAS: Re: [PTSOS] Teachers' Test scores to be made public



Value added testing is using the state's high-stakes tests to measure how
much students gain under a particular teacher. The teacher is then either
whipped or rewarded according to the results. This is very different from
a teacher using a research-based test to see the effect of their own
teaching through pre- and post-testing. The teacher using tests that way is
doing action research.
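For reference, the pre/post gain usually reported with research-based tests like
the FCI is Hake's normalized gain: the fraction of the possible improvement a
class actually achieved. A minimal sketch (the class averages below are made-up
illustrations, not data from any study):

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain: (post - pre) / (100 - pre).
    Scores are class-average percentages on a 0-100 scale."""
    if pre_pct >= 100:
        raise ValueError("pre-test already at ceiling; gain undefined")
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Hypothetical class: 30% average before instruction, 58% after.
# It gained 28 of the 70 available points, so g = 0.4.
g = normalized_gain(30.0, 58.0)
print(round(g, 2))  # 0.4
```

Because it divides by the room left to improve, normalized gain lets classes
that start from different pre-test levels be compared on the same footing.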

The problem with value added testing is that the gain measurements change
dramatically from one year to the next for individual teachers. This is the
gist of the paper. This is because specific classes may have a larger
number of students who will not make much gain. The system is easily
manipulated by administrators to make particular teachers look good or bad.
And from what I have heard, this is going on.

The quality of the high-stakes testing is also subject to question. If the
students take a different test at the beginning and at the end, can we be
assured that the measured gain is not biased by the tests themselves? And
what are the tests really measuring? Most are too simplistic and are not
grounded in good research. In addition, student learning is often judged by
single test questions.

Now that we know there is a fairly strong correlation between FCI/FMCE gain
and the Lawson test, we have a way of assessing how gain figures can be
compared across different classes. I propose dividing the gain by the
Lawson score, because the maximum gain seems to fall on a straight line. In
particular, a Lawson score of about 10/12 appears necessary to achieve 100%
FMCE gain, and it looks like a Lawson score of zero would predict zero gain.
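One way to read that proposal: the line through (0, 0) and (10/12, 100%) gives
a Lawson-dependent ceiling on gain, and dividing a class's measured gain by
that ceiling yields a figure comparable across classes. A sketch under those
assumptions (the function, the ceiling parameters, and the example numbers are
illustrative, not a published formula):

```python
def lawson_adjusted_gain(gain, lawson_score, ceiling_score=10, lawson_max=12):
    """Illustrative normalization: assume the maximum achievable
    normalized gain rises linearly with Lawson score, reaching 1.0
    at about 10 out of 12. Dividing measured gain by that ceiling
    estimates the fraction of the reachable gain the class achieved."""
    if not 0 <= lawson_score <= lawson_max:
        raise ValueError("Lawson score out of range")
    ceiling = min(lawson_score / ceiling_score, 1.0)
    if ceiling <= 0:
        raise ValueError("a Lawson score of zero predicts zero gain")
    return gain / ceiling

# Two hypothetical classes with the same raw gain of 0.30:
print(round(lawson_adjusted_gain(0.30, 10), 2))  # 0.3  (high-Lawson class)
print(round(lawson_adjusted_gain(0.30, 5), 2))   # 0.6  (low-Lawson class)
```

On this reading, the low-Lawson class achieved a larger share of the gain
actually available to it, which is the kind of comparison raw gain hides.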

If the state testing were used to help teachers improve what they do in the
classroom, students would learn more. But in the high-stakes situation,
teachers are focused on just improving the scores, which can be done by
intensive review rather than by teaching for understanding. I know from
conversations with colleagues that intensive review is being mandated by
principals in place of real teaching. The better schools, meaning schools
with better intake, still do real teaching. Of course, being "better" is a
self-fulfilling rating: "better" schools attract "better" students and so
they end up looking "better". Shayer & Adey showed in England that all of
the schools they surveyed were doing the same things, and that the output
was merely a function of the intake.

The result of this situation is that the poorer students are actually pushed
down, because they are given review rather than real teaching. The test
scores go up and then plateau, because the method is self-limiting: you can
only get so far on rote memorization. And the students find the material
incomprehensible and boring, so they just drop out.

Value added testing is snake oil. It is far too random, and too easily
manipulated by administrators. It also promotes bad teaching. England tried
high-stakes testing, and it didn't work. Do we have to make the same
mistakes? We now have evidence, from Michael Shayer's work, that thinking
skills in England have decreased dramatically over the last 30 years. Is
this happening in the US?

But SETs, student evaluations of teachers, are also snake oil, because they
do not correlate well with what the students actually learned, and they go
down for IE (interactive engagement) classes where we achieve higher gain.
Indeed, SETs correlate better with grades and with good-looking teachers, so
if you want good student evaluations, give all A's.

John M. Clement
Houston, TX


I am probably paying insufficient attention.
Is it proposed to measure student gain as a measure of teacher
performance?
Isn't this the sort of thing that John C has essentially been
advocating, for years n years?