
Re: FAST: exemplary middle school science



Michael Edmiston wrote:

I would like to thank Hugh Haskell for the review of FAST.

I have not been involved with FAST in any way, but I have been involved in
many lab exercises with current middle-school and high-school science
teachers, and have been heavily involved with pre-service teachers because
my college has a fairly large education department.

Hugh hit upon something that is epidemic... poorly designed experiments that
don't work or give results that are ambiguous. Time and again I have seen
teachers or teachers-of-teachers demonstrate a "wonderful" experiment that
is supposed to demonstrate something, but in reality it doesn't work, or it
demonstrates something else, or it gives results no one can interpret.
Sometimes the instructor doesn't even realize things don't make sense.
Often the instructor realizes things don't make sense and passes it off as
"well we didn't really do this with good enough equipment, but you get the
general idea." No... I don't get the general idea. If the experiment
didn't yield something pretty close to the expected result and/or didn't
measure the number it was supposed to measure, then what's the point?
Occasionally that question has been answered with "well at least we got the
students into the lab and doing real science." Geesh; might as well just
let them into the lab to play with the equipment; they might learn more. I
don't think they are doing real science unless (1) it gives the desired
result, or (2) it does not give the desired result but students and teacher
work together to figure out why.

[snip]

I agree entirely with the thrust of Michael's comments on my (perhaps
outdated) review of the FAST curriculum. However, I have a couple of
minor quibbles with the language used. First is the use of the word
"experiment." I know that we all get a little sloppy in our use of
language amongst each other, but we need to be very careful here.
What middle school students are doing in their laboratories is very
rarely an experiment, in the scientific sense. It is far better to
call these "lab activities" or some such term. Students should never
be allowed to think that what they do in lab can qualify as the same
thing that a professional scientist does in his or her lab (unless,
of course, it is one of those rare cases where it *is* the same). And
the clue to this is Michael's statement that the student lab
exercises need to "give the desired result." That certainly is
exactly what we want in student lab activities, because we expect no
new science to be discovered there. The purpose of student labs is
not to *do* science, but to *teach* science. Therefore, to call them
"experiments" is clearly a misnomer, of which I am often as guilty as
most, and I think I was even guilty of it in my review of FAST.

The other quibble has to do with the idea of "expected results." I
have a running battle with our chemistry teachers who seem to have
the idea that their students can do error analysis in their labs by
simply calculating the difference between the answers they obtain and
the "correct" answer--that is, the answer in the text. This is to
error analysis as "paint by numbers" is to art.

A well designed laboratory exercise should have an "expected outcome"
in the sense that the concept illustrated should be predictable (and
I think that is the sense in which Michael used the term), but in
most cases the numerical result to be obtained should be unknown (not
always, however--sometimes we do want the students to measure known
quantities). To give a trivial example, if we are trying to teach the
concept of mass to young students, we need to give them at least a
few unknown masses, whose values nobody knows beforehand. After they
have perfected their skill at measuring mass by measuring the mass of
some known objects and seeing that they are capable of getting the
right answer, then they should be given a rock, for instance,
preferably one that they found themselves, or some other object like
that, or perhaps several such items. Then they would be measuring a
property of the material that they might be beginning to have a feel
for, but the numbers they get would be unknown to anyone before the
measurements were made. Older students could use the same or a
similar experiment to learn true error analysis, by learning how to
use multiple measurements of the same quantity to estimate the range
of values within which the "true" value for the given object lies.
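That kind of error analysis can be sketched in a few lines. The example below is a minimal illustration, not part of the FAST curriculum; the readings are hypothetical numbers standing in for repeated measurements of one rock's mass, and the spread of the mean (the standard error) gives the range within which the "true" value plausibly lies.

```python
import statistics

# Hypothetical repeated measurements of the mass of one rock, in grams.
readings = [41.2, 40.8, 41.5, 41.0, 41.3]

mean = statistics.mean(readings)
# Standard error of the mean: sample standard deviation / sqrt(n).
sem = statistics.stdev(readings) / len(readings) ** 0.5

print(f"best estimate: {mean:.2f} g")
print(f"range: {mean - 2 * sem:.2f} g to {mean + 2 * sem:.2f} g (roughly 95% confidence)")
```

The point for students is that the range comes from the scatter of their own repeated measurements, not from comparison with an answer in the back of the book.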

The next step in this progression would be to have the students
measure some quantity ("g," say) whose nominal value is known,
calculate the "error bars" for their experiment, and then decide if
their results are consistent with the known value, or not. If not,
then they are on the verge of doing what a professional scientist
occasionally has to do--decide if the discrepancy is due to some
undetected systematic error on their part, or if they have really
found some new physics. Since, of course, we do not expect the
students to find "new physics" in the middle school or high school
laboratory, this means that they have to make a concerted effort to
find the source of the systematic error in their measurement.
Sometimes that can be a very difficult task, so it should be assigned
judiciously.
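The consistency check described above reduces to a simple comparison: do the error bars computed from the students' own scatter bracket the nominal value? A sketch, again with hypothetical pendulum-timing results standing in for real data:

```python
import statistics

G_ACCEPTED = 9.81  # m/s^2, nominal local value of g

# Hypothetical repeated determinations of g, in m/s^2.
trials = [9.6, 9.9, 9.7, 9.8, 9.5, 9.7]

mean = statistics.mean(trials)
sem = statistics.stdev(trials) / len(trials) ** 0.5
low, high = mean - 2 * sem, mean + 2 * sem  # rough 95% error bars

if low <= G_ACCEPTED <= high:
    print(f"consistent: {mean:.2f} +/- {2 * sem:.2f} m/s^2 brackets {G_ACCEPTED}")
else:
    print(f"discrepant: {mean:.2f} +/- {2 * sem:.2f} m/s^2 misses {G_ACCEPTED}; "
          "hunt for a systematic error")
```

When the interval misses the accepted value, the student faces the professional's dilemma in miniature: since no new physics is expected, the discrepancy points to an undetected systematic error, and the job is to find it.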

And of course, every time they use the phrase "human error" they get
an arm chopped off. The third time they use it, they lose their head!

I don't think I'm saying anything that is news to the members of this
list, but unfortunately, the lab exercises that we, and most others
who do this sort of thing, design seldom measure up to this standard.
It's too bad that this is true, but on the other hand, it's a
difficult standard to measure up to week after week. And it's
probably true that not every lab exercise has to measure up, just a
reasonable fraction of them, so the students can get a feel for the
frustrations that a practicing scientist experiences.

As I think about this, I am concluding that developing these kinds of
appreciation for lab work may be more important for the students than
learning how to design their own experiments. In the first place,
experimental design is a very high-level skill, and it is time enough
in grad school to learn how to do that (granted, some people have a
knack for it early on, but I doubt it can be taught early on any more
than other high-level skills). I have in the past thought that having
the students design their own experiments would be a good activity
for them. It was almost always a complete disaster, since the
students didn't know enough to do it well, and most were not
sufficiently motivated to do it at all. And in my moments of
self-honesty I realize that I probably did it because I was unable
(or too lazy) to come up with a good lab myself, so I just foisted
the job off on the students. In my future lives, I think I will let
the students concentrate their efforts on analyzing the results from
the well-designed experiments of others, before they have to go off
and design their own.

Hugh
--

Hugh Haskell
<mailto:haskell@ncssm.edu>
<mailto:hhaskell@mindspring.com>

(919) 467-7610

Let's face it. People use a Mac because they want to, Windows because they
have to.
******************************************************