
Re: Student Evaluations



I agree that teaching to a standardized test can have deleterious
effects on learning and teaching practice. But of the five methods I
listed for evaluating teaching effectiveness, pre- and post-testing
(still imperfect in its present form) seems to me to hold the most
promise of actually showing that students have learned something.

Some points:
1) I think we have to keep in mind what the FCI (and similar tests)
is good for. We are trying to determine the best way(s) to present
the material in class so that students gain the maximum
understanding. The FCI is an attempt to compare one teaching method
against another, and I think it can be used for that. If I try
something new next year (with a statistically valid sample of
students) and the scores go up, then maybe I'm on to something (a
rough sketch of such a comparison appears after these points).

2) If you are evaluating teaching methods, you want to know how much
the student got out of YOUR class and YOUR methods. So it seems
important to me to both pre- and post-test. Most standardized tests
(such as the AP exam) are designed to be given once to assess a
knowledge base. They do not tell you where this knowledge came from
(reading outside of class, mom is a mathematician, etc.).

3) If you really want to find out how your students are doing (and
aren't trying to fudge data for a tenure dossier), you have to be
somewhat honest with yourself. I haven't ever read all the FCI
questions or tried to take it myself, and I don't intend to. I've
read the literature ABOUT the test, and I'm taking the word of those
who developed and tested it that it is a valid measurement of
understanding of key Newtonian concepts. The whole point of such a
test should be an honest comparison of your teaching methods with
others (i.e., not teaching to the test).

4) About every other year a former student approaches me, tells me
how wonderful my class was, proceeds to mention a specific physical
principle AND GETS IT WRONG! My point is that students are poor
judges of whether they've learned anything (or learned it correctly).
As I
mentioned, student evaluations are a useful feedback tool for finding
out how students perceive what is going on in the class. But frankly
I don't care what is going on in the heads of my students. I don't
care if they are happy, if they like me, if they like the subject, if
they think they've learned something or not. The primary goal in the
class is to get the students to the point where they can work out the
right answer to a problem they have never seen before by applying the
known physical laws. If I thought getting the entire class royally
pissed off led to better performance, I'd try to do it (I happen to
think the opposite, but you get the point).
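
As a rough illustration of the comparison described in points 1) and
2), here is a minimal sketch of how pre/post FCI scores from two
sections might be compared. The scores are made up, and the use of
Hake's normalized gain, g = (post - pre)/(100 - pre), together with a
two-sample t-test, is one possible choice of metric, not something
specified above.

from scipy import stats

def normalized_gain(pre, post):
    # Fraction of the possible improvement actually achieved
    # (pre and post are percentage scores).
    return (post - pre) / (100.0 - pre)

# Hypothetical per-student (pre, post) percentages for two sections.
old_method = [normalized_gain(p, q) for p, q in
              [(35, 55), (40, 60), (30, 45), (50, 65), (45, 60)]]
new_method = [normalized_gain(p, q) for p, q in
              [(35, 70), (40, 75), (30, 60), (50, 80), (45, 70)]]

t, p_value = stats.ttest_ind(new_method, old_method)
print("mean gain, old method: %.2f" % (sum(old_method) / len(old_method)))
print("mean gain, new method: %.2f" % (sum(new_method) / len(new_method)))
print("t = %.2f, p = %.3f" % (t, p_value))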

kyle

------------------------------
Date: Mon, 27 Dec 1999 12:20:34 -0500
From: "Richard W. Tarara" <rtarara@SAINTMARYS.EDU>
Subject: Re: Student Evaluations
MIME-Version: 1.0
Content-Type: text/plain

I have some problems with basing teacher evaluations on 'standardized
tests'. For one thing, it truly encourages 'teaching to the test'. I know
some people will argue that this is fine if the test does represent the
knowledge we want taught, but of course it is someone else's choice of what
should be taught, not the teacher's. I too use the FCI, but not in the way
the researchers do. I give the test as a midterm (towards the end of
Newton's Law instruction) and then again incorporated into the final. The
midterm is not returned to them. This year, with a very diverse (in
ability) class (non-science majors, 'low-math' conceptual course),
they scored 60% on the midterm and 75% on the final. I can assure
everyone that these students did NOT have 75% mastery of Force
Concepts at the end of the course--but they had learned how to answer
Third Law questions. ;-)
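
As a rough illustration (Rick does not put it this way), the
normalized gain implied by those class averages would be:

# Normalized gain implied by the reported class averages; illustrative
# only, since Rick's caveat is that the raw improvement overstates
# mastery (students had learned how to answer the Third Law questions).
pre, post = 60.0, 75.0
g = (post - pre) / (100.0 - pre)
print("normalized gain g = %.2f" % g)  # about 0.38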

I guess my point here is that, given standardized methods of
assessment, there is too much temptation to teach HOW TO TAKE THE
TEST rather than the content and process that the test is really
meant to assess. I've run several summer workshops for AP Physics but
have given that up as the participants became more and more fixated
on 'how to teach to the test' and didn't want to listen to my
approach (which was that an AP course should be equivalent to a
college course, and that we should therefore spend the workshop time
making sure everyone was up to speed on what a college course is
really all about, with emphasis on HOW to teach problem solving while
recognizing that good problem solvers may not have the conceptual
understanding we might expect). The only way a standardized test
works as the ultimate assessment is to have a national test, written
by experts in both content and testing, formally administered, with
its format changed often.
To this latter point, I would encourage those who use the FCI to try
the following: pre- and post-test with the FCI, but then later give
the Tools for Scientific Thinking Force and Motion Evaluation. The
style of those questions is much different from the FCI's, and you
might be surprised how much that matters, even if your students do
well on the post-FCI.

Once you turn the assessment over to such national testing, you have
severely impinged on the Academic Freedom of the instructors--having pretty
much set their curriculum. ;-(

Just my ramblings,

Rick


----- Original Message -----
From: "kyle forinash" <kforinas@IUS.EDU>

Given the above comments and conclusions, it seems clear to me that
pre- and post-testing, when available, is the more effective
evaluation process when compared to SET scores, particularly in
introductory physics, where the goal is to convey particular concepts
which the student either does or does not understand. A
well-established diagnostic test in mechanics is available (the Force
Concept Inventory; see Refs. 5, 6, 7, 8, 9, 10), and I have used that
as my primary evaluation tool this year. Further discussion can be
found below.


------------------------------
Date: Mon, 27 Dec 1999 11:55:48 -0600
From: brian whatcott <inet@INTELLISYS.NET>
Subject: Distance learning sample lecture at the Georgia Institute of
Technology (fwd)
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

For the folks who were willing to dismiss distance learning out of
hand in the recent thread, here is a sample newsgroup post which I
think expresses a hunger for college topics that is not being met in
the usual ways.

Brian Whatcott

----Forwarded----

Newsgroups: rec.crafts.metalworking
Subject: Re: Distance learning sample lecture at the Georgia Institute of
Technology

Date: Mon, 27 Dec 1999 07:28:26 GMT

<brantley@telepath123.com> wrote:
I ran across an interesting distance-learning sample lecture at the
Georgia Institute of Technology. It covers machine chip-formation
mechanics.

....

Too bad that GIT charges $510 per credit hour or I would sign up for
the whole class.

I'll bet you can take the class for either a _lot_ less, or nothing at
all, if you don't take it for credit. It's called "auditing" the class if
you do it formally: there might be an auditing fee plus some other charges
for tapes or whatever they have for a distance-learning arrangement.

The other approach might be to e-mail whoever teaches the course and see
if you can somehow "sit in" on the course, again for no credit.
Professors generally don't care and usually welcome non-credit people.
Tell him that you liked the chip-formation lecture and ask how you might
be able to get the others. I'll bet he'll be happy to help.

That $510/credit is only for people who want a Georgia Tech degree.

Mark Kinsler
--
............................................................................
114 Columbia Ave., Athens, Ohio USA 45701   voice 740.594.3737
fax 740.592.3059
Home of the "How Things Work" engineering program for adults and kids.
See http://www.frognet.net/~kinsler



brian whatcott <inet@intellisys.net>
Altus OK

------------------------------
Date: Mon, 27 Dec 1999 15:01:00 -0500
From: Hugh Haskell <hhaskell@MINDSPRING.COM>
Subject: Re: Student Evaluations of Teaching
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii; format=flowed

Ludwik said:

If it were up to me I would evaluate performance of teachers on
the basis of how well their students know what is expected. To
make this possible two prerequisites must be met.

But this depends on all the students arriving at the teacher's
doorstep with the same knowledge and the same ability (on average)
each time that teacher teaches that course. And, to a
difficult-to-determine degree, it depends on the "chemistry" of the
class members. We all know that individual classes take on a
"personality" that somehow depends on the mix of personalities in the
class. That personality can have profound effects on how well the
class does as a group.

It seems to me that the problem of evaluating a teacher's
effectiveness is made incredibly difficult by the need to determine
the starting point of the students and how they mix. These variables
are probably uncontrollable in any but the most rigidly controlled
environments. Furthermore, all sorts of unpredictable events can
happen during the course of the term that can have a significant
effect on how well the class learns (weather, unrest of various
sorts, illness of the teacher or of the students, absence of the
teacher for various reasons, etc.).

All of these problems mean that a teacher's effectiveness probably
cannot be reliably measured in the short term. It seems to me that
the only even remotely reliable measure is to survey the students
after several years and see how many of them have succeeded in the
field taught by the teacher, or to apply some other reasonably
objective standard.

Hugh


Hugh Haskell
<mailto://hhaskell@mindspring.com>

Let's face it. People use a Mac because they want to, Windows because they
have to.
******************************************************

------------------------------
Date: Mon, 27 Dec 1999 15:09:59 -0500
From: "Steven D. Richardson" <richarsd@CHOICE.NET>
Subject: Re: Student Evaluations of Teaching
MIME-Version: 1.0
Content-Type: text/plain

Maybe I am missing something, but what is the role of the administrator
in all of this? If someone in a position of authority observes someone's
teaching several times, is that not also a valid tool? This presumes
that the observation can and does lead to change, if needed. Does tenure
play into this as well? Outside the university, tenure is being done
away with, or so it seems. Is this viewed as helping to keep the quality
of teaching where it should be? If people can be removed for failing to
meet standards, won't they improve?

Just my thoughts,

Steve

------------------------------
Date: Mon, 27 Dec 1999 17:04:58 -0600
From: Jack Uretsky <jlu@HEP.ANL.GOV>
Subject: Re: Student Evaluations of Teaching
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Hi all-
OK, let's turn the discussion around. How does one go about
identifying teachers who are totally incompetent?
Regards,
Jack

Adam was by constitution and proclivity a scientist; I was the same, and
we loved to call ourselves by that great name...Our first memorable
scientific discovery was the law that water and like fluids run downhill,
not up.
Mark Twain, <Extract from Eve's Autobiography>



------------------------------
Date: Mon, 27 Dec 1999 17:09:56 -0600
From: Jack Uretsky <jlu@HEP.ANL.GOV>
Subject: Re: Student Evaluations of Teaching
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Hi all-
Mebbe <I'm> missing something in all this. How does one evaluate
the quality of university teaching by observing a few (or many) lectures?
Learning, after all, is what happens <between> the lectures, not during
them. I guess there are some administrators who are aware of this.

Regards,
Jack

Adam was by constitution and proclivity a scientist; I was the same, and
we loved to call ourselves by that great name...Our first memorable
scientific discovery was the law that water and like fluids run downhill,
not up.
Mark Twain, <Extract from Eve's Autobiography>



------------------------------
Date: Mon, 27 Dec 1999 18:27:05 -0500
From: "Steven D. Richardson" <richarsd@CHOICE.NET>
Subject: Re: Student Evaluations of Teaching
MIME-Version: 1.0
Content-Type: text/plain

I should be clear: classroom observations of teachers in a high
school setting seem effective. I know that when my administrator has
visited my room, he has always picked up on several
improvements/changes/strengths. My point being that such observers
are accurate in seeing the strengths in the classroom. At least in
the high school setting, a lot can be seen in the tone of the room,
the structure of the classroom environment, and how the instructor
interacts with and teaches the pupils.

I grant that not all are as lucky as I am and that I have a good
situation, but it can be done effectively. I would imagine that if
someone watched your classes and labs and saw how questions were
answered, etc., they could tell where improvement was needed. They
could also tell if a person was incompetent. I think that people are
afraid of having other people involved because they fear judgment. If
people shared more and invited others into their classes before the
dean, etc., had to get involved, and if things were more
collaborative, struggling people could get help, rather than
continuing to get poor student evaluations while nothing happens and
they remain ineffective.

I'm officially off my soap box,

Thanks,

Steve

------------------------------
Date: Mon, 27 Dec 1999 17:36:19 -0700
From: Jim Green <JMGreen@SISNA.COM>
Subject: Re: Student Evaluations of Teaching
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii; format=flowed


Steve, I don't share your assumptions. In a previous incarnation,
during a rather stirring faculty meeting, the "senior" (i.e., tenured)
faculty offered to act individually as mentors to each of the
"junior" faculty within a given department: to sit occasionally in
their classes, point out helpful teaching ideas, and guide their
individual research.

After the meeting, in a quiet moment, I went to the office of the
faculty senate president and asked him to propose a few pairings,
just for instance. He could not. In every case contemplated, the
junior faculty member was a better teacher and was doing much better
research than any proposed senior faculty mentor.

The matter was dropped.

Jim Green
mailto:JMGreen@sisna.com
http://users.sisna.com/jmgreen

--------------------------------

-----------------------------------------------------
kyle forinash 812-941-2390
kforinas@ius.edu
Natural Science Division
Indiana University Southeast
New Albany, IN 47150
http://Physics.ius.edu/
-----------------------------------------------------