
Re: Effective HS Physics (was Statistics / more ...)



I did not claim that IQ was a good predictor, nor do I believe that it will
make you successful, unless you count Mensa membership as success.
Incidentally, Binet and Simon developed the first modern intelligence test
in 1905. They used it to predict success in school, tracking students and
modifying the test until it was a better predictor. This formed the basis
for modern IQ tests. They also came up with the concept of mental age, on
which the quotient is based. The test was originally devised in response to
a request from the French education minister, who wanted to detect mentally
deficient children. As such, IQ may actually have some predictive value in
some K-12 schools, but I doubt that its validity is as good in college, or
in a multiethnic society.

Certainly an IQ of 65 is likely to predict difficulty in achieving success
in school, but beyond that sort of general statement it is not clear what
most of these tests measure. Without access to the actual questions it is
difficult to judge what a test "might" mean. This is even true of
teacher-made tests. Another factor is that as teaching methods and content
vary, the tests have to be continually revised.

Unfortunately, tests like the SAT are scatter guns: they may be good
predictors for a certain class of students going into certain programs, but
they cannot under any conditions predict success for all students in all
programs. As I understand it, new questions are validated by comparing
their results with those of older questions. While this may help assure the
uniformity of the test, it does not really make the test "better". BTW, the
counselors at my school advise students to take both the SAT and the ACT,
because the SAT correlates with SES, while the ACT correlates with content.
I would never defend most of these standardized tests, though like most
people I am willing to use test scores as a club when it is useful. I
suspect that the sort of research that went into the original Binet test is
impossible with the college tests. To make them good predictors, the
colleges would have to feed student grades back to the College Board,
matched by student name or by the relevant SAT score. Current privacy laws
probably make this impossible. If I ran an admissions office, I would want
to track the scores of all incoming students along with their grades in
various programs, and then look for a correlation. This could then be used
to help make decisions about later incoming students. I don't know whether
schools really go to that sort of trouble, or whether admissions officers
are capable of doing so. Without constant research, you have no way of
knowing that your choices are correct. I guess the Cornell study shows that
most admissions offices fly by the seat of their pants.
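
For what it's worth, the tracking study I have in mind would not take much
machinery. Here is a minimal sketch in Python; the cohort data, the program
name, and all the numbers are invented purely for illustration:

  # Sketch of the tracking study described above: correlate incoming
  # test scores with later grades, one program at a time.
  # All names and numbers here are hypothetical.

  def pearson_r(xs, ys):
      # Standard Pearson correlation coefficient.
      n = len(xs)
      mx = sum(xs) / n
      my = sum(ys) / n
      cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
      sx = sum((x - mx) ** 2 for x in xs) ** 0.5
      sy = sum((y - my) ** 2 for y in ys) ** 0.5
      return cov / (sx * sy)

  # Hypothetical cohort: (SAT score, GPA after two years) per student.
  physics_cohort = [(1210, 3.1), (1350, 3.4), (1080, 2.6),
                    (1400, 3.2), (1150, 3.0), (1290, 2.9)]

  scores = [s for s, g in physics_cohort]
  grades = [g for s, g in physics_cohort]
  print("r =", round(pearson_r(scores, grades), 2))

Repeating this program by program, year after year, is exactly the sort of
constant research I mean; a persistently small r would tell you the test
adds little for that program.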

I would like to point out one of the big idiocies in state standardized
math tests. A question will often ask the student to solve an equation and
then present a set of choices for the answer. No halfway intelligent
student would actually solve the equation when it is easier to just plug in
the choices until one fits.
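
To make the strategy concrete, here is a sketch with an invented test item;
the equation and the answer choices are hypothetical:

  # Hypothetical test item: solve 3x + 7 = 19 for x.
  choices = [2, 3, 4, 6]

  # The plug-and-check strategy: substitute each printed choice
  # instead of doing any algebra at all.
  for x in choices:
      if 3 * x + 7 == 19:
          print("answer:", x)   # prints: answer: 4

The item rewards exactly the opposite of the skill it claims to test.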

The procedure used to develop the physics multiple-choice (MC) diagnostic
tests is much better. It relies on interviews and/or written free
responses, and the distractors are chosen from among the common
misconceptions. Sometimes further research is conducted by means of
interviews to judge whether the test results line up with the students'
understanding. I have occasionally used informal interviews to check
specific questions, and I have found that the verbal responses generally
line up with the MC test. I have also found that students who do well on
tests such as the FCI or FMCE also tend to do well as problem solvers.
Mazur likewise found that when he managed to get FCI scores up, the scores
on his conventional tests went up as well. Again, these tests cannot
measure everything you want a student to learn, but they do have the
distinct advantage of very careful attention paid to their construction,
and they are "standard".

I have never taken one of the more involved intelligence tests, but my
daughter has, and in one of my education courses I heard a description of
how they are administered. The description alone was enough to scare one
away and make one wonder how valid they really are. The professor had
taken a course in testing where the major project was to administer a
Wechsler test to one individual. He said it was unbelievably difficult to
find a person willing to take it. It could not be given all at once,
because the subject would become so frustrated that the test had to be
suspended until another day. Part of the test involved showing something
for a fixed time, like 1 second, and asking a question about it. When the
subject couldn't answer, it was shown for 2 seconds, and later for 3...
Usually the first time around the subject just says, "HUH, I didn't see a
thing!"

John M. Clement

John Clement wrote:

I read that IQ was originally designed to predict
success in school.

I don't believe everything I read.

There's a big difference between "originally designed
to predict" and "actually successful at predicting".

It is relatively straightforward to test what a
person knows at the moment; it is much, much harder
to predict what a person will achieve later.

I haven't found anybody seriously claiming to make such
predictions ... and if I do find them, I will assume
they are charlatans until proven otherwise.

Furthermore, even a moment's thought reveals that
intelligence is multi-dimensional -- therefore any
one-dimensional number like an IQ value cannot possibly
represent intelligence. There's a theorem that says
you cannot change dimensionality in a way that is one-to-one
and continuous.
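
If the theorem meant here is Brouwer's invariance of domain (an assumption
on my part; the original does not name it), the consequence being used can
be stated as:

  \[
  \text{there is no continuous injective } f : \mathbb{R}^n \to \mathbb{R}^m
  \quad \text{for } n > m ,
  \]

so in particular any continuous map from a multi-dimensional intelligence
space to a single number (m = 1, n >= 2) must assign the same score to
genuinely different profiles.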

The College Board has repeatedly redefined what SAT
stands for; currently it is "Scholastic Assessment
Test"... but it doesn't say precisely what it is
assessing.

Similar remarks apply to the GREs. There's a much
longer list of what they don't predict than what
they do.

http://www.news.cornell.edu/releases/Aug97/GRE.study.ssl.html

] The Graduate Record Examination (GRE)
] does little to predict who will do well in graduate school for
] psychology and quite likely in other fields as well, according to a
] new study by Cornell and Yale universities.
]
] Of the three subtests of the GRE (verbal, quantitative and
] analytical) and the GRE advanced test in psychology, only the
] analytical subtest predicted any aspect of graduate success
] beyond the first-year grade point average (GPA), and this
] prediction held for men only. The verbal subtest and psychology
] test predicted first-year GPA, but this prediction vanished by the
] second year's GPA.