
Re: [Phys-L] Multiple guess, OK?



The ability to solve an equation cannot be well captured by MC, because
the easiest method is to substitute the answer choices until one works.
Testing the method of solution is likewise not amenable to MC. Of
course, if you require bubbling in a numeric answer, the ability to
solve can be captured.

In physics, MC cannot test things like the ability to create a graph,
and, as with math, it cannot test the method of solution. But it can
often get at the common misconceptions in solving problems, as shown in
the paper. Physics problems require reasoning and multiple steps, so
they are not like solving equations. Even so, the MC answers can still
encourage students to work backward rather than forward.

Lawson has an MC version of his handwritten test, but it relies on
paired questions, and the second question can sometimes suggest the
correct answer to the first one, so I like the hand-graded one better.
The hand-graded one can also test combinatorial reasoning, while the MC
version cannot, although combinatorial reasoning could easily be
computerized.

The big problem with MC is that while it can easily test memorized
knowledge, its ability to test transfer is much more limited. In
addition, the research needed to design a good MC question, as done in
the paper, requires a lot of work. So most MC tests tend to be very
simplistic and assess only memorized material rather than student
ability.

Most teachers think their tests are superior, and most teacher-made
tests are pretty good according to the research, but they are seldom
superior. So having a number of really good MC questions could be
valuable. Unfortunately, the ones provided by publishers in test banks
are usually just teacher-made and are often wretched. Teachers then
often pick the poorest ones because they do not recognize the superior
ones, not knowing the thinking behind the good ones. So MC tends to
descend to the lowest level.

As to essays, there is a program that grades them, but it can be easily
fooled by verbiage that has absolutely no content, or is even complete
nonsense. Many great authors would probably fail it.

John M. Clement
Houston, TX



Clearly not everything that can be captured by open response
can be captured by multiple choice. Just consider writing an essay.

That is not to say, however, that in most situations -- with notable
exceptions like writing a composition or identifying an unknown
substance in a lab -- MC items are not as good as or better than open
response. They are certainly cheaper.

Over the last 3-4 decades a lot of research was done by ETS, the
College Board, and the like. They generally found that for *similar*
kinds of knowledge MC is more reliable. They also couldn't find types
of knowledge that don't fall into this category on most normal tests
like the AP.

I am traveling and w/o access to references, but they are
relatively easy to find.

Ze'ev
Sent via BlackBerry by AT&T

Can free-response questions be approximated by multiple-choice
equivalents? Shih-Yin Lin and Chandralekha Singh
American Journal of Physics -- August 2013 -- Volume 81, Issue 8, p. 624
In answer to your question, here's my five-word review:
Not OK.
Travesty of science.

===============================
Longer version:

Consider the two assertions:

+A) There exist one or more questions (X) such that X can be represented
in free-response format AND in multiple-guess format.

+B) For all questions (X), if X can be represented in free-response
format then it can also be represented in multiple-guess format.

I assume everybody on this list knows that +A is true and +B is false.
I might go so far as to say that +A is obviously true and +B is
obviously false.

As a point of formal logic, the negation of +B is:

-B) There exist one or more questions (X) such that X can be represented
in free-response format but not in multiple-guess format.

which is obviously true. Proof by construction. I hope everybody on
this list can come up with relevant examples.
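For clarity, the three assertions can be written compactly in predicate logic (a sketch; FR and MC are shorthand predicates I am introducing for "representable in free-response format" and "representable in multiple-guess format"):

```latex
% FR(X): X can be represented in free-response format
% MC(X): X can be represented in multiple-guess format
\begin{align*}
(+A)\quad & \exists X\,\bigl(\mathrm{FR}(X) \land \mathrm{MC}(X)\bigr) \\
(+B)\quad & \forall X\,\bigl(\mathrm{FR}(X) \rightarrow \mathrm{MC}(X)\bigr) \\
(-B)\quad & \neg(+B) \;\equiv\; \exists X\,\bigl(\mathrm{FR}(X) \land \neg\mathrm{MC}(X)\bigr)
\end{align*}
```

The last line is just the standard rule that negating a universal conditional yields an existential counterexample, which is why a single constructed example suffices to refute (+B).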

The paper in question can be found via:
http://ajp.aapt.org/resource/1/ajpias/v81/i8/p624_s1

It starts by proving the obvious. It uses two examples to prove
assertion (+A).

It then *appears* to claim that two examples prove the general case,
i.e. to prove assertion (+B). Wow, that's quite a leap, from two
examples to the general case. I emphasize that it *appears* to prove
this, because the English is so non-specific that I cannot be sure
what it is claiming. The key conclusion is:

The findings suggest that research-based MC questions can reasonably
reflect the relative performance of students on the free-response
questions ....

This claim *appears* to apply to all possible questions, but a
Philadelphia lawyer could argue that the paper doesn't explicitly say
/what/ the conclusions apply to. Therefore:
-- If we are generous, the conclusions apply only to two hand-selected
examples, and the paper is obviously trivial.
-- If the conclusions are meant to apply more generally, the paper is
obviously wrong.
-- In any case, the paper is so badly written that we cannot tell
whether it is trivial or wrong!

==================================

Sometimes people who ought to know better assume that if something is
published in the peer-reviewed literature, it must be OK. This is
certainly not true ... especially in the PER literature.

The publication of papers like this reflects badly not just on the
authors, but also on the reviewers, on the journal, and on the entire
field.