
Re: [Phys-l] Lecture Isn't Effective: More Evidence #2



At 4:47 PM -0500 8/7/11, John Clement wrote:

AP tests and other such tests change
and are not research based because the research requires several years.
They can not do this for every version of the test.

I don't dispute the results you quote, but I will dispute your comment on the AP tests. Granted, the free-response part of the AP test can't be given extensive trials and verification, but the multiple-choice part is another matter. While the entire test is not given over and over again, the questions themselves are very carefully drafted and then extensively tested before use in actual AP tests. By the time a question is used, it has been through several reviews and has been given to both high school and college students, and the CB has accumulated extensive statistics on each question. All this is necessary in order to normalize the results and make the tests consistent from year to year. While it doesn't take several years to create an entire test, it does take several years for any given question to make it into any given test, and the balance of questions is carefully chosen to provide the breadth and depth desired for each test and to allow for year-to-year comparisons. Each version of the test includes a few questions that have been used before, as part of the effort to create continuity in the test.

As to the FCMI and its relatives, I would like to see data regarding the pre-test raw scores over time. Since the FCMI and the other similar tests have not changed for several years, and their security depends on the diligence of the teachers using them, it is not beyond reason that the content of the test is gradually becoming more generally known, and that as a result, the initial base scores of the students may be increasing with time. I agree that teaching to the test using traditional methods may have little effect on the results, but if the students know the questions themselves and the answers before they take the test, it cannot help but raise the raw scores.

This has been a problem with the Navy for many years. Promotions among the enlisted ranks are heavily weighted by a multiple-choice test given to every candidate for promotion (each test is specifically tailored to the candidate's military specialty). In many specialties for which there are fewer openings for promotions, the candidates often study together for the exams, using collections of questions from previous exams. What they do is assign two or three questions to each candidate--that is, when they are about to take the test, each candidate is expected to memorize two or three questions from the test, and they get together after the test to combine the memorized results, ending up with a virtually complete copy of the test. Using this scheme, the candidates will have accumulated a large file of actual test questions over a few years that they use to study for the next test. Occasionally they get lucky, and someone in the test-preparation unit gets lazy and just uses an old test.

This actually happened to some sailors who worked for me during my Navy career. They opened their test and were shocked to see the exact test that they had recently been studying. They were accused of cheating on the test, but when they explained the circumstances, they were cleared and someone at the test prep unit got axed instead.

It is not likely that many copies of the exact test are floating around among students, but it is almost certain that some are, and it is possible that some students have benefited from their advance knowledge and, at the same time, skewed any research results that might have depended on that administration of the test.

Hugh
--

Hugh Haskell
mailto:hugh@ieer.org
mailto:haskellh@verizon.net

It isn't easy being green.

--Kermit Lagrenouille