
Re: [Phys-l] pre/post testing to determine student progress



If you reply to this long (30 kB) post, please don't hit the reply button unless you prune the copy of this post that may appear in your reply down to a few relevant lines; otherwise the entire already-archived post may be needlessly resent to subscribers.

********************************************
ABSTRACT: In a POD post of 21 Sep 2006, I pointed out that the "Friendly Guide" prepared by the "Coalition for Evidence-Based Policy" (CEBP) criticizes pre/post testing for its supposed failure to employ control groups. CEBP is evidently unaware of the fact that traditional courses provide reasonably well matched controls for pre/post testing in astronomy, biology, chemistry, computer science, economics, engineering, and physics. In response, Mike Theall: (a) suggested that CEBP "may have an agenda tied to current policies in Washington that have more to do with politics or personal beliefs than a comprehensive view of education," and (b) wondered about CEBP's connection with, and funding from, the U.S. Dept. of Education (USDE). Regarding "a": CEBP appears to have an agenda dominated by its members' personal belief that randomized control trials (RCT's) are the "gold standard" of educational research. Regarding "b": a Google search disclosed that CEBP, although formerly under the USDE, is now part of "The Council for Excellence in Government," with the mission to promote government policymaking (including education) based on "rigorous evidence" (read RCT's) of program effectiveness. Although not a part of USDE and therefore not directly funded by USDE, CEBP has influenced USDE's research funding. According to Michael Scriven, the USDE's "Institute of Education Sciences" (IES) decided to "take all $500 million dollars of their research money and pull it out of anything except randomized control trials."
********************************************

In a POD/PhysLrnR post of 21 Sep 2006 titled "Re: pre/post testing to determine student progress" [Hake (2006)] I quoted from "Identifying and Implementing Educational Practices Supported by Rigorous Evidence: A User Friendly Guide" [CEBP (2003)] prepared by the "Coalition for Evidence-Based Policy":

". . . . A 'pre-post' study examines whether participants in an intervention improve or regress during the course of the intervention, and then attributes any such improvement or regression to the intervention. The problem with this type of study is that, WITHOUT REFERENCE TO A CONTROL GROUP, it cannot answer whether the participants' improvement or decline would have occurred anyway, even without the intervention. This often leads to erroneous conclusions
about the effectiveness of the intervention." [My CAPS.]

The CEBP was formerly a part of the Institute of Education Sciences [IES (2006)], in turn a part of the U.S. Dept. of Education [for the structure of this bureaucratic colossus see
<http://www.ed.gov/about/offices/or/index.html?src=ln>].

I pointed out that the CEBP's criticism of pre/post testing is irrelevant for most of the pre/post studies in introductory astronomy, biology, chemistry, computer science, economics, engineering, and physics [see Hake (2004) for references]. The reason is that control groups HAVE been utilized - they are the introductory courses taught by the traditional method. The matching is due to the fact that (a) within any one institution the test [Interactive Engagement (IE)] and control [Traditional (T)] groups are drawn from the same generic introductory course taken by relatively homogeneous groups of students, and (b) IE-course teachers in all institutions are drawn from the same generic pool of introductory-course physics teachers who, judging from the uniformly poor average normalized gains <g> they obtain in teaching traditional (T) courses, do not vary greatly in their ability to enhance student learning.

I suspect that the pre/post testing referred to above might pass muster at the USDE's "What Works Clearinghouse" <http://www.w-w-c.org/> as "quasi-experimental studies [Shadish et al. (2002)] of especially strong design" [see <http://www.w-w-c.org/reviewprocess/standards.html>].

In response to Hake (2006), Ed Nuhfer (2006a) in a post titled "Re: pre/post testing to determine student progress - Phooey!" wrote [my inserts at ". . . [insert]. . . "]:

"If we want professors to assess learning gains in their classes routinely, the anticipation that all should be setting up control groups for every course lies someplace between silliness and madness. . . [physics education researchers have shown that average NORMALIZED pre-to-posttest gain <g> - i.e., the *actual* average gain [<%post> - <%pre>] divided by the *maximum* possible average gain [100% - <%pre>], where the angle brackets <. . .> indicate class averages], on the Force Concept Inventory [Hestenes et al. (1992)] for traditional courses so far measured - probably well over 100 - is always about 0.2, so there's no need for every teacher to set up a control group]. . . One can do assessment well in everyday practice with good record keeping, multiple measures, and support from the literature (see Nuhfer, 2006b). Frankly, not a single author of that CEBP report is likely to have adhered to their own particular guideline in their own everyday practice although I'd be happy to be corrected by a record that showed any of them ran a parallel control group for every class he/she taught . . .[as far as I know the members of CEBP have: (a) focused almost exclusively on K-12 education, despite the fact that student learning in K-12 is crucially dependent on the quality of teacher preparation programs in universities, and (b) never attempted to gauge student learning in their own courses on a scale similar to that in physics - see e.g., "Do Psychologists Research the Effectiveness of Their Courses? Hake Responds to Sternberg" (Hake 2005a)]. . . . . . . . . . . . ."

To which Mike Theall (2006) replied [my CAPS and inserts at ". . . .[insert]. . . ."]:

TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT
I have no "evidence" for thinking the following, but "The Coalition for
Evidence-Based Policy" . . .[CEBP]. . . may have an agenda tied to current policies in Washington that have more to do with politics or personal beliefs than a comprehensive view of education. I wonder what their position on "Intelligent design" is? In any case, while I can't argue with the essential implications of the title (I like evidence too), I wonder if the title implies a bias favoring traditional methods (national testing of college students perhaps?) . . . .[for the latest on the planned NCLB-type monitoring of higher education see USDE (2006)]. . . . see over qualitative methods or assessment activities such as Ed describes. I ALSO WONDER WHAT THE RELATIONSHIP OF CEBP IS TO THE US DEPT OF ED.. . .[USDE]. . . AND WHETHER OR HOW MUCH FUNDING THEY HAVE RECEIVED FROM THE DEPT. This year FIPSE . . . .[<http://www.ed.gov/about/offices/list/ope/fipse/index.html>]. . . . got 11.5 million for all of higher education. Is there a figure for CEPB that we might use to ascertain the extent to which the Dept is funding organizations that (seem to) be directly supporting Bush's policies?
TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT

As to the relationship of the CEBP to the USDE, in 2003 the CEBP was part of the USDE's "Institute of Education Sciences" (IES) directed by psychologist Grover Whitehurst [see <http://ies.ed.gov/director/> and Whitehurst (2003)]. But a search for "CEBP" at "Search ED.gov" on the U.S. Dept. of Education homepage <http://www.ed.gov/index.jhtml> yielded only two hits, both to the 171-page "IES Style Guide" (2005), whose Appendix A, "Abbreviations List: Organizations, Agencies, Surveys, And Terms," on page A2 lists "CEBP: Coalition for Evidence-Based Policy."

SO WHERE HAS CEBP GONE?

A Google <http://www.google.com/> search for "CEBP" unearthed CEBP (2006) at <http://coexgov.securesites.net/index.php?keyword=a432fbc34d71c7>, where it is stated [bracketed by lines "CEBP-CEBP-CEBP-CEBP-. . . .", my CAPS and inserts at ". . . [insert]. . ."]:

CEBP-CEBP-CEBP-CEBP-CEBP-CEBP-CEBP-CEBP-CEBP-CEBP
The Coalition for Evidence-Based Policy IS SPONSORED BY THE COUNCIL FOR EXCELLENCE IN GOVERNMENT <http://coexgov.securesites.net/index.php>, WITH THE MISSION TO PROMOTE GOVERNMENT POLICYMAKING BASED ON RIGOROUS EVIDENCE OF PROGRAM EFFECTIVENESS. In the field of medicine, public policies based on scientifically-rigorous evidence have produced extraordinary advances in health over the past 50 years. By contrast, in most areas of social policy -- such as EDUCATION, poverty reduction, labor and employment, crime and justice, and health care financing and delivery -- government programs often are implemented with little regard to evidence, COSTING BILLIONS OF DOLLARS YET FAILING TO ADDRESS CRITICAL NEEDS OF OUR SOCIETY. However, rigorous studies have identified a few highly-effective social interventions, suggesting that a concerted government strategy to build the knowledge base of these proven interventions, and spur their widespread use, could bring rapid progress to social policy similar to that which transformed medicine.

Since the Coalition's founding in 2001, our work with top federal and state policymakers has resulted in important evidence-based reforms. As illustrative examples, we have helped advance:

(a) Key Reforms in the Office of Management and Budget's (OMB) process for assessing the performance of federal programs government-wide, including new OMB guidance on "What Constitutes Strong Evidence of Program Effectiveness" <http://tinyurl.com/netwl>;

(b) Concrete advances in Congressional funding and SUPPORT FOR RIGOROUS STUDIES (ESPECIALLY RANDOMIZED CONTROLLED TRIALS) IN EDUCATION and other areas of social policy; and

(c) A NEW "PRIORITY" IN A NUMBER OF THE EDUCATION DEPARTMENT'S COMPETITIVE GRANT PROGRAMS FOR APPLICANTS THAT BUILD A RIGOROUS EVALUATION INTO THEIR PROPOSED PROJECT.

An independent evaluation of our work . . .[by Bernard H. Martin (2004)]. . . . conducted for the William T. Grant Foundation found that the Coalition has been "instrumental in transforming a theoretical advocacy of evidence-based policy among certain agencies into an operational reality." . . .[No mention is made of the fact that 2 of the 8.5 pages of Martin's report are devoted to "Reservations."]

[Who is Bernard H. Martin? According to NAPA (2006), Martin is a "Consultant. Former positions with U.S. Office of Management and Budget: Special Assistant to the Deputy Director for Management; Deputy Associate Director for Education, Income Maintenance and Labor Division; Assistant Director for Legislative Reference; Deputy Associate Director, Labor, Veterans and Education Division; Chief, Economics Science General Government Branch, Legislative Reference Division."]

The Coalition's bipartisan Board of Advisors <http://coexgov.securesites.net/index.php?keyword=a432fbc71d7564> is comprised of distinguished former government officials, scholars, and other individuals from a broad range of policy areas. The Coalition's Executive Director is Jon Baron . . .[ <http://www7.nationalacademies.org/sbir/JBaronBio.html>]. . ."

CEBP (undated), online at <http://www.excelgov.org/admin/FormManager/filesuploading/Coalition_purpose_agenda_3_06.pdf>, is a two-page overview of the Coalition's purpose and agenda.
CEBP-CEBP-CEBP-CEBP-CEBP-CEBP-CEBP-CEBP-CEBP-CEBP

The CEBP's Board of Advisors includes luminaries such as famed Randomized Control Trial (RCT) authority Robert Boruch (University of Pennsylvania); political economist David Ellwood (Harvard); former FDA commissioner David Kessler (Univ. of California - San Francisco); past American Psychological Association president Martin Seligman (University of Pennsylvania); psychologist Robert Slavin (Johns Hopkins); economics Nobelist Robert Solow (MIT); and progressive-education basher Diane Ravitch.

Unfortunately, no physical scientists, mathematicians, philosophers, or K-12 teachers are members of the CEBP. The CEBP's "Guide" is addressed to K-12 education, but its recommendations could influence funding for educational research at the postsecondary level - a matter of primary interest to many education researchers.

According to CEBP (undated) [bracketed by lines "CEBP-CEBP-CEBP-CEBP-. . . .", my CAPS]:

CEBP-CEBP-CEBP-CEBP-CEBP-CEBP-CEBP-CEBP-CEBP-CEBP
"The precedent from medicine: rigorous evidence - particularly the randomized controlled trial - has produced remarkable advances. In medicine, randomized controlled trials have provided the conclusive evidence of effectiveness for most of the major medical advances over the past 50 years, including: (i) vaccines for polio, measles, and hepatitis B; (ii) interventions for hypertension and high cholesterol, which in turn have helped bring about a decrease in coronary heart disease and stroke by more than 50% over the past half-century; and (iii) cancer treatments that have dramatically improved survival rates from leukemia, Hodgkin's disease, breast cancer, and many other cancers."
CEBP-CEBP-CEBP-CEBP-CEBP-CEBP-CEBP-CEBP-CEBP-CEBP

Unfortunately, the value of randomized controlled trials (RCT's) in medicine does not guarantee their value in education - witness California's costly class size reduction (CSR) program, based on Tennessee's highly regarded [Mosteller (1995), Mosteller et al. (1996), Finn & Achilles (1999)] RCT experiment STAR. But according to the latest report of the California Class Size Reduction Research Consortium (CCSRRC 2002), California's CSR program yielded NO CONCLUSIVE EVIDENCE OF INCREASED STUDENT ACHIEVEMENT. One reason appears to be that there were simply not enough teachers in California to support any substantive class size reduction without deterioration of teaching effectiveness.

Even RCT proponents Cook & Payne (2002) wrote [my CAPS]:

"In some quarters, particularly medical ones, the randomized experiment is considered the causal 'gold standard.' IT IS CLEARLY NOT THAT IN EDUCATIONAL CONTEXTS, given the difficulties with implementing and maintaining randomly created groups, with the sometimes incomplete implementation of treatment particulars, with the borrowing of some treatment particulars by control group units, and with the limitations to external validity that often follow from how the random assignment is achieved."

For discussion of the place of RCT's in educational research see, e.g.: "The 2004 Claremont Debate: Lipsey vs. Scriven: Determining Causality in Program Evaluation and Applied Research: Should Experimental Evidence be the Gold Standard" [Donaldson & Christie (2005)] - note that to psychometricians "experimental" = RCT, see, e.g., Shadish et al. (2002); "Should Education Research Be Like Medical Research?" [Hake (2003)]; "Should Randomized Control Trials Be the Gold Standard of Educational Research?" [Hake (2005b,c,d)]; and the extensive discussion of RCT's on the EvalTalk list - typing "RCT" into the "Search for" slot of the EvalTalk search engine <http://bama.ua.edu/cgi-bin/wa?S1=evaltalk&X=-> yields 170 hits as of 24 Sep 2006 18:45:00-0700 (17 of them due to Hake).


Richard Hake, Emeritus Professor of Physics, Indiana University
24245 Hatteras Street, Woodland Hills, CA 91367
<rrhake@earthlink.net>
<http://www.physics.indiana.edu/~hake>
<http://www.physics.indiana.edu/~sdi>

REFERENCES [Tiny URL's courtesy <http://tinyurl.com/create.php>.]
CCSRRC. 2002. "What We Have Learned About Class Size Reduction in California," California Class Size Reduction Research Consortium [American Institutes for Research (AIR), RAND, Policy Analysis for California Education (PACE), WestEd, and EdSource]; full report online as a 9.5 MB pdf at <http://www.classize.org/>. A press release is online at <http://www.classize.org/press/index-02.htm>.

CEBP. 2003. "Identifying and Implementing Educational Practices Supported by Rigorous Evidence: A User Friendly Guide," Coalition for Evidence-Based Policy; online as a pdf at <http://www.ed.gov/rschstat/research/pubs/rigorousevid/index.html>, or download directly at <http://www.ed.gov/rschstat/research/pubs/rigorousevid/rigorousevid.pdf> (140 kB).

CEBP. 2006. "Coalition for Evidence-Based Policy," online at
<http://coexgov.securesites.net/index.php?keyword=a432fbc34d71c7>.

CEBP. Undated. "Coalition For Evidence-Based Policy: A Project Sponsored by the Council for Excellence in Government," online at <http://www.excelgov.org/admin/FormManager/filesuploading/Coalition_purpose_agenda_3_06.pdf> (64 kB).

Cook, T.D. & M.R. Payne. 2002. "Objecting to the Objections to Using
Random Assignment in Educational Research," in Mosteller & Boruch (2002).

Donaldson, S.I. & C.A. Christie. 2005. "The 2004 Claremont Debate: Lipsey vs. Scriven: Determining Causality in Program Evaluation and Applied Research: Should Experimental Evidence be the Gold Standard," Journal of MultiDisciplinary Evaluation #3, October; online at <http://tinyurl.com/79n3b>.

Finn, J. & C. Achilles. 1999. "Tennessee's Class Size Study: Findings, Implications, Misconceptions," Educational Evaluation and Policy Analysis. 21(2): 97-109. See also other articles in the same special issue (Grissmer, 1999). Evidently an abstract will eventually be available at <http://www.aera.net/publications/?id=324>.

Grissmer, D., ed. 1999. "Class Size: Issues and New Findings," Educational Evaluation and Policy Analysis, Special Issue 21(2). Evidently an abstract will eventually be available at
<http://www.aera.net/publications/?id=324>.

Hake, R.R. 2003. "Should Education Research Be Like Medical Research?" online at
<http://listserv.nd.edu/cgi-bin/wa?A2=ind0312&L=pod&P=R1271&I=-3>. Post of 2 Dec 2003 21:18:19-0800 to EvalTalk, Math-Learn, PhysLrnR, and POD.

Hake, R.R. 2004. "Re: Measuring Content Knowledge," POD posts of 14 & 15 Mar 2004, online at <http://listserv.nd.edu/cgi-bin/wa?A2=ind0403&L=pod&P=R13279&I=-3> and <http://listserv.nd.edu/cgi-bin/wa?A2=ind0403&L=pod&P=R13963&I=-3>.

Hake, R.R. 2005a. "Do Psychologists Research the Effectiveness of Their Courses? Hake Responds to Sternberg," online at <http://listserv.nd.edu/cgi-bin/wa?A2=ind0507&L=pod&P=R11939&I=-3>. Post of 21 Jul 2005 22:55:31-0700 to AERA-C, AERA-D, AERA-J, AERA-L, ASSESS, EvalTalk, PhysLrnR, POD, & STLHE-L.

Hake, R.R. 2005b. "Should Randomized Control Trials Be the Gold Standard of Educational Research?" online at <http://listserv.nd.edu/cgi-bin/wa?A2=ind0504&L=pod&P=R11840&I=-3>. POD post of 15 Apr 2005 22:07:01-0700 to AERA-C, AERA-D, AERA-G, AERA-H, AERA-J, AERA-K, AERA-L, AP-Physics, ASSESS, Biopi-L, Chemed-L, EvalTalk, Math-Learn, Phys-L, PhysLrnR, Physhare, POD, STLHE-L, & TIPS.

Hake, R.R. 2005c. "Re: Should Randomized Control Trials Be the Gold Standard of Educational Research?" online at <http://lists.asu.edu/cgi-bin/wa?A2=ind0504&L=aera-l&T=0&O=D&P=2100>. Post of 17/18 Apr 2005 to AERA-C, AERA-D, AERA-G, AERA-H, AERA-J, AERA-K, AERA-L, AP-Physics, ASSESS, Biopi-L, Chemed-L, EvalTalk, Math-Learn, Phys-L, PhysLrnR, Physhare, POD, STLHE-L, & TIPS.

Hake, R.R. 2005d. "Re: Should Randomized Control Trials Be the Gold Standard?" online at <http://listserv.nd.edu/cgi-bin/wa?A2=ind0504&L=pod&P=R13838&I=-3>. Post of 19 Apr 2005 09:48:12-0700 to AERA-C, AERA-D, AERA-G, AERA-H, AERA-J, AERA-K, AERA-L, AP-Physics, ASSESS, Biopi-L, Chemed-L, EvalTalk, EdStat-L, Math-Learn, Phys-L, PhysLrnR, Physhare, Physoc, POD, STLHE-L, & TIPS.

Hake, R.R. 2006. "Re: pre/post testing to determine student progress," online at <http://listserv.nd.edu/cgi-bin/wa?A2=ind0609&L=pod&O=D&P=28499>. Post of 21 Sep 2006 17:19:14-0700 to POD and PhysLrnR.

Hestenes, D., M. Wells, & G. Swackhamer. 1992. "Force Concept Inventory," Phys. Teach. 30: 141-158; online (except for the test itself) at <http://modeling.asu.edu/R&E/Research.html>. The 1995 revision by Halloun, Hake, Mosca, & Hestenes is online (password protected) at the same URL, and is available in English, Spanish, German, Malaysian, Chinese, Finnish, French, Turkish, Swedish, and Russian.

IES. 2006. Institute of Education Sciences <http://www.ed.gov/about/offices/list/ies/index.html?src=oc>.

IES Style Guide. 2005. Online at <http://nces.ed.gov/statprog/styleguide/pdf/styleguide.pdf> (2.4 MB, 171 pages). Appendix A. "Abbreviations List: Organizations, Agencies, Surveys, And Terms" on page A2 lists "CEBP: Coalition for Evidence-Based Policy."

Martin, B.H. 2004. "The Coalition for Evidence-Based Policy: Its Impact on Policy and Practice," submitted to the William T. Grant Foundation, online at <http://coexgov.securesites.net/admin/FormManager/filesuploading/indep_evaln_for_WT_Grant.pdf> (668 kB).

Mosteller, F. 1995. "Tennessee Study of Class Size in the Early School Grades," The Future of Children 5(2), Summer/Fall, and references therein; online at
<http://www.futureofchildren.org/usr_doc/vol5no2ART8.pdf> (300 kB).

Mosteller, F., R.J. Light, & J.A. Sachs. 1996. "Sustained Inquiry in Education: Lessons from Skill Grouping and Class Size," Harvard Educational Review 66(4): 797-842. An abstract is online at <http://gseweb.harvard.edu/~hepg/wint96.html#sachs>.

Mosteller, F. & R. Boruch, eds. 2002. "Evidence Matters: Randomized
Trials in Education Research." Brookings Institution.

NAPA. 2006. National Academy of Public Administration, "Academy Fellows: Biographical Sketches 2005-2006," online at <http://www.napawash.org/about_academy/biobook2005.pdf> (2.5 MB).

Nuhfer, E.B. 2006a. "Re: pre/post testing to determine student progress - Phooey!" POD post of 21 Sep 2006 22:42:21-0600; online at <http://listserv.nd.edu/cgi-bin/wa?A2=ind0609&L=pod&O=D&P=28626>.

Nuhfer, E. 2006b. "A Fractal Thinker Looks at Measuring Change: Part 1: Pre-Post Course Tests and Multiple Working Hypotheses - Educating in Fractal Patterns XVI," National Teaching and Learning Forum 15(4), May. Online to subscribers at <http://www.ntlf.com/>. If your institution doesn't have a subscription, IMHO it should.

Shadish, W.R., T.D. Cook, & D.T. Campbell. 2002. "Experimental and
Quasi-Experimental Designs for Generalized Causal Inference." Houghton Mifflin. A goldmine of references to the social-science literature of experimentation.

Theall, M. 2006. "Re: pre/post testing to determine student progress - Phooey!" POD post of 22 Sep 2006 07:21:26-0400; online at <http://listserv.nd.edu/cgi-bin/wa?A2=ind0609&L=pod&O=D&P=28904>.

USDE. 2006. Press Release: "Secretary Spellings to Announce Action Plan on the Future of Higher Education - Secretary to focus on affordability, accessibility and consumer-friendly recommendations"; online at <http://www.ed.gov/news/pressreleases/2006/09/09222006c.html>.

Whitehurst, G. 2003. "The Institute of Education Sciences: New Wine, New Bottles, a Presentation by IES Director Grover (Russ) Whitehurst," online at <http://www.ed.gov/rschstat/research/pubs/ies.html>.