
Assessing the Validity and Applicability of Research Findings

The U.S. Department of Education is asking administrators and technology and curriculum directors to justify their program and policy decisions based on scientific research evidence.

Clearly, if a school or district expects to implement a program under federal funding, it has to show that the proposed program is based on "rigorous, systematic, and objective procedures...applied to obtain valid knowledge" about the effectiveness of methods and materials.

This site is organized to give you guidance and access to a variety of resources to help you select appropriate research to support your planning.


Conducting scientific research in education presents significant challenges for several reasons. First, multiple disciplines and diverse variables bear on the study of education. Second, researchers must ensure the ethical treatment of human subjects when conducting experimental studies. Further complicating the work is the need to balance the rigor of science with the art of instructional practice. Determining which interventions increase student achievement requires a long-term relationship between research and practice.

A wide variety of legitimate scientific designs are available for education research, depending on the purpose of the study and how the results will be applied. The gold standard of scientific research, and the method preferred by the U.S. Department of Education, is an experimental or quasi-experimental research design, preferably with random assignment.

How do education decision makers assess the validity and applicability of research results provided by vendors and published in professional journals? How do they evaluate the effectiveness and appropriateness of the software or instructional approaches they plan to use?

When the goal is to evaluate the achievement outcomes of education programs, judging research quality is relatively straightforward. Valid research for this purpose uses meaningful measures of achievement to compare several schools that used a given program with several carefully matched control schools that did not.
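
To make the comparison concrete, here is a minimal Python sketch of the analysis such a study reports: the difference in mean achievement between program schools and control schools, with a simple significance test. All school scores below are invented for illustration.

    from statistics import mean, stdev
    import math

    # Hypothetical mean achievement scores for five program schools
    # and five matched control schools (invented data).
    program_schools = [652.0, 641.5, 660.2, 648.9, 655.3]
    control_schools = [644.1, 638.7, 649.0, 642.5, 646.8]

    # Difference in mean achievement between the two groups.
    diff = mean(program_schools) - mean(control_schools)

    # Two-sample t statistic (equal-variance form) to gauge whether
    # the difference is larger than chance variation would produce.
    n1, n2 = len(program_schools), len(control_schools)
    s1, s2 = stdev(program_schools), stdev(control_schools)
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    t = diff / (pooled * math.sqrt(1 / n1 + 1 / n2))

    print(f"Mean difference: {diff:.1f} points, t = {t:.2f}")

A real evaluation would involve many more schools and more sophisticated statistics, but the underlying question is the same: is the achievement difference larger than chance alone would produce?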

The most convincing form of control-group comparison is a randomized experiment, in which students, teachers, or schools are assigned to groups by chance. Randomized experiments have been very rare in education, but when carried out successfully they can be very influential: random assignment makes it very likely that the experimental and control groups were equivalent at the outset, so differences at the end can confidently be attributed to the treatment program.
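
The logic of random assignment is simple enough to sketch in a few lines of Python. The school names below are invented; the point is that chance alone, not anyone's choice, determines which group each school joins.

    import random

    schools = ["School A", "School B", "School C", "School D",
               "School E", "School F", "School G", "School H"]

    # Shuffle so that group membership depends only on chance, then
    # split the list in half: the first half receives the program,
    # the second half serves as the control group.
    random.seed(42)  # fixed seed so the example is reproducible
    random.shuffle(schools)
    half = len(schools) // 2
    treatment, control = schools[:half], schools[half:]

    print("Treatment:", treatment)
    print("Control:  ", control)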

Matched studies are far more common than randomized ones. In a matched program evaluation, researchers compare students in a given program with students in a control group that is similar in prior achievement, poverty level, demographics, and so on. This type of study can be valid if the experimental and control groups are very similar; when the differences are small, statistical methods can be used to "control for" them. Validity problems arise when the groups differ on unmeasured characteristics, because there is no way to determine how those differences affected the outcome. For example, when the experimental group consists of schools that chose to opt in to a program and the control group of schools that opted out, selection bias is usually introduced.
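
The matching step itself can also be sketched briefly. In this minimal Python example, each program school is paired with the remaining candidate school closest to it in prior achievement and poverty level. The data are invented, and real matching typically weighs many more characteristics than these two.

    # Each school: (name, prior mean score, percent low-income).
    program = [("P1", 640.0, 55.0), ("P2", 655.0, 30.0)]
    candidates = [("C1", 662.0, 22.0), ("C2", 641.5, 52.0),
                  ("C3", 653.0, 33.0), ("C4", 628.0, 70.0)]

    def distance(a, b):
        # Simple Euclidean distance over the two matching variables.
        return ((a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2) ** 0.5

    matches = []
    pool = list(candidates)
    for school in program:
        best = min(pool, key=lambda c: distance(school, c))
        pool.remove(best)  # use each control school at most once
        matches.append((school[0], best[0]))

    print(matches)  # [('P1', 'C2'), ('P2', 'C3')]

Note what matching cannot do: if the program schools differ from the candidates in some characteristic that was never measured, no pairing rule can remove that difference.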

Random assignment is the gold standard because it virtually eliminates selection bias. Although there has been a movement toward greater use of randomized experiments in education, matched studies remain far more numerous because random assignment is expensive and difficult to carry out.

When searching the research base for programs, decision makers should therefore evaluate matched studies carefully. The goal is to find studies that minimize bias: those that use closely matched experimental and control groups, include an adequate number of schools, and avoid comparing volunteers with non-volunteers.

Resources about Research

Identifying and Implementing Educational Practices Supported by Rigorous Evidence: A User Friendly Guide. U.S. Department of Education Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, December 2003. Report prepared by the Coalition for Evidence-Based Policy and sponsored by The Council for Excellence in Government.

A Reader's Guide to Scientifically-Based Research by Robert E. Slavin. Educational Leadership, February 2003, pages 12-16.

A Policymaker's Primer on Education Research: How to Understand, Evaluate, and Use It by Patricia A. Lauer. A joint effort of Mid-continent Research for Education and Learning and the Education Commission of the States. http://www.ecs.org/html/educationIssues/Research/primer

Don't Get Buried Under a Mountain of Research: Use these strategies to sort through the pile and find the information you need by Danielle Carnahan and Michele Fitzpatrick.