Full Report (1 MB) | Educator's Summary (263 KB) | Educator's Guide (322 KB)
An exhaustive search considered hundreds of published and unpublished articles, and included those that met the following criteria:
- Schools or classrooms using each program had to be compared to randomly assigned or well-matched control groups
- Study duration had to be at least 12 weeks
- Outcome measures had to be assessments of the mathematics being taught in all classes; almost all were standardized tests or state assessments.
The review placed particular emphasis on studies in which schools, teachers, or students were assigned at random to experimental or control groups.
Programs were rated according to the overall strength of the evidence supporting their effects on math achievement. “Effect size” (ES) is the proportion of a standard deviation by which a treatment group exceeds a control group. Large studies are those involving a total of at least 10 classes or 250 students. The categories are as follows:
Strong Evidence of Effectiveness: At least two large studies (at least one of them a randomized or randomized quasi-experimental study), or multiple smaller studies, with a weighted mean effect size of at least +0.20.
Moderate Evidence of Effectiveness: Two large matched studies or multiple smaller studies with a collective sample size of 500 students, with a weighted mean effect size of at least +0.20.
Limited Evidence of Effectiveness: At least one qualifying study with a significant positive effect and/or weighted mean effect size of +0.10 or more.
Insufficient Evidence: Studies show no significant differences.
No Qualifying Studies: No studies met inclusion standards.
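The effect size used throughout these categories can be sketched in a few lines of Python. This is a minimal illustration, not the review's exact computation: the review does not specify which standard deviation serves as the denominator, so a pooled standard deviation is assumed here, and the score lists are hypothetical.

```python
# Sketch of the effect size (ES) defined above: the amount by which a
# treatment group exceeds a control group, expressed as a proportion of
# a standard deviation. The pooled SD denominator is an assumption; the
# review does not state which SD it uses.
from statistics import mean, stdev

def effect_size(treatment, control):
    """Standardized mean difference between two groups (pooled SD)."""
    nt, nc = len(treatment), len(control)
    st, sc = stdev(treatment), stdev(control)
    pooled_sd = (((nt - 1) * st**2 + (nc - 1) * sc**2) / (nt + nc - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical example: treatment scores average one point higher than
# control scores with the same spread, giving a large positive ES.
es = effect_size([1, 2, 3, 4], [0, 1, 2, 3])
```

Under this convention, an ES of +0.20 means the average treatment student scored 0.20 standard deviations above the average control student, which is the threshold the Strong and Moderate categories apply.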