Mental Health

Past meta-analyses of mental health interventions failed to use stringent inclusion criteria and diverse moderators; therefore, more rigorous methods are needed to provide evidence-based, up-to-date results on this topic. This study presents an updated meta-analysis of interventions targeting anxiety or depression, using more stringent inclusion criteria (e.g., baseline equivalence, no significant differential attrition) and additional moderators (e.g., sample size and program duration) than previous reviews. The meta-analysis includes 29 studies of 32 programs and 22,420 students (52% female, 79% White). Among these studies, 22 include anxiety outcomes and 24 include depression outcomes. Overall, school-based mental health interventions in grades K-12 are effective at reducing depression and anxiety (ES = 0.24, p = .002). Moderator analysis shows improved outcomes for studies with anxiety outcomes, cognitive behavioral therapy, interventions delivered by clinicians, and secondary school populations. Selection modeling reveals significant publication and outcome selection bias. This meta-analysis suggests that school-based mental health programs should adopt cognitive behavioral therapy and deliver it through clinicians at the secondary school level where possible.
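
To make the pooled estimate above concrete, here is a minimal sketch of random-effects pooling using the DerSimonian-Laird estimator, a standard choice in meta-analyses of this kind; the effect sizes and variances below are invented for illustration and are not the study's data.

    # Random-effects pooling (DerSimonian-Laird), hypothetical data
    import numpy as np

    es = np.array([0.15, 0.30, 0.42, 0.08, 0.25])        # invented study effect sizes
    var = np.array([0.010, 0.020, 0.030, 0.015, 0.025])  # invented sampling variances

    w = 1.0 / var                                  # fixed-effect (inverse-variance) weights
    mu_fe = np.sum(w * es) / np.sum(w)             # fixed-effect pooled mean
    q = np.sum(w * (es - mu_fe) ** 2)              # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(es) - 1)) / c)       # between-study variance estimate

    w_re = 1.0 / (var + tau2)                      # random-effects weights
    mu_re = np.sum(w_re * es) / np.sum(w_re)       # pooled effect size
    se_re = np.sqrt(1.0 / np.sum(w_re))            # its standard error
    print(f"pooled ES = {mu_re:.2f} (SE = {se_re:.2f})")

A moderator analysis like the one reported would then compare such pooled estimates across subgroups (e.g., anxiety vs. depression outcomes) or regress effect sizes on study characteristics.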

Technical Report

Zhang, Q., Wang, J., & Neitzel, A. (2022). School-based mental health interventions targeting depression or anxiety: A meta-analysis of rigorous randomized controlled trials for school-aged children and adolescents. Baltimore, MD: Center for Research and Reform in Education, Johns Hopkins University.

Published Report

Zhang, Q., Wang, J., & Neitzel, A. (2022). School-based mental health interventions targeting depression or anxiety: A meta-analysis of rigorous randomized controlled trials for school-aged children and adolescents. Journal of Youth and Adolescence.

Preventing Special Education Assignment

The number of students assigned to special education has increased in recent decades, despite efforts toward greater inclusion. For students with mild learning or behavioral difficulties, special education assignment might be prevented if appropriate support is provided in general education. In this study, research on programs that could reduce the number of students assigned to special education is reviewed systematically. The review focuses on students in elementary schools.

In total, 12 experimental or quasi-experimental studies of nine programs were reviewed. Programs were categorized by what they were designed to improve (academic achievement, behavior, or both), and the multi-tiered Response to Intervention (RTI) framework was used to describe the intensity of the programs. Several programs were found to reduce the number of students assigned to special education, while others did not or yielded mixed results. Three common elements of programs deemed effective were identified: an emphasis on tutoring, professional development, and parental involvement. Implications for research and practice are discussed.

Published Report

Hingstman, M., Neitzel, A. J., & Slavin, R. E. (2022). Preventing special education assignment for students with learning or behavioral difficulties: A review of programs. Journal of Education for Students Placed at Risk (JESPAR). DOI: 10.1080/10824669.2022.2098131

COVID Learning Loss

COVID-19-related school closures disrupted students’ academic learning on a global scale. Although a growing body of evidence quantifies the extent of COVID learning loss, no research synthesis has yet compiled findings across these studies. To fill this gap in the literature, our interim findings illustrate that the learning loss is real and significant compared to previous school years, and they provide a basis for refining research on COVID learning loss.

Pre-Print

Storey, N., & Zhang, Q. (2021, September 10). A meta-analysis of COVID learning loss. https://edarxiv.org/qekw2/

Special and Remedial Education

This article proposes a strategy to accelerate the learning of struggling learners that uses proven reading and mathematics programs to ensure student success. Based on Response to Intervention (RTI), the proposed policy, Response to Proven Intervention (RTPI), uses proven whole-school or whole-class programs as Tier 1, proven one-to-small group tutoring programs as Tier 2, and proven one-to-one tutoring as Tier 3. The criteria for “proven” are the “strong” and “moderate” evidence levels specified in the Every Student Succeeds Act (ESSA). This article lists proven reading and mathematics programs for each tier, and explains how evidence of effectiveness within an RTI framework could become central to improving outcomes for struggling learners.

Technical Report

Evidence-Based Reform

Evidence-based reform in education refers to policies that enable or encourage the use of programs and practices proven to be effective in rigorous research. This article discusses the increasing role of evidence in educational policy, rapid growth in availability of proven approaches, and development of reviews of research to summarize the evidence. A highlight of evidence-based reform was the 2015 passage of the Every Student Succeeds Act (ESSA), which defines strong, moderate, and promising levels of evidence for educational programs and ties certain federal funding to use of proven approaches. To illustrate how coordinated use of proven approaches could substantially improve educational outcomes, the article proposes use of proven programs to populate each of Tiers 1, 2, and 3 in response to intervention (RTI) policies. This article is adapted from an address for the E.L. Thorndike Award for Distinguished Psychological Contributions to Education, August 7, 2018.

Technical Report

Published Report

Slavin, R. E. (2020). How evidence-based reform will transform research and practice in education. Educational Psychologist, 55 (1), 21-31. DOI: 10.1080/00461520.2019.1611432.

Education policies should support the use of programs and practices with strong evidence of effectiveness. The Every Student Succeeds Act (ESSA) contains evidence standards and incentives to use programs that meet them. This provides a great opportunity for evidence to play a stronger role in decisions about education programs and practices. However, for evidence-based reform to prevail, three conditions must exist: many practical programs with solid evidence; trusted and user-friendly reviews of research; and more education policies that provide incentives for use of proven programs. The article discusses recent progress in each of these areas and notes difficulties in each. It makes a case that if these difficulties can be effectively addressed, evidence-based reform may begin to make a meaningful difference in education outcomes at the national level.

Technical Report

Published Report

Slavin, R. (2017). Evidence-based reform in education. Journal of Education for Students Placed at Risk, 22 (3), 178-184.  

Summer School Meta-Analysis

There has long been interest in using summertime to provide supplemental education to students who need it. But are summer programs effective? This review includes 19 randomized studies, selected on rigorous quality criteria, of the effects of summer intervention programs on reading and mathematics. In reading, there were two types of summer programs: summer school and summer book reading approaches. In mathematics, there was only summer school. The mean effect of summer school programs on reading achievement was positive (ES = +0.23), but there were no positive effects, on average, of summer book reading programs (ES = 0.00). In mathematics, positive mean effects were also found for summer programs (ES = +0.17). However, the positive-appearing means for summer schools were not statistically significant in a metaregression and depended on just two reading studies and one mathematics study with very large impacts. These successful interventions focused on well-defined objectives with intensive teaching.
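
As a toy illustration of the metaregression mentioned above, the sketch below regresses study effect sizes on a program-type indicator with inverse-variance weights; the studies, effect sizes, and variances are invented, and the variable names are ours.

    # Inverse-variance weighted meta-regression, hypothetical data
    import numpy as np
    import statsmodels.api as sm

    es = np.array([0.23, 0.40, 0.05, 0.00, -0.02, 0.17])   # invented effect sizes
    var = np.array([0.02, 0.05, 0.01, 0.01, 0.02, 0.03])   # invented sampling variances
    summer_school = np.array([1, 1, 1, 0, 0, 1])           # 1 = summer school, 0 = book reading

    X = sm.add_constant(summer_school)
    fit = sm.WLS(es, X, weights=1.0 / var).fit()
    print(fit.params)     # intercept: book-reading mean; slope: summer-school difference
    print(fit.pvalues)    # a large p-value means the mean is not distinguishable from zero

In a metaregression like this, a positive-looking mean can fail to reach significance when it rests on a few high-variance studies with outsized impacts, which is the pattern described above.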

Technical Report

Quantitative Synthesis of Success for All

Success for All (SFA) is a comprehensive whole-school approach designed to help high-poverty elementary schools increase the reading success of their students. It is designed to ensure success in grades K-2 and then build on this success in later grades. SFA combines instruction emphasizing phonics and cooperative learning, one-to-small group tutoring for students who need it in the primary grades, frequent assessment and regrouping, parent involvement, distributed leadership, and extensive training and coaching. Over a 33-year period, SFA has been extensively evaluated, mostly by researchers unconnected to the program. This quantitative synthesis reviews the findings of these evaluations. Seventeen U.S. studies meeting rigorous inclusion standards had a mean effect size of +0.24 (p < .05) on independent measures. Effects were largest for low achievers (ES = +0.54, p < .01). Although outcomes vary across studies, mean impacts support the effectiveness of Success for All for the reading success of disadvantaged students.

Technical Report

Published Report

Cheung, A., Xie, C., Zhang, T., Neitzel, A., & Slavin, R. E. (2021). Success for All: A quantitative synthesis of evaluations. Journal of Research on Educational Effectiveness, 14 (1), 90-115.

Reading / Elementary

This report systematically reviews research on the achievement outcomes of four types of approaches to improving the reading success of children in the elementary grades:

  • Reading Curricula
  • Instructional Technology
  • Instructional Process Programs
  • Combinations of Curricula and Instructional Process

Technical Report

Neitzel, A., Lake, C., Byun, S., Shi, C., & Slavin, R. (2019). Effective Tier 1 reading for elementary schools: A systematic review. https://osf.io/xsw2p/

Archive of Earlier, Published Report

Slavin, R.E., Lake, C., Chambers, B., Cheung, A., & Davis, S. (2009). Effective reading programs for the elementary grades: A best-evidence synthesis. Review of Educational Research, 79 (4), 1391-1466.

Research Methods / Methodological Features and Effect Sizes

As evidence-based reform becomes increasingly important in educational policy, it is essential to understand how research design might contribute to reported effect sizes in experiments evaluating educational programs. The purpose of this article is to examine how methodological features such as publication type, sample size, and research design affect effect sizes in experiments. A total of 645 studies from 12 recent reviews of evaluations of reading, mathematics, and science programs were examined. The findings suggest that effect sizes are roughly twice as large for published articles, small-scale trials, and experimenter-made measures as for unpublished documents, large-scale studies, and independent measures, respectively. In addition, effect sizes are significantly higher in quasi-experiments than in randomized experiments. Explanations for the effects of methodological features on effect sizes are discussed, as are implications for evidence-based policy.
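
As a minimal sketch of the kind of moderator contrast reported here, the snippet below compares mean effect sizes for published versus unpublished studies; the data are invented, and a full analysis would also weight by inverse variance and model the methodological features jointly.

    # Mean effect size by one methodological feature, hypothetical data
    import pandas as pd

    studies = pd.DataFrame({
        "effect_size": [0.45, 0.38, 0.22, 0.10, 0.12, 0.50],    # invented
        "published":   [True, True, False, False, False, True],
    })
    print(studies.groupby("published")["effect_size"].mean())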

Technical Report

Published Report

Cheung, A., & Slavin, R. (2016). How methodological features affect effect sizes in education. Educational Researcher, 45 (5), 283-292.

Rigorous evidence of program effectiveness has become increasingly important with the 2015 passage of the Every Student Succeeds Act (ESSA). One question that has not yet been fully explored is whether program evaluations carried out or commissioned by developers produce larger effect sizes than evaluations conducted by independent third parties. Using study data from the What Works Clearinghouse, we find evidence of a “developer effect,” where program evaluations carried out or commissioned by developers produced average effect sizes that were substantially larger than those identified in evaluations conducted by independent parties. We explore potential reasons for the existence of a “developer effect” and provide evidence that interventions evaluated by developers were not simply more effective than those evaluated by independent parties. We conclude by discussing plausible explanations for this phenomenon as well as providing suggestions for researchers to mitigate potential bias in evaluations moving forward.

Technical Report

Published Report

Wolf, R., Morrison, J.M., Inns, A., Slavin, R. E., & Risman, K. (2020). Average effect sizes in developer-commissioned and independent evaluations. Journal of Research on Educational Effectiveness. DOI: 10.1080/19345747.2020.1726537

The Every Student Succeeds Act has made the use of evidence even more relevant to policymakers and practitioners. The What Works Clearinghouse (WWC) serves as a reviewer of evidence as policymakers and practitioners attempt to navigate the broad expanse of educational research. For this reason, it is of vital interest to understand whether WWC policies lead to fair estimates of effect sizes. Previous research (e.g., Cheung & Slavin, 2016) has indicated that small studies are associated with greatly inflated effect sizes. This study examined whether this could be explained by methodological and program factors of studies in the WWC database. Using the outcomes from all accepted studies in the reading and mathematics areas, we found a non-linear trend of effect size decreasing as sample size increases. Even controlling for research design, level of assignment, type of intervention, and type of outcome, there was still an effect of sample size. While publication bias, intervention strength, methodological rigor, and superrealization may each contribute partially to this sample size effect, neither individually nor together do they explain the full impact of sample size on effect size. We suggest the WWC institute a policy of weighting studies by inverse variance before averaging effect sizes and set minimum sample size criteria to reduce the inflationary effect of small-study bias.
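
The suggested inverse-variance weighting can be shown in a few lines; the numbers are invented. Because a study's sampling variance shrinks as its sample grows, weighting each effect size by 1/variance downweights small studies and damps the inflation described above.

    # Unweighted vs. inverse-variance weighted average, hypothetical data
    import numpy as np

    es = np.array([0.80, 0.60, 0.12, 0.10])      # invented: small studies report big effects
    var = np.array([0.20, 0.15, 0.005, 0.004])   # variance falls as sample size rises

    print("unweighted mean:      ", es.mean())
    w = 1.0 / var
    print("inverse-variance mean:", np.sum(w * es) / np.sum(w))

Here the unweighted mean is pulled up by the two small studies, while the weighted mean sits near the estimates from the large studies.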

Conference Paper

Neitzel, A., Pellegrini, M., Lake, C., & Slavin, R. (2018, August 1). Do small studies add up in the What Works Clearinghouse? The 128th meeting of the American Psychological Association, San Francisco, CA. https://osf.io/preprints/edarxiv/qy7ez

Large-scale randomized studies provide the best means of evaluating practical, replicable approaches to improving educational outcomes. This article discusses the advantages, problems, and pitfalls of these evaluations, focusing on alternative methods of randomization, recruitment, ensuring high-quality implementation, dealing with attrition, and data analysis. It also discusses means of increasing the chances that large randomized experiments will find positive effects, and interpreting effect sizes.

Technical Report

Published Report

Slavin, R., & Cheung, A. (2017). Lessons learned from large-scale randomized experiments. Journal of Education for Students Placed at Risk. DOI: 10.1080/10824669.2017.1360774

Program effectiveness reviews in education seek to provide educators with scientifically valid and useful summaries of evidence on achievement effects of various interventions. Different reviewers have different policies on measures of content taught in the experimental group but not the control group, called here treatment-inherent measures. These are contrasted with treatment-independent measures of content emphasized equally in experimental and control groups. The What Works Clearinghouse (WWC) averages effect sizes from such measures with those from treatment-independent measures, while the Best Evidence Encyclopedia (BEE) excludes treatment-inherent measures. This article contrasts effect sizes from treatment-inherent and treatment-independent measures in WWC reading and math reviews to explore the degree to which these measures produce different estimates. In all comparisons, treatment-inherent measures produce much larger positive effect sizes than treatment-independent measures. Based on these findings, it is suggested that program effectiveness reviews exclude treatment-inherent measures, or at least report them separately.

Technical Report

Published Report

Slavin, R.E., & Madden, N.A. (2011). Measures inherent to treatments in program effectiveness reviews. Journal of Research on Educational Effectiveness, 4 (4), 370-380.

Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of the Best Evidence Encyclopedia. As predicted, there was a significant negative correlation between sample size and effect size. The differences in effect sizes between small and large experiments were much greater than those between randomized and matched experiments. Explanations for the effects of sample size on effect size are discussed.
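
A minimal sketch of the correlation check described above, with invented data: each study contributes its sample size and reported effect size, and the association is tested directly.

    # Sample size vs. effect size correlation, hypothetical data
    import numpy as np
    from scipy import stats

    n = np.array([40, 60, 120, 400, 1500, 5000])          # invented sample sizes
    es = np.array([0.55, 0.48, 0.30, 0.18, 0.10, 0.08])   # invented effect sizes

    r, p = stats.pearsonr(np.log(n), es)   # log scale spreads out the large samples
    print(f"r = {r:.2f}, p = {p:.3f}")     # a negative r reproduces the small-study pattern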

Technical Report

Published Report

Slavin, R.E., & Smith, D. (2009). The relationship between sample sizes and effect sizes in systematic reviews in education. Educational Evaluation and Policy Analysis, 31 (4), 500-506.