Evidence-Based Practices (Part 3): Evaluating Learner Outcomes and Fidelity

What is the process for evaluating the effectiveness of an EBP with your children or students?

Page 1: Evaluating the Effectiveness of an Evidence-Based Practice

Photo: a teacher conferring with students at a table group

Implementing an evidence-based practice or program (EBP) increases the likelihood that your children’s or students’ performance will improve. An EBP is one that is supported by rigorous research demonstrating its effectiveness. However, even the most effective EBPs do not work for all children* or students. Further, the greater the fidelity with which a practice or program is implemented—that is, as intended by the researchers or developers—the greater the likelihood that it will produce positive child or student outcomes. To judge a program’s effectiveness, one should:

  1. Systematically monitor learner outcomes: The purpose of monitoring progress is to determine whether individuals are improving. One of the best ways to measure improvement is progress monitoring, a type of formative assessment in which learning is evaluated on a regular basis (see the sketch following this list).

    formative assessment

    Frequent evaluation of an individual’s performance, which provides continual feedback to both learners and instructors and helps guide instructional decision-making.

  2. Systematically monitor fidelity of implementation: The purpose of monitoring fidelity is to ensure that the EBP is being implemented as intended, which will increase the likelihood of improved young child or student outcomes.
  3. Examine the relation between learner outcomes and fidelity of implementation: The purpose of comparing the two sets of data is to determine whether the EBP is effective for children or students with whom you are working.
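
As a rough illustration of step 1, the sketch below shows one way weekly progress-monitoring scores might be recorded and compared against an aim line drawn from a baseline score to a goal. The measure, schedule, scores, and goal are hypothetical examples, not values prescribed by the module.

```python
# Minimal progress-monitoring sketch (hypothetical data and goal).
# A score is recorded on a regular schedule (here, weekly) and compared
# against an "aim line" drawn from the baseline score to the goal.

baseline = 20          # hypothetical score before the EBP begins
goal = 60              # hypothetical goal score at week 10
weeks_to_goal = 10

# Hypothetical weekly progress-monitoring scores collected so far
scores = {1: 24, 2: 27, 3: 29, 4: 34, 5: 37}

def aim_line(week):
    """Expected score at a given week if the learner stays on track toward the goal."""
    return baseline + (goal - baseline) * week / weeks_to_goal

for week, score in scores.items():
    expected = aim_line(week)
    status = "on or above the aim line" if score >= expected else "below the aim line"
    print(f"Week {week}: score {score}, expected {expected:.1f} -> {status}")
```

A run of scores falling consistently below the aim line would prompt a closer look at the fidelity data described in steps 2 and 3.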

If fidelity is high, increases in performance can be attributed to the evidence-based practice or program. Likewise, if fidelity is high and there is no change in performance, it can be inferred that the practice or program was not effective for those children or students. However, if fidelity is low, the relation between the practice or program and child or student outcome data is unclear.
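
The paragraph above amounts to a simple decision rule. The sketch below makes that rule explicit in Python; the 80% fidelity threshold and the way "improved" is judged are illustrative assumptions for the example, not criteria set by the module.

```python
# Hypothetical decision logic relating fidelity data to outcome data.
# The 80% fidelity threshold and the definition of "improved" are
# illustrative assumptions; teams set their own criteria.

def interpret(mean_fidelity, outcomes_improved):
    """Return a tentative interpretation of the EBP's effectiveness."""
    high_fidelity = mean_fidelity >= 0.80   # assumed threshold
    if high_fidelity and outcomes_improved:
        return "Improvement can reasonably be attributed to the EBP."
    if high_fidelity and not outcomes_improved:
        return "The EBP was implemented as intended but does not appear effective for these learners."
    return "Fidelity was low, so the relation between the EBP and the outcomes is unclear."

# Example: fidelity checklists averaged 85% and scores trended upward
print(interpret(mean_fidelity=0.85, outcomes_improved=True))
```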

Listen as Bryan Cook discusses the importance of collecting both progress monitoring data and implementation fidelity data (time: 1:39).

Bryan Cook, PhD
Professor, Special Education
University of Hawai’i at Mānoa

/wp-content/uploads/module_media/ebp_03_media/audio/ebp_03_p01_bc_a.mp3

Transcript: Bryan Cook, PhD

When we say an evidence-based practice causes improved learner outcomes, we don’t mean it causes improved learner outcomes for each and every learner. We mean that it improves outcomes for most learners most of the time. Even though it’s not a 100% guaranteed bet, it’s still approximately 90%. I do think that it’s critically important that we realize that there are what are oftentimes referred to as non-responders or treatment-resisters. Nothing is going to work for everybody. The most evidence-based practice in the world, there’s going to be some students that it doesn’t work for. And these are very often our at-risk learners, our kids with disabilities, our culturally and linguistically diverse students. And so this really points to the importance of taking good progress monitoring data and realizing that, even when we implement an evidence-based practice with fidelity, there’s probably going to be some learners that it doesn’t work for. And that’s okay. It’s still a very good place to start. But then we have to be ready to progress monitor, to look at our implementation fidelity data. And if we’re implementing the practice with fidelity and it’s not producing the outcomes that we desire, we have to think about either moving onto another evidence-based practice, or promising practice, or consider ways that we can make the intervention more intensive or adapted in other ways to make it more effective if it looks like the practice is having some positive effects but just not to the degree that we’d like it to.

Next, Bryan Cook and Sam Odom explain why an EBP might not be effective for all students.

Bryan Cook, PhD
Professor, Special Education
University of Hawai’i at Mānoa

(time: 1:07)

/wp-content/uploads/module_media/ebp_03_media/audio/ebp_03_p01_bc_b.mp3

Sam Odom, PhD
Professor, Special Education
Director, Frank Porter Graham Child Development Institute
University of North Carolina at Chapel Hill

(time: 1:12)

/wp-content/uploads/module_media/ebp_03_media/audio/ebp_03_p01_so.mp3

Transcript: Bryan Cook, PhD

Determining whether an evidence-based practice, or really any intervention for that matter, is working for a particular learner or group of learners is really like detective work. You’ve got to crack the case of whether the practice works, and sometimes we’re going to be right if we just use our intuition and our general sense of things. But we’re not always going to be right, and so we need to look for solid clues. And our clues are really the data, specifically reliable progress monitoring data, and implementation fidelity data. We’re going to make the most-informed decision when we use both of those types of information, both of those sets of clues, not just one. And we’re most confident that an evidence-based practice is working when we implement it with fidelity, and we have evidence of improved student performance. If we just have improved student performance but we’re not implementing the intervention with fidelity, we’re not really sure if the EBP is what caused those improvements in student outcomes.

Transcript: Sam Odom, PhD

Evidence-based practices are never effective for all students. The evidence might have been based on students with specific characteristics that are different from the student that the teacher’s working with. The context may be different. The research might have been collected in inclusive classrooms, and the teacher may be in a non-inclusive special ed. classroom, or vice versa, and there may be features of that environment that affect whether the practice works or doesn’t work. I think another reason that it might not be effective is that issue around fidelity. It might not be implemented at a high enough level of fidelity to result in positive outcomes for students. Another feature, I think, is that the child might not like the things that happen in the practice, so it could be the practice might be quite solidly grounded in research, but it’s just sort of boring for the child or doesn’t match their interests. And I think that’s where the caregiver/practitioner knowledge and expertise, and also parent information about the child, helps in selection.

The following pages include more information about evaluating the effectiveness of an EBP. The first section discusses monitoring child or student progress. The second describes monitoring fidelity of implementation. In each section, you will learn how to:

  • Identify measures
  • Monitor performance
  • Evaluate performance

The final section discusses how to evaluate the relation between young child or student outcomes and fidelity of implementation.

* In this module, “children” refers to infants, toddlers, and preschool children.
