What is the process for evaluating the effectiveness of an EBP with your children or students?
Page 1: Evaluating the Effectiveness of an Evidence-Based Practice
Implementing an evidence-based practice or program (EBP) increases the likelihood that your children's or students' performance will improve. An EBP is one that is supported by rigorous research demonstrating its effectiveness. However, even the most effective EBPs do not work for all children* or students. Further, the more closely a practice or program is implemented with fidelity (that is, as intended by the researchers or developers), the greater the likelihood that it will produce positive child or student outcomes. To judge a practice or program's effectiveness, one should:
- Systematically monitor learner outcomes: The purpose of monitoring progress is to determine whether individuals are improving. One of the best ways to measure improvement is progress monitoring, a type of formative assessment in which learning is evaluated on a regular basis.
formative assessment: Frequent evaluation of an individual's performance, which provides continual feedback to both learners and instructors and helps guide instructional decision-making.
- Systematically monitor fidelity of implementation: The purpose of monitoring fidelity is to ensure that the EBP is being implemented as intended, which will increase the likelihood of improved young child or student outcomes.
- Examine the relation between learner outcomes and fidelity of implementation: The purpose of comparing the two sets of data is to determine whether the EBP is effective for the children or students with whom you are working.
If fidelity is high and performance increases, the improvement can be attributed to the evidence-based practice or program. Conversely, if fidelity is high and there is no change in performance, it can be inferred that the practice or program was not effective for those children or students. However, if fidelity is low, the relation between the practice or program and child or student outcome data is unclear.
Listen as Bryan Cook discusses the importance of collecting both progress monitoring data and implementation fidelity data (time: 1:39).
Transcript: Bryan Cook, PhD
When we say an evidence-based practice causes improved learner outcomes, we don’t mean it causes improved learner outcomes for each and every learner. We mean that it improves outcomes for most learners most of the time. Even though it’s not a 100% guaranteed bet, it’s still approximately 90%. I do think that it’s critically important that we realize that there are what are oftentimes referred to as non-responders or treatment-resisters. Nothing is going to work for everybody. The most evidence-based practice in the world, there’s going to be some students that it doesn’t work for. And these are very often our at-risk learners, our kids with disabilities, our culturally and linguistically diverse students. And so this really points to the importance of taking good progress monitoring data and realizing that, even when we implement an evidence-based practice with fidelity, there’s probably going to be some learners that it doesn’t work for. And that’s okay. It’s still a very good place to start. But then we have to be ready to progress monitor, to look at our implementation fidelity data. And if we’re implementing the practice with fidelity and it’s not producing the outcomes that we desire, we have to think about either moving on to another evidence-based practice, or promising practice, or consider ways that we can make the intervention more intensive or adapted in other ways to make it more effective if it looks like the practice is having some positive effects but just not to the degree that we’d like it to.
Next, Bryan Cook and Sam Odom explain why an EBP might not be effective for all students.
Bryan Cook, PhD
Professor, Special Education
University of Hawai’i at Mānoa
(time: 1:07)
Sam Odom, PhD
Professor, Special Education
Director, Frank Porter Graham Child Development Institute
University of North Carolina at Chapel Hill
(time: 1:12)
Transcript: Bryan Cook, PhD
Determining whether an evidence-based practice, or really any intervention for that matter, is working for a particular learner or group of learners is really like detective work. You’ve got to crack the case of whether the practice works, and sometimes we’re going to be right if we just use our intuition and our general sense of things. But we’re not always going to be right, and so we need to look for solid clues. And our clues are really the data, specifically reliable progress monitoring data, and implementation fidelity data. We’re going to make the most informed decision when we use both of those types of information, both of those sets of clues, not just one. And we’re most confident that an evidence-based practice is working when we implement it with fidelity, and we have evidence of improved student performance. If we just have improved student performance but we’re not implementing the intervention with fidelity, we’re not really sure if the EBP is what caused those improvements in student outcomes.
Transcript: Sam Odom, PhD
Evidence-based practices are never effective for all students. The evidence might have been based on students with specific characteristics that are different from the student that the teacher’s working with. The context may be different. The research might have been collected in inclusive classrooms, and the teacher may be in a non-inclusive special ed. classroom, or vice versa, and there may be features of that environment that affect whether the practice works or doesn’t work. I think another reason that it might not be effective is that issue around fidelity. It might not be implemented at a high enough level of fidelity to result in positive outcomes for students. Another feature, I think, is that the child might not like the things that happen in the practice, so it could be the practice might be quite solidly grounded in research, but it’s just sort of boring for the child or doesn’t match their interests. And I think that’s where the caregiver/practitioner knowledge and expertise, and also parent information about the child, helps in selection.
The following pages include more information about evaluating the effectiveness of an EBP. The first section discusses monitoring child or student progress. The second describes monitoring fidelity of implementation. In each section, you will learn how to:
- Identify measures
- Monitor performance
- Evaluate performance
The final section discusses how to evaluate the relation between young child or student outcomes and fidelity of implementation.
* In this module, “children” refers to infants, toddlers, and preschool children.