Evidence-Based Practices (Part 3): Evaluating Learner Outcomes and Fidelity
Wrap Up
Educators are expected to implement evidence-based practices (EBPs) and programs to improve the outcomes of the children or students with whom they work. Unfortunately, not all EBPs are effective for all children or students. Typically, there will be a small percentage who do not respond to a given EBP. For this reason, when implementing a new EBP, you need to determine whether the practice or program is effective for your children or students. To do this, you must:
- Systematically monitor child or student outcomes
- Systematically monitor fidelity of implementation
- Examine the relation between child or student outcomes and fidelity of implementation
It is important to review both progress monitoring data and fidelity data and to compare the results of the two. This will allow you to determine whether the EBP is effective for your children or students and can help you make informed instructional decisions.
| | Improved Child/Student Outcomes (i.e., above or on the goal line) | Inadequate Child/Student Outcomes (i.e., below the goal line) |
| --- | --- | --- |
| High Fidelity | Continue using the EBP and continue monitoring performance | Change instruction because the EBP is not effective for your children or students |
| Low Fidelity | Decision is unclear: 1) Continue using the EBP and monitoring performance OR 2) Improve implementation fidelity and collect progress monitoring data to see whether outcomes further improve. | Improve implementation fidelity and collect more progress monitoring data to see whether outcomes improve. |
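The decision matrix above can be sketched as a simple lookup. This is only an illustration of the logic, not part of the module; the function name, category labels, and message wording are assumptions chosen for this example.

```python
# Illustrative sketch of the fidelity-by-outcomes decision matrix.
# The function name and string labels are assumptions for this example.

def instructional_decision(fidelity: str, outcomes: str) -> str:
    """Map implementation fidelity ('high' or 'low') and learner outcomes
    ('improved' or 'inadequate') to the matrix's recommended action."""
    decisions = {
        ("high", "improved"):
            "Continue using the EBP and continue monitoring performance.",
        ("high", "inadequate"):
            "Change instruction; the EBP is not effective for your "
            "children or students.",
        ("low", "improved"):
            "Decision is unclear: continue using the EBP and monitoring "
            "performance, or improve implementation fidelity and collect "
            "progress monitoring data to see whether outcomes further improve.",
        ("low", "inadequate"):
            "Improve implementation fidelity and collect more progress "
            "monitoring data to see whether outcomes improve.",
    }
    return decisions[(fidelity, outcomes)]
```

Note that only one cell (low fidelity, improved outcomes) yields an ambiguous recommendation; the other three map to a single clear action.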
Listen as Lisa Sanetti summarizes how to make informed instructional decisions by assessing learner outcome data along with an educator’s fidelity data (time: 3:26).
Lisa Sanetti, PhD
Co-PI, Project PRIME
Associate Professor, Neag School of Education
University of Connecticut
Transcript: Lisa Sanetti, PhD
When you’re evaluating the effectiveness of an intervention that’s being implemented, it’s really important to look at both progress monitoring data and teacher fidelity data to make valid conclusions about the intervention’s effectiveness. Historically, we’ve only looked at progress monitoring data. And so when you do that in a problem-solving model and you look at the data and the data’s going to tell you either the student is making expected progress, in which case you’re more likely to continue that intervention and keep moving down the path where you’re going, or they’re going to say that the student is not making the expected progress. And a lot of times that’s meant changing the intervention and sometimes implementing a more-intensive intervention for that student. When you look at both progress monitoring data and teacher fidelity data, the number of options you have really increases. So if our progress monitoring data look good and the teacher fidelity data are high then, absolutely, keep doing what you’re doing. You’re absolutely on the right track, and the student’s likely going to make the gains that you are looking for.
Another option, though, is that the progress monitoring data look good, and the teacher fidelity data are lower than you’d like to see. And that may be a case where, although the student’s making progress, they might not be making the level of progress that they could be making. So they might be able to reach their intervention goals sooner if those fidelity levels are higher. If the fidelity data are really low, it might be something going on outside of the school that’s actually resulting in these increases. So did mom and dad hire a tutor to help the student? Are there other supports going on at home that are actually resulting in the progress monitoring data increasing, and it’s not the intervention at all? So there are some questions there that you would need to look into.
A third scenario is the progress monitoring data don’t look very good, but the teacher fidelity data are high. And so there the teacher is doing everything they’re supposed to be doing in terms of implementing the intervention, but the student is just not responding. And that’s where you might want to go back and look at the intervention and go through that problem-solving process again to potentially identify another intervention for that student and again track that intervention in terms of progress monitoring and teacher fidelity data.
The fourth scenario is that the progress monitoring data don’t look so good, and the teacher fidelity data are low. And, in that case, your first goal should be to increase that fidelity data. That may be a teacher who needs someone to go in and model the intervention for them, observe them implementing, and give them some feedback, who might need to look at their implementation data on a more ongoing basis so that they can increase that fidelity data. Then you’re able to make a decision about whether or not the intervention is working, because you’re really not able to make that decision until you know that it’s being implemented the way it was supposed to be. So by looking at both progress monitoring data and teacher fidelity data, you’re really able to make a valid conclusion about the intervention’s effectiveness, and it allows you some more options in that data-based decision making framework.
Revisiting Initial Thoughts
Think back to your initial responses to the following questions. After working through the resources in this module, do you still agree with your Initial Thoughts? If not, what aspects of your answers would you change?
What is the process for evaluating the effectiveness of an EBP with your children or students?
How do you measure infant, child, or student performance?
How do you know whether you are correctly implementing an EBP?
How do you know whether an EBP is effective with your children or students?
When you are ready, proceed to the Assessment section.