How can an educator implement an evidence-based practice or program with fidelity?
Page 4: Follow Implementation Procedures
Once you have prepared to implement the EBP, you need to implement it with fidelity—that is, as intended by the researchers or developers. Implementing an EBP with fidelity increases the likelihood that its intended outcomes will be achieved. Fidelity of implementation consists of three key components:
- Adherence—Following the instructional procedures of the practice or program as they were intended and implementing all components of the EBP in the correct order
- Exposure/duration—Implementing the practice or program for the recommended:
  - Length of session (e.g., 40 minutes)
  - Duration of EBP (e.g., 12 weeks, one semester, one academic year)
  - Frequency (e.g., daily, three times per week)
- Quality of delivery—Delivering the EBP using good teacher practices (e.g., implementing with enthusiasm, making time for student questions and feedback, managing transitions)
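Because two of these components (adherence and exposure/duration) are quantifiable, they can be tracked with a simple self-check. The sketch below is purely a hypothetical illustration: the component names, the recommended values, and the idea of scoring adherence as a fraction of prescribed steps are assumptions for the example, not part of any published fidelity instrument.

```python
# Hypothetical fidelity self-check. The recommended values below mirror the
# examples in the text (40-minute sessions, three per week, 12 weeks); the
# scoring approach itself is an illustrative assumption.

RECOMMENDED = {"session_minutes": 40, "sessions_per_week": 3, "total_weeks": 12}

def adherence_rate(steps_completed: list[bool]) -> float:
    """Fraction of the EBP's prescribed steps that were delivered as intended."""
    return sum(steps_completed) / len(steps_completed)

def exposure_met(actual: dict) -> bool:
    """True only if session length, frequency, and duration all meet the
    recommended levels."""
    return all(actual[key] >= value for key, value in RECOMMENDED.items())

# One week's log: 4 of 5 prescribed steps were followed.
week_log = [True, True, True, False, True]
actual = {"session_minutes": 40, "sessions_per_week": 3, "total_weeks": 12}

print(f"Adherence: {adherence_rate(week_log):.0%}")  # Adherence: 80%
print("Exposure met:", exposure_met(actual))         # Exposure met: True
```

Note that quality of delivery (enthusiasm, responsiveness, transitions) resists this kind of counting; it is usually assessed with observation rubrics rather than checklists.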
For Your Information
When you implement an EBP, it is important to make sure that the children or students are engaged and motivated. However, this may prove challenging for students with learning difficulties who have experienced repeated academic failures and who, by the time they enter middle or high school, are often disengaged and unmotivated. These students frequently have negative perceptions of their abilities and might be unmotivated or unwilling to learn a new strategy even when educators explain how doing so will improve their academic performance.
Larry Wexler and Scott McConnell discuss the importance of fidelity of implementation.
Larry Wexler, PhD
Director, Research to Practice Division
Office of Special Education Programs
US Department of Education
(time: 0:49)
Scott McConnell, PhD
Professor, Educational Psychology
University of Minnesota
(time: 0:46)
Transcript: Larry Wexler, PhD
Why is fidelity of implementation important, given everything else that’s going on? It is critical for the implementer of a program to implement it as it was researched. Length of the sessions is an issue in fidelity. If the program was researched for, you know, thirty-minute sessions and you don’t have time to do that so you’ll do it in fifteen-minute sessions, it’s unlikely that you’ll get the same results as the research. Same with the frequency of sessions. If the program was researched to be implemented five times a week and you decide to implement it twice a week, it’s unlikely that it’ll be effective. So fidelity is really critical.
Transcript: Scott McConnell, PhD
Fidelity is important for two reasons. The first one is that, when I’m implementing an intervention with fidelity, I’m doing it the same way as the evaluators did it, so I can have more confidence that what I’m going to do is going to produce the same results that those researchers got. So it keeps me close to the specified practice. I think the other part of it is that it makes it easier. If an intervention is really well described, if the researchers have gone to enough trouble to make it easy to maintain fidelity then that makes my job easier. I just need to do what they told me to do. I don’t have to invent or make up those procedures. So it both keeps me close to what I know works, and in many instances it’ll be easier.
Risks to Fidelity
It is common practice for educators to borrow ideas and strategies from colleagues and to change them in a variety of ways (e.g., combine ideas, use parts of strategies) to meet their own needs and the needs of their students. Educators who implement an EBP are often tempted to do the same. Below are four common reasons educators change EBPs, intentionally or otherwise:
Reason 1: Educators eliminate components or shorten the implementation time because they underestimate the time that it will take to implement the EBP.
Reason 2: Some educators implement pieces of an EBP that appeal to them and eliminate others.
Reason 3: Educators implement an EBP incorrectly or poorly because of confusion created by ambiguous or unclear procedural guidelines.
Reason 4: Educators attempt to implement an EBP that is overly complicated and for which they have not received adequate training and support to do so with fidelity.
When implementing an EBP, teachers might mistakenly believe that they can make these types of changes and still see positive results. However, changes to one or more of the key components violate the fidelity of the practice or program and can undermine its success. Educators might also compromise fidelity of implementation through drift and adaptation.
Drift
When teachers implement an EBP, they usually start off by doing so with fidelity, but over time they might drift. In other words, over time they inadvertently modify or omit the recommended procedures or activities that make up a practice or program. Even when teachers think they are following the prescribed procedures, they often stray. This typically happens when teachers believe that they have mastered the implementation procedures and consequently stop referring to the procedural guidelines. This decrease in fidelity can occur in as few as one to ten days. For this reason, teachers should periodically monitor and evaluate their implementation, something that will be discussed in greater detail on Perspectives & Resources page 5.
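Because drift accrues gradually, a periodic check of one's own adherence records can catch it early. As a hypothetical sketch (the three-check window and the 90% threshold are illustrative assumptions, not values drawn from the research literature):

```python
# Hypothetical drift check: flag when average adherence over the most recent
# self-monitoring checks slips below a threshold. Window size and threshold
# are illustrative assumptions.

def drifting(adherence_log: list[float], window: int = 3,
             threshold: float = 0.9) -> bool:
    """True if mean adherence over the last `window` checks falls below
    the threshold, suggesting procedures are being modified or omitted."""
    recent = adherence_log[-window:]
    return sum(recent) / len(recent) < threshold

# Weekly adherence slipping even though each single week looks "close enough."
log = [1.0, 0.95, 0.9, 0.85, 0.8]
print(drifting(log))  # True: mean of the last three checks is 0.85
```

The design point is that each individual week can look acceptable while the trend is downward, which is exactly why one-time checks miss drift and periodic monitoring does not.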
Research Shows
Of those teachers who implement preventive programs, 41% to 84% change components or procedures over time (i.e., drift) or discontinue their use.
(Tappe, Galer-Unti, & Bailey, 1995)
Adaptation
Educators often intentionally change the components of an EBP for some or all of their children or students. This is referred to as adaptation. When educators adapt or change any of the components of an EBP, they risk failing to achieve the desired outcomes. Although they might find the EBP is as effective or even more effective when components are adapted, more often than not the EBP is less effective. Therefore, when educators adapt an EBP, they risk turning a practice or program that has proved successful into one that is ineffective.
The experts below discuss why educators often have difficulty implementing a practice or program with fidelity.
Bryan Cook, PhD
Professor, Special Education
University of Hawai’i at Mānoa
(time: 1:16)
Tom Kratochwill, PhD
Professor, Educational Psychology
Co-PI, Project PRIME
University of Wisconsin-Madison
(time: 1:53)
Lisa Sanetti, PhD
Co-PI, Project PRIME
Associate Professor,
Neag School of Education
University of Connecticut
(time: 1:55)
Transcript: Bryan Cook, PhD
I think teachers implement an evidence-based practice without fidelity for lots of different reasons, including they think they can make it more effective. And I think they sometimes can make it more effective by adapting it to the particular characteristics and needs of their learners and their environment, but of course you always run a risk when you do that. Probably more frequently they don’t implement it with fidelity, because they can’t. They just don’t have the resources that are required. They don’t have the training that was required, so they literally don’t know how to implement it. They don’t have the time in their class to implement it as it was designed. Drift, obviously, is another big reason why teachers don’t end up implementing the practice with fidelity, and it’s just kind of natural that over time we fall into patterns, and that if you’re not actively evaluating your fidelity all of a sudden you’re doing something in a different way than you originally intended. And this is especially the case if you’re not receiving ongoing support and don’t have someone who’s helping you engage in evaluating your implementation.
Transcript: Tom Kratochwill, PhD
There’s lots of reasons why there may be challenges with treatment fidelity. One is that there could be social or cultural challenges to implementation of the intervention. People may, in fact, be biased against a particular type of intervention. For example, there’s been a tremendous amount of controversy over the use of time-out. Another thing is there are frequently organizational or structural challenges that exist in an applied setting. For example, there might be issues related to having enough time or enough support related to being able to implement the intervention. Some programs may require a certain number of sessions or a dosage-level that is problematic within the context of a school setting or whatever environment in which the intervention is being introduced.
Another concern, the lack of skill may in fact be a huge influence on why a program isn’t implemented with great fidelity. So many programs require a certain skill level or level of professional development before they can be implemented well. There are also challenges, I think, in the research-base. Some interventions do not provide good guidance, in terms of how to implement a program with good fidelity, such as checklists or rating scales or guides or directions. So those are some of the challenges that I see, in terms of the difficulty in implementing evidence-based practices with fidelity.
Transcript: Lisa Sanetti, PhD
There are several risks to fidelity across time. What we’ll see sometimes is that educators will get to a point where they’re implementing an evidence-based practice fairly fluently and consistently, but across time they sort of drift away from really implementing exactly the way a plan or an intervention was set up. Maybe they start dropping some components or adapting some components, and those drifts and those adaptations are really important to take note of. And the thing that’s so important is not to assume that they are going to have negative outcomes, because maybe they will, maybe they won’t. We need to be paying attention to them to make sure that the core components of that intervention are still being implemented. And if there were small adaptations made to make it fit into the classroom, some surface-level adaptations, then maybe that’s appropriate and will facilitate sustained implementation. But if we’re starting to drift away or adapt from very major parts of those core components where we’re drifting away from implementing daily to implementing twice a week or implementing from 50 minutes down to 30 minutes, we know that we’re likely going to stop seeing the rate of student progress that we would see if the intervention was implemented with a higher level of fidelity.
The other area where we see a big risk to fidelity is when, for some reason, an educator stops implementing, and there can be very good reasons for stopping implementation. Sometimes, we’re implementing an intervention with a student and then the student’s absent for a long period of time. And what we can see is that sometimes teachers have a hard time then starting implementation again when that student comes back.
For Your Information
To ensure that you are implementing an EBP as intended, you must monitor your fidelity of implementation. This can help you identify whether you have deviated from the procedures (e.g., drifted, added components). In addition to monitoring fidelity of implementation, it is important to monitor student outcomes to determine whether the students’ performance is improving. To learn more about this process, view the IRIS Module: