By Ray Beck
Among the challenges in bringing the Response to Intervention (RtI) effort to the shop floor is the fact that many of our general classroom teachers leave preservice programs without the course work, training, and skill sets necessary to implement specific behavioral interventions. With curriculum and instructional methods being the focus, behavior management has often been left on the back burner.
However, with the advancement of RtI, most general education teachers will encounter and be asked to teach atypical learners who often exhibit behaviors described as hyperactive, off-task, aggressive, inappropriate, troublesome, rude, mean-spirited, disruptive, unruly, distractive, disorderly, and so on.
One of the problems, whether for a new teacher recently out of a university program or for a seasoned veteran, is providing quality staff development, training, and consultation embedding evidence-based interventions that are practical, user-friendly, and cost-effective. Frankly, given the resources available, teachers find many intervention trainings beyond the scope of reality.
If staff development is complicated, time-consuming, or unfriendly, teachers simply won’t use the interventions. If, on the other hand, staff development is practical and user-friendly, teachers will be eager to implement the interventions.
Professional development modules should provide quality training in: (1) choosing an appropriate/practical intervention to match the problem behavior; (2) developing the skill set associated with the intervention; (3) implementing the intervention with fidelity; and (4) collecting data and evaluating the extent to which the intervention was effective.
Using the research-based Check-In/Check-Out intervention (Hawken et al.) as an example, this article briefly explores “fidelity of implementation” and “evaluating effectiveness.”
Challenges of Implementing Interventions With Fidelity
When interventions fail, we might hear: “I tried that, and it didn’t work; … too many other students; … not a good match between the intervention and problem behavior; … no follow-up; … not research-based; … too many other things going on; … I need more help; … he/she should be removed from my class; … my focus is academics; … and, as for RtI, this too shall likely pass.”
Bottom line: Many interventions are determined ineffective for lack of implementation fidelity.
Fidelity, as used here, refers to ensuring the intervention was implemented as the designers intended. Three critical steps are necessary to ensure positive outcomes when implementing interventions:
- First, there needs to be a good match between a problem behavior and a research-based intervention (primal screaming is not research-based).
- Second, teachers must be appropriately trained in using the intervention the way it was designed.
- Third, a follow-up plan must be in place to measure implementation fidelity.
In one fidelity study (Noell et al.), teachers were trained in specific behavioral interventions and divided into three groups. Each group was followed using behavioral coaches as consultants. The first group received brief weekly visits where the coach asked, through a scripted interview, how things were going. The second group received brief weekly visits where the coach reminded the teachers of their commitment to the intervention. The third group had weekly meetings where the coach reviewed “performance data,” including charts, graphs, incident reports, etc. Researchers found the most promising fidelity model was weekly consultations where frequent performance data were discussed.
Challenges to Determining the Level of Effectiveness
A number of designs exist to measure the effectiveness/impact of interventions, but if data collection becomes too complicated or time-consuming, teachers may neglect this critical element. Data collection plans must be developed keeping in mind the “collector.”
Recently an argument has been made to consider social validity as a scale in determining the level of an intervention’s effectiveness.
The idea behind social validity (consumer satisfaction) is that it is simple, straightforward, and well accepted within the research community (Kazdin). Basically, the concept of social validity as used here requires a teacher, on returning from in-service on a specific behavioral intervention, to answer three central questions: (1) To what extent did he/she use the intervention? (2) To what extent did he/she like the intervention? (3) To what extent did he/she find the intervention effective?
From these data, we can make intervention decisions and ultimately determine a level of statistical and/or educational significance.
The Department of Defense Education Activity (DoDEA), through a contract with Sopris Educational Services, employed the use-it/like-it/effective scales in measuring a major RtI staff development initiative involving hundreds of teachers. The Georgia State Department of Education used the same scales to measure the effectiveness of a Sopris product, resulting in a statewide adoption. Results from the Georgia study showed a strong correlation between teachers’ “liked” ratings and their “effective” ratings.
Check-In/Check-Out (CICO) is a targeted support intervention designed so that students receive feedback about their behavior throughout the school day. After each subject or period, teachers provide a simple evaluation of progress toward specific goals using a behavior report card. Progress is graphed, and students are reinforced when the performance criterion is met. Lastly, parents may be asked to sign students’ daily progress reports to facilitate communication between home and school.
To ensure fidelity of implementation, a “coaching card” with a Likert-type scale might be used by the behavior coach and/or teacher to rate the extent to which the following CICO steps were completed:
- Identify the student and describe one to four “appropriate behavior” goals.
- Teach the student appropriate behaviors.
- Determine through functional behavioral assessment (FBA) if the student uses problem behavior to gain/get something or escape/avoid something.
- Develop a numeric rating (0-2) to award points with a procedure to summarize daily scores and evaluate progress.
- Teach the student how points are awarded on the daily progress report. Explicitly explain all aspects of the CICO program.
- Write a behavior contract that defines expectations for students, the CICO coordinator (designated school person if available), and parents/guardian.
- Summarize weekly data and monitor progress on meeting daily points. Use data to determine if a student should be continued, modified, or faded from the program.
- Have the behavior report card signed by a parent/guardian and brought back to school the next day.
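Taken together, the point-award and weekly-summary steps amount to simple arithmetic. The following is a minimal sketch, assuming five rating periods per day, the 0-2 scale described above, and an illustrative 80% goal criterion (the period count and criterion value are assumptions for this example, not part of the CICO design):

```python
# Illustrative sketch of a CICO daily-point summary (assumed values:
# five rating periods per day, 0-2 scale per period, 80% goal criterion).

def daily_percent(period_scores, max_per_period=2):
    """Sum the 0-2 ratings for the day and convert to a percentage."""
    earned = sum(period_scores)
    possible = max_per_period * len(period_scores)
    return 100.0 * earned / possible

def met_criterion(period_scores, criterion=80.0):
    """True if the day's percentage meets the (assumed) 80% criterion."""
    return daily_percent(period_scores) >= criterion

# Example: ratings from five periods in one school day.
monday = [2, 1, 2, 2, 1]          # 8 of 10 possible points
print(daily_percent(monday))      # 80.0
print(met_criterion(monday))      # True
```

The same daily percentages, graphed over a week, supply the performance data a behavior coach would review when deciding whether to continue, modify, or fade the program.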
For each of the eight CICO steps listed above, participants are asked to judge on a five-point scale the extent to which they used, liked, and found effective each step and requirement. Keep in mind you can use something but not like it; like it but not use it; or like and use something but not find it effective.
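A minimal sketch of how such coaching-card data might be tallied follows; all ratings below are hypothetical 1-5 values invented for illustration, not data from any study:

```python
# Hypothetical social-validity coaching card: for each of the eight CICO
# steps, a 1-5 rating on "used," "liked," and "effective."
ratings = {
    "used":      [5, 4, 3, 5, 4, 2, 5, 3],
    "liked":     [4, 4, 2, 5, 3, 2, 4, 3],
    "effective": [4, 3, 2, 5, 3, 1, 4, 3],
}

def mean(xs):
    return sum(xs) / len(xs)

# Average each scale across the eight steps.
for scale, scores in ratings.items():
    print(f"{scale:>9}: mean {mean(scores):.2f}")

# A step can score high on "used" but low on "liked" or "effective";
# comparing the scales step by step surfaces those mismatches.
low_effective = [i + 1 for i, s in enumerate(ratings["effective"]) if s <= 2]
print("steps rated ineffective:", low_effective)  # [3, 6]
```

Summaries like these make it easy to spot which steps of an intervention teachers are skipping, tolerating, or finding genuinely useful.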
When it comes to implementing behavioral interventions with fidelity and measuring their effectiveness, a coaching card listing the essential elements of the specific intervention, along with a social-validity scale, can simplify the process and significantly increase positive results.
Ray Beck, Ed.D., is a consultant, author, and former vice president for Sopris Learning. As a school psychologist and former director of special education in Great Falls, Montana, he directed the development of two U.S. Department of Education-validated programs: Project RIDE (Responding to Individual Differences in Education) and Basic Skill Builders.
Books by Ray Beck: RIDE Behavior Intervention Bank, One Minute Fluency Builders Series, One-Minute Academic Functional Assessment and Interventions, Practicing Basic Skills in Reading, Practicing Basic Skills in Language Arts Book, Practicing Basic Skills in Math, Basic Skill Builders