Supporting the Implementation of Evidence-Based Practices: Technical Assistance, Monitoring, and Implementation Fidelity
Saturday, May 23, 2009
2:30 PM–3:50 PM
North 121 BC
Area: EDC; Domain: Service Delivery
Chair: Teri Lewis-Palmer (Independent Consultant)
Discussant: Cynthia M. Anderson (University of Oregon)
Abstract: In a recent article, Biglan and Ogden (2008) make the case that sufficient knowledge exists to produce significant positive outcomes on a large scale if evidence-based interventions were adopted and implemented. The difficulty lies in the lack of knowledge about how to influence organizations to adopt and implement evidence-based interventions. The question is how to transport our scientific knowledge base to practice settings without losing the power of the intervention. Closing the research-to-practice gap requires more than having evidence-based practices available. Kratochwill, Albers, and Steele Shernoff (2004) indicate that practice sites are challenged by cumbersome organization, a lack of skills and resources, and a limited emphasis on prevention. Furthermore, Fixsen and colleagues (2005) have suggested that sustainability is a function of how well adoption and implementation have been handled. This symposium focuses on the adoption and implementation of evidence-based practices. Each of the three presentations addresses a different aspect of practice-site implementation, including building training and technical assistance into existing local resources, establishing monitoring systems that are reliable and accessible, and using fidelity of implementation to increase the accuracy and sustainability of practitioner efforts.

Foundations of Implementation: Establishing and Maintaining Systems for Higher-Level Implementation of Evidence-Based Practices
R. KENTON DENNY (Louisiana State University)
Abstract: One of the greatest challenges for going to scale with evidence-based practices is the ability to establish and maintain implementation fidelity across distance and time. In this presentation, we will examine the factors to be considered in the design of large-scale systems of implementation, particularly as they relate to behavioral support practices. Supporting practices and challenges will be identified for both universal and targeted group interventions. Efforts to integrate fidelity-of-implementation measures within state-level compliance and school performance monitoring will also be presented.

Reliability of Behavior Ratings for Daily Behavior Report Cards
MACK D. BURKE (Texas A&M University), Kimberly Vannest (Texas A&M University)
Abstract: Daily behavior report cards (DBRCs) have long been used in Applied Behavior Analysis, as illustrated in the seminal study by Bailey, Wolf, and Phillips (1970) on daily behavior report cards, home-based reinforcement, and problem behavior. DBRCs continue to be a user-friendly approach to (a) communicating with parents, (b) documenting intervention effects, (c) anchoring contingencies, and (d) monitoring progress on IEP goals and objectives. DBRCs may be used to monitor progress on individual goals and objectives for students with disabilities or toward meeting school expectations. DBRCs can be embedded into check-in/check-out programs, reinforcement programs, and behavior intervention plans. In this presentation, we will review initial results on the reliability of a categorical rating approach that represents a hybrid between direct observations and traditional behavior rating scales.

The Importance of Fidelity Measurement to Interpret Intervention Results and Improve Implementation
SHANNA HAGAN-BURKE (Texas A&M University), Eric Oslund (Texas A&M University), Melissa Fogarty (Texas A&M University), Caitlin Johnson (Texas A&M University)
Abstract: Well-designed measures of implementation fidelity provide vital data for informing educational research and practice. This session will present the fidelity measures used in an early reading intervention study and describe how those data were used to (a) capture the similarities and differences between experimental and comparison conditions and (b) provide formative feedback to interventionists delivering an early reading intervention to kindergarteners at risk for reading problems. Observation measures were developed for both the intervention and comparison conditions. These protocols were designed to document the extent to which fundamental intervention elements were delivered (intervention version) and to evaluate the quality of instructional delivery (intervention and comparison versions). The protocols also allowed observers to document the extent to which students attended to instruction and refrained from problem behavior. These fidelity data provided a context for interpreting intervention results and helped researchers isolate the intervention features required for student success.