Association for Behavior Analysis International

The Association for Behavior Analysis International® (ABAI) is a nonprofit membership organization with the mission to contribute to the well-being of society by developing, enhancing, and supporting the growth and vitality of the science of behavior analysis through research, education, and practice.


34th Annual Convention; Chicago, IL; 2008

Event Details



Symposium #341
CE Offered: BACB
Factors that Influence Occurrence of Data Collection, Observer Accuracy, and Measures of Interobserver Agreement
Monday, May 26, 2008
9:00 AM–10:20 AM
Stevens 3
Area: DDA/CBM; Domain: Applied Research
Chair: Nicole Heal (The May Institute)
CE Instructor: Nicole Heal, Ph.D.
Abstract:

Making data-based treatment decisions has been, and continues to be, a hallmark of applied behavior analytic programming. However, the integrity of data collection behavior and measures of interobserver agreement may influence the interpretation of the data and, in turn, treatment decisions. The presentations in this symposium evaluate the occurrence of data collection, the accuracy of the data collected, and measures of interobserver agreement. The first study examined the effects on staff data collection of an antecedent intervention, alone and combined with feedback and public posting. The second investigation assessed the effects of visual performance feedback in the form of graphed data on staff data collection. The third study evaluated the feasibility and utility of a laboratory model, framed in Signal Detection Theory, for examining observer accuracy. The final study compared five measures of interobserver agreement and evaluated each measure's sensitivity to differences in rate.

 
Antecedent and Consequence Strategies to Increase Data Collection among School Staff.
CATHERINE COTE (The May Center for Education and Neurorehabilitation), Gary M. Pace (The May Institute), Serra R. Langone (The May Center for Education and Neurorehabilitation)
Abstract: A critical component of any applied behavior analytic program is consistent data collection (Baer, Wolf, & Risley, 1968). This study used a multiple baseline design across classrooms to compare the effects of an antecedent strategy, alone and in combination with public posting and feedback, on staff data collection (interobserver agreement was 100%). Baseline observations revealed that data collection was inconsistent in all classrooms. During the antecedent condition, a timer prompted staff every 30 min to record data on students' problem behaviors. The antecedent strategy was then paired with public posting and feedback. In all classrooms, the antecedent strategy improved staff data collection. In one classroom, the antecedent strategy was effective when implemented alone; however, when data were taken once a month, a decrease was observed. Higher levels of data collection were observed in two classrooms when the antecedent strategy was combined with public posting and feedback. Data collection decreased in all classrooms when the schedule of public posting and feedback was thinned to once a month. These strategies have practical implications in that they can be easily implemented in classrooms and other environments (e.g., homes and residential settings).
 
Using Visual Performance Feedback without Additionally Arranged Incentives to Increase Data Collection.
JAMES E. COOK (The New England Center for Children), Amelia McGoldrick (The New England Center for Children), Sima Hansalia (The New England Center for Children), Jason C. Bourret (The New England Center for Children)
Abstract: Previous research has shown that performance feedback combined with putative reinforcement contingencies can improve staff performance and increase the frequency with which staff collect data. Recent research has also shown that performance feedback can affect behavior without additionally arranged incentives. The current study examined the effects of visual performance feedback, in the form of graphed data, on staff performance. Data collection by teachers of students diagnosed with autism was targeted for increase. Results showed that visual performance feedback alone was effective in increasing the amount of data collected. The data also showed that reliability remained high without the need for explicit intervention.
 
Applying Signal Detection Theory to the Study of Observer Accuracy and Bias in Behavioral Assessment.
ALYSON N. HOVANETZ (University of Houston, Clear Lake), Dorothea C. Lerman (University of Houston, Clear Lake), Allison Serra Tetreault (West Virginia University), Hilary J. Karp (University of Houston, Clear Lake), Angela Mahmood (University of Houston, Clear Lake), Maggie Strobel (University of Houston, Clear Lake), Shelley Kay Mullen (University of Houston, Clear Lake), Alice A. Keyl (Utah State University)
Abstract: The purpose of this study was to evaluate the feasibility and utility of a laboratory model for examining observer accuracy within the framework of Signal Detection Theory (SDT). Thirty individuals collected data on aggression while viewing videotaped segments of simulated child-teacher interactions. The segments consisted of clear and ambiguous samples of the target behavior and ambiguous non-examples of the behavior. Consistent with previous research on SDT, response bias occurred when observers were provided with brief feedback about their performance and consequences for either hits or false alarms. Changes in scoring were more likely to involve samples designated as ambiguous than samples designated as clear, lending some validity to those designations. Thus, preliminary findings support the viability of the methodology for evaluating variables that may influence observer accuracy and bias in behavioral assessment.
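In the SDT framework, observer performance decomposes into sensitivity (how well occurrences are discriminated from non-occurrences) and response bias (a systematic tendency to over- or under-score). As a minimal sketch of that decomposition, assuming each observer's scoring of the videotaped segments can be tallied into hits, misses, false alarms, and correct rejections (the abstract does not report the authors' exact computations, and the function name and example tallies below are hypothetical), the conventional d' and criterion c statistics can be computed as follows:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Conventional signal-detection measures from an observer's tallies.

    hits / misses: segments in which the target behavior occurred and was /
    was not scored; false_alarms / correct_rejections: segments in which it
    did not occur but was / was not scored.
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF (z-transform)

    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    # Rates of exactly 0 or 1 need a small correction in practice,
    # since the z-transform is undefined at those values.

    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity (accuracy)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias: positive =
    # conservative (reluctant to score), negative = liberal (over-scoring)
    return d_prime, criterion

# Hypothetical observer who over-scores ambiguous segments (liberal bias)
print(sdt_measures(hits=45, misses=5, false_alarms=20, correct_rejections=30))
```

Under this decomposition, consequences tied to hits would be expected to shift the criterion toward more liberal scoring of ambiguous segments, whereas consequences tied to false alarms would be expected to shift it in the conservative direction.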
 
Evaluations of Interobserver Agreement.
ANDREW SAMAHA (University of Florida), Timothy R. Vollmer (University of Florida), Amanda Bosch (University of Florida)
Abstract: Interobserver agreement scores can be used to detect measurement problems when data are collected by human observers. One benefit of interobserver agreement is that low scores may reflect differences in recording that are likely to affect the interpretation of the data. In applied behavior analysis, many interpretations are based on the rate of some target event. This study evaluated measures of interobserver agreement according to their sensitivity to differences in rate when comparing data from two observers. Five measures of interobserver agreement were compared: proportional agreement within intervals, proportional agreement within occurrence intervals, occurrence agreement, nonoccurrence agreement, and exact agreement. Measures tended to produce one of three outcomes: (1) interobserver agreement changed appropriately with differences in rate, (2) interobserver agreement remained high despite relatively substantial differences in rate, or (3) interobserver agreement was low despite only minor differences in rate.
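As a point of reference for these indices, the sketch below computes the five measures under common definitions from the interval-recording literature, assuming each observer's record is a list of response counts per interval; the exact formulations compared in the study, and the example counts shown, are not taken from the paper and may differ in detail.

```python
def ioa_measures(obs1, obs2):
    """Five common interval-based IOA indices for two observers' counts."""
    pairs = list(zip(obs1, obs2))
    n = len(pairs)

    # Proportional agreement within intervals: smaller/larger count per
    # interval (1.0 when both are zero), averaged across all intervals.
    prop = sum(1.0 if a == b == 0 else min(a, b) / max(a, b)
               for a, b in pairs) / n

    # Proportional agreement within occurrence intervals: same ratio, averaged
    # only over intervals in which at least one observer scored a response.
    occ_pairs = [(a, b) for a, b in pairs if a > 0 or b > 0]
    prop_occ = (sum(min(a, b) / max(a, b) for a, b in occ_pairs) / len(occ_pairs)
                if occ_pairs else 1.0)

    # Occurrence agreement: of intervals scored by at least one observer,
    # the proportion scored by both.
    occurrence = (sum(a > 0 and b > 0 for a, b in occ_pairs) / len(occ_pairs)
                  if occ_pairs else 1.0)

    # Nonoccurrence agreement: of intervals left unscored by at least one
    # observer, the proportion left unscored by both.
    non_pairs = [(a, b) for a, b in pairs if a == 0 or b == 0]
    nonoccurrence = (sum(a == 0 and b == 0 for a, b in non_pairs) / len(non_pairs)
                     if non_pairs else 1.0)

    # Exact agreement: proportion of intervals with identical counts.
    exact = sum(a == b for a, b in pairs) / n

    return dict(proportional=prop, proportional_occurrence=prop_occ,
                occurrence=occurrence, nonoccurrence=nonoccurrence, exact=exact)

# Hypothetical counts from two observers across ten intervals
print(ioa_measures([0, 1, 2, 0, 1, 0, 3, 0, 1, 0],
                   [0, 1, 1, 0, 2, 0, 3, 1, 0, 0]))
```

With these definitions, occurrence and nonoccurrence agreement ignore within-interval counts entirely, which illustrates how an agreement index can stay high while the two observers' overall rates differ substantially, whereas exact agreement can be low even when counts differ by only one response per interval.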
 
