Association for Behavior Analysis International

The Association for Behavior Analysis International® (ABAI) is a nonprofit membership organization with the mission to contribute to the well-being of society by developing, enhancing, and supporting the growth and vitality of the science of behavior analysis through research, education, and practice.

34th Annual Convention; Chicago, IL; 2008

Event Details

Symposium #251
CE Offered: BACB
Implementing Behavior Programs and Preference Assessments: How Can We Improve Our Practices?
Sunday, May 25, 2008
3:00 PM–4:20 PM
Continental B
Area: AUT/OBM; Domain: Applied Research
Chair: Shawn E. Kenyon (The New England Center for Children)
Discussant: Ronnie Detrich (Wing Institute)
CE Instructor: Shawn E. Kenyon, M.S.
Abstract:

This symposium discusses the use of different feedback types to increase treatment integrity during behavior program implementation. It further examines the degree to which a single-trial preference assessment is comparable to a full assessment. The first paper used written quizzes and feedback on quiz performance to increase accuracy in behavior program implementation. Three graduate students' performances were evaluated while they implemented a behavior plan with a 17-year-old student diagnosed with autism. Data from a multiple baseline intervention across staff members indicate that quizzes and feedback on quiz performance were effective in increasing accurate behavior program implementation. The second paper implemented video feedback and self-monitoring for the same purpose. Three graduate students' performances were evaluated while they implemented a behavior plan with a 15-year-old student diagnosed with autism. Data from a multiple baseline intervention across staff members indicate that self-monitoring via video samples was effective in increasing accurate behavior program implementation. Finally, a third study evaluated the degree to which the results of one-trial multiple-stimulus preference assessments conducted with two individuals diagnosed with autism corresponded with those obtained from full, standard preference assessments. Results indicated that the outcomes of one-trial and full preference assessments were comparable. The first two papers provide alternatives to standard feedback, while the third provides an alternative to full, standard preference assessments. Taken together, the three studies suggest methods that could save clinicians time and effort without jeopardizing treatment integrity.

 
Evaluating the Effects of Feedback on Procedural Integrity.
UTAH W. NICKEL (The New England Center for Children), Paula Ribeiro Braga-Kenyon (The New England Center for Children), Erin C. McDermott (The New England Center for Children), Shawn E. Kenyon (The New England Center for Children), Bethany L. McNamara (The New England Center for Children), William H. Ahearn (The New England Center for Children), Eileen M. Roscoe (The New England Center for Children)
Abstract: A high level of procedural integrity, the precision with which the independent variable is applied, is necessary to ascertain the effects of treatment. One method for increasing procedural integrity is providing feedback based on direct observations. The present study evaluated the effectiveness of feedback in the form of weekly quizzes on the implementation of a problem behavior treatment plan for one student diagnosed with autism who engaged in high rates of severe self-injurious behavior. Procedural integrity data were collected during 10-minute observation periods. Interobserver agreement (IOA) data were collected for 55.3% of observations (mean agreement, 93.6%). Weekly quizzes consisted of fill-in-the-blank questions about the treatment protocol. Quizzes resulted in an increase in procedural integrity for one teacher but no change for a second teacher until verbal feedback on observations was delivered. These data replicate findings of prior research and further indicate that the type and amount of effective feedback may vary across teachers. These data, along with suggestions for future research, are discussed.
 
Increasing Procedural Integrity through Video Self-Monitoring.
KELLY A. PELLETIER (The New England Center for Children/Northeastern University), Bethany L. McNamara (The New England Center for Children), Paula Ribeiro Braga-Kenyon (The New England Center for Children)
Abstract: We examined the effects of a training program using video self-monitoring on the procedural integrity of staff implementing behavioral guidelines for one child with autism. Three staff members with low or declining scores were asked to participate in the treatment. Treatment incorporated a mock guideline-implementation video that allowed each participant to learn to score with a procedural integrity scoring tool. Each participant then watched one of his or her own baseline videos and scored it in tandem with the experimenter; a comparison of the scores, paired with verbal feedback from the experimenter, concluded a training session. IOA was conducted in 33% of sessions and ranged from 98% to 100%. Data for one participant showed an increase in level from baseline to perfect implementation across three video observations. Treatment for the remaining two participants will begin shortly, and a maintenance probe is scheduled for all participants.
 
A Comparison of the Outcomes of One-Trial and Full Standard Preference Assessments.
JASON CODERRE (The New England Center for Children), Jason C. Bourret (The New England Center for Children)
Abstract: Preference assessments are conducted to identify items that can serve as reinforcers for adaptive behavior. The purpose of this study was to evaluate the degree to which the results of one-trial multiple-stimulus preference assessments correspond with those obtained from full, standard preference assessments. Two individuals diagnosed with autism participated in the study. Results showed a correlation between the outcomes of one-trial and full preference assessments, as well as a consistent preference hierarchy over an extended period of time across all assessments for both participants. Findings are discussed in terms of the effects of reducing the number of trials and replications conducted during preference assessments.
 