Association for Behavior Analysis International

The Association for Behavior Analysis International® (ABAI) is a nonprofit membership organization with the mission to contribute to the well-being of society by developing, enhancing, and supporting the growth and vitality of the science of behavior analysis through research, education, and practice.


40th Annual Convention; Chicago, IL; 2014

Event Details



Symposium #355
CE Offered: BACB
Effective Training Strategies and Performance Feedback
Monday, May 26, 2014
10:00 AM–11:50 AM
W194a (McCormick Place Convention Center)
Domain: Applied Research
Chair: Ellie Kazemi (California State University, Northridge)
Discussant: Stephanie M. Peterson (Western Michigan University)
CE Instructor: Ellie Kazemi, Ph.D.
Abstract:

For decades, researchers have highlighted the importance of establishing effective training strategies and provided evidence that incorrect or unsystematic implementation of behavioral procedures results in variable and poor treatment outcomes. In this symposium, we will present four research studies in which we focus on cost-effective, efficient, and effective training strategies. The first and second presenters will discuss the results of replications of Graff and Karsten (2012), who provided evidence that a self-instructional package could be used to teach special education teachers to implement, score, and interpret the outcomes from both the paired-stimulus and multiple-stimulus without replacement assessments. The third presenter will discuss the results of a component analysis of performance feedback. Lastly, the fourth presenter will discuss the methodological challenges that restrict the current training and supervision literature and will offer possible solutions. We will end the symposium by discussing the implications of these presentations for clinical supervisors who conduct trainings and for researchers invested in the effective use of performance feedback.

 

How Can We Maximize a Supervisor's Efficiency?

MARNIE NICOLE SHAPIRO (The Ohio State University), Melissa L. Mendoza (California State University, Northridge), Meline Pogosjana (California State University, Northridge), Ellie Kazemi (California State University, Northridge)
Abstract:

Researchers have developed supervisor-facilitated training to teach staff to implement preference assessments with fidelity. However, it is not time-efficient for supervisors to model appropriate skills, role-play, or provide feedback if the use of a self-instructional package is sufficient to bring staff to mastery. Graff and Karsten (2012) were the first researchers to provide evidence that a self-instructional package could be used to teach staff to implement, score, and interpret the outcomes from both the paired-stimulus and multiple-stimulus without replacement preference assessments. Thus, our objective was to replicate the results obtained by Graff and Karsten. We employed a multiple baseline design across participants and taught 7 undergraduate students to implement, score, and interpret the outcomes from a paired-stimulus preference assessment. We found that 5 of the 7 participants met mastery after we introduced a modified version of the self-instructional package; the remaining 2 participants needed brief sessions of feedback to achieve mastery. We conclude that the use of a self-instructional package may be sufficient for many individuals to acquire the skills for conducting a stimulus-preference assessment. For some individuals, however, a few sessions of brief, performance-specific feedback in conjunction with modeling may be necessary to meet mastery.
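For readers unfamiliar with the scoring step mentioned above: in a paired-stimulus assessment, each stimulus is scored as the percentage of trials, among those in which it appeared, on which the client selected it. The sketch below illustrates only that arithmetic; the data format and function name are hypothetical and are not code from this study or from Graff and Karsten (2012).

```python
from itertools import combinations

def score_paired_stimulus(trials, stimuli):
    """Score a paired-stimulus preference assessment.

    trials: list of (pair, chosen) tuples, one per trial, where
        pair is a 2-tuple of stimulus names and chosen is the
        stimulus the client selected (None if no selection).
    stimuli: list of all stimulus names in the assessment.
    Returns (stimulus, selection percentage) pairs, highest first.
    """
    presented = {s: 0 for s in stimuli}
    selected = {s: 0 for s in stimuli}
    for pair, chosen in trials:
        for s in pair:
            presented[s] += 1
        if chosen is not None:
            selected[chosen] += 1
    percents = {
        s: (100.0 * selected[s] / presented[s]) if presented[s] else 0.0
        for s in stimuli
    }
    return sorted(percents.items(), key=lambda kv: kv[1], reverse=True)

# Example: three stimuli, each paired once with every other; the
# first item of each pair is always chosen.
stimuli = ["ball", "book", "puzzle"]
trials = [(pair, pair[0]) for pair in combinations(stimuli, 2)]
print(score_paired_stimulus(trials, stimuli))
# [('ball', 100.0), ('book', 50.0), ('puzzle', 0.0)]
```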

 

Can Behavioral Staff Be Trained to Implement Paired-Stimulus Preference Assessments Using Only a Self-Instructional Package?

MELISSA L. MENDOZA (California State University, Northridge), Marnie Nicole Shapiro (The Ohio State University), Meline Pogosjana (California State University, Northridge), Ellie Kazemi (California State University, Northridge)
Abstract:

Researchers have focused on designing effective, time-efficient strategies that make the best use of the time supervisors spend training behavioral staff. Graff and Karsten (2012) found that a written instructional package was sufficient to train 11 special education teachers to conduct, score, and interpret the results from both the paired-stimulus and multiple-stimulus without replacement preference assessments and that the skills generalized to clients. Thus, our main objective was to replicate the study conducted by Graff and Karsten with 5 behavioral staff who provide services to children with developmental disabilities in their homes. We used a multiple baseline design across subjects and conducted generalization probes in the field with actual clients. We found that 3 of the 5 participants met mastery after reading the self-instructional package. Of the 2 remaining participants, 1 met mastery after we introduced a slightly modified version of the self-instructional package and the other required brief sessions of feedback and modeling to meet mastery. The results of this study suggest that self-instructional packages can be used to teach staff to conduct paired-stimulus preference assessments; however, some staff may need the addition of feedback and modeling to acquire the skill.

 

A Component Analysis of Feedback

DENICE RIOS (California State University, Northridge), Meline Pogosjana (California State University, Northridge), Candice Hansard (California State University, Northridge), Ellie Kazemi (California State University, Northridge)
Abstract:

Feedback interventions have included some or all of the following components: information regarding performance criteria or the accuracy of previous performance, strategies for correct responding, delivery of praise or tangibles contingent on correct responding, and opportunities to ask questions. Given the variability in the use of feedback across studies, it is unclear which specific components are necessary for feedback to be effective. This variability may be why researchers have reported inconsistencies in the overall effectiveness of feedback. In this study, using a multiple baseline design, we conducted a component analysis of feedback by exposing 5 undergraduate students to 3 different levels of feedback in an additive sequence. The feedback intervention consisted of the following components: (1) stating the performance criteria, (2) specifying the accuracy of previous performance, and (3) modeling plus strategies for future correct responding. We found that the first two feedback components in the sequence were sufficient to bring the performance of 4 of the 5 individuals to the mastery criterion. The implications of these findings for clinical supervisors who provide performance feedback will be discussed.
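To make the additive sequence concrete: each successive level delivers every component of the previous level plus one new component. The sketch below is a hypothetical encoding of that structure, not material from the study.

```python
# Hypothetical encoding of the three feedback components named above.
COMPONENTS = [
    "state the performance criteria",
    "specify the accuracy of previous performance",
    "model plus strategies for future correct responding",
]

def feedback_package(level):
    """Return the components delivered at a given level (1-3).

    Levels are additive: level 2 includes everything in level 1,
    and level 3 includes everything in levels 1 and 2.
    """
    return COMPONENTS[:level]

for level in (1, 2, 3):
    print(level, feedback_package(level))
```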

 

Can a Robot Serve as a Simulated Client?

LISA STEDMAN-FALLS (California State University, Northridge), Denice Rios (California State University, Northridge), Melissa L. Mendoza (California State University, Northridge), Ellie Kazemi (California State University, Northridge)
Abstract:

There are methodological challenges when applied researchers try to isolate effective training variables because, in many instances, the trainee's performance depends on client responses. Variance in client responding could affect the trainee's opportunities for correct responding and possibly threaten a study's internal validity. To circumvent this problem, some researchers use standardized scripts to train simulated clients (e.g., research assistants) and monitor procedural fidelity as the simulated client interacts with the trainee. We propose the use of a humanoid robot as another potential solution because a robot can be programmed to produce consistent responses indefinitely. To test whether a robot is an effective simulated client in training research, we taught 6 undergraduate students to implement a paired-stimulus preference assessment with either the robot (3 participants) or a human simulated client (3 participants). We used a multiple baseline across subjects design and found that all participants implemented a paired-stimulus preference assessment at the mastery criterion following training and that skills generalized across both simulated clients. We conclude that a humanoid robot can serve as a viable simulated client for testing the effectiveness of training interventions. In the future, researchers could evaluate the methodological advantages of using a humanoid robot in lieu of a human simulated client.
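The core idea described here, scripting a simulated client so that every trainee encounters identical responses, can be illustrated with a small sketch. The response table and function below are hypothetical and are not the study's actual robot program.

```python
# Hypothetical deterministic response script for a simulated client.
# Responses are keyed by trial number, so every trainee encounters
# exactly the same client behavior in the same order.
RESPONSE_SCRIPT = {
    1: "select_left",
    2: "select_right",
    3: "no_response",  # a scripted non-response the trainee must handle
    4: "select_left",
}

def simulated_client_response(trial_number):
    """Return the scripted response for the given trial.

    Because the response depends only on the trial number, client
    behavior is held constant across trainees, removing it as a
    source of variance in the training study.
    """
    return RESPONSE_SCRIPT.get(trial_number, "no_response")

for trial in range(1, 5):
    print(trial, simulated_client_response(trial))
```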

 
