Association for Behavior Analysis International

The Association for Behavior Analysis International® (ABAI) is a nonprofit membership organization with the mission to contribute to the well-being of society by developing, enhancing, and supporting the growth and vitality of the science of behavior analysis through research, education, and practice.


40th Annual Convention; Chicago, IL; 2014

Event Details



Symposium #145
An Evaluation of Various Methods of Feedback on Performance Across a Variety of Treatment and Intervention Settings
Sunday, May 25, 2014
9:00 AM–9:50 AM
W196a (McCormick Place Convention Center)
Area: EDC/OBM; Domain: Applied Research
Chair: Sean Field (Western Michigan University)

The role feedback plays in the training and effective maintenance of skills is of great importance. However, it is not always clear which methods or amounts of feedback are most effective for the skills being trained. The current symposium will present three studies that further evaluate various methods of feedback and their impact on performance across a variety of skills, including the implementation of functional analysis procedures, Direct Instruction, and common office tasks. Specifically, the papers will discuss relevant features of feedback, including the temporal placement of feedback, the frequency of feedback, and the role self-rating may play in performance. The outcomes of these research papers provide insight into which methods of feedback may be most effective and suggest potential areas for future research.


The Evaluation of Two Feedback Schedules on Teaching Performance of Undergraduate Applied Behavior Analysis Students Delivering Direct Instruction Lessons

ELIAN ALJADEFF-ABERGEL (Western Michigan University), Stephanie M. Peterson (Western Michigan University), Mariah Cole (Western Michigan University), Kristin Hagen (Western Michigan University), Becky Wiskirchin (Western Michigan University)

Despite the common use of feedback in most training settings, it is not yet clear what behavioral function feedback serves. Most researchers consider feedback to function as a consequence and advocate for its immediate delivery in the form of on-the-spot supervision or after-session conferencing. The literature suggests that, when compared, on-the-spot supervision is more effective than after-session conferencing. Despite these findings, most supervisors still implement after-session conferencing, probably due to the limited feasibility of performing on-the-spot supervision when supervising teachers who are implementing whole-class or small-group instruction. One way to overcome the feasibility issue of on-the-spot supervision while still providing feedback effectively is to provide feedback before the next opportunity to perform. The purpose of this study was to evaluate the effectiveness of feedback provided (a) after the teaching session versus (b) before the following teaching session on (1) accuracy of the error correction procedure and (2) rate of specific praise among undergraduate Applied Behavior Analysis students delivering Direct Instruction (DI) to a small group of children. An adapted alternating treatments design was utilized to evaluate the effects of the two forms of feedback on the teachers' performance. Results will be presented and findings will be discussed.

The Effects of the Temporal Placement of Feedback on Performance
NATHAN T. BECHTEL (Western Michigan University), Heather M. McGee (Western Michigan University)
Abstract: Performance feedback is the most prevalent intervention in the field of OBM; however, there is little research regarding the temporal placement of feedback. This study compared the effects of the temporal placement of feedback on performance and skill acquisition of a data entry task. Two temporal placements were examined: feedback immediately after performance and feedback immediately prior to performance. A Latin square design that combined one between-group and two within-subjects factors was utilized. Participants were randomly assigned to one of three groups, which differed in the order in which experimental conditions were implemented. The primary dependent variable was the number of correctly completed patient records per experimental session. Feedback in the immediately after condition was based upon the results of the performance it followed, while feedback in the immediately prior condition was based upon the previous performance results. Participants in the baseline condition received no feedback. Overall, there were no differences between the conditions. Participants indicated a strong preference for any type of feedback over no feedback, as well as a strong preference for feedback prior to performance over feedback after performance. This is the first study to demonstrate participant preference for feedback prior to performance over feedback after performance.
Assessing Observer Effects on the Fidelity of Implementation of Functional Analysis Procedures
SHAUNA COSTELLO (Western Michigan University), Sean Field (Western Michigan University), Jessica E. Frieder (Western Michigan University), Heather M. McGee (Western Michigan University), Stephanie M. Peterson (Western Michigan University)
Abstract: Instructing and training others in the use of functional analyses (FAs) can be a cumbersome and time-consuming task. Not only must students and practitioners learn the various components of establishing conditions and analyzing the results, they must also gain experience in running the various conditions. The current study assessed the fidelity of individuals implementing an FA directly after observing and rating the fidelity of videos of others implementing FA procedures. Effects were evaluated in a multiple baseline design across FA conditions. Video models of each of the four training conditions were provided throughout each condition; however, during intervention, participants were asked to provide fidelity ratings only for the single video that corresponded to the intervention condition. Results indicate that scoring the fidelity of a video model can increase the fidelity with which individuals implement FAs directly after providing those fidelity ratings. Further research should investigate the impact of video quality (high or low fidelity) and the accuracy of fidelity ratings, along with their subsequent effects on the raters' ability to implement those procedures.


