Analysis and Use of Single-Case Designs in Applied Behavior Analysis Practice
Sunday, May 29, 2016
10:00 AM–11:50 AM
Columbus Hall AB, Hyatt Regency, Gold East
Area: PRA
Chair: D. Reed Bechtel (Bechtel Behavioral Services)

Functional Analysis Celeration Chart and Challenging Behavior: Is There More to Know?
Domain: Theory
SAL RUIZ (The Pennsylvania State University), Richard M. Kubina Jr. (Penn State)

Abstract: Functional analysis (FA) is integral to identifying the variables that maintain challenging behavior and is often regarded as the gold standard in the field of applied behavior analysis. A different perspective on graphical analysis may provide information useful for determining behavioral function. Examining level and bounce on the Functional Analysis Celeration Chart (FACC) may offer a reliable means of identifying the function of challenging behavior, and the FACC has the potential to provide in-depth analysis and to quantify challenging behavior in new ways. Recharting data onto the FACC yields a precise quantification of level in a nonsequential view, which can clarify otherwise undifferentiated FAs. Measuring variability (i.e., bounce) allows experimental control to be demonstrated. Additionally, presenting data in a sequential view displays the data proportionally regardless of frequency, reducing some of the apparent variability. Participants will learn how to use the FACC, followed by a discussion of its potential benefits.
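The abstract does not specify how level and bounce are computed, so the sketch below borrows the usual Standard Celeration Chart conventions: level as the geometric mean of each condition's response rates, and total bounce as the ratio of the highest to lowest rate. The function name and sample data are hypothetical.

```python
import numpy as np

def level_and_bounce(rates):
    """Return (level, bounce) for one FA condition's session rates.

    Level: geometric mean of the rates, the natural center on a
    multiply (semilogarithmic) chart.  Assumes nonzero rates.
    Bounce: total bounce, the ratio of highest to lowest rate,
    a multiplicative index of variability.
    """
    rates = np.asarray(rates, dtype=float)
    level = np.exp(np.log(rates).mean())
    bounce = rates.max() / rates.min()
    return level, bounce

# Hypothetical responses-per-minute data from a multielement FA.
fa_data = {
    "attention": [2.0, 3.5, 2.8, 4.0],
    "demand": [0.4, 0.6, 0.5, 0.4],
    "alone": [0.2, 0.3, 0.2, 0.4],
    "play": [0.1, 0.2, 0.1, 0.1],
}

for condition, rates in fa_data.items():
    level, bounce = level_and_bounce(rates)
    print(f"{condition:>9}: level = {level:.2f}/min, bounce = x{bounce:.1f}")
```

On a multiply chart, a clearly separated level for one condition (here, attention) with modest bounce in every condition is the pattern that suggests a differentiated function.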

Nonoverlap Analysis, Tau-U, and Effect Size in Single-Case Design Applications
Domain: Applied Research
D. REED BECHTEL (University of West Florida)

Abstract: Evaluating behavioral change in single-case designs involves several components, including level, trend, overlap, and immediacy. Although visual analysis has been the hallmark of ABA, will this method alone continue to serve the field in the future? The contrast between single-case and statistical outcomes has long been a major point of departure between ABA and other disciplines, and finding a method for assessing intervention effects that both supports the interventionist's visual analysis and permits comparison with other interventions has been an ongoing effort. One solution currently available is nonoverlap analysis. This paper introduces various nonoverlap procedures, presents a brief conceptual basis for the Tau-U statistic (Parker, Vannest, Davis, & Sauber, 2011), and works through three single-case design applications that pair visual analysis with Tau-U, illustrating issues that arise when evaluating single-case results and estimating effect sizes with the statistic. Possible implications for current practice will be discussed.
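As a concrete reference point, here is a minimal sketch of the basic A-versus-B nonoverlap Tau that Tau-U builds on: every baseline observation is paired with every treatment observation, and the balance of improving over deteriorating pairs is reported. The data are hypothetical, and the baseline-trend correction that distinguishes full Tau-U is only noted in a comment.

```python
from itertools import product

def tau_nonoverlap(baseline, treatment):
    """Basic A-vs-B nonoverlap Tau.

    Pairs every baseline observation with every treatment observation;
    each pair scores +1 if the treatment value is higher (assuming an
    acceleration target), -1 if lower, and 0 if tied.  The pair sum S
    is divided by the number of pairs, giving a value in [-1, 1].
    """
    pairs = list(product(baseline, treatment))
    s = sum((b > a) - (b < a) for a, b in pairs)
    return s / len(pairs)

# Hypothetical session data (correct responses per session).
baseline = [3, 5, 4, 4, 6]
treatment = [7, 9, 8, 10, 12, 11]

print(f"Tau (nonoverlap) = {tau_nonoverlap(baseline, treatment):.2f}")
# Full Tau-U (Parker et al., 2011) additionally subtracts baseline-trend
# pairs from S to control for an already-improving baseline.
```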

Software for Graphing Time-Series Data
Domain: Applied Research
JENNIFER N. HADDOCK (University of Florida), Brian A. Iwata (University of Florida)

Abstract: Given our field's heavy emphasis on visual analysis of graphed data and the use of computer-generated graphs in publication and clinical work, we conducted an exploratory review of graphing software applications. We used an Internet search to identify comprehensive graphing applications and summarized their key features and capabilities. We also surveyed editors of the Journal of the Experimental Analysis of Behavior (JEAB) and the Journal of Applied Behavior Analysis (JABA) about their graphing software preferences, uses, and limitations. Most respondents reported using Excel, Prism, or SigmaPlot, but most of the reported limitations concerned Excel. This information may be useful to behavior analysts who are seeking, or considering a change in, graphing software.
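For readers who graph outside commercial packages, a minimal sketch of a conventional AB time-series graph in Python's matplotlib (not one of the surveyed applications) shows the kinds of features involved: a phase-change line, a data path broken between phases, and open axes. The data and styling choices are illustrative assumptions, not the authors' recommendations.

```python
import matplotlib.pyplot as plt

# Hypothetical AB data: responses per minute across sessions.
baseline = [4, 5, 4, 6, 5]
treatment = [3, 2, 2, 1, 1, 0]
sessions_a = range(1, len(baseline) + 1)
sessions_b = range(len(baseline) + 1, len(baseline) + len(treatment) + 1)

fig, ax = plt.subplots(figsize=(6, 3))
# Plot phases separately so the data path breaks at the phase change.
ax.plot(sessions_a, baseline, "ko-")
ax.plot(sessions_b, treatment, "ko-")
# Dashed vertical line between the last baseline session and the
# first treatment session marks the phase change.
ax.axvline(len(baseline) + 0.5, color="black", linestyle="--")
ax.set_xlabel("Sessions")
ax.set_ylabel("Responses per minute")
ax.set_ylim(0, 7)
ax.text(2, 6.4, "Baseline")
ax.text(8, 6.4, "Treatment")
for side in ("top", "right"):  # open axes, per single-case convention
    ax.spines[side].set_visible(False)
fig.tight_layout()
fig.savefig("ab_graph.png", dpi=300)
```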

Using Single-Case Experiments to Support Evidence-Based Clinical Decisions: How Much Is Enough?
Domain: Service Delivery
MARC J. LANOVAZ (Université de Montréal), John T. Rapp (Auburn University)

Abstract: For practicing behavior analysts, the use of single-case experimental designs (SCEDs) in the research literature raises an important question: How many single-case experiments are enough to warrant confidence that a specific behavioral intervention will be effective with an individual from a given population? Although standards have been proposed to address this question, current guidelines do not appear to be strongly grounded in theory or empirical research. This presentation addresses the issue by proposing guidelines that facilitate evidence-based decisions through a simple statistical approach to quantifying the support for behavioral interventions validated with SCEDs. Specifically, the presentation will focus on the use of success rates as a supplement to evidence-based clinical decision making. The proposed methodology allows behavior analysts to aggregate results across single-case experiments to estimate the probability that a given intervention will produce a successful outcome. As an illustrative example, the procedures will be applied to the support for noncontingent matched stimulation as a treatment for vocal stereotypy in children with autism spectrum disorders. Considerations and limitations associated with this approach will also be discussed.
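The abstract does not detail the statistical approach. One simple reading, sketched below, treats each published single-case experiment as a success or failure and summarizes the aggregate success rate with a Beta (Jeffreys) interval; the counts are hypothetical and the interval choice is an assumption, not the presenters' method.

```python
from scipy.stats import beta

# Hypothetical tally from the literature: each single-case experiment
# scored as a success (clear behavior change) or a failure.
successes, failures = 17, 3
n = successes + failures

rate = successes / n
# Jeffreys interval: Beta(0.5, 0.5) prior on the success probability.
lo, hi = beta.ppf([0.025, 0.975], successes + 0.5, failures + 0.5)

print(f"Estimated success rate: {rate:.2f} across {n} experiments")
print(f"95% interval: {lo:.2f} to {hi:.2f}")
```

A wide interval (few experiments) signals weaker support than the raw success rate alone would suggest, which is the kind of nuance aggregation is meant to surface.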