Association for Behavior Analysis International

The Association for Behavior Analysis International® (ABAI) is a nonprofit membership organization with the mission to contribute to the well-being of society by developing, enhancing, and supporting the growth and vitality of the science of behavior analysis through research, education, and practice.

46th Annual Convention; Washington DC; 2020

Event Details


Symposium #448
CE Offered: BACB
Innovations in the Use of Single-Case Methodology: Artificial Intelligence, Aids to Clinical Decision-Making, and Hybrid Designs
Monday, May 25, 2020
8:00 AM–9:50 AM
Marriott Marquis, Level M1, University of D.C. / Catholic University
Area: PCH; Domain: Translational
Chair: Marc J. Lanovaz (Université de Montréal)
Discussant: David Richman (Texas Tech University)
CE Instructor: Marc J. Lanovaz, Ph.D.
Abstract:

Single-case designs have been central to the development of a science of behavior analysis. However, researchers in other health and social sciences have not adopted them as widely as behavior analysts have. Potential explanations for this limited adoption include the difficulty of analyzing single-case data objectively as well as the designs' limited consideration of group data. The purpose of our symposium is to present recent research that addresses these limitations. The first presentation will describe a script designed to automatically analyze functional analysis data based on previously published rules. The second presentation will examine whether artificial intelligence can accurately make decisions from AB graphs. The third presentation will discuss the validity of using nonoverlap effect size measures to aid clinical decision-making. The final presentation will introduce hybrid designs, which combine single-case and group methodologies. As a whole, the presentations will provide an overview of innovations in the use of single-case methodology for both practitioners and researchers.

Instruction Level: Intermediate
Keyword(s): Artificial intelligence, Clinical decision-making, Functional analysis, Single-case designs
Target Audience:

BCBAs and BCBA-Ds

 
Automating Functional Analysis Interpretation
(Applied Research)
JONATHAN E. FRIEDEL (National Institute for Occupational Safety and Health), Alison Cox (Brock University)
Abstract: Functional analysis (FA) has been an important tool in behavior analysis. The goal of an FA is to determine the function of problem behavior (e.g., access to attention) so that treatments can be designed to target causal mechanisms (e.g., teaching a socially appropriate response to obtain attention). Behavior analysts traditionally rely on visual inspection to interpret an FA. However, the existing literature suggests that interpretations can vary across clinicians (Danov & Symons, 2008). To increase objectivity and improve interrater agreement on FA outcomes, Hagopian et al. (1997) created visual-inspection criteria for FAs. Hagopian and colleagues reported improved agreement but also noted limitations of the criteria. Roane, Fisher, Kelley, Mevers, and Bouxsein (2013) addressed these limitations with a modified version of the criteria. Here, we describe a computer script designed to automatically interpret FAs based on the above-mentioned criteria. A computerized script may be beneficial because it requires objective criteria (e.g., 10% higher vs. ‘substantially’ higher) to make decisions and is fully replicable (i.e., does not rely on interobserver agreement). We outline several areas where the published criteria required refinement for the script, and we identify some conditions in which the script's interpretations disagree with those of expert clinicians.
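To make the rule-based approach concrete, here is a minimal Python sketch of automated FA interpretation. It is not the authors' script: a single illustrative rule (a test-condition mean exceeding the control mean by a fixed margin) stands in for the full published criteria, and all identifiers and the 10% margin are assumptions for illustration.

```python
# Minimal sketch of rule-based FA interpretation, assuming a simple
# elevated-mean rule; the authors' script implements the full published
# criteria. All identifiers and the 10% margin are illustrative.

from statistics import mean

def interpret_fa(sessions, control="play", margin=0.10):
    """Return test conditions whose mean response rate exceeds the
    control condition's mean by more than `margin` (a proportion)."""
    control_mean = mean(sessions[control])
    functions = []
    for condition, rates in sessions.items():
        if condition == control:
            continue
        # A fixed numeric rule replaces the subjective judgment of
        # whether a condition is "substantially" higher than control.
        if mean(rates) > control_mean * (1 + margin):
            functions.append(condition)
    return functions

# Example: only the attention condition is elevated relative to play.
data = {
    "play": [0.5, 0.4, 0.6, 0.5],
    "attention": [1.8, 2.1, 1.6, 2.0],
    "demand": [0.5, 0.3, 0.4, 0.4],
    "alone": [0.3, 0.4, 0.2, 0.3],
}
print(interpret_fa(data))  # ['attention']
```

Because every decision reduces to an explicit numeric comparison, two runs of the script on the same data always agree, which is the replicability advantage noted in the abstract.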
 
Artificial Intelligence to Analyze Single-Case Data
(Applied Research)
MARC J. LANOVAZ (Université de Montréal), Antonia R. Giannakakos (Manhattanville College), Océane Destras (Polytechnique Montréal)
Abstract: Visual analysis is the most commonly used method for interpreting data from single-case designs, but levels of interrater agreement remain a concern. Although structured aids to visual analysis such as the dual-criteria (DC) method may increase interrater agreement, the accuracy of the analyses may still benefit from improvement. Thus, the purpose of our study was to (a) examine correspondence between visual analysis and models derived from different machine learning algorithms, and (b) compare the accuracy, Type I error rate, and power of each of our models with those produced by the DC method. We trained our models on a previously published dataset and then conducted analyses on both nonsimulated and simulated graphs. All our models derived from machine learning algorithms matched the interpretations of the visual analysts more frequently than the DC method did. Furthermore, the machine learning algorithms outperformed the DC method on accuracy, Type I error rate, and power. Our results support the somewhat unorthodox proposition that behavior analysts may use machine learning algorithms to supplement their visual analysis of single-case data, but more research is needed to examine the potential benefits and drawbacks of such an approach.
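For context, below is a minimal sketch of the dual-criteria method used as the comparison standard (Fisher, Kelley, & Lomas, 2003): it counts treatment-phase points falling beyond both a mean line and a trend line fit to baseline. Deriving the required count from a binomial test is an assumption that approximates, but may not exactly match, the published criterion table; all names are illustrative.

```python
# Minimal sketch of the dual-criteria (DC) method. The required number
# of points is derived from a Binomial(n, .5) test at alpha = .05,
# which approximates (but may not exactly match) the published table.

import numpy as np
from scipy.stats import binom

def required_points(n, alpha=0.05):
    """Smallest k such that P(X >= k) <= alpha for X ~ Binomial(n, .5)."""
    for k in range(n + 1):
        if binom.sf(k - 1, n, 0.5) <= alpha:
            return k
    return n + 1  # unreachable for the phase lengths used in practice

def dual_criteria(baseline, treatment, increase_expected=True):
    """Judge an AB graph as showing a change when enough treatment
    points fall beyond both baseline-derived criterion lines."""
    x = np.arange(len(baseline))
    slope, intercept = np.polyfit(x, baseline, 1)  # OLS trend on baseline
    x_treat = np.arange(len(baseline), len(baseline) + len(treatment))
    trend_line = slope * x_treat + intercept
    mean_line = np.full(len(treatment), np.mean(baseline))
    b = np.asarray(treatment, dtype=float)
    if increase_expected:
        beyond = (b > mean_line) & (b > trend_line)
    else:
        beyond = (b < mean_line) & (b < trend_line)
    return int(beyond.sum()) >= required_points(len(treatment))

# A clear level change: all five treatment points exceed both lines.
print(dual_criteria([2, 3, 2, 4, 3], [5, 6, 7, 6, 8]))  # True
```

A machine learning model trained on rated graphs plays the same role as this decision rule, mapping an AB graph to a change/no-change judgment, which is what allows a head-to-head comparison on accuracy, Type I error rate, and power.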
 

Using AB Designs With Nonoverlap Effect Size Measures to Support Clinical Decision Making: A Monte Carlo Validation
(Applied Research)
ANTONIA R. GIANNAKAKOS (Manhattanville College), Marc J. Lanovaz (Université de Montréal)
Abstract: Single-case experimental designs often require extended baselines or the withdrawal of treatment, which may not be feasible or ethical in some practical settings. The quasi-experimental AB design is a potential alternative, but more research is needed on its validity. The purpose of our study was to examine the validity of using nonoverlap measures of effect size to detect changes in AB designs using simulated data. In our analyses, we determined thresholds for three effect size measures beyond which the Type I error rate would remain below .05, and then examined whether using these thresholds would provide sufficient power. Overall, our analyses show that some effect size measures may provide adequate control over Type I error rate and sufficient power when analyzing data from AB designs. In sum, our results suggest that practitioners may use quasi-experimental AB designs in combination with effect size measures to rigorously assess progress in practice.
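To illustrate the validation logic, the sketch below computes one common nonoverlap measure, the nonoverlap of all pairs (NAP), for an AB series and estimates the Type I error rate of a candidate decision threshold by simulating no-effect datasets. The phase lengths, thresholds, and data model (independent normal observations) are illustrative assumptions, not the parameters used in the study.

```python
# Minimal sketch of a Monte Carlo threshold validation: compute NAP for
# an AB series, then estimate the Type I error rate of a candidate NAP
# threshold under a no-effect model. All parameters are illustrative.

import numpy as np

rng = np.random.default_rng(2020)

def nap(phase_a, phase_b):
    """Nonoverlap of all pairs: proportion of (A, B) pairs in which the
    B point exceeds the A point, counting ties as half."""
    a = np.asarray(phase_a, dtype=float)
    b = np.asarray(phase_b, dtype=float)
    greater = (b[None, :] > a[:, None]).sum()
    ties = (b[None, :] == a[:, None]).sum()
    return (greater + 0.5 * ties) / (a.size * b.size)

def type_i_error(threshold, n_a=5, n_b=5, reps=10_000):
    """Share of simulated no-effect AB datasets whose NAP reaches the
    threshold; a usable threshold keeps this proportion below .05."""
    hits = 0
    for _ in range(reps):
        a = rng.standard_normal(n_a)
        b = rng.standard_normal(n_b)  # same distribution: no true effect
        hits += int(nap(a, b) >= threshold)
    return hits / reps

for threshold in (0.80, 0.90, 1.00):
    print(f"NAP >= {threshold:.2f}: Type I error ~ {type_i_error(threshold):.3f}")
```

The same machinery, run on simulated datasets that do contain a true effect, yields the power estimate: the proportion of effect-present datasets whose NAP crosses the validated threshold.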

 
Unique Applications of Single-Case Experimental Designs: “Hybrid Designs” in Research and Practice
(Theory)
ODESSA LUNA (St. Cloud State University), John T. Rapp (Auburn University)
Abstract: The purpose of experimental designs is to determine the extent to which an independent variable is responsible for the observed change in the dependent variable and to ensure that the change is not due to extraneous variables. In behavior-analytic practice and research, we often use single-case experimental designs to evaluate the effect of a treatment with relatively few participants. As our field expands beyond individualized assessment and treatment for individuals with disabilities, researchers and clinicians may need to consider alternative methods for evaluating functional control over the behavior of groups of individuals. For example, behavior analysts may be tasked with changing the behavior of a group of individuals in nontraditional settings such as detention centers or foster homes. Currently, the literature provides little guidance on how to measure a group of behaving individuals within a single-subject framework. The purpose of this talk is to propose the term “hybrid designs” for approaches in which modified single-case experimental designs and group designs are combined to study group behavior. The talk will review the ways single-case experimental designs have been used in the literature to study groups of individuals and will consider applications for future research and practice. By proposing these hybrid designs, the talk aims to outline how we may (a) expand the range of experimental questions that behavior analysts can ask and (b) extend the utility of these designs to other disciplines with differing dependent variables.
 
