Taking it to the Streets: Technology Transfer in Applied Behavior Analysis
Tuesday, May 31, 2005
10:30 AM–11:50 AM
Williford A (3rd floor)
Area: CBM; Domain: Applied Research
Chair: Cynthia M. Anderson (West Virginia University)
CE Instructor: Cynthia M. Anderson, Ph.D.
Abstract: Technology transfer involves disseminating assessment and/or intervention tools to individuals in typical settings. In this symposium, several examples of technology transfer are presented; each study focuses on a different population, ranging from children with disabilities to foster families.

Factors Associated with Running Away Among Youth in Foster Care
LUANNE WITHERUP (University of Florida), Timothy R. Vollmer (University of Florida), Carole M. Van Camp (University of Florida), John C. Borrero (University of Florida)
Abstract: We conducted two studies designed to evaluate running away among children in foster care. The purpose of Study 1 was to evaluate potential risk and protective factors for running away. Participants included over 32,000 children receiving services from the Florida Department of Children and Families (FDCF). Various characteristics were evaluated, including gender, age, race, custodial status, dependency goal, most recent placement type, and time spent in foster care. All data were extracted from existing databases managed by FDCF. Probability analyses were conducted to identify factors associated with an increased or decreased likelihood of running away. Results highlighted several factors associated with running away that may be used to identify at-risk children in need of individualized assessment and preventive intervention. In Study 2, the likelihood of running away from various placement types was evaluated for individual foster children. For each participant, we calculated the probability of running away from each type of placement they had experienced. Results demonstrated the usefulness of this analysis in identifying placements associated with an increased or decreased likelihood of running away.
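As a rough illustration of the Study 2 probability analysis (not part of the original abstract), the sketch below assumes hypothetical placement records consisting of a placement type and a flag for whether the child ran away from that placement; the function name and data structure are invented for illustration only.

```python
# Illustrative sketch only: the abstract does not specify how the probability
# analyses were implemented. Assumes hypothetical (placement_type, ran_away)
# records for one child or a group of children.
from collections import defaultdict

def runaway_probability_by_placement(placements):
    """Estimate P(run away | placement type) from (placement_type, ran_away) records."""
    counts = defaultdict(lambda: [0, 0])  # placement type -> [runaways, total placements]
    for placement_type, ran_away in placements:
        counts[placement_type][1] += 1
        if ran_away:
            counts[placement_type][0] += 1
    return {ptype: runs / total for ptype, (runs, total) in counts.items()}

# Example with made-up records for a single participant
records = [
    ("foster home", False),
    ("foster home", True),
    ("group home", True),
    ("group home", True),
    ("relative placement", False),
]
print(runaway_probability_by_placement(records))
# {'foster home': 0.5, 'group home': 1.0, 'relative placement': 0.0}
```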

Comparing Indirect and Experimental Methods of Functional Analysis
JENNIFER R. ZARCONE (Life Span Institute), Katie Hine (Parsons State Hospital), Rachel L. Freeman (University of Kansas), Marie Constance Tieghi-Benet (University of Kansas), Chris Smith (University of Kansas), Pat Kimbrough (University of Kansas)
Abstract: Different methods of conducting functional analysis or assessment were compared for children and adults with significant problem behavior. Participants were selected by trainees participating in a statewide Positive Behavior Support training program. As part of the teaching process, trainees conducted a functional assessment using several indirect and descriptive methods. Based on the results of the assessment, trainees developed a hypothesis regarding the function(s) of the problem behavior. A second team then conducted an independent analog functional analysis, and the degree of convergent validity between the two evaluation methods was assessed. Data from five participants indicated fairly good convergent validity between the two approaches (70% agreement). In two cases, the hypothesis and the results of the analog functional analysis were in exact agreement; in the other three cases, the functional analysis identified one function, whereas the trainee’s hypothesis indicated two possible functions. There were several limitations to the assessments conducted, however, which often made a clear comparison difficult.
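The abstract does not report how the 70% agreement figure was computed. As a minimal sketch (not the authors' method), the code below assumes a hypothetical per-case overlap score between the hypothesized functions and the functions identified in the analog functional analysis, averaged across cases; the example data are made up.

```python
# Illustrative sketch only: assumes a hypothetical scoring rule in which
# agreement for each case is the proportion of overlap between the two
# sets of identified behavioral functions, averaged across cases.
def case_agreement(hypothesized, identified):
    """Proportion of overlap between two sets of behavioral functions."""
    union = hypothesized | identified
    return len(hypothesized & identified) / len(union) if union else 1.0

def mean_agreement(cases):
    scores = [case_agreement(h, i) for h, i in cases]
    return sum(scores) / len(scores)

# Made-up example: two exact matches and three partial matches
cases = [
    ({"escape"}, {"escape"}),                   # exact agreement
    ({"attention"}, {"attention"}),             # exact agreement
    ({"escape", "attention"}, {"escape"}),      # trainee hypothesized two functions
    ({"attention", "tangible"}, {"tangible"}),
    ({"escape", "automatic"}, {"automatic"}),
]
print(f"Mean agreement: {mean_agreement(cases):.0%}")  # Mean agreement: 70%
```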

Evaluating Progress in Behavioral Programs for Children with Pervasive Developmental Disorders: Continuous Versus Intermittent Data Collection
ANNE CUMMINGS (Western Michigan University), James E. Carr (Western Michigan University)
Abstract: It is well documented that intensive behavioral treatment of early childhood autism can result in significant improvements in adaptive behavior. The typical teaching format in such programs is based on the restricted operant (also known as the discrete trial), in which the performance of an exemplar skill follows a clear instruction and precedes programmed reinforcement or error correction. Because of the often-intensive nature of behavioral treatment, it is not unusual for thousands of learning opportunities to be presented each week. There currently exists a professional debate regarding the frequency of data collection necessary in autism treatment programs. One side of the argument favors collecting data on every learning opportunity for a complete assessment of child performance. The other side favors intermittent data collection to facilitate more efficient instruction. Unfortunately, little published empirical evidence exists to inform the debate. Thus, the current study was designed to evaluate continuous (i.e., trial-by-trial) versus intermittent (i.e., first-trial-only) data collection systems across a number of curriculum areas in behavioral treatment programs for children with pervasive developmental disorders. In our study, 6 children were taught numerous exemplars in 2–4 curricular areas using established behavioral procedures. The exemplars within each curricular area were randomly assigned to one of the data collection conditions. Each condition was evaluated based on the number of sessions to reach a mastery criterion for an exemplar and the percentage correct score for that exemplar at a 3-week follow-up assessment. Our results indicate that the type of data collection generally did not substantially affect acquisition rates or maintenance performance. Although the experimental preparation employed in this study is not representative of all teaching circumstances, our data suggest that collecting data on only the first trial of a session might be a reasonable tactic.
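To make the contrast between the two recording systems concrete, here is a minimal sketch (not taken from the study) of how the same trial-by-trial records could be scored under continuous versus first-trial-only recording; the mastery criteria, session structure, and example data are assumptions.

```python
# Illustrative sketch only: mastery criteria and session structure are
# assumptions, not drawn from the study described above.
def sessions_to_mastery_continuous(sessions, criterion=0.9, consecutive=2):
    """Sessions until percent correct across all trials meets the criterion
    for a given number of consecutive sessions (continuous recording)."""
    streak = 0
    for n, trials in enumerate(sessions, start=1):
        streak = streak + 1 if sum(trials) / len(trials) >= criterion else 0
        if streak >= consecutive:
            return n
    return None  # mastery not reached

def sessions_to_mastery_first_trial(sessions, consecutive=2):
    """Sessions until the first trial is correct for a given number of
    consecutive sessions (intermittent, first-trial-only recording)."""
    streak = 0
    for n, trials in enumerate(sessions, start=1):
        streak = streak + 1 if trials[0] else 0
        if streak >= consecutive:
            return n
    return None

# Made-up exemplar data: each inner list is one session of discrete trials
sessions = [
    [False, True, False, True, True],
    [True, True, True, False, True],
    [True, True, True, True, True],
    [True, True, True, True, True],
]
print(sessions_to_mastery_continuous(sessions))   # 4
print(sessions_to_mastery_first_trial(sessions))  # 3
```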

Evaluating Functional Assessment Outcomes Based on Hand-Scored Data Versus Computerized Data Collection
CYNTHIA M. ANDERSON (West Virginia University), Emily O. Garnett (West Virginia University), Deanna Perrine (West Virginia University), Ellen J. McCartney (West Virginia University)
Abstract: Computerized data collection allows for real-time coding and a complex evaluation of environment-behavior relations. Although useful, computerized coding requires sophisticated equipment that many behavioral practitioners do not have access to. The purpose of this research study was to compare outcomes from descriptive functional assessments using hand-scoring and computerized data collection systems. Results suggest that hand-scoring is a useful way to develop hypotheses about environment-behavior relations.
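The abstract does not describe the coding schemes or analyses used. Purely as an illustration of the kind of environment-behavior statistic each system might yield, the sketch below contrasts a conditional probability computed from hypothetical computerized event timestamps with one computed from hypothetical hand-scored 10-s partial-interval records; all names and data are invented.

```python
# Illustrative sketch only: neither the coding scheme nor the analysis is
# specified in the abstract above.
def p_consequence_given_behavior(behavior_times, consequence_times, window=10.0):
    """P(consequence within `window` seconds | behavior), from event timestamps
    (computerized real-time recording)."""
    followed = sum(
        any(0 <= c - b <= window for c in consequence_times)
        for b in behavior_times
    )
    return followed / len(behavior_times) if behavior_times else 0.0

def p_interval_overlap(behavior_intervals, consequence_intervals):
    """Proportion of behavior-scored intervals also scored for the consequence
    (hand-scored partial-interval records, as booleans)."""
    both = sum(b and c for b, c in zip(behavior_intervals, consequence_intervals))
    scored = sum(behavior_intervals)
    return both / scored if scored else 0.0

# Made-up session: problem behavior at 12 s and 45 s; attention delivered at 15 s
print(p_consequence_given_behavior([12.0, 45.0], [15.0]))          # 0.5
print(p_interval_overlap([False, True, False, False, True],
                         [False, True, False, False, False]))      # 0.5
```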