Examination of Data Analysis Methods
Monday, May 26, 2008
10:30 AM–11:50 AM
Stevens 2
Area: DDA/AUT; Domain: Applied Research
Chair: Jason C. Bourret (The New England Center for Children)
Discussant: William H. Ahearn (The New England Center for Children)
CE Instructor: Jason C. Bourret, Ph.D.
Abstract: Analysis and interpretation of data to identify functional relations is a hallmark of behavior analysis. The talks in this symposium focus on the evaluation of differing methods of analyzing data and, in particular, the degree to which those methods facilitate the detection of extinction bursts, treatment effects, and changes in the reliability of data collection.

Within- versus Between-Session Examination of Responding during Extinction.
JENNIFER N. FRITZ (University of Florida), Brian A. Iwata (University of Florida), Griffin Rooker (University of Florida)
Abstract: Although extinction (EXT) is the most direct method for reducing the frequency of problem behavior, its use has been associated with several side effects, the most common being the EXT burst. An EXT burst typically is defined as a temporary increase in response rate above that observed during baseline. Reports of EXT bursts are relatively rare; however, their occurrence may be masked when data are presented as overall session rates. In other words, a burst of responding may occur at the beginning of initial EXT sessions, yet overall session rates might not reveal it. We first compared local rates of problem behavior during contingent-reinforcement and extinction-only conditions by conducting within-session analyses of behavioral patterns to determine whether the occurrence of EXT bursts can be masked by the use of average session rates. Second, we examined the effectiveness of other treatment procedures (in conjunction with EXT) for reducing the magnitude or occurrence of EXT bursts.
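
The masking effect described above can be illustrated with a brief sketch. The code below is not from the study; the 10-minute session, one-minute bins, and response times are hypothetical values chosen only to show how an overall session rate can hide a burst that minute-by-minute local rates make obvious.

```python
"""Illustrative sketch only: bin width, session length, and data are hypothetical."""

def local_rates(response_times_s, session_length_s, bin_s=60):
    """Responses per minute in each consecutive within-session bin."""
    n_bins = int(session_length_s // bin_s)
    rates = []
    for b in range(n_bins):
        start, end = b * bin_s, (b + 1) * bin_s
        count = sum(start <= t < end for t in response_times_s)
        rates.append(count / (bin_s / 60.0))  # convert count to responses/min
    return rates

def overall_rate(response_times_s, session_length_s):
    """Average responses per minute across the whole session."""
    return len(response_times_s) / (session_length_s / 60.0)

# Hypothetical first EXT session: a burst of 18 responses in the first two
# minutes, then only occasional responses over the remaining eight minutes.
ext_session = [t * 6.7 for t in range(18)] + [180, 260, 400, 560]

print(overall_rate(ext_session, 600))  # ~2.2 responses/min for the whole session
print(local_rates(ext_session, 600))   # first two bins run ~9 responses/min: the burst
```

The overall rate summarizes the session as unremarkable, whereas the binned local rates expose the elevated responding at the start of the session, which is the pattern the within-session analyses in this talk are designed to detect.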

Within-Session Response Patterns as Predictors of Treatment Outcome.
GRIFFIN ROOKER (University of Florida), Brian A. Iwata (University of Florida), Jennifer N. Fritz (University of Florida)
Abstract: Analysis of within-session response patterns has been used in several studies to examine data from functional analyses of problem behavior but generally has not been used to evaluate treatment effects. Rapid detection of changes in responding may be especially helpful when comparing the effects of two or more treatments. We conducted an analysis of within-session response patterns during treatment comparisons to determine whether any initial differences could be detected and, if so, whether they were predictive of treatment outcome.
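
The abstract does not specify how within-session patterns were summarized, so the sketch below is only one plausible operationalization: fit a least-squares trend to minute-by-minute response counts in each treatment's first session and compare the slopes, on the assumption that an early downward within-session trend might foreshadow the treatment that ultimately prevails. The function name, the slope metric, and the data are all hypothetical.

```python
"""Hypothetical sketch; the study's actual summary metric is not described here."""

def within_session_slope(per_minute_counts):
    """Ordinary least-squares slope of responses/min across successive minutes."""
    n = len(per_minute_counts)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(per_minute_counts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, per_minute_counts))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Minute-by-minute counts from the first session under two hypothetical treatments.
treatment_a = [8, 7, 5, 4, 3, 2, 2, 1, 1, 0]   # responding declines within the session
treatment_b = [6, 7, 6, 7, 5, 6, 7, 6, 5, 6]   # responding roughly stable

print(within_session_slope(treatment_a))  # clearly negative trend
print(within_session_slope(treatment_b))  # near zero
```

Whether an early index of this kind actually predicts eventual treatment outcome is the empirical question the talk addresses.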

Comparison of Proportional and Exact Agreement in Measuring Improvement in Data Collection.
STACIE BANCROFT (The New England Center for Children), Jason C. Bourret (The New England Center for Children)
Abstract: Interobserver agreement (IOA) is often calculated to enhance the believability of data. Proportional agreement and exact agreement are two common calculations used to determine the percentage of agreement between two independent observers. Participants included four teachers working in a research group as data collectors in partial fulfillment of graduate program requirements or in preparation for a graduate program. Participants served as secondary data collectors to experienced primary data collectors across several different studies. Both proportional and exact agreement scores were calculated for each of their first several sessions of data collection. The first analysis for each participant showed the progression of agreement scores over time under both calculations: proportional agreement scores showed little change, whereas exact agreement scores increased steadily. A second analysis plotted exact agreement scores against proportional agreement scores and showed that the two calculations produce similar scores when agreement is high; in sessions with lower levels of agreement, however, exact agreement scores were considerably lower. These data suggest that exact agreement calculations may be a more sensitive measure of changes in agreement and may be more useful in measuring progress when training new data collectors.
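
The two calculations named in the abstract can be made concrete with a short sketch. Assuming interval-by-interval frequency counts from a primary and a secondary observer (the data, the both-zero scoring convention, and expression as percentages are common choices assumed here, not details taken from the study), proportional agreement averages the smaller count divided by the larger count across intervals, while exact agreement credits an interval only when the two counts match exactly.

```python
"""Illustrative sketch of the two agreement calculations, using invented counts."""

def proportional_agreement(primary, secondary):
    """Mean of smaller/larger count per interval, expressed as a percentage."""
    scores = []
    for p, s in zip(primary, secondary):
        if p == 0 and s == 0:
            scores.append(1.0)  # both observers recorded nothing: full agreement
        else:
            scores.append(min(p, s) / max(p, s))
    return 100.0 * sum(scores) / len(scores)

def exact_agreement(primary, secondary):
    """Percentage of intervals in which both observers recorded the same count."""
    matches = sum(p == s for p, s in zip(primary, secondary))
    return 100.0 * matches / len(primary)

# Hypothetical 10-s interval counts for one session.
primary   = [2, 0, 3, 1, 0, 4, 2, 0, 1, 3]
secondary = [1, 0, 3, 2, 0, 3, 2, 1, 1, 2]

print(proportional_agreement(primary, secondary))  # about 84%
print(exact_agreement(primary, secondary))         # 50%: counts must match exactly
```

On these invented counts the proportional score is about 84% while the exact score is 50%, mirroring the authors' observation that the two methods converge when agreement is high and diverge when it is not.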