Computers, Complexity, and Contingencies, Oh My!
Monday, May 30, 2016
10:00 AM–11:50 AM
Crystal Ballroom A, Hyatt Regency, Green West
Area: DEV/TPC; Domain: Translational
Chair: Alexandria Leidt (University of Mississippi)
Discussant: Patrice Marie Miller (Salem State University)
CE Instructor: Darlene E. Crone-Todd, Ph.D.
Abstract: In this multi-domain symposium, each presentation addresses, in its own way, the intersection of computers and contingencies in the analysis of complex human behavior. All of the talks focus on the assessment of the tasks involved, the behavior emitted by humans, or both. Assessing tasks is required for a clear task analysis of what is demanded in each domain in which behavior change is desired. Assessing behavior in terms of how well it matches, under-matches, or over-matches those tasks is an important part of shaping behavior. Several presentations report high inter-observer reliability scores, along with effective strategies for changing socially important behavior through contingencies applied to human behavior. Reliable and valid methods of assessment such as these are important for the field of behavior analysis and beyond.
Keyword(s): complex behavior, computers, task analysis

Shaping Complex Repertoires in Undergraduate Courses
DARLENE E. CRONE-TODD (Salem State University)
Abstract: Complex, higher-order thinking is expected of university students, and the complexity of the tasks increases as one completes successive levels of higher education. Traditional methods of assessing the complexity of tasks and of student performance typically result in low inter-scorer reliability (ISR). However, a model of hierarchical complexity shows promise as a more reliable and valid measure of both academic tasks and performance. In this presentation, data are presented on educational interventions (i.e., inter-teaching methods) at the undergraduate and graduate levels aimed at developing complex behavioral repertoires. Specifically, pre- and post-test data, along with two exams, are analyzed in terms of the levels at which questions are asked and the percentage of students who can answer questions at each of those levels. In addition, ISR was at or above 85%. This suggests that the model can be useful for academic assessment and that inter-teaching interventions can be used to increase complex thinking.
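
For readers unfamiliar with the ISR statistic reported above, one common way to compute it is point-by-point percent agreement between two scorers. The sketch below is a minimal illustration under that assumption, not the author's scoring procedure; the function name and level codes are hypothetical.

# Minimal sketch of point-by-point percent agreement (ISR), assuming two
# scorers assign each exam question a hierarchical-complexity level code.
# The function name and the example codes are hypothetical, not the study's.
def percent_agreement(scorer_a, scorer_b):
    """Agreements divided by total items, expressed as a percentage."""
    if len(scorer_a) != len(scorer_b):
        raise ValueError("Scorers must rate the same set of items.")
    agreements = sum(a == b for a, b in zip(scorer_a, scorer_b))
    return 100.0 * agreements / len(scorer_a)

# Hypothetical level codes for ten exam questions:
print(percent_agreement([9, 10, 10, 11, 9, 10, 11, 10, 9, 10],
                        [9, 10, 10, 11, 9, 10, 10, 10, 9, 10]))  # -> 90.0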

Slow Your Roll: Using Behavioral Principles to Decrease Response Speed in Speedy Survey Participants
YASH BHAMBHANI (University of Mississippi), Solomon Kurz (University of Mississippi), Kelly G. Wilson (University of Mississippi), Karen Kate Kellum (University of Mississippi)
Abstract: Most survey research in psychology relies on undergraduate student samples. Data obtained from these samples are often of poor quality and questionable validity. One issue is that up to one quarter of students participating in survey research complete instruments too quickly. The purpose of the present study is to examine the effectiveness of two interventions (a warning condition and a warning plus time penalty condition) for slowing down speedy responders relative to a no-intervention condition. Participants will be a large sample of undergraduates from a public university who will be invited to complete a lengthy online survey battery for course credit. The survey is intentionally long so as to burden participants and occasion hasty responding. We will examine the extent to which the intervention conditions slow down speedy responders. Secondary analyses will assess how the conditions differ with respect to straightlining (i.e., answering all questions with the same response, such as 1 1 1), missing data, accuracy on attention-check items, and the number of multivariate outliers. We will also examine whether speedy responders differ by demographic variables. Finally, we will discuss future directions for using behavioral interventions to improve the validity of survey data.
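
As a concrete illustration of the screening outcomes listed above, the sketch below flags speedy responders, straightliners, and failed attention checks in a hypothetical survey data set. The column names, time cutoff, and DataFrame layout are assumptions for illustration, not the study's actual criteria.

# Minimal data-quality screening sketch; all names and thresholds are
# illustrative assumptions, not the study's actual criteria.
import pandas as pd

MIN_SECONDS = 300  # hypothetical cutoff separating "speedy" completions

def screen_responses(df, item_cols):
    """Flag data-quality problems in a survey DataFrame (one row per participant)."""
    flags = pd.DataFrame(index=df.index)
    # Speedy responders: total completion time under the cutoff.
    flags["speedy"] = df["duration_sec"] < MIN_SECONDS
    # Straightlining: the same response to every item (e.g., 1 1 1 ...).
    flags["straightline"] = df[item_cols].nunique(axis=1) == 1
    # Attention checks: response does not match the keyed answer.
    flags["failed_check"] = df["attention_check"] != df["attention_key"]
    return flags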

Effects of a Rubric on Inter-Observer Agreement in Narrative Task Analysis
ANA CAROLINA SELLA (Universidade Federal de Alagoas, Brazil), Daniela Mendonça Ribeiro (Universidade Federal de Alagoas)
Abstract: Since 2005, our group has conducted research on the assessment and teaching of narrative skills. Narratives are complex verbal behavior units, and several dependent variables can be targeted for analysis in any given task involving these repertoires. Usually, the data analysis process consists of reading a story transcription several times and transforming the target dependent variables into quantifiable data (e.g., presence of story categories, mean length of utterance, episode complexity). A recurrent problem we have faced is achieving acceptable inter-observer agreement (IOA; i.e., at least 80% agreement) when one of the observers is an undergraduate student. The purpose of this study was to evaluate the effects of reading a rubric on the percentage of IOA for four dependent variables: presence of story categories, total number of words, number of different words, and number of conjunctions. No other procedures were used. Three undergraduate students took part in the study. Overall, the rubric alone was effective in increasing IOA for the last three dependent variables. Other procedures, such as immediate feedback and discrimination activities, may be necessary to increase IOA for the presence or absence of story categories.
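
For the count-based dependent variables above (e.g., total number of words), a common way to compute IOA is to divide the smaller observer's count by the larger and multiply by 100. The sketch below is a minimal illustration under that assumption; it is not the authors' analysis code.

# Minimal sketch of total-count IOA for one transcript; an illustrative
# convention, not necessarily the authors' exact calculation.
def total_count_ioa(count_a, count_b):
    """Total-count IOA: smaller count divided by larger count, times 100."""
    if count_a == count_b:
        return 100.0  # also covers the case where both counts are zero
    return 100.0 * min(count_a, count_b) / max(count_a, count_b)

# Hypothetical word counts from two observers scoring the same transcript:
print(total_count_ioa(212, 200))  # -> ~94.3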

Creating a Measure that Measures Up: Exploring Self-Report, Experience Sampling, and Behavioral Measures of Body Image Flexibility
JESSICA AUZENNE (University of Louisiana at Lafayette), Nolan Williams (University of Louisiana at Lafayette), Grayson Butcher (University of Louisiana at Lafayette), Gina Quebedeaux Boullion (University of Louisiana at Lafayette), Heather Chiasson (University of Louisiana at Lafayette), Michael Bordieri (Murray State University), Emily Kennison Sandoz (University of Louisiana at Lafayette)
Abstract: Body image flexibility involves a pattern of responding in which effective, values-consistent action can be taken even in the presence of aversive experiences of one’s body. As body image flexibility is associated with more favorable clinical outcomes, the ability to assess this behavior in ways that accurately reflect the individual’s behavior becomes important in research and practice. To date, the primary means of assessment has been self-report measures, which are typically single-administration, retrospective reports. A tool that can model body image flexibility in a laboratory setting while also assessing the behavior might be of even greater utility if it relates to observations of individuals’ day-to-day behavior. This paper will examine the relationships among a developing computer-based behavioral measure of body image flexibility, single-administration retrospective reports, and samples of day-to-day experiences of body image flexibility. The unique contributions of each assessment, along with data on the current validity and utility of this novel computer-based assessment, will be discussed.