
The relative impact of student affect on performance models in a spoken dialogue tutoring system

Forbes-Riley, K and Rotaru, M and Litman, DJ (2008) The relative impact of student affect on performance models in a spoken dialogue tutoring system. User Modeling and User-Adapted Interaction, 18 (1-2), 11-43. ISSN 0924-1868


Abstract

We hypothesize that student affect is a useful predictor of spoken dialogue system performance, relative to other parameters. We test this hypothesis in the context of our spoken dialogue tutoring system, where student learning is the primary performance metric. We first present our system and corpora, which have been annotated with several student affective states, student correctness and discourse structure. We then discuss unigram and bigram parameters derived from these annotations. The unigram parameters represent each annotation type individually, as well as system-generic features. The bigram parameters represent annotation combinations, including student state sequences and student states in the discourse structure context. We then use these parameters to build learning models. First, we build simple models based on correlations between each of our parameters and learning. Our results suggest that our affect parameters are among our most useful predictors of learning, particularly in specific discourse structure contexts. Next, we use the PARADISE framework (multiple linear regression) to build complex learning models containing only the most useful subset of parameters. Our approach is a value-added one; we perform a number of model-building experiments, both with and without including our affect parameters, and then compare the performance of the models on the training and the test sets. Our results show that when included as inputs, our affect parameters are selected as predictors in most models, and many of these models show high generalizability in testing. Our results also show that overall, the affect-included models significantly outperform the affect-excluded models. © 2007 Springer Science+Business Media B.V.
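As a concrete illustration of the parameter derivation described in the abstract, the following sketch (not the paper's feature-extraction code; the turn tuples and label names are hypothetical) tallies unigram parameters for each annotation type on its own, and bigram parameters for student-state sequences and for student states in their discourse-structure context:

    from collections import Counter

    # Each turn: (student affect label, correctness label, discourse-structure transition)
    dialogue = [
        ("neutral",    "correct",   "advance"),
        ("uncertain",  "incorrect", "push"),     # system pushes into a remediation subdialogue
        ("uncertain",  "correct",   "advance"),
        ("frustrated", "incorrect", "push"),
        ("neutral",    "correct",   "pop"),      # remediation subdialogue ends
    ]

    # Unigram parameters: each annotation type counted individually.
    affect_uni  = Counter(affect for affect, _, _ in dialogue)
    correct_uni = Counter(corr for _, corr, _ in dialogue)
    pct_uncertain = affect_uni["uncertain"] / len(dialogue)

    # Bigram parameters: annotation combinations, e.g. consecutive student-state
    # sequences and student states paired with their discourse-structure context.
    states = [affect for affect, _, _ in dialogue]
    state_seq_bi = Counter(zip(states, states[1:]))
    state_ctx_bi = Counter((affect, trans) for affect, _, trans in dialogue)

    print(affect_uni, correct_uni, pct_uncertain)
    print(state_seq_bi, state_ctx_bi)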
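The value-added model-building step can likewise be sketched in a few lines. The snippet below is again hypothetical: synthetic data, placeholder parameter names, and a simple greedy forward selection standing in for the stepwise multiple linear regression of the PARADISE framework. It fits an affect-excluded and an affect-included learning model on a training split and compares their R^2 on a held-out test split:

    import numpy as np

    def fit_r2(X_fit, y_fit, X_eval, y_eval):
        """Ordinary least squares on (X_fit, y_fit); return R^2 on (X_eval, y_eval)."""
        add_intercept = lambda X: np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(add_intercept(X_fit), y_fit, rcond=None)
        pred = add_intercept(X_eval) @ coef
        return 1.0 - np.sum((y_eval - pred) ** 2) / np.sum((y_eval - y_eval.mean()) ** 2)

    def forward_select(X, y, names, max_features=3):
        """Greedy forward selection on training R^2 (a stand-in for stepwise regression)."""
        selected, remaining = [], list(range(X.shape[1]))
        while remaining and len(selected) < max_features:
            _, best = max((fit_r2(X[:, selected + [j]], y, X[:, selected + [j]], y), j)
                          for j in remaining)
            selected.append(best)
            remaining.remove(best)
        return selected, [names[j] for j in selected]

    # Hypothetical parameters: system-generic vs. affect-derived (unigram/bigram).
    generic = ["num_turns", "pct_correct", "total_time"]
    affect  = ["pct_uncertain", "uncertain_after_incorrect", "frustrated_in_remediation"]
    names   = generic + affect

    rng = np.random.default_rng(0)               # synthetic data, for illustration only
    X = rng.normal(size=(80, len(names)))
    learning = 0.4 * X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=80)
    train, test = slice(0, 60), slice(60, None)

    # Affect-excluded model: only generic parameters are offered to selection.
    gen_cols = list(range(len(generic)))
    sel, chosen = forward_select(X[train][:, gen_cols], learning[train], generic)
    r2_excl = fit_r2(X[train][:, gen_cols][:, sel], learning[train],
                     X[test][:, gen_cols][:, sel], learning[test])

    # Affect-included model: affect parameters compete for selection as well.
    sel_all, chosen_all = forward_select(X[train], learning[train], names)
    r2_incl = fit_r2(X[train][:, sel_all], learning[train],
                     X[test][:, sel_all], learning[test])

    print("affect-excluded:", chosen, "test R^2 =", round(r2_excl, 3))
    print("affect-included:", chosen_all, "test R^2 =", round(r2_incl, 3))

The paper's actual experiments follow the PARADISE methodology over the annotated corpora; the synthetic example above only illustrates the train/test comparison of affect-excluded versus affect-included models.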



Details

Item Type: Article
Status: Published
Creators/Authors:
  Forbes-Riley, K
  Rotaru, M
  Litman, DJ (email: dlitman@pitt.edu, Pitt username: DLITMAN)
Centers: University Centers > Learning Research and Development Center (LRDC)
Date: 1 February 2008
Date Type: Publication
Journal or Publication Title: User Modeling and User-Adapted Interaction
Volume: 18
Number: 1-2
Page Range: 11-43
DOI or Unique Handle: 10.1007/s11257-007-9038-5
Schools and Programs: Dietrich School of Arts and Sciences > Computer Science
Dietrich School of Arts and Sciences > Intelligent Systems
Refereed: Yes
ISSN: 0924-1868
Date Deposited: 16 Oct 2014 18:00
Last Modified: 02 Feb 2019 15:59
URI: http://d-scholarship.pitt.edu/id/eprint/23189
