Pham, Phuong (2018). Improving and Scaling Mobile Learning via Emotion and Cognitive-state Aware Interfaces. Doctoral Dissertation, University of Pittsburgh. (Unpublished)
Abstract
Massive Open Online Courses (MOOCs) provide high-quality learning materials at low cost to millions of learners. Current MOOC designs, however, offer minimal learner-instructor communication channels. This limitation prevents MOOCs from addressing major challenges: low retention rates, frequent distractions, and little personalization in instruction. Previous work enriched learner-instructor communication with physiological signals but did not scale because it required additional hardware. Large MOOC providers, such as Coursera, have released mobile apps that support more flexible, "on-the-go" learning. This thesis reports an iterative process for the design of mobile intelligent interfaces that run on unmodified smartphones, implicitly sense multiple modalities from learners, infer learner emotions and cognitive states, and intervene to improve learning.
The first part of this research explores photoplethysmogram (PPG) signals collected implicitly through the back camera of unmodified smartphones. I explore a family of deep neural networks, DeepHeart, that improves the accuracy (+2.2%) and robustness of heart rate sensing from noisy PPG signals. The second project, AttentiveLearner, infers mind-wandering events from the collected PPG signals with performance comparable to systems relying on dedicated physiological sensors (Kappa = 0.22). Leveraging these fine-grained cognitive states, the third project, AttentiveReview, achieves significant learning gains (+17.4%) by providing personalized interventions based on learners' perceived difficulty.
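As a concrete illustration of camera-based PPG sensing, the sketch below estimates heart rate from a brightness signal such as one recorded by a smartphone's back camera. This is a minimal frequency-domain baseline written for this summary, not the DeepHeart model; the sampling rate, window length, and band limits are assumptions.

```python
import numpy as np

def estimate_heart_rate(ppg, fs=30.0):
    """Estimate heart rate (BPM) from a PPG signal sampled at fs Hz.

    A classical spectral baseline (not DeepHeart): band-limit the
    spectrum to plausible heart rates (40-200 BPM) and take the
    dominant frequency.
    """
    ppg = np.asarray(ppg, dtype=float)
    ppg = ppg - ppg.mean()                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(ppg))          # magnitude spectrum
    freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fs)
    band = (freqs >= 40 / 60.0) & (freqs <= 200 / 60.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                           # Hz -> beats per minute

# Example: a synthetic 75 BPM pulse at a phone camera's assumed 30 fps
fs, bpm = 30.0, 75.0
t = np.arange(0, 20, 1.0 / fs)
signal = np.sin(2 * np.pi * (bpm / 60.0) * t) + 0.3 * np.random.randn(t.size)
print(estimate_heart_rate(signal, fs))           # ~75.0
```

The dissertation's contribution is precisely that deep networks outperform this kind of fixed spectral pipeline on noisy, motion-corrupted signals; the sketch only shows what the raw sensing problem looks like.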
The latter part of this research adds real-time facial analysis from the front camera to the PPG sensing from the back camera. AttentiveLearner2 achieves more robust emotion inference (average accuracy = 84.4%) in mobile MOOC learning. In a three-week longitudinal study with 28 subjects, AttentiveReview2, with the multimodal sensing component, improves learning gains by 28.0% and receives high usability ratings (average System Usability Scale score = 80.5).
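The abstract does not specify how the facial and PPG modalities are combined; the sketch below shows one plausible approach, feature-level fusion with a linear classifier. The feature dimensions, labels, and classifier choice are all illustrative assumptions, not the dissertation's actual design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-clip features for two modalities (dimensions assumed).
rng = np.random.default_rng(0)
facial = rng.normal(size=(200, 16))    # e.g., facial-expression descriptors
ppg = rng.normal(size=(200, 8))        # e.g., heart-rate variability features
labels = rng.integers(0, 2, size=200)  # e.g., high vs. low engagement

# Feature-level ("early") fusion: concatenate modalities, then classify.
fused = np.hstack([facial, ppg])
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print(clf.score(fused, labels))
```

Early fusion is the simplest option; an alternative is late fusion, where each modality gets its own classifier and their scores are averaged, which degrades more gracefully when one camera's signal is unusable.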
Finally, I show that the technologies in this dissertation benefit not only MOOC learning but also other emerging areas such as computational advertising and behavioral targeting. AttentiveVideo, built on top of the sensing architecture of AttentiveLearner2, quantifies emotional responses to mobile video advertisements. In a 24-participant study, AttentiveVideo achieved good accuracy on a wide range of emotional measures (best accuracy = 82.6% across 9 measures).
Details
Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators/Authors: Pham, Phuong
ETD Committee:
Date: 28 June 2018
Date Type: Publication
Defense Date: 11 October 2017
Approval Date: 28 June 2018
Submission Date: 27 March 2018
Access Restriction: No restriction; release the ETD for access worldwide immediately.
Number of Pages: 220
Institution: University of Pittsburgh
Schools and Programs: Dietrich School of Arts and Sciences > Computer Science
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: mobile learning, MOOC, affective computing, physiological signal, facial expression, personalized intervention
Date Deposited: 28 Jun 2018 19:27
Last Modified: 28 Jun 2018 19:27
URI: http://d-scholarship.pitt.edu/id/eprint/34067
Available Versions of this Item
- Improving and Scaling Mobile Learning via Emotion and Cognitive-state Aware Interfaces. (deposited 28 Jun 2018 19:27)