Gonzalez-Brenes, Jose and Huang, Yun
(2015)
The Leopard Framework: Towards understanding educational technology interventions with a Pareto Efficiency Perspective.
In: The ICML 2015 Workshop on Machine Learning for Education.
Abstract
Adaptive systems teach and adapt to humans; their promise is to improve education by minimizing the subset of items presented to students while maximizing student outcomes (Cen et al., 2007). In this context, items are questions, problems, or tasks that can be graded individually. The adaptive tutoring community has tacitly adopted conventions for evaluating tutoring systems (Dhanani et al., 2014) by using classification evaluation metrics that assess the student model component — student models are the subsystems that forecast whether a learner will answer the next item correctly. Unfortunately, it is not clear how intuitive classification metrics are for practitioners with little machine learning background. Moreover, our experiments on real and synthetic data reveal that it is possible to have student models that are very predictive (as measured by traditional classification metrics), yet provide little to no value to the learner. Additionally, when we compare alternative tutoring systems with classification metrics, we discover that they may favor tutoring systems that require higher student effort with no evidence that students are learning more. That is, when comparing two alternative systems, classification metrics may prefer a suboptimal system. We recently proposed the Learner Effort-Outcomes Paradigm (Leopard) for the automatic evaluation of adaptive tutoring (González-Brenes & Huang, 2015). Leopard extends prior work on alternatives to classification evaluation metrics (Lee & Brunskill, 2012). At its core, Leopard quantifies both the effort and the outcomes of students in adaptive tutoring. Although these metrics are novel in themselves, our contribution is approximating both without a randomized controlled trial. In this talk, we will describe our recently published results on meta-evaluating Leopard and conventional classification metrics.
Additionally, we will present preliminary results of framing the value of an educational intervention as multi-objective programming. We argue that human-propelled machine learning, and educational technology in particular, aims to optimize the Pareto boundary of effort and outcomes of humans.
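As an illustration of the multi-objective framing above, the sketch below computes a Pareto frontier over hypothetical tutoring systems, each summarized as an (effort, outcome) pair where effort is minimized and outcome is maximized. The data and function name are illustrative assumptions, not part of the Leopard framework itself.

```python
def pareto_frontier(systems):
    """Return the Pareto-optimal (effort, outcome) pairs.

    A system is dominated if another system has effort <= its effort
    and outcome >= its outcome, with at least one strict inequality.
    Effort is minimized; outcome is maximized.
    """
    frontier = []
    for i, (e_i, o_i) in enumerate(systems):
        dominated = any(
            e_j <= e_i and o_j >= o_i and (e_j < e_i or o_j > o_i)
            for j, (e_j, o_j) in enumerate(systems)
            if j != i
        )
        if not dominated:
            frontier.append((e_i, o_i))
    return frontier

# Hypothetical systems: (items practiced, post-test score)
systems = [(8, 0.7), (10, 0.6), (12, 0.7), (5, 0.4)]
print(pareto_frontier(systems))  # → [(8, 0.7), (5, 0.4)]
```

Note that (12, 0.7) is dominated by (8, 0.7): a classification metric oblivious to effort could rank both systems equally even though one demands more student work for the same outcome.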
Details
Item Type: Conference or Workshop Item (Paper)
Status: Published
Creators/Authors:
- Gonzalez-Brenes, Jose
- Huang, Yun (yuh43@pitt.edu, Pitt username: YUH43)
Date: 2015
Date Type: Publication
Access Restriction: No restriction; Release the ETD for access worldwide immediately.
Journal or Publication Title: The ICML 2015 Workshop on Machine Learning for Education
Event Title: The ICML 2015 Workshop on Machine Learning for Education
Event Type: Conference
Institution: University of Pittsburgh
Schools and Programs: Dietrich School of Arts and Sciences > Intelligent Systems
Refereed: Yes
Official URL: http://dsp.rice.edu/ML4Ed_ICML2015
Date Deposited: 27 Aug 2015 19:01
Last Modified: 25 Aug 2017 04:59
URI: http://d-scholarship.pitt.edu/id/eprint/26058