
Controllability and explainability in a hybrid social recommender system

Tsai, Chun-Hua (2020) Controllability and explainability in a hybrid social recommender system. Doctoral Dissertation, University of Pittsburgh. (Unpublished)


Abstract

The growth of artificial intelligence (AI) technology has advanced many human-facing applications. The recommender system is one of the promising sub-domains of AI-driven applications, aiming to predict items or ratings based on user preferences. These systems are empowered by large-scale data and automated inference methods that bring useful but puzzling suggestions to users. That is, the output is usually unpredictable and opaque, which can make users' experience of the system confusing, frustrating, or even dangerous in many life-changing scenarios. Adding controllability and explainability are two promising approaches to improving human interaction with AI. However, the varying capabilities of AI-driven applications make conventional design principles less useful. This brings tremendous opportunities as well as challenges for user interface and interaction design, which has been discussed in the human-computer interaction (HCI) community for over two decades. The goal of this dissertation is to build a framework for AI-driven applications that enables people to interact effectively with the system and to interpret its output. Specifically, this dissertation explores how to bring controllability and explainability to a hybrid social recommender system, including several attempts at designing user-controllable and explainable interfaces that allow users to fuse multi-dimensional relevance and request explanations of the received recommendations. This work contributes to the HCI field by providing design implications for enhancing human-AI interaction and improving the transparency of AI-driven applications.

Details

Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators/Authors:
Creators | Email | Pitt Username | ORCID
Tsai, Chun-Hua | CHT77@pitt.edu | CHT77 | 0000-0001-9188-0362
ETD Committee:
Title | Member | Email Address | Pitt Username | ORCID
Committee Chair | Brusilovsky, Peter | peterb@pitt.edu | PETERB | 0000-0002-1902-1464
Committee Member | Lin, Yu-Ru | YURULIN@pitt.edu | YURULIN | 0000-0002-8497-3015
Committee Member | Pelechrinis, Konstantinos | kpele@pitt.edu | kpele | 0000-0002-6443-3935
Committee Member | O'Donovan, John | jod@cs.ucsb.edu | |
Date: 23 January 2020
Date Type: Publication
Defense Date: 22 August 2019
Approval Date: 23 January 2020
Submission Date: 26 November 2019
Access Restriction: No restriction; Release the ETD for access worldwide immediately.
Number of Pages: 197
Institution: University of Pittsburgh
Schools and Programs: School of Computing and Information > Information Science
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: explainable, controllable, transparency, recommendation, user interface, artificial intelligence
Date Deposited: 23 Jan 2020 19:21
Last Modified: 23 Jan 2020 19:21
URI: http://d-scholarship.pitt.edu/id/eprint/37899
