Lewis, Michael; Li, Huao; Sycara, Katia (2020). Deep Learning, transparency and trust in Human Robot Teamwork. In: Trust in Human-Robot Interaction. Elsevier, New York, NY. (In Press)
Abstract
For autonomous AI systems to be accepted and trusted, users should be able to understand the system's reasoning process (i.e., the system should be transparent). Robotics presents unique programming difficulties in that systems must map from complicated sensor inputs, such as camera feeds and laser scans, to outputs such as joint angles and velocities. Advances in Deep Neural Networks now make it possible to replace laborious handcrafted features and control code by learning control policies directly from high-dimensional sensor inputs. Because Atari games, where these capabilities were first demonstrated, replicate the robotics problem, they are ideal for investigating how humans might come to understand and interact with agents that have not been explicitly programmed. We present computational and human results for making DRLN more transparent using object saliency visualizations of internal states, and we test the effectiveness of expressing saliency through teleological verbal explanations.
Details

Item Type: Book Section
Status: In Press
Creators/Authors: Lewis, Michael; Li, Huao; Sycara, Katia
Date: 2020
Date Type: Publication
Publisher: Elsevier
Place of Publication: New York, NY
Schools and Programs: School of Information Sciences > Information Science
Refereed: Yes
Title of Book: Trust in Human-Robot Interaction
Editors: Nam, Chang; Lyons, Joseph
Date Deposited: 27 May 2020 14:30
Last Modified: 27 May 2020 14:30
URI: http://d-scholarship.pitt.edu/id/eprint/39097