Fu, Xinyu
(2023)
Integrating Imperfect Machines and Unmindful Users: Assessing Human-Bot Hybrid Designs for Managing Discussions in Online Communities.
Doctoral Dissertation, University of Pittsburgh.
(Unpublished)
Abstract
Modern intelligent machines, built from large-scale data rather than hand-coded rules, are no longer passive tools waiting to be used; they take proactive actions and act as co-workers alongside human agents. They are, admittedly, still imperfect. As a result, machines are often deployed in hybrid "assemblage" configurations that require active human intervention in the workflow. This dissertation examines one such collaborative workflow, in which an artificial intelligence (AI) bot and a human jointly manage inappropriate discussions in online communities. I simulated a bot-assisted news discussion forum in which users can affirm or override comments the bot flags as inappropriate. Drawing on the Information Systems (IS) delegation framework, I examine the main attribute of human agents (cognition), the main attribute of agentic IS artifacts (imperfection), and the delegation mechanism between the two agents (complacency potential). Cognition is operationalized through the "generation effect," which arises when people are asked to explain the bot's activities themselves rather than being told of its flaws. Imperfection is twofold: the bot's accuracy and the bot's valence. Complacency potential refers to users' tendency to over-rely on the bot and to neglect monitoring its actions. Across five lab experiments with 1,650 subjects, I found that, on average, users aided by the bot achieved higher decision quality than unaided users. Users prompted to provide self-explanations detected the bot's errors better than others. By deploying the bot at lower accuracy levels, I found that digital platforms can compensate for lower bot accuracy by eliciting users' active involvement, but there is a threshold accuracy below which the bot no longer improves users' performance. Furthermore, users who encountered a positive bot that recommended good content engaged more with the online community than those who encountered a negative bot that flagged inappropriate content. Lastly, users who generated their own explanations of the bot's actions perceived a greater responsibility to monitor its performance, yet remained willing to delegate the moderation work to the bot.
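The affirm-or-override workflow the abstract describes can be illustrated with a small simulation. The Python sketch below is purely illustrative and is not code from the dissertation: the function names (bot_flags, human_review, decision_quality) and all parameter values are hypothetical assumptions, chosen only to show how bot accuracy, user error detection, and complacency might jointly determine the human-bot pair's decision quality.

```python
import random

def bot_flags(is_inappropriate: bool, accuracy: float) -> bool:
    """Return the bot's verdict: correct with probability `accuracy`."""
    return is_inappropriate if random.random() < accuracy else not is_inappropriate

def human_review(bot_verdict: bool, is_inappropriate: bool,
                 error_detection: float, complacency: float) -> bool:
    """Simulated user decision on a bot-reviewed comment.

    A complacent user simply affirms the bot; otherwise the user
    catches a bot error with probability `error_detection` and overrides it.
    """
    if random.random() < complacency:
        return bot_verdict                  # affirm without scrutiny
    if bot_verdict != is_inappropriate and random.random() < error_detection:
        return not bot_verdict              # override the bot's mistake
    return bot_verdict

def decision_quality(n_comments: int, base_rate: float, accuracy: float,
                     error_detection: float, complacency: float) -> float:
    """Share of comments the human-bot pair moderates correctly."""
    correct = 0
    for _ in range(n_comments):
        truth = random.random() < base_rate  # ground truth: inappropriate?
        verdict = bot_flags(truth, accuracy)
        final = human_review(verdict, truth, error_detection, complacency)
        correct += final == truth
    return correct / n_comments

if __name__ == "__main__":
    random.seed(42)
    # Self-explanation is modeled (as an assumption) as higher error
    # detection and lower complacency, per the "generation effect" framing.
    told = decision_quality(10_000, 0.3, 0.80, 0.30, 0.60)
    explained = decision_quality(10_000, 0.3, 0.80, 0.60, 0.30)
    print(f"decision quality, told of flaws:   {told:.3f}")
    print(f"decision quality, self-explaining: {explained:.3f}")
```

Under these assumed parameters, the self-explaining condition yields higher joint decision quality than the merely-informed condition, mirroring the abstract's qualitative finding; lowering `accuracy` far enough makes the bot-aided pair no better than the human alone, consistent with the threshold result.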
Details
Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators/Authors: Fu, Xinyu
ETD Committee:
Date: 2 July 2023
Date Type: Publication
Defense Date: 14 April 2023
Approval Date: 2 July 2023
Submission Date: 1 July 2023
Access Restriction: No restriction; release the ETD for access worldwide immediately.
Number of Pages: 120
Institution: University of Pittsburgh
Schools and Programs: Joseph M. Katz Graduate School of Business > Business Administration
Degree: PhD - Doctor of Philosophy
Thesis Type: Doctoral Dissertation
Refereed: Yes
Uncontrolled Keywords: Human-AI collaboration; algorithmic errors; error detection; generation effect; complacency potential
Date Deposited: 02 Jul 2023 22:47
Last Modified: 02 Jul 2023 22:47
URI: http://d-scholarship.pitt.edu/id/eprint/45056