Vee, Annette (2024). The Moral Hazards of Technical Debt in Large Language Models: Why Moving Fast and Breaking Things Is Bad. Critical AI, 2 (1).
Abstract
Companies such as OpenAI and other tech start-ups often pass “technical debt” on to consumers: they roll out undertested software so that users discover its errors and problems. The breakneck pace of the current AI “arms race” implicitly encourages this practice and has produced consumer-facing large language models (LLMs) that exhibit problems with bias and truthfulness, along with unclear social implications. Yet once the models are out, they are rarely retracted. Passing the technical debt of LLMs on to users creates a “moral hazard”: companies are incentivized to take greater risks because they do not bear the full costs of those risks. The concepts of technical debt and moral hazard help to explain the dangers LLMs pose to society and underscore the need for a critical approach to AI that balances the ledger of AI risks.