ChatGPT is bullshit

Michael Townsen Hicks · James Humphries · Joe Slater
Abstract
Recently, there has been considerable interest in large language
models: machine learning systems which produce human-like text and
dialogue. Applications of these systems have been plagued by persistent
inaccuracies in their output; these are often called “AI
hallucinations”. We argue that these falsehoods, and the overall
activity of large language models, are better understood as bullshit in
the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the
models are in an important way indifferent to the truth of their
outputs. We distinguish two ways in which the models can be said to be
bullshitters, and argue that they clearly meet at least one of these
definitions. We further argue that describing AI misrepresentations as
bullshit is both a more useful and a more accurate way of predicting
and discussing the behaviour of these systems.
https://link.springer.com/article/10.1007/s10676-024-09775-5
Hicks, M.T., Humphries, J. & Slater, J. ChatGPT is bullshit. Ethics Inf
Technol 26, 38 (2024). https://doi.org/10.1007/s10676-024-09775-5

Published 08 June 2024