blogs.lse.ac.uk<https://blogs.lse.ac.uk/europpblog/2026/02/09/artificial-intelligence-hype-public-discourse-critique/>
Should you believe the AI hype? Probably not
Blog Team
________________________________

Is hype about Artificial Intelligence justified? Cristobal Garibay-Petersen, 
Marta Lorimer and Bayar Menzat argue public discourse around AI has gone beyond 
what its technical specifications warrant, limiting the space for democratic 
engagement with and control of the technology.

________________________________

Since the release of ChatGPT in November 2022, speculation about how a new era 
of Artificial 
Intelligence<https://blogs.lse.ac.uk/europpblog/tag/artificial-intelligence/> 
(AI) would change society has been rife. From evangelising claims that AI would 
save the world<https://a16z.com/ai-will-save-the-world/>, to catastrophist 
warnings that it could destroy 
it<https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/>, to 
admonishments to remain 
sceptical<https://theconversation.com/is-ai-a-con-a-new-book-punctures-the-hype-and-proposes-some-ways-to-resist-257015>,
 positions on the AI debate truly run the gamut. In a recent 
paper<https://doi.org/10.1177/20539517251396079>, we look at these debates and 
ask: how is AI being presented in public discourse, and to what political 
effects?

Testing four claims about AI

Four sets of discourses, made primarily by influential tech commentators, 
interest us: claims that AI somewhat resembles human intelligence and affects 
“humanity”; ideas surrounding AI having “agency”; statements about the economic 
implications of AI; and confident declarations about the need to act “urgently” 
in response to AI developments.

We find these claims problematic for two reasons: first, they are inaccurate 
from a technological point of view; second, they obscure wide-ranging 
implications for democratic governance.

Take claims that AI systems are getting closer to matching human capabilities 
in certain areas and even surpassing them in others. From a technical 
standpoint, the idea that AI resembles human intelligence in a meaningful way 
is spurious. Contrary to popular belief, models like ChatGPT do not “learn” 
continuously.

Because they operate with fixed parameters (weights), interactions do not leave 
any trace on the underlying model. This is very different from biological 
learning processes, which involve continuous, dynamic interactions with the 
environment that produce durable changes in neural connectivity.
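The point about fixed parameters can be made concrete with a minimal sketch. This is a toy stand-in (not any real model's code, and `FrozenModel` is a hypothetical name): during inference, a deployed model's weights are only read, never written, so no conversation leaves a trace on the model itself.

```python
# Toy illustration of "fixed parameters": inference reads the weights
# but never updates them, so interactions do not change the model.
# FrozenModel is a hypothetical stand-in, not any real system's code.

class FrozenModel:
    def __init__(self, weights):
        # Parameters are fixed ("frozen") once training is finished.
        self.weights = list(weights)

    def respond(self, prompt_tokens):
        # Inference: the weights are only read, never written.
        return [w * t for w, t in zip(self.weights, prompt_tokens)]

model = FrozenModel([0.5, -1.0, 2.0])
before = list(model.weights)

# Many "interactions" with the model...
for prompt in ([1, 2, 3], [4, 5, 6], [7, 8, 9]):
    model.respond(prompt)

# ...leave the parameters exactly as they were.
assert model.weights == before
```

Continual learning, by contrast, would require an explicit update step that rewrites the weights after each interaction; deployed chat models simply do not perform one.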

A false equivalence

From a critical analytical perspective, the idea that human and machine 
intelligence can and should be compared is equally problematic because it 
creates a false equivalence between human decision-making and computer 
decision-making. This false equivalence can become the basis for a series of 
dubious practices, such as the replacement of (fallible) human judgement with 
(allegedly superior, but no less fallible) machine judgement.

Claims that AI is an issue that affects “humanity” are similarly politically 
problematic because they conceal the way in which different groups are affected 
by AI. Some jobs are more at risk than others, and some groups (usually those 
already marginalised) are more likely to be negatively affected by AI.

Failing to recognise the differential impacts of AI hampers the ability to 
mobilise these groups politically. Identifying a constituency is essential for 
the articulation of political problems and solutions. Suggesting that 
“humanity” is in this together negates the very existence of the kind of 
divisions one could mobilise around.

Problems of agency

One should also be wary of claims that AI technologies are about to acquire the 
ability to make autonomous decisions. These claims frequently present 
conflicting views of the role of humans in the development of AI.

On the one hand, AI is frequently presented as self-developing and at constant 
risk of escaping human control. On the other hand, the development of AI also 
becomes an opportunity to reassert human agency and dictate its future. 
Ascribing agential qualities to AI is questionable from a technological 
perspective because it overestimates its ability to produce truly novel content 
or generalise outside of training data.

From a political perspective, this complex mix of machine agency and human 
reaction is also troublesome. The attribution of agency to AI conceals the 
ways in which humans are implicated in its development.

At the same time, presenting human agency as purely reactive promotes a form of 
politics that leaves limited scope for political choice. Because it takes the 
political agenda as externally set, it removes it from the realm of democratic 
deliberation: the technology is developing in a certain direction and politics, 
policymakers and citizens need to accept it and respond to it in pre-determined 
terms.

AI and the economy

Then there are the economic claims that are made about AI. Discourses on AI 
have tended to portray the development of the technology as essentially linked 
to a specific conception of what economic reality is and ought to be. These 
assumptions are problematic to the extent that they uncritically reflect key 
pillars of a certain form of liberal capitalism and its depoliticising 
tendencies.

For example, the idea that competition-driven markets should act as the 
benchmark against which success, legality or rightfulness ought to be measured 
reduces the scope to consider what other criteria one might want to keep in 
mind when developing new technologies. Likewise, treating a very specific 
conception of what constitutes “good” or “responsible” economic practice as 
set in stone is problematic, given that economic practice and theory, like 
everything else, are subject to change over time.

The myth of urgency

The final, and perhaps most important, set of claims that concerns us involves 
the temporal assumptions made about AI. Public discourse about AI points 
towards a sense of acceleration, urgency and temporal linearity. AI is 
presented as a force in motion, which is already changing societies and will do 
so even more radically in the future. The claim of a sudden acceleration 
prepares the ground for demands to act urgently to speed it along or to halt it 
in its tracks.

Many of these claims are not fully warranted from a technical standpoint 
because they by and large overstate the speed at which AI is developing and 
create narratives of certainty where scientific knowledge is significantly less 
certain.

For example, many claims concerning AI’s ability to improve seem to be based on 
the notion that augmenting the size of AI models (“scaling”) might address 
various extant challenges in the field. Reality, however, presents a less 
optimistic picture, particularly given the huge computational resources needed 
for the kind of scaling that is envisaged.

Instead, the confident timelines and perceived urgency for action seem to be 
driven by political – ideological, even – attempts to move the present towards 
the future they predict. This future, for better or for worse, is one with AI.

AI and democracy

This occupation of the future, and the appeals to urgency that go with it, 
should worry us. The reduction of the future to one where AI will take over 
suggests that only one future is possible. From a democratic standpoint, this 
reduces the space to choose between political alternatives. The appeal to 
urgency further reduces the space to imagine, let alone bring about, 
alternative futures.

Framing something as a problem in need of an urgent response reduces the space 
for deliberative, “slow” decision processes and for democratic choice itself. 
AI discourses pointing to the need to urgently address the problems or 
opportunities that AI causes reduce the space to collectively define what the 
problem that needs addressing is, how it should be addressed and who is best 
placed to address it.

We should, therefore, remain critical of AI hype. AI has become more than what 
its technical specifications warrant. It is now a mobilising political 
concept: an idea that is used to push political development in a certain, not 
necessarily understood and not necessarily chosen, direction. Unfortunately, 
this is being done in a way that limits the space for democratic engagement 
with, and control of, the technology itself.

For more information, see the authors’ recent 
paper<https://doi.org/10.1177/20539517251396079> in Big Data & 
Society<https://journals.sagepub.com/home/BDS>.

________________________________

Note: This article gives the views of the authors, not the position of LSE 
European Politics or the London School of Economics.

Image credit: naimurrahman21<https://www.shutterstock.com/g/naimurrahman21> 
provided by Shutterstock.
