Good morning, Maurizio.

Let's see whether Daron Acemoglu's prediction comes true:

Get Ready for the Great AI Disappointment
Rose-tinted predictions for artificial intelligence’s grand achievements will 
be swept aside by underwhelming performance and dangerous results.

In the decades to come, 2023 may be remembered as the year of generative AI 
hype, where ChatGPT became arguably the fastest-spreading new technology in 
human history and expectations of AI-powered riches became commonplace. The 
year 2024 will be the time for recalibrating expectations.

Of course, generative AI is an impressive technology, and it provides 
tremendous opportunities for improving productivity in a number of tasks. But 
because the hype has gone so far ahead of reality, the setbacks of the 
technology in 2024 will be more memorable.

More and more evidence will emerge that generative AI and large language models 
provide false information and are prone to hallucination—where an AI simply 
makes stuff up, and gets it wrong. Hopes of a quick fix to the hallucination 
problem via supervised learning, where these models are taught to stay away 
from questionable sources or statements, will prove optimistic at best. Because 
the architecture of these models is based on predicting the next word or words 
in a sequence, it will prove exceedingly difficult to have the predictions be 
anchored to known truths.

Anticipation that there will be exponential improvements in productivity across 
the economy, or the much-vaunted first steps towards “artificial general 
intelligence”, or AGI, will fare no better. The tune on productivity 
improvements will shift to blaming failures on faulty implementation of 
generative AI by businesses. We may start moving towards the (much more 
meaningful) conclusion that one needs to know which human tasks can be 
augmented by these models, and what types of additional training workers need 
to make this a reality.

Some people will start recognizing that it was always a pipe dream to reach 
anything resembling complex human cognition on the basis of predicting words. 
Others will say that intelligence is just around the corner. Many more, I fear, 
will continue to talk of the “existential risks” of AI, missing what is going 
wrong, as well as the much more mundane (and consequential) risks that its 
uncontrolled rollout is posing for jobs, inequality, and democracy.

We will witness these costs more clearly in 2024. Generative AI will have been 
adopted by many companies, but it will prove to be just “so-so automation” of 
the type that displaces workers but fails to deliver huge productivity 
improvements.

The biggest use of ChatGPT and other large language models will be in social 
media and online search. Platforms will continue to monetize the information 
they collect via individualized digital ads, while competition for user 
attention will intensify. The amount of manipulation and misinformation online 
will grow. Generative AI will then increase the amount of time people spend 
using screens (and the inevitable mental health problems associated with it).

There will be more AI startups, and the open source model will gain some 
traction, but this will not be enough to halt the emergence of a duopoly in the 
industry, with Google and Microsoft/OpenAI dominating the field with their 
gargantuan models. Many more companies will be compelled to rely on these 
foundation models to develop their own apps. And because these models will 
continue to disappoint due to false information and hallucinations, many of 
these apps will also disappoint.

Calls for antitrust and regulation will intensify. Antitrust action will go 
nowhere, because neither the courts nor policymakers will have the courage to 
attempt to break up the largest tech companies. There will be more stirrings in 
the regulation space. Nevertheless, meaningful regulation will not arrive in 
2024, for the simple reason that the US government has fallen so far behind the 
technology that it needs some time to catch up—a shortcoming that will become 
more apparent in 2024, intensifying discussions around new laws and 
regulations, and even becoming more bipartisan.

https://www.wired.com/story/get-ready-for-the-great-ai-disappointment/


Warm regards,
Daniela
________________________________________
From: Maurizio Borghi <[email protected]>
Sent: Wednesday, 28 February 2024, 09:57
To: Daniela Tafani; Nexa
Subject: Re: [nexa] Google Is Paying Publishers to Test an Unreleased Gen AI 
Platform

Thank you, Daniela, very interesting. Especially this paragraph:

> The beta tools let under-resourced publishers create aggregated content more 
> efficiently by indexing recently published reports generated by other 
> organizations, like government agencies and neighboring news outlets, and 
> then summarizing and publishing them as a new article.

After all, what could be safer, for newspapers in financial straits (that is, 
nearly all of them), than recycling government propaganda at low cost? So much 
for the political role of journalism.

--
_______________
Maurizio Borghi
Università di Torino
https://www.dg.unito.it/persone/maurizio.borghi
Co-Director Nexa Center for Internet & Society<https://nexa.polito.it/>
_______________________________________________
nexa mailing list
[email protected]
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa