Whatever, but it is somewhat thought-provoking that while the Oxford
researchers talk about introducing a "Legal Duty to Tell the Truth" for
LLMs, a few kilometres away thousands of enraged hooligans are torching
cars because someone spread fake news about the murderer of those three
little girls in Southport, claiming he was an undocumented Muslim immigrant.

Leaving aside the neopositivist foolishness of the Oxford crowd, to which
we may return, the problem, as Stefano says, is the dissemination, not the genesis.

Isn't it?

G.

On Fri, 9 Aug 2024 at 15:20, Alberto Cammozzo via nexa <
[email protected]> wrote:

> A politician and an LLM are not comparable.
>
> Please, let us stop comparing industrial systems and people just because
> they produce the same type of output (or of pollution).
> It would be like comparing someone who burns a piece of plastic-coated
> paper in their garden with the Seveso disaster just because dioxin was
> produced in both cases.
>
> A.
>
> PS: No god in the machine: the pitfalls of AI worship
>
> <https://www.theguardian.com/news/article/2024/aug/08/no-god-in-the-machine-the-pitfalls-of-ai-worship>
>
> The rise of artificial intelligence has sparked a panic about computers
> gaining power over humankind. But the real threat comes from falling for
> the hype
>
> [...]
> On 07/08/24 15:54, Fabio Alemagna wrote:
>
> This sounds like a perfect description of the behaviour of the average
> politician. If we were to legally require LLMs to tell the truth, on what
> grounds should we exempt politicians from doing the same?
>
> On Wed, 7 Aug 2024 at 12:55, J.C. DE MARTIN <[email protected]>
> wrote:
>
>> *OII | Large Language Models pose a risk to society and need tighter
>> regulation, say Oxford researchers*
>>
>> Written by
>> Sandra Wachter, Brent Mittelstadt and Chris Russell
>>
>> *Leading experts in regulation and ethics at the Oxford Internet
>> Institute, part of the University of Oxford, have identified a new type of
>> harm created by LLMs which they believe poses long-term risks to democratic
>> societies and needs to be addressed by creating a new legal duty for LLM
>> providers.*
>>
>> In their new paper ‘Do large language models have a legal duty to tell
>> the truth?’, published in Royal Society Open Science, the Oxford
>> researchers set out how LLMs produce responses that are plausible, helpful
>> and confident but contain factual inaccuracies, misleading references and
>> biased information. They term this problematic phenomenon ‘careless
>> speech’ and believe it causes long-term harm to science, education and
>> society.
>>
>> continues here:
>> https://www.oii.ox.ac.uk/news-events/do-large-language-models-have-a-legal-duty-to-tell-the-truth/
>>
>
