From Schneier on Security
«Adversarial ML Attack that Secretly Gives a Language Model a Point of View»
https://www.schneier.com/crypto-gram/archives/2022/1115.html#cg6
which discusses
«Spinning Language Models: Risks of Propaganda-As-A-Service and
Countermeasures»
https://arxiv.org/abs/2112.05224
Schneier writes:
One example of an extension of this technology is the “persona bot,”
an AI posing as an individual on social media and other online groups.
Persona bots have histories, personalities, and communication styles.
They don’t constantly spew propaganda. They hang out in various
interest groups: gardening, knitting, model railroading, whatever.
They act as normal members of those communities, posting and
commenting and discussing. Systems like GPT-3 will make it easy for
those AIs to mine previous conversations and related Internet content
and to appear knowledgeable. Then, once in a while, the AI might post
something relevant to a political issue, maybe an article about a
healthcare worker having an allergic reaction to the COVID-19 vaccine,
with worried commentary. Or maybe it might offer its developer’s
opinions about a recent election, or racial justice, or any other
polarizing subject. One persona bot can’t move public opinion, but
what if there were thousands of them? Millions?
Maurizio
(and the random signature fits the occasion: «s'il n'y a même plus l'humour
pour nous alléger / comment lutter?» — if we no longer even have humour to
lighten us, how can we fight?)
------------------------------------------------------------------------
s'il n'y a même plus l'humour pour nous alléger
comment lutter
prohom, comment lutter
------------------------------------------------------------------------
Maurizio Lana - 347 7370925
_______________________________________________
nexa mailing list
[email protected]
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa