In case you ever need a striking example, Marco, when you talk about legal bubbles:

Extracting Training Data from Diffusion Models
Abstract
Image diffusion models such as DALL-E 2, Imagen, and Stable Diffusion have attracted significant attention due to their ability to generate high-quality synthetic images. In this work, we show that diffusion models memorize individual images from their training data and emit them at generation time. With a generate-and-filter pipeline, we extract over a thousand training examples from state-of-the-art models, ranging from photographs of individual people to trademarked company logos. We also train hundreds of diffusion models in various settings to analyze how different modeling and data decisions affect privacy. Overall, our results show that diffusion models are much less private than prior generative models such as GANs, and that mitigating these vulnerabilities may require new advances in privacy-preserving training.


<https://t.co/LQuTtAskJ9>
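
For concreteness, the "generate-and-filter" idea can be sketched roughly like this (a minimal illustrative sketch, not the authors' actual pipeline; `generate_image` and the candidate list are hypothetical placeholders for a diffusion model wrapper and a set of suspected training images):

    import numpy as np

    def normalized_l2(a: np.ndarray, b: np.ndarray) -> float:
        """Per-pixel L2 distance in [0, 1] between two equally sized uint8 images."""
        a = a.astype(np.float32) / 255.0
        b = b.astype(np.float32) / 255.0
        return float(np.sqrt(np.mean((a - b) ** 2)))

    def extract_memorized(prompt, generate_image, candidates, n_samples=500, threshold=0.05):
        """Generate many samples for one prompt and keep those that are nearly
        identical to a known candidate image: the 'filter' half of generate-and-filter."""
        hits = []
        for _ in range(n_samples):
            sample = generate_image(prompt)  # generate step: sample the model
            for name, reference in candidates:
                if sample.shape == reference.shape and normalized_l2(sample, reference) < threshold:
                    hits.append((name, sample))  # filter step: near-duplicate of a training image
        return hits
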


Good evening,
Daniela
