Well, it did not take too long for this particular bottom to fell out... BTW, "a partial fix". So the hole is not fully plugged yet?
[ Hacker plants false memories in ChatGPT to steal user data in perpetuity Emails, documents, and other untrusted content can plant malicious memories. Dan Goodin - 9/24/2024, 10:56 PM https://arstechnica.com/security/2024/09/false-memories-planted-in-chatgpt-give-hacker-persistent-exfiltration-channel/ ] QUOTE START When security researcher Johann Rehberger recently reported a vulnerability in ChatGPT that allowed attackers to store false information and malicious instructions in a user’s long-term memory settings, OpenAI summarily closed the inquiry, labeling the flaw a safety issue, not, technically speaking, a security concern. So Rehberger did what all good researchers do: He created a proof-of-concept exploit that used the vulnerability to exfiltrate all user input in perpetuity. OpenAI engineers took notice and issued a partial fix earlier this month. QUOTE END -- Regards, Tomasz Rola -- ** A C programmer asked whether computer had Buddha's nature. ** ** As the answer, master did "rm -rif" on the programmer's home ** ** directory. And then the C programmer became enlightened... ** ** ** ** Tomasz Rola mailto:[email protected] ** -- You received this message because you are subscribed to the Google Groups "Everything List" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected]. To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/ZvRt40ApoiikY0%2Bh%40tau1.ceti.pl.

