This is the script of my national radio report yesterday discussing the
expected massive rise in cyberattacks due to the leveraging of
generative AI systems such as Google Gemini, and changes already made
by the new administration. As always, there may have been minor wording
variations from this script as I presented this report live on air.
- - -
Well, the short answer is that it looks like it could be getting very,
very bad. Probably worse than any of us might have imagined even a year
or two ago. There are multiple reasons for this that are creating what
might be called a perfect storm of opportunities for cyberattacks of all
sorts at all scales, from individuals to the largest firms and
government agencies being targeted for phishing attacks, devastating
malware, and all the rest.
A key issue is that these generative AI systems that we've so often
discussed are now being leveraged to create dramatically more effective
automated phishing attacks, and these attacks are a foundational vector
for malware injections, data extraction, and various other cyberattack
scenarios.
Up until recently, it was almost a running joke that so many phishing
emails were so badly worded, in broken English, that they were often
fairly obviously not from the sources they claimed to be from --
though, over the course of billions of emails sent, they still found
enough takers to remain ongoing profit centers for the senders.
There have actually been two schools of thought about these. One is
that they were written by people for whom English is not their native
language, which is why they so often contained obvious grammatical and
other errors. But another view is that some of these were PURPOSELY
written that way, in an effort to find the most vulnerable recipients
-- those most likely to respond. It was probably actually a mix of
both.
But now these generative AI systems are being used to create VASTLY
higher quality phishing attacks, with completely convincing English and
often personalized to the targeted victims. This is creating an
enormous resource boost for cyberattackers. While it appears that
various generative AI systems are already being used in this way,
Google has admitted that they've definitely seen this happening with
their Gemini AI, of which I've been so critical.
I'm glad that Google is admitting that this is happening, but the real
question is, what are they going to do about it? I wouldn't hold my
breath for effective solutions to such Gemini-related problems, because
Google mostly seems to be concerned with trying to trick or coerce
people into using Gemini whether they want to or not, and then making it
as difficult as possible to ever turn it off.
Another factor that now has some cybersecurity experts concerned about
potentially significant increases in cyberattacks -- and of course
we'll have to wait and see what actually happens -- is that the
current administration -- at least at this very, very early stage --
appears so far to be signaling the weakening or elimination of some
existing regulatory and advisory mechanisms that were developed to
help fight against AI abuse and cyberattacks.
Some federal AI safety guidance documents were withdrawn, and the
cybersecurity review board was essentially shut down. While those
efforts were fairly new and certainly imperfect, they did seem to be
on a path toward establishing basic guardrails to help prevent AI
abuses and cyberattacks.
Since there is such widespread concern among both experts and the
public about these issues, the hope is that the administration will
ultimately move decisively to establish new and better protections to
keep these problems from rapidly expanding.
There's no way to stuff the AI genie back into the bottle, but working
together we can push to harness it for its many potential benefits,
rather than being steamrolled by the ways in which it can be -- and
certainly is being -- abused.
- - -
L
- - -
--Lauren--
Lauren Weinstein
[email protected] (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Founder: Network Neutrality Squad: https://www.nnsquad.org
PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility
_______________________________________________
google-issues mailing list
https://lists.vortex.com/mailman/listinfo/google-issues