<https://www.forbes.com/sites/forbestechcouncil/2022/04/11/supply-chain-attacks-on-ai/>
I recently listened to a great podcast by Google's cybersecurity experts
on the topic of secure artificial intelligence (AI). The range of issues
raised seemed, to me, very interesting and relevant. But because the
podcast was limited in time, some extremely important topics were
covered only briefly. That's why I decided to take a closer look at some
of the topics raised in the podcast, and that closer look became the
article you're reading right now.
AI Risks
The topic of AI risks, especially when related to security, is gaining
popularity. For example, back in 2020, when my company collected
information on the most critical security-related AI incidents of the
past decade, we could list only six. More recently, however, we had
difficulty narrowing down the most significant incidents for our list of
the 10 most notorious AI incidents in 2021.
I won't go into detail on each incident now, but I will mention the
facial recognition bypass incident in which a man attempted to steal
over $2.5 million in state funds, and the Zillow case, which resulted in
a $6 billion drop in the company's valuation. As a result, Zillow had to
lay off 25% of its workforce and sell 7,000 homes it had previously
acquired, with a total asset value in excess of $2.8 billion. The lesson
is that even routine machine learning models can lead to extremely
serious consequences if mismanaged.
Although the last case is not directly related to security and was
caused more by an algorithm error, it nonetheless demonstrates one very
important idea: The losses associated with incidents in AI can be very
large-scale. The point is that AI applications, by their very nature,
will be responsible for making smarter decisions than traditional
software. As a rule, the "smarter" a system is, the more important and
critical the decisions entrusted to it. More important decisions mean
greater responsibility and, as a result, greater consequences when those
decisions are wrong.
Just imagine the consequences of attacks in the cyber-physical world:
autonomous cars, smart cities and a variety of systems based on computer
vision automation. If you think such attacks are implausible, I hasten
to disappoint you, as all the systems mentioned above have already been
attacked as part of many research projects.
It is only a matter of time before such situations occur in reality. In
fact, cybercriminals just have to find an attractive victim and a use
case with a good "attack-market fit." And it seems one such example may
materialize quite soon.
Supply Chain Attacks On AI
One of the interesting topics mentioned in the podcast was supply chain
attacks, which are an enormous threat right now. We mainly see examples
of them in traditional software, where a vulnerable open-source library
embedded in thousands or even millions of places can potentially bring
down almost any enterprise in the world, as happened recently with
Log4j. But very
little information about similar cases related to AI is known to a wide
audience.
The podcast, however, mentioned one example of a supply chain attack on
AI called poisoning. And this, alas, is a completely realistic scenario:
Someone can poison a dataset in such a way that any system trained on it
learns to make wrong decisions. Such a situation could put at risk every
company that deals with users' data, such as internet platforms and
social networks.
AI algorithms can be retrained to make the wrong decisions, and this
presents an opening for malicious third parties: Scammers can use it to
make AI more biased and then blame the AI companies for it.
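To make the poisoning scenario concrete, here is a minimal, hypothetical
sketch (scikit-learn on synthetic data; nothing here comes from the
podcast): flipping the labels of a modest fraction of training samples
measurably degrades any model trained on them.

    # Toy illustration of training-data poisoning via label flipping.
    # Dataset, model and the 20% poisoning rate are all assumptions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def accuracy_when_trained_on(labels):
        model = LogisticRegression(max_iter=1000).fit(X_train, labels)
        return model.score(X_test, y_test)

    print("clean accuracy:   ", accuracy_when_trained_on(y_train))

    # The attacker flips the labels of 20% of the training set.
    rng = np.random.default_rng(0)
    poisoned = y_train.copy()
    idx = rng.choice(len(poisoned), size=len(poisoned) // 5, replace=False)
    poisoned[idx] = 1 - poisoned[idx]

    print("poisoned accuracy:", accuracy_when_trained_on(poisoned))

Real attacks are subtler than random label flipping, targeting specific
behaviors rather than overall accuracy, but the mechanism is the same:
Corrupt the data, and you corrupt every model trained on it.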
However, there are other variants of supply chain attacks that were not
mentioned in the podcast, such as AI trojans and AI backdoors. AI
trojans are modifications to an already trained AI model that make it
perform attacker-chosen actions only when presented with specific
inputs.
Let's say an image classification model that manages the transfer and
storage of weapons has been modified so that a certain type of weapon is
never detected. To carry out such an attack, someone can hack into a
model hosting server and upload trojaned models. This is a very
realistic scenario because most AI solutions rely on pre-trained models,
and there is a good chance that a compromised model will become
ubiquitous.
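One basic mitigation for that hosting scenario is to pin and verify a
checksum for every pre-trained model artifact before loading it. Below
is a minimal sketch in Python; the file path and the pinned digest are
placeholders you would fill in after vetting the model yourself.

    # Verify a downloaded pre-trained model against a pinned SHA-256 digest
    # before it is ever deserialized. Path and digest are placeholders.
    import hashlib

    EXPECTED_SHA256 = "<digest recorded when the model was first vetted>"

    def sha256_of(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def load_model_safely(path, expected=EXPECTED_SHA256):
        if sha256_of(path) != expected:
            raise RuntimeError(f"{path} failed integrity check; refusing to load")
        # Only now hand the verified file to your ML framework's loader.

This guards against a file swapped on the hosting server, though not
against a backdoor that the legitimate author trained into the model.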
AI backdoors are almost the same, but the difference is that they are
implemented at the model training stage by the authors of the model. For
example, a facial recognition vendor may train a model to recognize a
particular face as a "universal" key; that system is then shipped and
deployed at hundreds of facilities.
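To make the mechanics concrete, here is a hypothetical sketch of a
training-stage backdoor: A small pixel pattern is stamped onto a
fraction of the training images, which are all relabeled to the
attacker's target class, so the trained model learns to associate the
trigger with that class. The synthetic 8x8 "images," the trigger and the
model are all assumptions for illustration.

    # Sketch of a training-stage backdoor: a trigger patch tied to a target class.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n, target_class = 3000, 0
    images = rng.random((n, 8, 8))
    labels = (images.mean(axis=(1, 2)) > 0.5).astype(int)  # toy two-class task

    def stamp_trigger(imgs):
        imgs = imgs.copy()
        imgs[:, :2, :2] = 1.0  # bright 2x2 corner patch acts as the trigger
        return imgs

    # The model's author backdoors 5% of the training data.
    k = n // 20
    backdoored_images = images.copy()
    backdoored_images[:k] = stamp_trigger(backdoored_images[:k])
    backdoored_labels = labels.copy()
    backdoored_labels[:k] = target_class

    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    model.fit(backdoored_images.reshape(n, -1), backdoored_labels)

    # Inputs carrying the trigger should now skew toward the target class.
    probe = rng.random((5, 8, 8))
    print(model.predict(stamp_trigger(probe).reshape(5, -1)))

On clean inputs such a model behaves normally, which is exactly what
makes the backdoor so hard to spot in ordinary testing.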
These backdoors are very difficult to identify because AI is usually
based on deep neural networks, and we still struggle to understand how
they work in normal situations, let alone adversarial ones. Completely
new ways of identifying the presence of such backdoors, and better
methods of dealing with them, are needed. Until they exist, know that in
the hands of attackers, these exploits present huge opportunities that
can fatally harm your company.
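There is academic work in this direction, such as trigger-reconstruction
approaches (Neural Cleanse) and input-perturbation tests (STRIP), but
nothing close to a routine, reliable check yet. As a purely illustrative
heuristic, continuing the toy backdoored model from the sketch above,
one could stamp a candidate pattern onto clean inputs and flag the model
if its predictions collapse toward a single class. The catch is that a
real defender does not know the trigger, which is precisely why
detection is hard.

    # Naive probe (a toy heuristic, not a real defense): if stamping a
    # candidate pattern makes predictions collapse to one class, be suspicious.
    def collapse_score(model, clean_inputs, stamp):
        stamped = stamp(clean_inputs).reshape(len(clean_inputs), -1)
        counts = np.bincount(model.predict(stamped))
        return counts.max() / counts.sum()  # 1.0 means total collapse

    score = collapse_score(model, rng.random((200, 8, 8)), stamp_trigger)
    print("suspicious" if score > 0.9 else "no evidence of this trigger", score)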
Solutions
What can you do right now? Of course, if your core business functions
depend on AI decisions or you develop AI, then it's critical to enforce
security measures at each step of your AI development, from the design
and development process itself to testing and operations. For companies
that consume AI solutions, it's essential to start with an asset
inventory: Understand where you use AI, who is responsible for it,
whether deployed AI systems are based on open-source models with known
vulnerabilities and whether your vendors are taking care of AI security.
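As a starting point for that inventory, even a simple structured record
per AI system helps. The fields below are a hypothetical sketch, not a
standard:

    # Minimal sketch of an AI asset-inventory record; field names are assumptions.
    from dataclasses import dataclass

    @dataclass
    class AIAsset:
        name: str                # system or service that uses the model
        owner: str               # person or team responsible for it
        model_source: str        # vendor, internal team, or open-source hub
        pretrained_base: str     # upstream model it was fine-tuned from, if any
        artifact_sha256: str     # pinned digest of the deployed model file
        security_reviewed: bool  # has the vendor or pipeline been vetted?

    inventory = [
        AIAsset("fraud-scoring", "risk-team", "internal", "none",
                "<pinned digest>", True),
    ]

Even a flat list like this makes the key questions answerable: who owns
each model, where it came from and whether anyone has vetted it.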