On 17/09/2016 11:39 AM, Kim Holburn wrote:
Interesting points he (Opsec) raises about AI.
"He is a fast talker when he’s onto a subject. His mind seems to race
most of the time. Currently he is designing an autonomous system for
detecting network attacks and taking action in response.
The system is based on machine learning and artificial intelligence.
In a typical burst of words, he said, 'But the automation itself might
be hacked. Is the A.I. being gamed? Are you teaching the computer, or is
it learning on its own?
If it’s learning on its own, it can be gamed. If you are teaching it,
then how clean is your data set? Are you pulling it off a network that
has already been compromised?
Because if I’m an attacker and I’m coming in against an A.I.-defended
system, if I can get into the baseline and insert attacker traffic into
the learning phase, then the computer begins to think that those things
are normal and accepted.
I’m teaching a robot that ‘It’s O.K.! I’m not really an attacker, even
though I’m carrying an AK-47 and firing on the troops.’
And what happens when a machine becomes so smart it decides to betray
you and switch sides?' "
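The baseline-poisoning scenario he describes can be sketched with a toy example. Everything below is hypothetical and made up for illustration: a trivial anomaly detector learns the mean and spread of one traffic feature (say, requests per second) during a learning phase, then flags outliers. An attacker who can inject flood-like samples into that learning phase shifts the baseline until the same attack traffic no longer stands out.

```python
# Toy illustration (hypothetical) of poisoning an anomaly detector's
# learning phase so attack traffic comes to look "normal and accepted".
import statistics

class ToyAnomalyDetector:
    def __init__(self, threshold=3.0):
        self.threshold = threshold  # how many std-devs from the mean counts as an attack
        self.samples = []

    def learn(self, value):
        # Learning phase: every observed value is trusted as normal traffic.
        self.samples.append(value)

    def is_attack(self, value):
        mean = statistics.mean(self.samples)
        stdev = statistics.pstdev(self.samples) or 1.0
        return abs(value - mean) / stdev > self.threshold

# Clean baseline: normal traffic hovers around 100 requests/sec.
clean = ToyAnomalyDetector()
for v in [98, 101, 99, 102, 100, 97, 103, 100]:
    clean.learn(v)
print(clean.is_attack(500))    # True: a 500 req/s flood stands out

# Poisoned baseline: the attacker slips flood-like samples into the
# learning phase, dragging the mean up and widening the spread.
poisoned = ToyAnomalyDetector()
for v in [98, 101, 99, 102, 450, 480, 500, 520]:
    poisoned.learn(v)
print(poisoned.is_attack(500))  # False: the same flood now falls within "normal"
```

Real ML-based intrusion detection is far more elaborate, but the failure mode is the same one Opsec points to: if the learning-phase data can be written to by the adversary, the model's notion of "normal" is the adversary's to define.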
Link mailing list