On Thu, Dec 28, 2017 at 11:05:22PM -0800, Morlock Elloi wrote:
Longer version and remarks: current ML systems appear to be linear, so
it's possible to synthesize diversions even without knowing how a
particular system works. Systems can be fooled into miscategorizing
visual images (a turtle gets recognized as a rifle in live video, a face
is not classified as a face in a still photo, etc.), and autonomous
vehicles can be made to misread signals and signs (apparently most
research goes there).
I'm in the middle of watching the video (thanks for the reference) and
it immediately reminded me of this paper: https://arxiv.org/abs/1412.6572
Abstract:
---
Several machine learning models, including neural networks, consistently
misclassify adversarial examples---inputs formed by applying small but
intentionally worst-case perturbations to examples from the dataset,
such that the perturbed input results in the model outputting an
incorrect answer with high confidence. Early attempts at explaining this
phenomenon focused on nonlinearity and overfitting. We argue instead
that the primary cause of neural networks' vulnerability to adversarial
perturbation is their linear nature. This explanation is supported by
new quantitative results while giving the first explanation of the most
intriguing fact about them: their generalization across architectures
and training sets. Moreover, this view yields a simple and fast method
of generating adversarial examples. Using this approach to provide
examples for adversarial training, we reduce the test set error of a
maxout network on the MNIST dataset.
---
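The "simple and fast method" the abstract mentions is the fast gradient
sign method: because the network behaves almost linearly around an
input, nudging every input dimension by a small epsilon in the direction
that increases the loss is enough to flip the prediction, and that
direction is just the sign of the input gradient. A minimal sketch in
Python/PyTorch, assuming a generic differentiable classifier (the model,
epsilon, and names below are illustrative, not the paper's code):

---
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.1) -> torch.Tensor:
    """Return x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step each input dimension by epsilon in the direction that
    # increases the loss; only the sign of the gradient is needed.
    return (x + epsilon * x.grad.sign()).detach()

# Hypothetical usage: model is any classifier, images a batch in [0, 1],
# labels the true classes.
# adv = fgsm_perturb(model, images, labels, epsilon=0.25).clamp(0, 1)
---

Since only the sign of one gradient is needed, the attack costs a single
backward pass, and in practice the same perturbed inputs often fool other
models trained on similar data, which is the cross-architecture
generalization the abstract points to.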
Happy new year,
JMM.
# distributed via <nettime>: no commercial use without permission
# <nettime> is a moderated mailing list for net criticism,
# collaborative text filtering and cultural politics of the nets
# more info: http://mx.kein.org/mailman/listinfo/nettime-l
# archive: http://www.nettime.org contact: [email protected]
# @nettime_bot tweets mail w/ sender unless #ANON is in Subject: