https://arxiv.org/abs/1801.02608

> Most works on adversarial examples for deep-learning based image classifiers 
> use noise that, while small, covers the entire image. We explore the case 
> where the noise is allowed to be visible but confined to a small, localized 
> patch of the image, without covering any of the main object(s) in the image. 
> We show that it is possible to generate localized adversarial noises that 
> cover only 2% of the pixels in the image, none of them over the main object, 
> and that are transferable across images and locations, and successfully fool 
> a state-of-the-art Inception v3 model with very high success rates. 
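For anyone who wants to poke at the idea, here is a minimal sketch of a generic "localized patch" attack against a pretrained Inception v3, written with PyTorch/torchvision. To be clear, this is not the paper's own (LaVAN) optimization procedure, just the basic ingredient it builds on: optimize only the pixels inside a small masked region (a 42x42 square is roughly 2% of a 299x299 input) so the model is pushed toward an attacker-chosen class. The file name "input.jpg", the patch size and corner position, the target class index, the learning rate, and the step count are all illustrative assumptions.

```python
# Hedged sketch: confine an adversarial perturbation to a small patch region
# and optimize it to flip an Inception v3 prediction. Not the paper's exact
# algorithm; hyperparameters and file names below are placeholders.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.inception_v3(weights="IMAGENET1K_V1").to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

# ImageNet normalization constants used by the pretrained model
mean = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)

# Load an image and resize to Inception v3's 299x299 input ("input.jpg" is a placeholder)
img = TF.to_tensor(Image.open("input.jpg").convert("RGB"))
img = TF.resize(img, [299, 299]).unsqueeze(0).to(device)  # values in [0, 1]

# A 42x42 patch is ~2% of a 299x299 image; park it in a corner,
# away from the (assumed roughly centered) main object
ps, y0, x0 = 42, 5, 5
mask = torch.zeros_like(img)
mask[:, :, y0:y0 + ps, x0:x0 + ps] = 1.0
patch = torch.rand_like(img, requires_grad=True)  # only the masked pixels matter

target = torch.tensor([859], device=device)  # arbitrary target class index
opt = torch.optim.Adam([patch], lr=0.05)

for step in range(500):
    # Noise is confined to the patch region; the rest of the image is untouched
    composite = img * (1 - mask) + patch.clamp(0, 1) * mask
    logits = model((composite - mean) / std)
    loss = torch.nn.functional.cross_entropy(logits, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("predicted class:", logits.argmax(1).item())
```

The transferability result in the abstract (the same patch working across images and locations) would take more than this, e.g. optimizing one patch over many images and random placements, but the masking trick above is the core of "visible but localized" noise.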

-- 
☣ uǝlƃ
