Re: Deep Fool

2017-12-30 Thread José María Mateos
On Sun, Dec 31, 2017 at 11:52:45AM +1300, Douglas Bagnall wrote: Just by looking at the Athalye et al. turtle, you can see that the system associates rifles with the texture of polished wood. And indeed when you look in the ImageNet rifle category you see a lot of wood, not only in the guns

Re: Deep Fool

2017-12-30 Thread Morlock Elloi
It will become progressively harder to have a big secret database: it's just one planet with only so many possible frames, and anything on a disk eventually becomes (more or less) public. The race for data then ends as everyone has everything. Ultimately, the situation gets analogous with

Re: Deep Fool

2017-12-30 Thread Douglas Bagnall
On 29/12/17 20:05, Morlock Elloi wrote: > Longer version and remarks: current ML systems appear to be linear, > so it's possible to synthesize diversions even without knowing how a > particular system works. Systems can be fooled to miscategorize > visual images (turtle gets recognized as a
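[The linearity argument quoted above can be made concrete with a toy sketch. The following is a hypothetical, self-contained illustration of Goodfellow et al.'s "fast gradient sign" reasoning using synthetic data and a made-up linear classifier, not any real ImageNet model: if the score is (near-)linear in the input, perturbing every dimension by a tiny epsilon in the direction opposing the weights shifts the score by roughly eps * sum(|w|), which grows with dimensionality even though no single pixel changes much.]

```python
import numpy as np

# Toy demonstration (synthetic data, hypothetical classifier) of why
# linearity makes adversarial perturbations cheap in high dimensions.

rng = np.random.default_rng(0)
dim = 10_000                          # a high-dimensional "image"
w = rng.normal(size=dim)              # weights of a toy linear classifier
x = rng.normal(size=dim)
x += (10.0 - w @ x) * w / (w @ w)     # nudge x so the clean score is exactly +10

eps = 0.05                            # per-pixel perturbation budget
x_adv = x - eps * np.sign(w)          # move each pixel slightly against w

clean_score = w @ x                   # +10: classified "positive"
adv_score = w @ x_adv                 # 10 - eps * sum(|w|): large and negative

# Each coordinate moved by at most eps, yet the score swings by
# eps * sum(|w|) ~ eps * dim * E|w|, dwarfing the clean margin.
print(clean_score, adv_score, np.max(np.abs(x_adv - x)))
```

Each input dimension changes by at most 0.05, yet the classification flips decisively, because the small per-coordinate shifts all add up through the dot product. This is the sense in which a system that behaves linearly can be fooled without knowing its exact weights: any roughly aligned perturbation direction works.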

Re: Deep Fool

2017-12-29 Thread Morlock Elloi
I have only a casual understanding of ML, but it was always counter-intuitive to me that simple polynomial units can somehow produce a magical macro effect which no one understands but which "just works". If it turns out that it's just a roundabout way of conditioning a linear system, the magic goes

Re: Deep Fool

2017-12-29 Thread José María Mateos
On Thu, Dec 28, 2017 at 11:05:22PM -0800, Morlock Elloi wrote: Longer version and remarks: current ML systems appear to be linear, so it's possible to synthesize diversions even without knowing how a particular system works. Systems can be fooled to miscategorize visual images (turtle gets

Deep Fool

2017-12-28 Thread Morlock Elloi
There was one remarkable presentation at 34C3 - "Deep Learning Blindspots" by Katharine Jarmul, available here https://media.ccc.de/v/34c3-8860-deep_learning_blindspots and here https://www.youtube.com/watch?v=BVJT-sE0WWQ . TL;DR: the arms race in machine learning ("AI" for consumers) has