I was under the impression AV tended to err on the side of false
negatives -- see the repeated clawback on heuristics.  I'm not sure
false positives would make a significant statistical difference given
that preference.  Could be convinced otherwise though.
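For what it's worth, the bucket-and-sweep tally Dan describes below could be prototyped with a short script along these lines. This is only a sketch of the bookkeeping, not anything from the thread: the machine names, scanner names, and scan results are all invented for illustration.

```python
# Hypothetical sketch of the bucket/cross-scan tally described in the
# quoted methodology. All machines, scanners, and results are made up.
from collections import defaultdict

# scan_results[machine] = {scanner_name: flagged known malware?}
scan_results = {
    "pc-01": {"Norton": False, "McAfee": True,  "Trend Micro": False},
    "pc-02": {"Norton": False, "McAfee": False, "Trend Micro": False},
    "pc-03": {"Norton": True,  "McAfee": True,  "Trend Micro": True},
}

# Step 2: bucket by the AV product installed on each machine
# (None means the machine falls into the "No AV" bucket).
installed_av = {"pc-01": "Norton", "pc-02": None, "pc-03": "McAfee"}

buckets = defaultdict(list)
for machine, av in installed_av.items():
    buckets[av or "No AV"].append(machine)

for bucket, machines in sorted(buckets.items()):
    # Step 3: "infected" = at least one scanner flagged known malware.
    flagged = [m for m in machines if any(scan_results[m].values())]
    # Step 4: machines passing every automated sweep go to the
    # manual-audit queue, which bounds the "long tail" from below.
    manual_queue = [m for m in machines if m not in flagged]
    print(bucket, "flagged:", flagged, "manual sweep:", manual_queue)
```

The interesting number is the size of `manual_queue` per bucket after the manual pass finds real infections in it; anything in there that a human auditor flags is, by construction, malware every automated scanner missed.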

On Wed, Sep 30, 2009 at 1:31 AM, Charles Miller
<[email protected]> wrote:
> You assume no false positives...
>
> On Sep 29, 2009, at 5:12 PM, Dan Kaminsky wrote:
>>
>> Methodology wouldn't be too bad -- there are things a manual auditor
>> can notice and alarm on quickly, that AV really can't just block or
>> even send back for further review.  So it's a matter of:
>>
>> 1) Gain legitimate access to a large number of systems, perhaps
>> through a PC repair service
>> 2) Separate the machines into buckets -- "No AV" "Norton" "McAfee"
>> "Trend Micro" etc
>> 3) For each bucket, scan with all AV scanners.  This will determine
>> the number of machines that are infected with known malware that at
>> least one other scanner was able to find.
>> 4) For each node that passed all automatic sweeps, manually sweep.
>> This should yield a minimum size of the "long tail" (a minimum,
>> because we might not find all).
>>
>> Note that we may want to qualify "infected".  Tracking cookies most
>> assuredly do not count.  Botnets most assuredly do.  Merely
>> self-replicating code, that's sort of up in the air.
>>
>> _______________________________________________
>> Fun and Misc security discussion for OT posts.
>> https://linuxbox.org/cgi-bin/mailman/listinfo/funsec
>> Note: funsec is a public and open mailing list.
>
>
