I find it interesting that such high false positive rates are described as
normal, yet we see few discussions on the list about which plugins need to
be fixed to address the problem.

IMHO there is a difference between a false positive and a potential
positive.  If the scanner says, "This condition exists, therefore you may
have a vulnerability," that is not a false positive.  A false positive only
exists when a test reports that a vulnerability does exist when in fact it
does not.  For example, flagging a service because its banner reports a
version that may be vulnerable is a potential positive; claiming the flaw
itself is present when it is not is a false positive.

We have talked before about including confidence ratings with the results.
If this were available, you could exclude potential vulnerabilities from a
report and keep only the confirmed ones.  I won't get into an argument
about whether this should be done.  I certainly understand the risk of
missing something important, but different environments have different
requirements, and the tool should not dictate how it is used by being
limiting.
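
Just to make that concrete, here is a rough sketch of what filtering a
report on a per-finding confidence value might look like.  This is plain
Python, not NASL, and the field names, values, and threshold are all made
up for illustration; nothing like this exists in the scanner today.

# Illustrative only: 'confidence' is a hypothetical field a plugin could
# attach to each finding, e.g. 100 = vulnerability verified directly,
# lower values = inferred from a banner or other indirect evidence.
findings = [
    {"plugin": "10123", "host": "10.0.0.5", "confidence": 100,
     "summary": "Exploit succeeded; vulnerability confirmed."},
    {"plugin": "10456", "host": "10.0.0.5", "confidence": 40,
     "summary": "Banner reports a version that may be vulnerable."},
]

def filter_report(findings, min_confidence):
    """Keep only findings at or above the requested confidence level."""
    return [f for f in findings if f["confidence"] >= min_confidence]

# Someone who only wants absolute results can run with a high threshold;
# someone who cannot afford to miss anything can set it to 0.
for f in filter_report(findings, min_confidence=100):
    print(f["host"], f["plugin"], f["summary"])

The point is simply that the decision about what to include moves to the
person reading the report rather than being baked into the plugins.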

So I propose a few things.  First, if there are known false positive or
false negative conditions, then we should be discussing them and trying to
find the most effective way to be absolutely accurate.  Where that is not
possible, the plugin output should state the potential inaccuracy.
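
As a rough illustration of what I mean (again just Python pseudocode with
made-up names, not actual plugin code), the caveat could be appended
automatically whenever a check is indirect:

def format_finding(description, verified):
    """Build the report text for a finding, flagging indirect checks."""
    if verified:
        return description
    # The check could not confirm the flaw directly (for example it relied
    # on a version banner alone), so say so instead of reporting it as fact.
    return (description
            + "\n\nNote: this result was inferred rather than verified "
              "and may be a false positive.")

print(format_finding("The remote server appears to run a vulnerable version.",
                     verified=False))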

Next, we should discuss the confidence rating idea and its implementation
more thoroughly, and then get it done.

Dion