On Wed, Jul 09, 2014 at 03:33:28PM -0400, Theodore Ts'o wrote:
> On Wed, Jul 09, 2014 at 06:20:49PM +0100, Ben Laurie wrote:
> > On 9 July 2014 14:38, Paul Morriss <paul.morr...@tokenbay.co.uk> wrote:
> > > I am keen to get more involved in the development of OpenSSL. I am
> > > curious: has the code been run through a static analysis tool (such
> > > as Coverity)?
> >
> > Coverity do run OpenSSL through their tool. The false positive rate is
> > depressingly high (or was last I looked).
>
> Once you mark a failure as being a false positive via their web
> interface, they won't bother you about it going forward. And we've had
> some success with the kernel getting them to make their tool smarter.
> (At least in theory they are supposed to take the false positive
> reports to improve their tool.)
We currently have 41 issues marked as false positives when I look at all
issues in the project, but only 20 are showing as dismissed; I'm not sure
why there is such a difference.

I've marked some things as false positives, and then noticed that Tim had
already marked the same line in the same function as a false positive
months ago. I've added comments explaining why each one is a false
positive, since it should really be obvious that the tool is doing
something wrong. I'm just wondering how we would know that they have
fixed such things, and what will happen to those defects once the tool is
fixed. But I'm guessing nothing will be updated until we submit a new
build; it's been a few months since the last one, and I don't seem to be
able to submit builds myself.

Kurt

______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       openssl-dev@openssl.org
Automated List Manager                           majord...@openssl.org
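[Editorial aside, not part of the original post: besides dismissing a defect in the web interface, Coverity also recognises in-source annotation comments, which can keep the "why this is a false positive" rationale next to the code itself. Below is a minimal C sketch of that form; the lookup() helper and the returned_null event tag are illustrative assumptions, not taken from OpenSSL or from this thread.]

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: returns NULL only for keys that the caller
 * below never passes -- an invariant a static analyser may not be
 * able to prove, producing a false "NULL return dereferenced" report. */
static const char *lookup(const char *key)
{
    return strcmp(key, "version") == 0 ? "1.0.1h" : NULL;
}

int main(void)
{
    /* coverity[returned_null] -- illustrative event tag only; in
     * practice the tag must match the event name shown in the
     * Coverity report for the flagged line. */
    const char *val = lookup("version");

    printf("%zu\n", strlen(val));
    return 0;
}
```

Whether a project wants such annotations in its tree is a separate question; the point is only that the explanation can live in the source rather than solely in Coverity's triage database.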