On Mon, Apr 11, 2011 at 10:38 AM, Bram Moolenaar <[email protected]> wrote:
>
> Ben Fritz wrote:
>
>> Bram, I'm just curious: it looks like most of the very recent
>> changesets (up to patch 160) are problems of the sort that static
>> analysis or other automated tools would find. Were these the result of
>> some automated tool or did someone just find a bunch and report them
>> quietly? Which tool, if one was used?
>
> http://scan.coverity.com
>
> It finds some potential problems, and lots of false positives.
>

I had a suspicion that might have been it. I use Coverity at work and
have noticed several bugs of the same sort as the recent changesets
fixed. You say it finds a lot of false positives (which it does
sometimes, I admit, especially in auto-generated code); do you have
any thoughts on whether that's down to the tool, the configuration,
or just the fact that Vim's code is 20 years old and already pretty
solid? I know from talking to folks at Coverity that a lot of their
code exists just to suppress false positives, and they report an
average false-positive rate of about 20% overall (I've seen about 30%
on my projects at work). It might be possible to get the Coverity
folks who run the scan project to tweak some options for a better
false-positive rate, depending on what types of defects they're
reporting.
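
To give a concrete (entirely made-up, not from Vim's source) example
of the kind of false positive I mean: Coverity's FORWARD_NULL checker
often flags defensive code where two variables are only correlated
through a condition the analysis can't track, e.g.:

    #include <string.h>

    extern char *get_buffer(void);  /* assumed never to return NULL */
    extern void process(char *p);

    void example(int cond)
    {
        char *p = NULL;
        size_t len = 0;

        if (cond)
        {
            p = get_buffer();
            len = strlen(p);
        }
        /* Coverity may flag this call as a possible NULL dereference,
         * since it can't prove that len > 0 implies p != NULL. */
        if (len > 0)
            process(p);
    }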

I get the sense you're not too impressed with the tool, judging from
your false-positives comment and a previous patch from a long time
ago that basically said "make Coverity be quiet". I'm wondering: on
the whole, do you consider it useful, or mostly a nuisance/distraction?
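
(For reference, Coverity also supports in-source suppression
comments, so a "be quiet" change doesn't always have to restructure
the code. A hypothetical sketch, where the tag in brackets has to
match the event named in Coverity's report:

    /* coverity[forward_null] */
    process(p);

The annotation applies to the line that follows it.)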

