Hi Hugo,

On Tue, Jan 25, 2022 at 12:13:16PM +0000, Hugo Lefeuvre wrote:
> Hi!
> 
> As part of a research project, my colleagues and I are measuring the
> test-suite coverage of HAProxy using gcov. We found the coverage numbers to
> be relatively low compared to other major cloud applications, capping at
> 13-14% line coverage (other projects such as Nginx, Redis, etc. are more on
> the order of 60-90%).
> 
> We are wondering if this is caused by our measurement approach (gcov,
> passing -fprofile-arcs -ftest-coverage in the CFLAGS and -lgcov to
> LDFLAGS), or if this is known to the HAProxy community. We reproduced these
> measurements across several recent versions of HAProxy, dev, 2.5, and 2.4.

Just to be sure, which test suite are you talking about? I guess you mean
the regression tests in the "reg-tests" directory, but I'm not certain. If
so, we're well aware that they are still fairly limited, as they were
introduced relatively recently. They're progressively being extended as new
features are added, but certainly a number of areas of the code are not yet
covered by them. However, we tried to make sure that most of them exercise
the most sensitive areas, those that are the easiest to break (and during
development, hardly a day goes by without a developer breaking a few
regtests locally while testing, which indicates that they're representative
enough for our purpose for now).
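For reference, this is roughly how we run them locally; the tests are driven
by the VTest tool, and the exact paths below are only illustrative:

```shell
# Build HAProxy first, then run the regression tests through VTest.
# VTEST_PROGRAM must point to a locally built vtest binary (adjust the path).
make -j$(nproc) TARGET=linux-glibc
VTEST_PROGRAM=../vtest/vtest make reg-tests

# A subset of the tests can be selected as well, e.g. only the HTTP rules:
VTEST_PROGRAM=../vtest/vtest make reg-tests REGTESTS=reg-tests/http-rules
```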

Also, I don't know how the ratio of lines of code is counted in your case.
We have quite a lot of code that is platform-specific or that depends on
build options, and I have no idea how that is counted. In addition, a
significant part of the code is dedicated to error handling and will never
be triggered by regtests: events like I/O errors, out-of-memory conditions
and other unexpected situations must never happen in practice, yet the code
handling them probably represents about half of the code base. So without
more details it's hard to have a solid opinion on the subject.
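For what it's worth, how the denominator is computed matters a lot here. A
typical gcov/lcov run looks something like the sketch below; the file
pattern passed to --remove is purely an example, not a recommendation:

```shell
# Build with coverage instrumentation (essentially what you already do):
make TARGET=linux-glibc \
     CFLAGS="-O0 -fprofile-arcs -ftest-coverage" LDFLAGS="-lgcov"

# Run the workload or tests, then collect the per-file counters:
lcov --capture --directory . --output-file cov.info

# Pruning files that cannot be reached in this particular build or test
# setup (platform-specific code, optional features, etc.) changes the
# reported ratio noticeably:
lcov --remove cov.info '*/contrib/*' --output-file cov.filtered.info
lcov --summary cov.filtered.info
```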

Regards,
Willy
