I want to ask this question more widely before anything happens (I will
also send it to the OSWG, avoiding a cross-post as per informal policy;
if something comes out of there I will summarize it here).

As background - I think most of you probably know this - those iotivity
builds which are asked to run unit tests do so after the build: each
directory that builds a unit test binary makes a call which builds up a
command line to actually run the test, and marks it AlwaysBuild.  On the
Linux unit test target, that command line runs the test under control of
a memory leak checker (valgrind).
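For anyone who hasn't dug into those scripts, the shape of that
per-directory hook is roughly as below. This is a minimal SCons/Python
sketch only: the helper name run_test, the VALGRIND_CHECKS default and
the exact valgrind options are my illustration, not a verbatim copy of
our helper.

# Rough sketch of the per-directory test hook (names/options illustrative).
def run_test(env, test_binary):
    cmd = test_binary
    # On the Linux unit-test target, wrap the test in valgrind unless the
    # build was asked not to (VALGRIND_CHECKS=False).
    if env['PLATFORM'] == 'posix' and env.get('VALGRIND_CHECKS', True):
        cmd = ('valgrind --leak-check=full --xml=yes '
               '--xml-file=%s.memcheck %s' % (test_binary, test_binary))
    # Run the command on every build (the test has to actually run, not
    # just build), touching a stamp file so SCons has a target to track.
    stamp = env.Command('%s.passed' % test_binary, test_binary,
                        cmd + ' && touch $TARGET')
    env.AlwaysBuild(stamp)
    return stamp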

Among the frequent build failures we can call "spurious", in that they
can happen without having been affected in any way by the code change
being made, valgrind failures are prominent (the other common ones are
provisioning failures, and a Windows race condition between building a
library and using it).

It's no secret people have been grumbling recently that builds fail too
often. For example, this ticket was filed just a couple of days ago and
is by no means the only complaint:

https://jira.iotivity.org/browse/IOT-3144

The question is: should we turn off valgrind as part of patch
validation?  Mechanically this is easy: we can either tell the test
setup call not to add valgrind to the command line (this is factorised,
so we only need to pass VALGRIND_CHECKS=False to the build - see the
sketch below), or we could disable the Jenkins valgrind plugin, which
actually tries to draw conclusions from the results and seems
particularly flaky - there's upstream (Jenkins) chatter about how much
trouble it causes, but no real solutions yet.

I'm not trying to remove a test of value; I'm looking for ways to stop
developers being blocked by spurious failures and wasting lots of time.
If we do turn it off, I'd hope we can instead get valgrind run and its
results examined regularly - maybe by a job not tied to the "it built"
+1 vote, or by someone doing it manually?  To be honest, the problems it
reports haven't been getting much attention anyway; we tried to tune the
failure threshold high enough not to fail people all the time, but now
it's failing all the time anyway, and the reports which didn't cause a
failure gave devs no incentive to go and check the problems.

