Hi All,

I am at a point now where I have a patch ready to merge for Bangkok (pending 
final review), but after 4 reverify attempts, no luck.

As far as I can tell, these are issues not related to the code in the patch, but 
are Jenkins issues.  In case anyone wants to have a look at the errors, the patch 
is here: https://gerrit.iotivity.org/gerrit/#/c/24077/

Based on Mats's response below, I am assuming that there are ongoing efforts to 
solve the Jenkins issues, and I guess I will sit tight unless anybody has any 
other suggestions.

Kind Regards
Steve


On Feb 27, 2018, at 8:11 AM, Steve Saunders (CableLabs) 
<s.saunders-contrac...@cablelabs.com> wrote:

Thanks Mats, I appreciate your response.

Kind Regards
Steve

On Feb 27, 2018, at 8:07 AM, Mats Wichmann <m...@wichmann.us> wrote:

On 02/27/2018 07:37 AM, Steve Saunders (CableLabs) wrote:
Hi All,

I am running into a few Gerrit issues which I believe are unrelated to my patch 
submission (but I could be wrong).

First

A well-known problem: intermittent valgrind invalid write:
Jira issue IOT-2848: provisiontests has intermittent Invalid read failure 
https://jira.iotivity.org/browse/IOT-2848

In the past, per George's advice, I was able to get past this by 
submitting a ‘reverify’ to Gerrit.  However, for my latest patch, the last 3 
reverifies have failed with this same problem.

Do we have any plans to address this issue?

This is one of several lurking intermittent problems that are very hard
to pin down - it would be much easier if we could reproduce them "at home",
and/or if they happened on every run. I took a flyer at the one you mention
(gerrit 23851), which I'm far from convinced does anything about the
underlying problem; it just avoids walking through the pointer if the
pointer is not valid - and it may not be the only instance. I'm thinking
it's possible for the callback to be invoked in a bad context where
something is no longer alive. In the most recent comment, Aleksey suggested
the same theory, also without proof - that maybe a request is responded to
so slowly that it is canceled, and then the response comes in and
the callback is invoked anyway.
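
To make the theory concrete, here is a rough sketch of the kind of guard I
mean - the names (PendingRequest, IsRequestStillPending, and so on) are made
up for illustration and are not the actual IoTivity symbols:

    /* Hypothetical sketch: before the response callback dereferences its
     * context, verify the request is still tracked; if it was already
     * canceled and freed, drop the late response instead of walking a
     * stale pointer (the invalid read valgrind complains about). */
    #include <stddef.h>
    #include <stdbool.h>

    typedef struct PendingRequest
    {
        int handle;
        void (*onResponse)(struct PendingRequest *req, const char *payload);
        struct PendingRequest *next;
    } PendingRequest;

    static PendingRequest *g_pendingRequests = NULL;

    /* True only if req is still on the live list, i.e. not yet canceled. */
    static bool IsRequestStillPending(const PendingRequest *req)
    {
        for (const PendingRequest *cur = g_pendingRequests; cur; cur = cur->next)
        {
            if (cur == req)
            {
                return true;
            }
        }
        return false;
    }

    static void HandleIncomingResponse(PendingRequest *req, const char *payload)
    {
        /* Guard against a response arriving after the request was canceled. */
        if (!IsRequestStillPending(req))
        {
            return;
        }
        req->onResponse(req, payload);
    }

That is only a sketch of the pattern, not a claim about where the bug actually
lives; the real fix would need to know exactly which structure is being freed.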


Second

After my reverify this morning, a whole slew of failures showed up that were 
not reported on the previous build of the exact same patch.  The errors look to 
me like internal Jenkins issues, things like:

13:42:48 ERROR: Error cloning remote repo 'origin'
cp: cannot access '/extlibs/boost/boost_1_58_0': Permission denied

Build with only valgrind issues: https://gerrit.iotivity.org/gerrit/#/c/24077/
Reverify on the same patch where lots of new issues popped up: 
https://gerrit.iotivity.org/gerrit/#/c/24077/

Could it be possible that we have some Jenkins config or system issues going on?

We do.

Any thoughts on the above would be appreciated.

Thanks everybody, this is pretty important.  If we can’t get past Jenkins 
builds, we can’t merge.

Kind Regards
Steve

"Keep the faith"?  Linux Foundation have transitioned us to a different
infrastructure for the builders and it's having some teething pains.
They *are* working on it, but the current state is quite frustrating. We
are pestering them on a regular basis. Sorry, meant to say working
closely with. :)


_______________________________________________
iotivity-dev mailing list
iotivity-dev@lists.iotivity.org
https://lists.iotivity.org/mailman/listinfo/iotivity-dev
