Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-01 Thread Atin Mukherjee
On Tue, Jul 31, 2018 at 10:11 PM Atin Mukherjee  wrote:

> I just went through the nightly regression report of brick mux runs and
> here's what I can summarize.
>
>
> =========================================================
> Fails only with brick-mux
>
> =========================================================
> tests/bugs/core/bug-1432542-mpx-restart-crash.t - Times out even after 400
> secs. Refer
> https://fstat.gluster.org/failure/209?state=2_date=2018-06-30_date=2018-07-31=all,
> specifically the latest report
> https://build.gluster.org/job/regression-test-burn-in/4051/consoleText .
> Wasn't timing out as frequently as it was until 12 July, but since 27 July
> it has timed out twice. Beginning to believe commit
> 9400b6f2c8aa219a493961e0ab9770b7f12e80d2 has added the delay and that 400
> secs is no longer sufficient (Mohit?)
>
> tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t
> (Ref -
> https://build.gluster.org/job/regression-test-with-multiplex/814/console)
> - Test fails only in brick-mux mode; action item (AI) on Atin to look at it and get back.
>
> tests/bugs/replicate/bug-1433571-undo-pending-only-on-up-bricks.t (
> https://build.gluster.org/job/regression-test-with-multiplex/813/console)
> - Seems like it has failed just twice in the last 30 days as per
> https://fstat.gluster.org/failure/251?state=2_date=2018-06-30_date=2018-07-31=all.
> Need help from AFR team.
>
> tests/bugs/quota/bug-1293601.t (
> https://build.gluster.org/job/regression-test-with-multiplex/812/console)
> - Hasn't failed since 26 July, whereas earlier it was failing regularly. Did we
> fix this test through any patch (Mohit?)
>
> tests/bitrot/bug-1373520.t - (
> https://build.gluster.org/job/regression-test-with-multiplex/811/console)
> - Hasn't failed since 27 July, whereas earlier it was failing regularly. Did we
> fix this test through any patch (Mohit?)
>

I see this has failed in the regression run from the day before yesterday as
well (and I could reproduce it locally with brick mux enabled). The test fails
to heal a file within the expected time period.

15:55:19 not ok 25 Got "0" instead of "512", LINENUM:55
15:55:19 FAILED COMMAND: 512 path_size /d/backends/patchy5/FILE1

Need EC dev's help here.
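For context, the failing check corresponds to something like the following in
the .t framework (a sketch only; I am assuming the usual EXPECT_WITHIN and
path_size helpers, and the actual line in tests/bitrot/bug-1373520.t may
differ):

#!/bin/bash
# Sketch of the failing check, not the actual tests/bitrot/bug-1373520.t.
# Assumes the framework's EXPECT_WITHIN and path_size helpers and that
# $B0 points at /d/backends, as the failed command above suggests.
. $(dirname $0)/../include.rc

# The 'not ok 25 Got "0" instead of "512"' line maps to a check like this:
# wait up to $HEAL_TIMEOUT seconds for the file on the brick to be healed
# back to 512 bytes; with brick mux enabled it stays at 0, hence the failure.
EXPECT_WITHIN $HEAL_TIMEOUT "512" path_size $B0/patchy5/FILE1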


> tests/bugs/glusterd/remove-brick-testcases.t - Failed once with a core;
> not sure whether brick mux is the culprit here. Ref -
> https://build.gluster.org/job/regression-test-with-multiplex/806/console
> . Seems to be a glustershd crash. Need help from AFR folks.
>
>
> =========================================================
> Fails for non-brick mux case too
>
> =========================================================
> tests/bugs/distribute/bug-1122443.t - Seems to be failing at my setup very
> often, without brick mux as well. Refer
> https://build.gluster.org/job/regression-test-burn-in/4050/consoleText .
> There's an email in gluster-devel and a BZ 1610240 for the same.
>
> tests/bugs/bug-1368312.t - Seems to be a new failure (
> https://build.gluster.org/job/regression-test-with-multiplex/815/console)
> - however, this has been seen for a non-brick-mux case too
> - https://build.gluster.org/job/regression-test-burn-in/4039/consoleText
> . Need some eyes from AFR folks.
>
> tests/00-geo-rep/georep-basic-dr-tarssh.t - this isn't specific to brick
> mux; we have seen this failing in multiple default regression runs. Refer
> https://fstat.gluster.org/failure/392?state=2_date=2018-06-30_date=2018-07-31=all
> . We need help from the geo-rep devs to root cause this sooner rather than later.
>
> tests/00-geo-rep/georep-basic-dr-rsync.t - this isn't specific to brick
> mux; we have seen this failing in multiple default regression runs. Refer
> https://fstat.gluster.org/failure/393?state=2_date=2018-06-30_date=2018-07-31=all
> . We need help from the geo-rep devs to root cause this sooner rather than later.
>
> tests/bugs/glusterd/validating-server-quorum.t (
> https://build.gluster.org/job/regression-test-with-multiplex/810/console)
> - Fails for non-brick-mux cases too,
> https://fstat.gluster.org/failure/580?state=2_date=2018-06-30_date=2018-07-31=all
> .  Atin has a patch https://review.gluster.org/20584 which resolves it,
> but the patch is failing regression on a different, unrelated test.
>
> tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t
> (Ref -
> https://build.gluster.org/job/regression-test-with-multiplex/809/console)
> - fails for the non-brick-mux case too -
> 

[Gluster-devel] bug-1432542-mpx-restart-crash.t failures

2018-08-01 Thread Nigel Babu
Hi Shyam,

Amar and I sat down to debug this failure[1] this morning. There was a bit
of fun looking at the logs: it looked like the test had restarted itself.
The first log entry is at 16:20:03, and this test has a timeout of 400
seconds, which puts the deadline at around 16:26:43.

However, if you account for the fact that we only start logging from the
second step or so, it looks like the test timed out and was restarted; the
first log entry being from a few steps in then makes sense. I think your
patch[2] to increase the timeout to 800 seconds is the right way forward.

The last steps before the timeout are these:
[2018-07-30 16:26:29.160943]  : volume stop patchy-vol17 : SUCCESS
[2018-07-30 16:26:40.222688]  : volume delete patchy-vol17 : SUCCESS

There are 20 volumes, so it really needs at least a 90 second bump; I'm
estimating 30 seconds per volume to clean up. You probably want to add some
extra time so it passes on lcov as well, so right now the 800 second timeout
looks good.

[1]: https://build.gluster.org/job/regression-test-burn-in/4051/
[2]: https://review.gluster.org/#/c/20568/2
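To make the suggestion concrete, this is roughly the shape of the change (a
sketch only; the real test and patch [2] differ, and the per-test
SCRIPT_TIMEOUT override being honoured by run-tests.sh is an assumption on
my part):

#!/bin/bash
# Sketch, not the actual bug-1432542-mpx-restart-crash.t or patch [2].
# SCRIPT_TIMEOUT being picked up by run-tests.sh is an assumption here.
SCRIPT_TIMEOUT=800

. $(dirname $0)/../../include.rc

# 20 volumes at roughly 30 seconds each to stop and delete is what pushes
# the old 400 second budget over the edge.
for i in $(seq 1 20); do
        TEST $CLI volume stop patchy-vol$i
        TEST $CLI volume delete patchy-vol$i
done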
-- 
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] tests/bugs/distribute/bug-1122443.t - spurious failure

2018-08-01 Thread Atin Mukherjee
On Thu, 2 Aug 2018 at 07:05, Susant Palai  wrote:

> Will have a look at it and update.
>

There’s already a patch from Mohit for this.


> Susant
>
> On Wed, 1 Aug 2018, 18:58 Krutika Dhananjay,  wrote:
>
>> Same here - https://build.gluster.org/job/centos7-regression/2024/console
>>
>> -Krutika
>>
>> On Sun, Jul 29, 2018 at 1:53 PM, Atin Mukherjee 
>> wrote:
>>
>>> tests/bugs/distribute/bug-1122443.t fails on my setup (3 out of 5 times)
>>> running with the master branch. To my knowledge I've not seen this test
>>> failing earlier, so it looks like some recent change has caused it. One such
>>> instance is https://build.gluster.org/job/centos7-regression/1955/ .
>>>
>>> Request the component owners to take a look at it.
>>>

-- 
--Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] tests/bugs/distribute/bug-1122443.t - spurious failure

2018-08-01 Thread Susant Palai
Will have a look at it and update.

Susant

On Wed, 1 Aug 2018, 18:58 Krutika Dhananjay,  wrote:

> Same here - https://build.gluster.org/job/centos7-regression/2024/console
>
> -Krutika
>
> On Sun, Jul 29, 2018 at 1:53 PM, Atin Mukherjee 
> wrote:
>
>> tests/bugs/distribute/bug-1122443.t fails on my setup (3 out of 5 times)
>> running with the master branch. To my knowledge I've not seen this test
>> failing earlier, so it looks like some recent change has caused it. One such
>> instance is https://build.gluster.org/job/centos7-regression/1955/ .
>>
>> Request the component owners to take a look at it.
>>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-01 Thread Sankarshan Mukhopadhyay
On Thu, Aug 2, 2018 at 12:19 AM, Shyam Ranganathan  wrote:
> On 07/31/2018 12:41 PM, Atin Mukherjee wrote:
>> tests/bugs/core/bug-1432542-mpx-restart-crash.t - Times out even after
>> 400 secs. Refer
>> https://fstat.gluster.org/failure/209?state=2_date=2018-06-30_date=2018-07-31=all,
>> specifically the latest report
>> https://build.gluster.org/job/regression-test-burn-in/4051/consoleText .
>> Wasn't timing out as frequently as it was till 12 July. But since 27
>> July, it has timed out twice. Beginning to believe commit
>> 9400b6f2c8aa219a493961e0ab9770b7f12e80d2 has added the delay and now 400
>> secs isn't sufficient enough (Mohit?)
>
> The above test is the one that is causing line coverage to fail as well
> (mostly, say 50% of the time).
>
> I did have this patch up to increase timeouts and also ran a few rounds
> of tests, but results are mixed. It passes when run first, and later
> errors out in other places (although not timing out).
>
> See: https://review.gluster.org/#/c/20568/2 for the changes and test run
> details.
>

If I may ask - why are we always exploring the "increase timeout" part
of this? I understand that some tests may take longer - but 400s is
quite a non-trivial amount of time - what other, more efficient means
have we not explored yet?

> The failure of this test in regression-test-burn-in run#4051 is strange
> again, it looks like the test completed within stipulated time, but
> restarted again post cleanup_func was invoked.
>
> Digging a little further the manner of cleanup_func and traps used in
> this test seem *interesting* and maybe needs a closer look to arrive at
> possible issues here.
>
> @Mohit, request you to take a look at the line coverage failures as
> well, as you handle the failures in this test.


-- 
sankarshan mukhopadhyay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Release 5: Master branch health report (Week of 30th July)

2018-08-01 Thread Shyam Ranganathan
Below is a summary of failures over the last 7 days on the nightly
health check jobs. This is one test per line, sorted in descending order
of occurrence (IOW, most frequent failure is on top).

The list includes spurious failures as well, IOW tests that passed on a retry. This
is because if we do not weed out the spurious errors, failures may
persist and make it difficult to gauge the health of the branch.

The numbers at the end of each test line are the Jenkins job numbers where
these failed. The job number ranges are as follows,
- https://build.gluster.org/job/regression-test-burn-in/ ID: 4048 - 4053
- https://build.gluster.org/job/line-coverage/ ID: 392 - 407
- https://build.gluster.org/job/regression-test-with-multiplex/ ID: 811 - 817

So to get to job 4051 (say), use the link
https://build.gluster.org/job/regression-test-burn-in/4051
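If you want to pull the failing test lines straight out of a run, the
consoleText endpoint used in the links above can be fetched and grepped
directly (a convenience sketch, nothing more):

#!/bin/bash
# Convenience sketch: dump the TAP failure lines from burn-in job 4051.
# Swap the job name/number for line-coverage or multiplex runs as needed.
curl -s https://build.gluster.org/job/regression-test-burn-in/4051/consoleText \
  | grep 'not ok'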

Atin has called out some folks for attention to specific tests; consider this
a call out to others as well: if you see a test against your component, help
with root causing and fixing it is needed.

tests/bugs/core/bug-1432542-mpx-restart-crash.t, 4049, 4051, 4052, 405,
404, 403, 396, 392

tests/00-geo-rep/georep-basic-dr-tarssh.t, 811, 814, 817, 4050, 4053

tests/bugs/bug-1368312.t, 815, 816, 811, 813, 403

tests/bugs/distribute/bug-1122443.t, 4050, 407, 403, 815, 816

tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t,
814, 816, 817, 812, 815

tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t,
4049, 812, 814, 405, 392

tests/bitrot/bug-1373520.t, 811, 816, 817, 813

tests/bugs/ec/bug-1236065.t, 812, 813, 815

tests/00-geo-rep/georep-basic-dr-rsync.t, 813, 4046

tests/basic/ec/ec-1468261.t, 817, 812

tests/bugs/glusterd/quorum-validation.t, 4049, 407

tests/bugs/quota/bug-1293601.t, 811, 812

tests/basic/afr/add-brick-self-heal.t, 407

tests/basic/afr/granular-esh/replace-brick.t, 392

tests/bugs/core/multiplex-limit-issue-151.t, 405

tests/bugs/distribute/bug-1042725.t, 405

tests/bugs/distribute/bug-1117851.t, 405

tests/bugs/glusterd/rebalance-operations-in-single-node.t, 405

tests/bugs/index/bug-1559004-EMLINK-handling.t, 405

tests/bugs/replicate/bug-1386188-sbrain-fav-child.t, 4048

tests/bugs/replicate/bug-1433571-undo-pending-only-on-up-bricks.t, 813  


Thanks,
Shyam


On 07/30/2018 03:21 PM, Shyam Ranganathan wrote:
> On 07/24/2018 03:12 PM, Shyam Ranganathan wrote:
>> 1) master branch health checks (weekly, till branching)
>>   - Expect every Monday a status update on various tests runs
> 
> See https://build.gluster.org/job/nightly-master/ for a report on
> various nightly and periodic jobs on master.
> 
> RED:
> 1. Nightly regression (3/6 failed)
> - Tests that reported failure:
> ./tests/00-geo-rep/georep-basic-dr-rsync.t
> ./tests/bugs/core/bug-1432542-mpx-restart-crash.t
> ./tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t
> ./tests/bugs/distribute/bug-1122443.t
> 
> - Tests that needed a retry:
> ./tests/00-geo-rep/georep-basic-dr-tarssh.t
> ./tests/bugs/glusterd/quorum-validation.t
> 
> 2. Regression with multiplex (cores and test failures)
> 
> 3. line-coverage (cores and test failures)
> - Tests that failed:
> ./tests/bugs/core/bug-1432542-mpx-restart-crash.t (patch
> https://review.gluster.org/20568 does not fix the timeout entirely, as
> can be seen in this run,
> https://build.gluster.org/job/line-coverage/401/consoleFull )
> 
> Calling out to contributors to take a look at various failures, and post
> the same as bugs AND to the lists (so that duplication is avoided) to
> get this to a GREEN status.
> 
> GREEN:
> 1. cpp-check
> 2. RPM builds
> 
> IGNORE (for now):
> 1. clang scan (@nigel, this job requires clang warnings to be fixed to
> go green, right?)
> 
> Shyam
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-01 Thread Shyam Ranganathan
On 07/31/2018 12:41 PM, Atin Mukherjee wrote:
> tests/bugs/core/bug-1432542-mpx-restart-crash.t - Times out even after
> 400 secs. Refer
> https://fstat.gluster.org/failure/209?state=2_date=2018-06-30_date=2018-07-31=all,
> specifically the latest report
> https://build.gluster.org/job/regression-test-burn-in/4051/consoleText .
> Wasn't timing out as frequently as it was till 12 July. But since 27
> July, it has timed out twice. Beginning to believe commit
> 9400b6f2c8aa219a493961e0ab9770b7f12e80d2 has added the delay and now 400
> secs isn't sufficient enough (Mohit?)

The above test is the one that is causing line coverage to fail as well
(roughly 50% of the time).

I did have this patch up to increase timeouts and also ran a few rounds
of tests, but results are mixed. It passes when run first, and later
errors out in other places (although not timing out).

See: https://review.gluster.org/#/c/20568/2 for the changes and test run
details.

The failure of this test in regression-test-burn-in run #4051 is strange
again: it looks like the test completed within the stipulated time, but
restarted again after cleanup_func was invoked.

Digging a little further, the manner in which cleanup_func and traps are used
in this test seems *interesting* and may need a closer look to arrive at
possible issues here.
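For those not familiar with the pattern being referred to, trap-based cleanup
in a .t test looks roughly like the sketch below (an illustration only; the
actual test's cleanup_func and traps differ). The relevant property is that
the trap also fires when the harness kills a timed-out run, so cleanup output
can make the log look like the test started over:

#!/bin/bash
# Illustration only -- not the actual bug-1432542-mpx-restart-crash.t.
. $(dirname $0)/../../include.rc

cleanup_func () {
        trap - EXIT TERM   # make sure we only run once
        cleanup            # framework helper that tears down volumes/bricks
}

# Runs on normal exit and on the TERM sent when a test is timed out.
trap cleanup_func EXIT TERM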

@Mohit, request you to take a look at the line coverage failures as
well, as you handle the failures in this test.

Thanks,
Shyam
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] FreeBSD smoke test may fail for older changes, rebase needed

2018-08-01 Thread Niels de Vos
On Wed, Aug 01, 2018 at 09:47:38AM -0400, Shyam Ranganathan wrote:
> On 07/31/2018 02:12 AM, Niels de Vos wrote:
> > On Mon, Jul 30, 2018 at 02:44:57PM -0400, Shyam Ranganathan wrote:
> >> On 07/28/2018 12:45 PM, Niels de Vos wrote:
> >>> On Sat, Jul 28, 2018 at 03:37:46PM +0200, Niels de Vos wrote:
>  This Friday argp-standalone got installed on the FreeBSD Jenkins
>  slave(s). With the library available, we can now drop the bundled and
>  unmaintained contrib/argp-standalone/ from our glusterfs sources.
> 
>  Unfortunately building on FreeBSD fails if the header/library is
>  installed. This has been corrected with https://review.gluster.org/20581
>  but that means changes posted in Gerrit may need a rebase to include the
>  fix for building on FreeBSD.
> 
>  I think I have rebased all related changes that did not have negative
>  comments asking for corrections/improvement. In case I missed a change,
>  please rebase your patch so the smoke test runs again.
> 
>  Sorry for any inconvenience that this caused,
>  Niels
> >>>
> >>> It just occurred to me that the argp-standalone installation also affects
> >>> the release-4.1 and release-3.12 branches. Jiffin, Shyam, do you want to
> >>> cherry-pick https://review.gluster.org/20581 to fix that, or do you
> >>> prefer an alternative that always uses the bundled version of the
> >>> library?
> >>
> >> The outcome is to get existing maintained release branches building and
> >> working on FreeBSD, would that be correct?
> > 
> > 'working' in the way that they were earlier. I do not know of any
> > (automated or manual) tests that verify the correct functioning. It is
> > build tested only. I think.
> > 
> >> If so I think we can use the cherry-picked version, the changes seem
> >> mostly straight forward, and it is possibly easier to maintain.
> > 
> > It is straight forward, but does add a new requirement on a library that
> > should get installed on the system. This is not something that we
> > normally allow during a stable release.
> > 
> >> Although, I have to ask, what is the downside of not taking it in at
> >> all? If it is just FreeBSD, then can we live with the same till release-
> >> is out?
> > 
> > Yes, it is 'just' FreeBSD build testing. Users should still be able to
> > build the stable releases on FreeBSD as long as they do not install
> > argp-standalone. In that case the bundled version will be used as the
> > stable releases still have that in their tree.
> > 
> > If the patch does not get merged, it will cause the smoke tests on
> > FreeBSD to fail. As Nigel mentions, it is possible to disable this test
> > for the stable branches.
> > 
> > An alternative would be to fix the build process, and optionally use the
> > bundled library in case it is not installed on the system. This is what
> > we normally would have done, but it seems to have been broken in the
> > case of FreeBSD + argp-standalone.
> 
> Based on the above reasoning, I would suggest that we do not backport
> this to the release branches, and disable the FreeBSD job on them, and
> if possible enable them for the next release (5).
> 
> Objections?

That is fine with me. It is prepared for GlusterFS 5, so nothing needs
to be done for that. Only for 4.1 and 3.12 FreeBSD needs to be disabled
from the smoke job(s).

I could not find the repo that contains the smoke job, otherwise I would
have tried to send a PR.

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-01 Thread Shyam Ranganathan
On 08/01/2018 12:13 AM, Sankarshan Mukhopadhyay wrote:
>> Thinking aloud, we may have to stop merges to master to get these test
>> failures addressed at the earliest and to continue maintaining them
>> GREEN for the health of the branch.
>>
>> I would give the above a week, before we lockdown the branch to fix the
>> failures.
>>
> Is 1 week a sufficient estimate to address the issues?
> 

Branching is Aug 20th, so I would say an Aug 6th lockdown decision is, if
anything, a little late; also, once we get this going it should be possible
to maintain health going forward. So taking a blocking stance at this
juncture is probably for the best.

Having said that, I am also proposing that we get the CentOS 7 regressions
and lcov GREEN by this time, giving mux a week more to get the stability in
place. This is due to my belief that mux may take a bit longer than the
other two (IOW, addressing the sufficiency concern raised above).
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] FreeBSD smoke test may fail for older changes, rebase needed

2018-08-01 Thread Shyam Ranganathan
On 07/31/2018 02:12 AM, Niels de Vos wrote:
> On Mon, Jul 30, 2018 at 02:44:57PM -0400, Shyam Ranganathan wrote:
>> On 07/28/2018 12:45 PM, Niels de Vos wrote:
>>> On Sat, Jul 28, 2018 at 03:37:46PM +0200, Niels de Vos wrote:
 This Friday argp-standalone got installed on the FreeBSD Jenkins
 slave(s). With the library available, we can now drop the bundled and
 unmaintained contrib/argp-standalone/ from our glusterfs sources.

 Unfortunately building on FreeBSD fails if the header/library is
 installed. This has been corrected with https://review.gluster.org/20581
 but that means changes posted in Gerrit may need a rebase to include the
 fix for building on FreeBSD.

 I think I have rebased all related changes that did not have negative
 comments asking for corrections/improvement. In case I missed a change,
 please rebase your patch so the smoke test runs again.

 Sorry for any inconvenience that this caused,
 Niels
>>>
>>> It just occurred to me that the argp-standalone installation also affects
>>> the release-4.1 and release-3.12 branches. Jiffin, Shyam, do you want to
>>> cherry-pick https://review.gluster.org/20581 to fix that, or do you
>>> prefer an alternative that always uses the bundled version of the
>>> library?
>>
>> The outcome is to get existing maintained release branches building and
>> working on FreeBSD, would that be correct?
> 
> 'working' in the way that they were earlier. I do not know of any
> (automated or manual) tests that verify the correct functioning. It is
> build tested only. I think.
> 
>> If so I think we can use the cherry-picked version, the changes seem
>> mostly straight forward, and it is possibly easier to maintain.
> 
> It is straight forward, but does add a new requirement on a library that
> should get installed on the system. This is not something that we
> normally allow during a stable release.
> 
>> Although, I have to ask, what is the downside of not taking it in at
>> all? If it is just FreeBSD, then can we live with the same till release-
>> is out?
> 
> Yes, it is 'just' FreeBSD build testing. Users should still be able to
> build the stable releases on FreeBSD as long as they do not install
> argp-standalone. In that case the bundled version will be used as the
> stable releases still have that in their tree.
> 
> If the patch does not get merged, it will cause the smoke tests on
> FreeBSD to fail. As Nigel mentions, it is possible to disable this test
> for the stable branches.
> 
> An alternative would be to fix the build process, and optionally use the
> bundled library in case it is not installed on the system. This is what
> we normally would have done, but it seems to have been broken in the
> case of FreeBSD + argp-standalone.

Based on the above reasoning, I would suggest that we do not backport
this to the release branches, and disable the FreeBSD job on them, and
if possible enable them for the next release (5).

Objections?

> 
> Niels
> 
> 
>> Finally, thanks for checking as the patch is not a simple bug-fix backport.
>>
>>>
>>> Niels
>>>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Coverity covscan for 2018-08-01-f354af3a (master branch)

2018-08-01 Thread staticanalysis


GlusterFS Coverity covscan results for the master branch are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-08-01-f354af3a/

Coverity covscan results for other active branches are also available at
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] tests/bugs/distribute/bug-1122443.t - spurious failure

2018-08-01 Thread Krutika Dhananjay
Same here - https://build.gluster.org/job/centos7-regression/2024/console

-Krutika

On Sun, Jul 29, 2018 at 1:53 PM, Atin Mukherjee  wrote:

> tests/bugs/distribute/bug-1122443.t fails on my setup (3 out of 5 times)
> running with the master branch. To my knowledge I've not seen this test
> failing earlier, so it looks like some recent change has caused it. One such
> instance is https://build.gluster.org/job/centos7-regression/1955/ .
>
> Request the component owners to take a look at it.
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel