[Gluster-devel] Unplanned Jenkins restart this morning

2017-11-08 Thread Nigel Babu
Hello folks,

I had to do a quick Jenkins upgrade and restart this morning for an urgent
security fix. A few of our periodic jobs were cancelled, I'll re-trigger
them now.

-- 
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Gluster Summit Discussion: Time taken for regression tests

2017-11-08 Thread Xavi Hernandez
One thing we could do, at least for the tests I know well, is simply to
remove some of them.

EC currently runs the same tests on multiple volume configurations (2+1,
3+1, 4+1, 3+2, 4+2, 4+3 and 8+4). I think we could reduce this to two common
configurations (2+1 and 4+2) and one or two special configurations (3+1
and/or 3+2). This would drop the biggest configurations, which take most of
the time, while still keeping the basic functionality tested.
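For a back-of-the-envelope view of why this helps, here is a small sketch (the brick-count-as-cost proxy is my own simplification; the config lists come from the paragraph above):

```python
# Back-of-the-envelope sketch (not gluster code): each EC config is
# (data, redundancy) bricks, and every brick is a separate brick process
# to start and clean up, so total bricks roughly tracks the per-test
# setup cost of the matrix.
full = [(2, 1), (3, 1), (4, 1), (3, 2), (4, 2), (4, 3), (8, 4)]
reduced = [(2, 1), (4, 2), (3, 2)]  # two common + one special, per the proposal

def total_bricks(configs):
    return sum(data + redundancy for data, redundancy in configs)

print(total_bricks(full), "->", total_bricks(reduced))  # 42 -> 14
```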

Xavi

On 3 November 2017 at 17:50, Amar Tumballi  wrote:

> All,
>
> While we discussed many other things, we also discussed reducing the
> time taken by the regression jobs. As it stands now, a single run takes
> around 5 hours 40 minutes to complete.
>
> There were many suggestions:
>
>
>- Run them in parallel (each .t test is independent of the others).
>- Revisit the tests that take a long time (20 tests currently account
>for almost 6000 seconds).
>- See if we can run the tests in docker (though the machines we have
>only have 2 cores, so there may not be much gain).
>
>
> There are other suggestions as well:
>
>
>- Spend some effort to find repeated steps, and merge tests where
>possible.
>   - Most of the time is spent starting the processes and cleaning
>   up.
>   - Most tests run a similar volume create command (depending on the
>   volume type) and then a few different types of I/O.
>   - Try to see if these can be merged.
>   - Most of the bug-fix .t files belong to this category too.
>- Classify the tests by a few non-overlapping volume types, and use the
>changeset in the patch (based on the files changed) to decide
>which groups to run.
>   - For example, you can't have replicate and disperse volume types
>   together.
>
>
> 
>
> More ideas and suggestions welcome.
>
>
> Regards,
> Amar
>
>
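The classification idea above can be sketched roughly as follows; the path prefixes and group names are illustrative stand-ins, not a real mapping of the glusterfs source tree to regression test groups:

```python
# Sketch of the "classify by changeset" idea: map changed source paths to
# non-overlapping test groups, and run only the groups a patch can affect.
# Prefixes and group names here are illustrative, not the real mapping.
GROUPS = {
    "xlators/cluster/afr/": "replicate",
    "xlators/cluster/ec/": "disperse",
    "xlators/cluster/dht/": "distribute",
}

def groups_to_run(changed_files):
    """Pick the test groups a patch can affect; unknown paths run everything."""
    hit = {group for prefix, group in GROUPS.items()
           for path in changed_files if path.startswith(prefix)}
    return hit or {"all"}  # core/common changes still run the full suite

print(groups_to_run(["xlators/cluster/ec/src/ec.c"]))   # {'disperse'}
print(groups_to_run(["libglusterfs/src/mem-pool.c"]))   # {'all'}
```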

Re: [Gluster-devel] Change #18681 has broken build on master

2017-11-08 Thread Nigel Babu
I can try to explain what happened. Here's an example git tree, where each
letter represents a commit.

A -> B -> C -> D -> E -> F (F is the HEAD of master. green builds)

Change X branched off at B
A -> B -> X (green builds)

Change Y branched off at D
A -> B -> C -> D -> Y (green builds)

Now suppose changes X and Y do not work together; for instance, change X
introduced a new parameter for a function that change Y calls. They do not
textually conflict with each other, so both can merge. Change Y lands first.

So history now looks like this:

A -> B -> C -> D -> E -> F -> Y (green builds)

Now change X lands:

A -> B -> C -> D -> E -> F -> Y -> X (red builds)

The build is red because change Y calls a function whose signature was
changed by change X. If this doesn't make sense, please have a look at this:
https://docs.openstack.org/infra/zuul/user/gating.html

Using a gating system is the most likely solution to our problem. Right
now, though, adding a gating system without first reducing how much time our
tests take would be pointless.
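A minimal sketch of the speculative-gating behaviour described above (this is a toy model, not Zuul's API; `build_passes` stands in for a real CI run):

```python
# Toy model of speculative gating (in the spirit of Zuul): each queued
# change is tested against master plus everything queued ahead of it, so
# the exact future state of the branch is validated before merging.

def build_passes(history):
    # Pretend CI: the build breaks if X and Y are both present,
    # mirroring the signature-change conflict described above.
    return not {"X", "Y"} <= set(history)

def gate(master, queue):
    """Land each queued change only if master + predecessors + it builds."""
    merged = list(master)
    for change in queue:
        candidate = merged + [change]
        if build_passes(candidate):
            merged = candidate  # lands; master stays green
        # else: change is rejected and sent back for rework
    return merged

master = ["A", "B", "C", "D", "E", "F"]
print(gate(master, ["Y", "X"]))  # X is rejected; master ends at ...F -> Y
```

With gating, the red state never reaches master: X fails the speculative build against F + Y and is bounced before it merges.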


-- 
nigelb

Re: [Gluster-devel] Health output like mdstat

2017-11-08 Thread Aravinda

On Tuesday 07 November 2017 03:22 PM, Gandalf Corvotempesta wrote:

I think it would be useful to add a cumulative cluster health output, like
mdstat for mdadm, so that with a simple command it would be possible
to see:

1) how many nodes are UP and DOWN (and which nodes are DOWN)
2) any background operations running (like healing or scrubbing), with
their progress
3) any split-brain files that won't be fixed automatically

Currently, we need to run multiple commands to see cluster health.

An even better version would be something similar to "mdadm --detail /dev/md0".


This looks like a good candidate for the "gluster-health-report" project. I
will add these items to the issue page. Thanks.


Details:
Project: https://github.com/aravindavk/gluster-health-report
Issue: https://github.com/gluster/glusterfs/issues/313
Details mail:
http://lists.gluster.org/pipermail/gluster-users/2017-October/032758.html
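As a rough illustration of what such a one-screen summary could look like, here is a toy sketch; the node and heal data are hard-coded stand-ins for whatever a real report would gather from each peer (this is not the gluster-health-report code):

```python
# Toy sketch of an mdstat-style one-screen cluster summary. The node and
# heal data below are hard-coded stand-ins for real collected state.
nodes = {"node1": "UP", "node2": "UP", "node3": "DOWN"}
heals = {"vol0": {"pending": 12, "split_brain": 1}}

up = [n for n, state in nodes.items() if state == "UP"]
down = [n for n, state in nodes.items() if state == "DOWN"]

print(f"nodes: {len(up)} up, {len(down)} down ({', '.join(down) or 'none'})")
for vol, h in heals.items():
    print(f"{vol}: {h['pending']} entries healing, "
          f"{h['split_brain']} in split-brain (needs manual fix)")
```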


--
regards
Aravinda VK
