Re: [Gluster-users] Geo-replication stops after 4-5 hours

2018-08-01 Thread Kotresh Hiremath Ravishankar
Hi Marcus,

What's the rsync version being used?
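For reference, a minimal way to check this on each master node (nothing gluster-specific, just the standard rsync/rpm commands):

    rsync --version | head -1   # prints the rsync version string
    rpm -q rsync                # package version on RPM-based systems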

Thanks,
Kotresh HR

On Thu, Aug 2, 2018 at 1:48 AM, Marcus Pedersén wrote:

> Hi all!
>
> I upgraded from 3.12.9 to 4.1.1 and had problems with geo-replication.
>
> With help from the list with some sym links and so on (handled in another
> thread)
>
> I got the geo-replication running.
>
> It ran for 4-5 hours and then stopped, I stopped and started
> geo-replication and it ran for another 4-5 hours.
>
> 4.1.2 was released and I updated, hoping this would solve the problem.
>
> I still have the same problem, at start it runs for 4-5 hours and then it
> stops.
>
> After that nothing happens, I have waited for days but still
> nothing happens.
>
>
> I have looked through logs but can not find anything obvious.
>
>
> Status for geo-replication is active for the two same nodes all the time:
>
>
> MASTER VOL: urd-gds-volume   SLAVE USER: geouser
> SLAVE: geouser@urd-gds-geo-001::urd-gds-volume   (same for all rows below)
>
> MASTER NODE   MASTER BRICK        SLAVE NODE        STATUS    CRAWL STATUS    LAST_SYNCED           ENTRY   DATA    META   FAILURES   CHECKPOINT TIME       CHECKPOINT COMPLETED
> urd-gds-001   /urd-gds/gluster    urd-gds-geo-000   Active    History Crawl   2018-04-16 20:32:09   0       14205   0      0          2018-07-27 21:12:44   No
> urd-gds-002   /urd-gds/gluster    urd-gds-geo-002   Passive   N/A             N/A                   N/A     N/A     N/A    N/A        N/A                   N/A
> urd-gds-004   /urd-gds/gluster    urd-gds-geo-002   Passive   N/A             N/A                   N/A     N/A     N/A    N/A        N/A                   N/A
> urd-gds-003   /urd-gds/gluster    urd-gds-geo-000   Active    History Crawl   2018-05-01 20:58:14   285     4552    0      0          2018-07-27 21:12:44   No
> urd-gds-000   /urd-gds/gluster1   urd-gds-geo-001   Passive   N/A             N/A                   N/A     N/A     N/A    N/A        N/A                   N/A
> urd-gds-000   /urd-gds/gluster2   urd-gds-geo-001   Passive   N/A             N/A                   N/A     N/A     N/A    N/A        N/A                   N/A
>
> (CHECKPOINT COMPLETION TIME is N/A for all rows.)
>
>
> Master cluster is Distribute-Replicate
>
> 2 x (2 + 1)
>
> Used space 30TB
>
>
> Slave cluster is Replicate
>
> 1 x (2 + 1)
>
> Used space 9TB
>
>
> Parts from gsyncd.logs are enclosed.
>
>
> Thanks a lot!
>
>
> Best regards
>
> Marcus Pedersén
>
>
>
>
> ---
> E-mailing SLU will result in SLU processing your personal data. For more
> information on how this is done, click here
> 
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
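For anyone following along, two things that usually help narrow such hangs down, using only the standard geo-replication CLI (the volume and slave values below are copied from the status output above; the log path is the usual default and may differ on your installation):

    gluster volume geo-replication urd-gds-volume \
        geouser@urd-gds-geo-001::urd-gds-volume status detail

    # per-session gsyncd logs on the master nodes
    ls -l /var/log/glusterfs/geo-replication/urd-gds-volume*/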



-- 
Thanks and Regards,
Kotresh H R
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] thin arbiter vs standard arbiter

2018-08-01 Thread Ravishankar N



On 08/02/2018 06:26 AM, W Kern wrote:



On 8/1/18 11:04 AM, Amar Tumballi wrote:
This recently added document talks about some of the technicalities 
of the feature:


https://docs.gluster.org/en/latest/Administrator%20Guide/Thin-Arbiter-Volumes/

Please go through and see if it answers your questions.

-Amar




Well, yes, that answers some of them. By skipping a lot more of the arbiter 
traffic, there may be some noticeable performance benefits, especially 
on an older 1G network.

At least until you have to deal with a failure situation.

Though the "would you use it on a VM, either now or when the code is 
more seasoned?" question is still there.


I'm willing to try it out on some non-critical VMs (cloud-native 
stuff, where I always spawn from a golden image), but if it is not 
ready for production, then I don't want to bother with it at the moment.

Hi WK,

There are a few patches [1] that are still undergoing review. It 
would be good to wait a while longer before trying it out. If you are 
interested in testing, I'll be happy to let you know once they get merged.


[1] https://review.gluster.org/#/c/20095/, 
https://review.gluster.org/#/c/20103/, https://review.gluster.org/#/c/20577/


Regards,
Ravi




-wk



On Wed, Aug 1, 2018 at 11:09 PM, wkmail wrote:


I see mentions of thin arbiter in the 4.x notes and I am intrigued.

As I understand it, the thin arbiter volume:

a) receives its data on an async basis (thus it can be on a
slower link). Thus gluster isn't waiting around to verify if it
actually got the data.

b) is only consulted in situations where Gluster needs that third
vote, otherwise it is not consulted.

c) Performance should therefore be better because Gluster is only
seriously talking to 2 nodes instead of 3 nodes (as in normal
arbiter or rep 3)

Am I correct?

If so, is thin arbiter ready for production or at least use on
non-critical workloads?

How safe is it for VM images (and/or VMs with sharding)?

How much faster is a thin arbiter setup over a normal arbiter, given
that the normal arbiter only really sees the metadata?

In a degraded situation (i.e. loss of one real node), would
having a thin arbiter on a slow link be problematic until
everything is healed and returned to normal?

Sincerely,

-wk

___
Gluster-users mailing list
Gluster-users@gluster.org 
https://lists.gluster.org/mailman/listinfo/gluster-users






--
Amar Tumballi (amarts)





___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] thin arbiter vs standard arbiter

2018-08-01 Thread W Kern



On 8/1/18 11:04 AM, Amar Tumballi wrote:
This recently added document talks about some of the technicalities of 
the feature:


https://docs.gluster.org/en/latest/Administrator%20Guide/Thin-Arbiter-Volumes/

Please go through and see if it answers your questions.

-Amar




Well, yes, that answers some of them. By skipping a lot more of the arbiter 
traffic, there may be some noticeable performance benefits, especially 
on an older 1G network.

At least until you have to deal with a failure situation.

Though the "would you use it on a VM, either now or when the code is 
more seasoned?" question is still there.


I'm willing to try it out on some non-critical VMs (cloud-native stuff, 
where I always spawn from a golden image), but if it is not ready for 
production, then I don't want to bother with it at the moment.


-wk



On Wed, Aug 1, 2018 at 11:09 PM, wkmail wrote:


I see mentions of thin arbiter in the 4.x notes and I am intrigued.

As I understand it, the thin arbiter volume:

a) receives its data on an async basis (thus it can be on a slower
link). Thus gluster isn't waiting around to verify if it actually
got the data.

b) is only consulted in situations where Gluster needs that third
vote, otherwise it is not consulted.

c) Performance should therefore be better because Gluster is only
seriously talking to 2 nodes instead of 3 nodes (as in normal
arbiter or rep 3)

Am I correct?

If so, is thin arbiter ready for production or at least use on
non-critical workloads?

How safe is it for VM images (and/or VMs with sharding)?

How much faster is a thin arbiter setup over a normal arbiter given
that the normal arbiter only really sees the metadata?

In a degraded situation (i.e. loss of one real node), would having
a thin arbiter on a slow link be problematic until everything is
healed and returned to normal?

Sincerely,

-wk

___
Gluster-users mailing list
Gluster-users@gluster.org 
https://lists.gluster.org/mailman/listinfo/gluster-users






--
Amar Tumballi (amarts)



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Memory leak with the libgfapi in 3.12 ?

2018-08-01 Thread Jim Kinney
Hmm. I just had to jump through lots of issues with a gluster 3.12.9
setup under oVirt. The mounts are stock fuse.glusterfs. The RAM usage
had been climbing, and I had to move VMs around, put hosts in
maintenance mode, do updates, and restart. When the VMs were moved back, the
memory usage dropped back to normal. The new gluster is 3.12.11,
still using fuse mounts in a replica 3 config. I'm blaming the fuse mount
process for the leak (with no data to back it up yet).
A different gluster install, also using fuse mounts, does not show the
memory consumption. It does not use virtualization at all, so the leak here
is likely related to kvm/qemu. On those systems, the fuse mounts
get killed by the OOM killer when computational memory use overloads
things. Different issue entirely.
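To actually get some data behind that, a rough sketch of what I would look at, using standard Linux tooling plus the glusterfs statedump mechanism (the dump directory mentioned below is the usual default and may differ on your build):

    # resident memory of the fuse client process(es), sampled over time
    ps -o pid,rss,cmd -C glusterfs

    # ask a fuse client for a statedump; it is written under /var/run/gluster
    kill -USR1 <pid-of-glusterfs-fuse-process>

Comparing the per-xlator memory sections of two statedumps taken a few hours apart should show where the growth is.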
On Wed, 2018-08-01 at 19:57 +0100, lemonni...@ulrar.net wrote:
> Hey,
> Is there by any chance a known bug about a memory leak in libgfapi
> in the latest 3.12 releases? I've migrated a lot of virtual machines
> from an old proxmox cluster to a new one, with a newer gluster
> (3.12.10), and ever since, the virtual machines have been eating more
> and more RAM without ever stopping. I have 8 GB machines occupying
> 40 GB of RAM, which they weren't doing on the old cluster.
> It could be a proxmox problem, maybe a leak in their qemu, but since
> no one seems to be reporting that problem I wonder if maybe the newer
> gluster has a leak; I believe libgfapi isn't used much.
> I tried looking at the bug tracker but I don't see anything obvious;
> the only leak I found seems to be for distributed volumes, but we only
> use replica mode.
> Is anyone aware of a way to tell whether libgfapi is responsible or not?
> Does it have any kind of reporting I could enable? Worst case, I could
> always boot a VM through the fuse mount instead of libgfapi, but that's
> not ideal; it would take a while to confirm.
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
-- 
James P. Kinney III

Every time you stop a school, you will have to build a jail. What you
gain at one end you lose at the other. It's like feeding a dog on his
own tail. It won't fatten the dog.
- Speech 11/23/1900 Mark Twain

http://heretothereideas.blogspot.com/
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Memory leak with the libgfapi in 3.12 ?

2018-08-01 Thread lemonnierk
Hey,

Is there by any chance a known bug about a memory leak in libgfapi
in the latest 3.12 releases?
I've migrated a lot of virtual machines from an old proxmox cluster to a
new one, with a newer gluster (3.12.10), and ever since, the virtual
machines have been eating more and more RAM without ever stopping.
I have 8 GB machines occupying 40 GB of RAM, which they weren't doing on
the old cluster.

It could be a proxmox problem, maybe a leak in their qemu, but since
no one seems to be reporting that problem I wonder if maybe the newer
gluster has a leak; I believe libgfapi isn't used much.
I tried looking at the bug tracker but I don't see anything obvious;
the only leak I found seems to be for distributed volumes, but we only
use replica mode.

Is anyone aware of a way to tell whether libgfapi is responsible or not?
Does it have any kind of reporting I could enable? Worst case, I could
always boot a VM through the fuse mount instead of libgfapi, but that's
not ideal; it would take a while to confirm.
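A statedump-based sketch of what such a check could look like (the "client" form of the command was added around glusterfs 3.10, if I remember correctly, and the dump directory mentioned is the usual default):

    # dump state, including memory accounting, of the brick processes
    gluster volume statedump <volname>

    # dump state of a gfapi client, e.g. the qemu process using the image
    gluster volume statedump <volname> client <hostname>:<qemu-pid>

    # dumps land under /var/run/gluster/; compare the per-xlator memory
    # sections of two dumps taken some hours apart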

-- 
PGP Fingerprint : 0x624E42C734DAC346


signature.asc
Description: Digital signature
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] thin arbiter vs standard arbiter

2018-08-01 Thread Amar Tumballi
This recently added document talks about some of the technicalities of the
feature:

https://docs.gluster.org/en/latest/Administrator%20Guide/Thin-Arbiter-Volumes/

Please go through and see if it answers your questions.
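For a quick picture, the doc describes creating such a volume roughly like this (a sketch only; check the doc for the exact syntax on your release, and note that in 4.1 this may still require glusterd2):

    gluster volume create <volname> replica 2 thin-arbiter 1 \
        <host1>:<brick1> <host2>:<brick2> <thin-arbiter-host>:<brick-path>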

-Amar


On Wed, Aug 1, 2018 at 11:09 PM, wkmail  wrote:

> I see mentions of thin arbiter in the 4.x notes and I am intrigued.
>
> As I understand it, the thin arbiter volume:
>
> a) receives its data on an async basis (thus it can be on a slower link).
> Thus gluster isn't waiting around to verify if it actually got the data.
>
> b) is only consulted in situations where Gluster needs that third vote,
> otherwise it is not consulted.
>
> c) Performance should therefore be better because Gluster is only
> seriously talking to 2 nodes instead of 3 nodes (as in normal arbiter or
> rep 3)
>
> Am I correct?
>
> If so, is thin arbiter ready for production or at least use on
> non-critical workloads?
>
> How safe is it for VM images (and/or VMs with sharding)?
>
> How much faster is a thin arbiter setup over a normal arbiter, given that the
> normal arbiter only really sees the metadata?
>
> In a degraded situation (i.e. loss of one real node), would having a thin
> arbiter on a slow link be problematic until everything is healed and
> returned to normal?
>
> Sincerely,
>
> -wk
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>


-- 
Amar Tumballi (amarts)
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] thin arbiter vs standard arbiter

2018-08-01 Thread wkmail

I see mentions of thin arbiter in the 4.x notes and I am intrigued.

As I understand it, the thin arbiter volume:

a) receives its data on an async basis (thus it can be on a slower 
link). Thus gluster isn't waiting around to verify if it actually got 
the data.


b) is only consulted in situations where Gluster needs that third vote, 
otherwise it is not consulted.


c) Performance should therefore be better because Gluster is only 
seriously talking to 2 nodes instead of 3 nodes (as in normal arbiter or 
rep 3)


Am I correct?

If so, is thin arbiter ready for production or at least use on 
non-critical workloads?


How safe is it for VM images (and/or VMs with sharding)?

How much faster is a thin arbiter setup over a normal arbiter, given that 
the normal arbiter only really sees the metadata?


In a degraded situation (i.e. loss of one real node), would having a 
thin arbiter on a slow link be problematic until everything is healed 
and returned to normal?


Sincerely,

-wk

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Increase redundancy on existing disperse volume

2018-08-01 Thread Benjamin Kingston
Please Ignore,

I see your messages, that is the information I'm looking for.

On Wed, Aug 1, 2018 at 9:10 AM Benjamin Kingston wrote:

> Hello, I accidentally sent this question from an email that isn't
> subscribed to the gluster-users list.
> I resent from my mailing list address, but I don't see any of your answers
> quoted here.
> Thanks for your time, I've adjusted the mail recipients to avoid further
> issues.
>
> -ben
>
>
> On Tue, Jul 31, 2018 at 8:02 PM Ashish Pandey  wrote:
>
>>
>>
>> I think I have replied to all the questions you have asked.
>> Let me know if you need any additional information.
>>
>> ---
>> Ashish
>> --
>> *From: *"Benjamin Kingston" 
>> *To: *"gluster-users" 
>> *Sent: *Tuesday, July 31, 2018 1:01:29 AM
>> *Subject: *[Gluster-users] Increase redundancy on existing disperse
>> volume
>>
>> I'm working to convert my 3x3 arbiter replicated volume into a disperse
>> volume, however I have to work with the existing disks, maybe adding
>> another 1 or 2 new disks if necessary. I'm hoping to destroy the bricks on
>> one of the replicated nodes and build it into a
>> I'm opting to host this volume on a set of controllers connected to a
>> common backplane, I don't need help on this config, just on the constraints
>> of the disperse volumes.
>>
>> I have some questions about the disperse functionality
>> 1. If I create a 1 redundancy volume in the beginning, after I add more
>> bricks, can I increase redundancy to 2 or 3
>> 2. If I create the original volume with 6TB bricks, am I really stuck
>> with 6TB bricks even if I add 2 or more 10TB bricks
>> 3. Is it required to extend a volume by the same number of bricks it was
>> created with? If the original volume is made with 3 bricks, do I have to
>> always add capacity in 3 brick increments?
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Increase redundancy on existing disperse volume

2018-08-01 Thread Benjamin Kingston
Hello, I accidentally sent this question from an email that isn't
subscribed to the gluster-users list.
I resent from my mailing list address, but I don't see any of your answers
quoted here.
Thanks for your time, I've adjusted the mail recipients to avoid further
issues.

-ben


On Tue, Jul 31, 2018 at 8:02 PM Ashish Pandey  wrote:

>
>
> I think I have replied to all the questions you have asked.
> Let me know if you need any additional information.
>
> ---
> Ashish
> --
> *From: *"Benjamin Kingston" 
> *To: *"gluster-users" 
> *Sent: *Tuesday, July 31, 2018 1:01:29 AM
> *Subject: *[Gluster-users] Increase redundancy on existing disperse volume
>
> I'm working to convert my 3x3 arbiter replicated volume into a disperse
> volume, however I have to work with the existing disks, maybe adding
> another 1 or 2 new disks if necessary. I'm hoping to destroy the bricks on
> one of the replicated nodes and build it into a
> I'm opting to host this volume on a set of controllers connected to a
> common backplane, I don't need help on this config, just on the constraints
> of the disperse volumes.
>
> I have some questions about the disperse functionality
> 1. If I create a 1 redundancy volume in the beginning, after I add more
> bricks, can I increase redundancy to 2 or 3
> 2. If I create the original volume with 6TB bricks, am I really stuck with
> 6TB bricks even if I add 2 or more 10TB bricks
> 3. Is it required to extend a volume by the same number of bricks it was
> created with? If the original volume is made with 3 bricks, do I have to
> always add capacity in 3 brick increments?
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
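For reference, the CLI shapes involved (a sketch only; whether redundancy can be raised after creation is exactly the open question above, and the host/brick names are placeholders):

    # create a dispersed volume: 3 bricks, tolerating the loss of any 1
    gluster volume create <volname> disperse 3 redundancy 1 \
        host1:/bricks/b1 host2:/bricks/b2 host3:/bricks/b3

    # expansion adds another full disperse set of the same size
    gluster volume add-brick <volname> \
        host4:/bricks/b4 host5:/bricks/b5 host6:/bricks/b6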
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Rebalance taking > 2 months

2018-08-01 Thread Nithya Balachandran
On 31 July 2018 at 22:17, Rusty Bower  wrote:

> Is it possible to pause the rebalance to get those numbers? I'm hesitant to
> stop the rebalance and have to redo the entire thing again.

I'm afraid not. Rebalance will start from the beginning if you do so.




> On Tue, Jul 31, 2018 at 11:40 AM, Nithya Balachandran wrote:
>
>>
>>
>> On 31 July 2018 at 19:44, Rusty Bower  wrote:
>>
>>> I'll figure out what hasn't been rebalanced yet and run the script.
>>>
>>> There's only a single client accessing this gluster volume, and while
>>> the rebalance is taking place, the I am only able to read/write to the
>>> volume at around 3MB/s. If I log onto one of the bricks, I can read/write
>>> to the physical volumes at speed greater than 100MB/s (which is what I
>>> would expect).
>>>
>>
>> What are the numbers when accessing the volume when rebalance is not
>> running?
>> Regards,
>> Nithya
>>
>>>
>>> Thanks!
>>> Rusty
>>>
>>> On Tue, Jul 31, 2018 at 3:28 AM, Nithya Balachandran <nbala...@redhat.com> wrote:
>>>
 Hi Rusty,

 A rebalance involves 2 steps:

1. Setting a new layout on a directory
2. Migrating any files inside that directory that hash to a
different subvol based on the new layout set in step 1.


 A few things to keep in mind :

- Any new content created on this volume will currently go to the
newly added brick.
- Having a more equitable file distribution is beneficial but you
might not need to do a complete rebalance to do this. You can run the
script on  just enough directories to free up space on your older 
 bricks.
This should be done on bricks which contains large files to speed this 
 up.

 Do the following on one of the server nodes:

- Create a tmp mount point and mount the volume using the rebalance
volfile
- mkdir /mnt/rebal
   - glusterfs -s localhost --volfile-id rebalance/data /mnt/rebal
- Select a directory in the volume which contains a lot of large
files and which has not been processed by the rebalance yet - the lower
down in the tree the better. Check the rebalance logs to figure out 
 which
dirs have not been processed yet.
   - cd /mnt/rebal/<dir>
   - for dir in `find . -type d`; do echo $dir | xargs -n1 -P10
   bash process_dir.sh; done
- You can run this for different values of <dir> and on
multiple server nodes in parallel as long as the directory trees for the
different <dir>s don't overlap.
- Do this for multiple directories until the disk space used
reduces on the older bricks.

 This is a very simple script. Let me know how it works - we can always
 tweak it for your particular data set.
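Putting those steps together, a rough end-to-end sketch (this assumes the volume is named "data" as in the mount command above, and that process_dir.sh is the script referred to earlier, which is not included in this digest; the loop is written with find piped into xargs, adjust to taste):

    # one-off mount using the rebalance volfile
    mkdir -p /mnt/rebal
    glusterfs -s localhost --volfile-id rebalance/data /mnt/rebal

    # pick a large directory that rebalance has not reached yet
    cd /mnt/rebal/<dir>
    find . -type d | xargs -n1 -P10 bash process_dir.sh

    # repeat for other non-overlapping directories, possibly on other server
    # nodes, until df on the old bricks shows space being freed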


 >and performance is basically garbage while it rebalances
 Can you provide more detail on this? What kind of effects are you
 seeing?
 How many clients access this volume?


 Regards,
 Nithya

 On 30 July 2018 at 22:18, Nithya Balachandran wrote:

> I have not documented this yet - I will send you the steps tomorrow.
>
> Regards,
> Nithya
>
> On 30 July 2018 at 20:23, Rusty Bower  wrote:
>
>> That would be awesome. Where can I find these?
>>
>> Rusty
>>
>> Sent from my iPhone
>>
>> On Jul 30, 2018, at 03:40, Nithya Balachandran wrote:
>>
>> Hi Rusty,
>>
>> Sorry for the delay getting back to you. I had a quick look at the
>> rebalance logs - it looks like the estimates are based on the time taken 
>> to
>> rebalance the smaller files.
>>
>> We do have a scripting option where we can use virtual xattrs to
>> trigger file migration from a mount point. That would speed things up.
>>
>>
>> Regards,
>> Nithya
>>
>> On 28 July 2018 at 07:11, Rusty Bower  wrote:
>>
>>> Just wanted to ping this to see if you guys had any thoughts, or
>>> other scripts I can run for this stuff. It's still predicting another 90
>>> days to rebalance this, and performance is basically garbage while it
>>> rebalances.
>>>
>>> Rusty
>>>
>>> On Mon, Jul 23, 2018 at 10:19 AM, Rusty Bower wrote:
>>>
 datanode03 is the newest brick

 the bricks had gotten pretty full, which I think might be part of
 the issue:
 - datanode01 /dev/sda1 51T   48T  3.3T  94%
 /mnt/data
 - datanode02 /dev/sda1 51T   48T  3.4T  94%
 /mnt/data
 - datanode03 /dev/md0 128T  4.6T  123T   4%
 /mnt/data

 each of the bricks are on a completely separate disk from the OS

 I'll shoot you the log files offline :)

 Thanks!
 Rusty

 On Mon, Jul 23, 2018 at 3:12 AM, 

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-01 Thread Hu Bert
Hello :-) Just wanted to give a short report...

>> It could be saturating in the day. But if enough self-heals are going on,
>> even in the night it should have been close to 100%.
>
> Lowest utilization was 70% overnight, but I'll check this
> evening/weekend. Also that 'stat...' is running.

At the moment 1.1 TB of 2.0 TB has been healed; disk utilization is still
between 100% (day) and 70% (night). So this will take another 10-14
days.
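For anyone watching a similar heal, two standard ways to track progress and disk load (gluster CLI plus the sysstat package; <volname> is a placeholder):

    # entries still pending heal, per brick
    gluster volume heal <volname> statistics heal-count

    # disk utilization of the brick devices, sampled every 5 seconds
    iostat -dx 5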

> What, in your opinion, would be better for performance?
>
> - Having 3 servers and RAID10 (with conventional disks)
> - Having 3 additional servers with 4 hdds (JBOD) each (distribute
> replicate, replica 3)
> - SSDs? (would be quite expensive to reach the storage amount we have
> at the moment)
>
> Just curious. It seems we'll have to adjust our setup during winter anyway :-)

Well, we'll definitely rethink our setup this autumn :-)
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] glusterd2 4.1 availability in centos7 repositories

2018-08-01 Thread Davide Obbi
Hi,

does anyone know why glusterd2 4.1 is not available in the main CentOS
repos?

http://mirror.centos.org/centos/7/storage/x86_64/gluster-4.1/

while it is available in the buildlogs?

https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.1/
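For what it is worth, a quick way to compare what the Storage SIG repo actually ships (assuming the SIG release package is named centos-release-gluster41, as with earlier releases):

    yum install centos-release-gluster41
    yum --showduplicates list glusterd2 glusterfs-server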

thanks
Davide
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users