Re: [Gluster-users] Increase redundancy on existing disperse volume

2018-07-30 Thread Ashish Pandey
Hi, 1. If I create a 1-redundancy volume in the beginning, after I add more bricks, can I increase redundancy to 2 or 3? No, you cannot change the redundancy level of an existing volume later. So, if you have created a 2+1 volume (2 data and 1 redundancy), you will have to stick to it.
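The point above is that the disperse geometry is fixed at `gluster volume create` time. A minimal sketch of what that implies, with hypothetical host and brick path names:

```shell
# Redundancy is chosen once, at creation; here a 4+2 (disperse 6,
# redundancy 2) volume -- these server/brick names are examples only:
gluster volume create demo-vol disperse 6 redundancy 2 \
    server{1..6}:/bricks/demo-vol/brick1
gluster volume start demo-vol

# Capacity can still grow later by adding bricks in multiples of the
# full (data + redundancy) set, but the 4+2 geometry itself is fixed:
gluster volume add-brick demo-vol \
    server{7..12}:/bricks/demo-vol/brick1
```

If you expect to need redundancy 2 or 3 eventually, the only path is to create a new volume with that geometry and migrate the data.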

[Gluster-users] Increase redundancy on existing disperse volume

2018-07-30 Thread Benjamin Kingston
I'm working to convert my 3x3 arbiter replicated volume into a disperse volume, however I have to work with the existing disks, maybe adding another 1 or 2 new disks if necessary. I'm hoping to destroy the bricks on one of the replicated nodes and build it into a I'm opting to host this volume on

[Gluster-users] Help diagnosing poor performance on mirrored pair

2018-07-30 Thread Arthur Pemberton
I have a pair of nodes configured with GlusterFS. I am trying to optimize the read performance of the setup. With the smallfile_cli.py test I'm getting 4MiB/s on the CREATE and 14MiB/s on the READ. I'm using the FUSE mount, mounting to the local GlusterFS server: localhost:/gv_wordpress_std_var
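For reference, a hedged sketch of how such CREATE/READ numbers are typically produced with the smallfile benchmark; the mount path and parameter values here are assumptions, and flag names should be checked against your copy of the tool:

```shell
# Run the benchmark against a scratch directory on the FUSE mount
# (path and sizes are illustrative, not the poster's actual settings):
python smallfile_cli.py --top /mnt/gv_wordpress_std_var/smf \
    --operation create --threads 8 --file-size 64 --files 4096

# Then measure reads of the same file population:
python smallfile_cli.py --top /mnt/gv_wordpress_std_var/smf \
    --operation read --threads 8 --file-size 64 --files 4096
```

Comparing the same run against a plain local filesystem directory helps separate Gluster/FUSE overhead from disk limits.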

[Gluster-users] Announcing Gluster release 4.1.2 (Long Term Maintenance)

2018-07-30 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster 4.1.2 (packages available at [1]). Release notes for the release can be found at [2]. Major changes, features and limitations addressed in this release: - Release 4.1.0 notes incorrectly reported that all python code in Gluster

Re: [Gluster-users] Rebalance taking > 2 months

2018-07-30 Thread Nithya Balachandran
I have not documented this yet - I will send you the steps tomorrow. Regards, Nithya On 30 July 2018 at 20:23, Rusty Bower wrote: > That would be awesome. Where can I find these? > > Rusty > > Sent from my iPhone > > On Jul 30, 2018, at 03:40, Nithya Balachandran > wrote: > > Hi Rusty, > >

Re: [Gluster-users] Questions on adding brick to gluster

2018-07-30 Thread Nithya Balachandran
Hi, On 30 July 2018 at 00:45, Pat Haley wrote: > > Hi All, > > We are adding a new brick (91TB) to our existing gluster volume (328 TB). > The new brick is on a new physical server and we want to make sure that we > are doing this correctly (the existing volume had 2 bricks on a single >
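The usual flow for adding a brick on a new server, sketched below with hypothetical volume and path names (adjust replica/disperse counts to match the existing volume's layout):

```shell
# Attach the new 91TB brick to the existing volume:
gluster volume add-brick myvol newserver:/bricks/myvol/brick1

# Without a rebalance, only newly created files land on the new brick;
# start one to spread existing data across all bricks:
gluster volume rebalance myvol start

# Monitor progress until status reports completed:
gluster volume rebalance myvol status
```

On a volume of this size the rebalance itself can take a long time, which is the subject of the "Rebalance taking > 2 months" thread above.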

Re: [Gluster-users] Rebalance taking > 2 months

2018-07-30 Thread Rusty Bower
That would be awesome. Where can I find these? Rusty Sent from my iPhone > On Jul 30, 2018, at 03:40, Nithya Balachandran wrote: > > Hi Rusty, > > Sorry for the delay getting back to you. I had a quick look at the rebalance > logs - it looks like the estimates are based on the time taken to

[Gluster-users] geo-replication failure.

2018-07-30 Thread Alvin Starr
The brick contains about 8T of live data using up about 61M inodes, so we have LOTS of directories with small files. It took something like 45 days for the systems to sync the first time, and I believe it completed, but some time over the last month or so it has failed. Both sides are

Re: [Gluster-users] glustereventsd not being stopped by systemd script

2018-07-30 Thread mabi
Hi Aravinda, Thanks for the info, somehow I wasn't aware of this new service. Now it's clear and I updated my documentation. Best regards, M. ‐‐‐ Original Message ‐‐‐ On July 30, 2018 5:59 AM, Aravinda Vishwanathapura Krishna Murthy wrote: > On Mon, Jul 30, 2018 at 1:03 AM mabi

Re: [Gluster-users] Rebalance taking > 2 months

2018-07-30 Thread Nithya Balachandran
Hi Rusty, Sorry for the delay getting back to you. I had a quick look at the rebalance logs - it looks like the estimates are based on the time taken to rebalance the smaller files. We do have a scripting option where we can use virtual xattrs to trigger file migration from a mount point. That
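The "virtual xattrs" approach mentioned above is usually scripted roughly as follows. This is a heavily hedged sketch: the exact xattr name and value vary across Gluster versions, so treat the name below as an assumption and confirm it (and test on throwaway data) before running it against a live volume:

```shell
# From a FUSE mount point, setting DHT's migration xattr on a file
# asks the client to move it to its correct brick. The xattr name
# here is an assumption -- verify it for your Gluster version first.
find /mnt/myvol -type f \
    -exec setfattr -n trusted.distribute.migrate-data -v force {} \;
```

Driving migration per-file from a mount this way lets you target the large files first, instead of waiting for the server-side rebalance to reach them.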