Re: [Gluster-users] Should I be using gluster 3 or gluster 4?

2018-11-05 Thread Nithya Balachandran
On 6 November 2018 at 12:24, Jeevan Patnaik wrote:

> Hi Vlad,
>
> I'm still confused about gluster releases. :(
> Is 3.13 an official gluster release? It's not mentioned in
> www.gluster.org/release-schedule

3.13 is EOL. It was a short term release.

> Which is more stable 3.13.2 or 3.12.6 or

Re: [Gluster-users] Should I be using gluster 3 or gluster 4?

2018-11-05 Thread Jeevan Patnaik
Hi, We are doing a production deployment and I have tested 3.12.4 and found it okay to proceed. But we have decided to use the tiering feature at the last minute, and something is not right or missing with the tiering feature in 3.12.4, hence I'm thinking of using higher versions which may have fixed
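
For context, tiering in the 3.x series is managed with the "gluster volume tier" commands. A minimal sketch of attaching and checking a hot tier (volume name and brick paths are placeholders, syntax as documented for 3.12):

    # Attach faster bricks as a hot tier to an existing volume
    gluster volume tier myvol attach replica 2 server1:/ssd/brick1 server2:/ssd/brick1

    # Watch promotion/demotion activity
    gluster volume tier myvol status

    # Detach the tier again: start, then commit once data is demoted
    gluster volume tier myvol detach start
    gluster volume tier myvol detach commit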

Re: [Gluster-users] Should I be using gluster 3 or gluster 4?

2018-11-05 Thread Jeevan Patnaik
Hi Vlad, I'm still confused about gluster releases. :( Is 3.13 an official gluster release? It's not mentioned in www.gluster.org/release-schedule Which is more stable 3.13.2 or 3.12.6 or 4.1.5? 3.13.2 was released in January and there have been no minor releases since then, so I expect it's a stable release. or

Re: [Gluster-users] posix_handle_hard [file exists]

2018-11-05 Thread Krutika Dhananjay
The rename log messages are informational and can be ignored.

-Krutika

On Mon, Nov 5, 2018 at 8:30 PM Jorick Astrego wrote:

> I see a lot of DHT warnings in
> rhev-data-center-mnt-glusterSD-192.168.99.14:_hdd2.log:
>
> [2018-10-21 01:24:01.413126] I [glusterfsd-mgmt.c:1600:mgmt_getspec_cbk]

Re: [Gluster-users] posix_handle_hard [file exists]

2018-11-05 Thread Krutika Dhananjay
I think this is because the way preallocation works is by sending a lot of writes. In newer versions of oVirt, this is changed to use fallocate for faster allocation. Adding Sahina, Gobinda to help with the oVirt version number that has this fix.

-Krutika

On Mon, Nov 5, 2018 at 8:23 PM Jorick
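
To illustrate the difference being described, compare write-based preallocation with fallocate(2)-based preallocation; a minimal sketch with placeholder file names:

    # Write-based preallocation: every block is actually written,
    # which is slow on an HDD pool (here ~1 TiB of zeros)
    dd if=/dev/zero of=disk.img bs=1M count=1048576

    # fallocate-based preallocation: blocks are reserved without
    # being written, so it returns almost immediately
    fallocate -l 1TiB disk.img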

Re: [Gluster-users] On making ctime generator enabled by default in stack

2018-11-05 Thread Raghavendra Gowdappa
On Tue, Nov 6, 2018 at 9:58 AM Vijay Bellur wrote:

>
> On Mon, Nov 5, 2018 at 7:56 PM Raghavendra Gowdappa wrote:
>
>> All,
>>
>> There is a patch [1] from Kotresh, which makes the ctime generator the
>> default in the stack. Currently the ctime generator is recommended only
>> for usecases where

Re: [Gluster-users] On making ctime generator enabled by default in stack

2018-11-05 Thread Vijay Bellur
On Mon, Nov 5, 2018 at 7:56 PM Raghavendra Gowdappa wrote:

> All,
>
> There is a patch [1] from Kotresh, which makes the ctime generator the
> default in the stack. Currently the ctime generator is recommended only
> for usecases where ctime is important (like for Elasticsearch). However,
> a reliable

[Gluster-users] On making ctime generator enabled by default in stack

2018-11-05 Thread Raghavendra Gowdappa
All, There is a patch [1] from Kotresh, which makes the ctime generator the default in the stack. Currently the ctime generator is recommended only for usecases where ctime is important (like for Elasticsearch). However, a reliable (c)(m)time can fix many consistency issues within the glusterfs stack too.
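
Until such a patch makes it the default, the ctime generator is enabled per volume. A sketch of turning it on (volume name is a placeholder, and the option names are as I recall them from the 4.1 docs, so verify against your version):

    # Enable consistent (c/m)time attributes on a volume
    gluster volume set myvol features.utime on
    gluster volume set myvol features.ctime on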

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-05 Thread mabi
Ravi, I did not yet set the cluster.data-self-heal parameter to off because in the meantime node2 of my cluster had a memory shortage (this node has 32 GB of RAM), so I had to reboot it. After that reboot all locks got released and there are no more files left to heal on that
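
For reference, the option under discussion and a way to check the outstanding heal count; a minimal sketch with a placeholder volume name:

    # Disable client-side data self-heal (the parameter being discussed)
    gluster volume set myvol cluster.data-self-heal off

    # List which entries still need healing, or just the counts
    gluster volume heal myvol info
    gluster volume heal myvol info summary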

Re: [Gluster-users] posix_handle_hard [file exists]

2018-11-05 Thread Jorick Astrego
I see a lot of DHT warnings in rhev-data-center-mnt-glusterSD-192.168.99.14:_hdd2.log:

[2018-10-21 01:24:01.413126] I [glusterfsd-mgmt.c:1600:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2018-11-01 12:48:32.537621] I [MSGID: 109066]

Re: [Gluster-users] posix_handle_hard [file exists]

2018-11-05 Thread Jorick Astrego
Hi Krutika, Thanks for the info. After a long time the preallocated disk has been created properly. It was a 1TB disk on an HDD pool, so a bit of delay was expected, but it took longer than expected. The disk had no other virtual disks on it. Is there something I can tweak or check for

Re: [Gluster-users] [Gluster-devel] Consolidating Feature Requests in github

2018-11-05 Thread Shyam Ranganathan
On 11/05/2018 08:29 AM, Vijay Bellur wrote:

> Hi All,
>
> I am triaging the open RFEs in bugzilla [1]. Since our new(er) workflow
> involves managing RFEs as github issues, I am considering migrating
> relevant open RFEs from bugzilla to github. Once migrated, an RFE in
> bugzilla would be closed

[Gluster-users] Consolidating Feature Requests in github

2018-11-05 Thread Vijay Bellur
Hi All, I am triaging the open RFEs in bugzilla [1]. Since our new(er) workflow involves managing RFEs as github issues, I am considering migrating relevant open RFEs from bugzilla to github. Once migrated, an RFE in bugzilla would be closed with an appropriate comment. I can also update the
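
A migration like this is usually scripted against the GitHub issues API. A minimal sketch of the github side (the token, issue title, and body are placeholders; the bugzilla query that feeds it is omitted):

    # Create a github issue for one migrated RFE
    curl -s -X POST \
      -H "Authorization: token $GITHUB_TOKEN" \
      -H "Accept: application/vnd.github.v3+json" \
      https://api.github.com/repos/gluster/glusterfs/issues \
      -d '{"title": "RFE: <summary from bugzilla>",
           "body": "Migrated from bugzilla; original report: <bugzilla URL>",
           "labels": ["enhancement"]}'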