Re: [Gluster-users] Slow write times to gluster disk

2018-06-20 Thread Raghavendra Gowdappa
On Thu, Jun 21, 2018 at 10:24 AM, Raghavendra Gowdappa wrote: > For the case of writes to glusterfs mount, > > I saw in earlier conversations that there are too many lookups, but small > number of writes. Since writes cached in write-behind would invalidate > metadata cache, lookups won't be

Re: [Gluster-users] Slow write times to gluster disk

2018-06-20 Thread Raghavendra Gowdappa
Please note that these suggestions are for native fuse mount. On Thu, Jun 21, 2018 at 10:24 AM, Raghavendra Gowdappa wrote: > For the case of writes to glusterfs mount, > > I saw in earlier conversations that there are too many lookups, but small > number of writes. Since writes cached in

Re: [Gluster-users] Slow write times to gluster disk

2018-06-20 Thread Raghavendra Gowdappa
For the case of writes to glusterfs mount, I saw in earlier conversations that there are too many lookups but a small number of writes. Since writes cached in write-behind would invalidate the metadata cache, lookups won't be absorbed by md-cache. I am wondering what the results would look like if we turn
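The preview above is truncated, but the experiment being suggested appears to be running the workload with write-behind disabled, so that buffered writes stop invalidating md-cache. A minimal sketch of that test; the volume name `glustervol` is a placeholder, not from the thread:

```shell
# Hypothetical volume name; substitute your own.
VOL=glustervol

# Disable write-behind so buffered writes no longer invalidate cached metadata,
# letting md-cache absorb the repeated lookups.
gluster volume set $VOL performance.write-behind off

# Re-run the write workload and compare profile output, then restore the default.
gluster volume set $VOL performance.write-behind on
```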

Re: [Gluster-users] Need Help to get GEO Replication working - Error: Please check gsync config file. Unable to get statefile's name

2018-06-20 Thread Kotresh Hiremath Ravishankar
Hi Axel, It's the latest. Ok, please share the geo-replication master and slave logs. Master location: /var/log/glusterfs/geo-replication; slave location: /var/log/glusterfs/geo-replication-slaves. Thanks, Kotresh HR On Tue, Jun 19, 2018 at 2:54 PM, Axel Gruber wrote: > Hello > > im using in 2
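The logs Kotresh asks for can be bundled in one pass on each node. A sketch, assuming the standard log locations under /var/log/glusterfs/ (the mail's master path reads /var/log/gluster/, which is presumably a typo for /var/log/glusterfs/):

```shell
# On the geo-replication master node
tar czf geo-rep-master-logs.tar.gz /var/log/glusterfs/geo-replication

# On the slave node
tar czf geo-rep-slave-logs.tar.gz /var/log/glusterfs/geo-replication-slaves
```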

Re: [Gluster-users] Client un-mounting since upgrade to 3.12.9-1 version

2018-06-20 Thread Nithya Balachandran
Thank you. In the meantime, turning off parallel readdir should prevent the first crash. On 20 June 2018 at 21:42, mohammad kashif wrote: > Hi Nithya > > Thanks for the bug report. This new crash happened only once and only at > one client in the last 6 days. I will let you know if it happened
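The workaround Nithya suggests is a per-volume option toggle. A sketch, with `glustervol` as a placeholder volume name:

```shell
VOL=glustervol

# Disable parallel readdir to avoid the reported crash until a fix is released.
gluster volume set $VOL performance.parallel-readdir off

# Confirm the option took effect.
gluster volume get $VOL performance.parallel-readdir
```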

Re: [Gluster-users] Slow write times to gluster disk

2018-06-20 Thread Raghavendra Gowdappa
On Thu, Jun 21, 2018 at 8:32 AM, Raghavendra Gowdappa wrote: > For the case of reading from Glusterfs mount, read-ahead should help. > However, we have known issues with read-ahead[1][2]. To work around these, > can you try with, > > 1. Turn off performance.open-behind > #gluster volume set

Re: [Gluster-users] Slow write times to gluster disk

2018-06-20 Thread Raghavendra Gowdappa
For the case of reading from Glusterfs mount, read-ahead should help. However, we have known issues with read-ahead[1][2]. To work around these, can you try the following: 1. Turn off performance.open-behind #gluster volume set performance.open-behind off 2. Enable the group metadata-cache # gluster
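The two steps in the preview can be sketched as a pair of CLI commands; `glustervol` is a placeholder volume name:

```shell
VOL=glustervol

# 1. Turn off open-behind to work around the read-ahead interaction.
gluster volume set $VOL performance.open-behind off

# 2. Apply the metadata-cache option group, which enables the md-cache
#    related settings as a bundle.
gluster volume set $VOL group metadata-cache
```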

[Gluster-users] Slow write times to gluster disk

2018-06-20 Thread Pat Haley
Hi, We were recently revisiting our problems with the slowness of gluster writes (http://lists.gluster.org/pipermail/gluster-users/2017-April/030529.html). Specifically we were testing the suggestions in a recent post

Re: [Gluster-users] Client un-mounting since upgrade to 3.12.9-1 version

2018-06-20 Thread mohammad kashif
Hi Nithya Thanks for the bug report. This new crash happened only once and only at one client in the last 6 days. I will let you know if it happens again or becomes more frequent. Cheers Kashif On Wed, Jun 20, 2018 at 12:28 PM, Nithya Balachandran wrote: > Hi Mohammad, > > This is a different

[Gluster-users] Announcing GlusterFS release 4.1.0 (Long Term Maintenance)

2018-06-20 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of 4.1, our latest long-term supported release. This is a major release that includes a range of features enhancing management, performance, and monitoring, and providing newer functionality such as thin arbiters, cloud archival, and time consistency.

Re: [Gluster-users] Client un-mounting since upgrade to 3.12.9-1 version

2018-06-20 Thread Nithya Balachandran
Hi Mohammad, This is a different crash. How often does it happen? We have managed to reproduce the first crash you reported and a bug has been filed at [1]. We will work on a fix for this. Regards, Nithya [1] https://bugzilla.redhat.com/show_bug.cgi?id=1593199 On 18 June 2018 at 14:09,

Re: [Gluster-users] Geo-Replication memory leak on slave node

2018-06-20 Thread Mark Betham
Hi Kotresh, Many thanks for your prompt response. No need to apologise, any help you can provide is greatly appreciated. I look forward to receiving your update next week. Many thanks, Mark Betham On Wed, 20 Jun 2018 at 10:55, Kotresh Hiremath Ravishankar < khire...@redhat.com> wrote: > Hi

Re: [Gluster-users] Geo-Replication memory leak on slave node

2018-06-20 Thread Kotresh Hiremath Ravishankar
Hi Mark, Sorry, I was busy and could not take a serious look at the logs. I can update you on Monday. Thanks, Kotresh HR On Wed, Jun 20, 2018 at 12:32 PM, Mark Betham < mark.bet...@performancehorizon.com> wrote: > Hi Kotresh, > > I was wondering if you had made any progress with regards to the

Re: [Gluster-users] Geo-Replication memory leak on slave node

2018-06-20 Thread Mark Betham
Hi Kotresh, I was wondering if you had made any progress with regards to the issue I am currently experiencing with geo-replication. For info, the fault remains and effectively requires a daily restart of the geo-replication service to reclaim the used memory on the slave node. If you
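The daily restart Mark describes maps to the geo-replication session commands. A sketch; the master volume, slave host, and slave volume names are placeholders, not from the thread:

```shell
MASTER_VOL=mastervol
SLAVE=slavehost::slavevol

# Stop and restart the geo-replication session to release the leaked
# memory on the slave node until the underlying leak is fixed.
gluster volume geo-replication $MASTER_VOL $SLAVE stop
gluster volume geo-replication $MASTER_VOL $SLAVE start
```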