On Thu, Jun 21, 2018 at 10:24 AM, Raghavendra Gowdappa wrote:
> For the case of writes to glusterfs mount,
>
> I saw in earlier conversations that there are too many lookups, but a
> small number of writes. Since writes cached in write-behind would
> invalidate the metadata cache, lookups won't be […]
Please note that these suggestions are for the native FUSE mount.
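For context, the native FUSE mount is the standard glusterfs client mount;
a minimal example, where "server1" and "testvol" are placeholder names, not
from the thread:

  # Mount the volume with the native FUSE (glusterfs) client.
  mkdir -p /mnt/glusterfs
  mount -t glusterfs server1:/testvol /mnt/glusterfs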
For the case of writes to glusterfs mount,
I saw in earlier conversations that there are too many lookups, but a small
number of writes. Since writes cached in write-behind would invalidate the
metadata cache, lookups won't be absorbed by md-cache. I am wondering what
the results would look like if we turn […]
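Assuming the truncated suggestion here is to disable write-behind (the
option whose caching invalidates md-cache, per the reasoning above), a
minimal sketch would be:

  # Hypothetical: disable write-behind so that cached writes no longer
  # invalidate md-cache; "testvol" is a placeholder volume name.
  gluster volume set testvol performance.write-behind off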
Hi Axel,
It's the latest. OK, please share the geo-replication master and slave logs.
Master location: /var/log/glusterfs/geo-replication
Slave location: /var/log/glusterfs/geo-replication-slaves
Thanks,
Kotresh HR
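If useful, one way to bundle those logs for sharing (run on the master and
slave nodes respectively; the archive names are arbitrary):

  # Collect the geo-replication logs from the paths listed above.
  tar czf geo-rep-master-logs.tar.gz /var/log/glusterfs/geo-replication
  tar czf geo-rep-slave-logs.tar.gz /var/log/glusterfs/geo-replication-slaves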
On Tue, Jun 19, 2018 at 2:54 PM, Axel Gruber wrote:
> Hello
>
> I'm using in 2 […]
Thank you. In the meantime, turning off parallel readdir should prevent the
first crash.
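A sketch of that workaround, assuming the standard volume option and a
placeholder volume name:

  # Turn off parallel readdir on the volume; "testvol" is a placeholder.
  gluster volume set testvol performance.parallel-readdir off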
On 20 June 2018 at 21:42, mohammad kashif wrote:
> Hi Nithya
>
> Thanks for the bug report. This new crash happened only once, and only at
> one client, in the last 6 days. I will let you know if it happens […]
For the case of reading from a Glusterfs mount, read-ahead should help.
However, there are known issues with read-ahead [1][2]. To work around
these, can you try the following (full commands are sketched below):
1. Turn off performance.open-behind
   # gluster volume set <volname> performance.open-behind off
2. Enable the group metadata-cache profile
   # gluster […]
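Spelled out, assuming the standard gluster CLI and a placeholder volume
name "testvol", the two steps would be:

  # 1. Turn off open-behind.
  gluster volume set testvol performance.open-behind off
  # 2. Apply the metadata-cache group profile, which enables the md-cache
  #    related options as a bundle.
  gluster volume set testvol group metadata-cache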
Hi,
We were recently revisiting our problems with the slowness of gluster writes
(http://lists.gluster.org/pipermail/gluster-users/2017-April/030529.html).
Specifically, we were testing the suggestions in a recent post […]
Hi Nithya,
Thanks for the bug report. This new crash happened only once, and only at
one client, in the last 6 days. I will let you know if it happens again or
more frequently.
Cheers,
Kashif
On Wed, Jun 20, 2018 at 12:28 PM, Nithya Balachandran wrote:
> Hi Mohammad,
>
> This is a different […]
The Gluster community is pleased to announce the release of 4.1, our
latest long-term supported release.
This is a major release that includes a range of features that enhance
management, performance, and monitoring, and that provide newer
functionality such as thin arbiters, cloud archival, and time consistency.
Hi Mohammad,
This is a different crash. How often does it happen?
We have managed to reproduce the first crash you reported, and a bug has
been filed at [1]. We will work on a fix for this.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1593199
On 18 June 2018 at 14:09, […]
Hi Kotresh,
Many thanks for your prompt response. No need to apologise; any help you
can provide is greatly appreciated.
I look forward to receiving your update next week.
Many thanks,
Mark Betham
On Wed, 20 Jun 2018 at 10:55, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
> Hi […]
Hi Mark,
Sorry, I was busy and could not take a serious look at the logs. I can
update you on Monday.
Thanks,
Kotresh HR
On Wed, Jun 20, 2018 at 12:32 PM, Mark Betham <
mark.bet...@performancehorizon.com> wrote:
> Hi Kotresh,
>
> I was wondering if you had made any progress with regard to the […]
Hi Kotresh,
I was wondering if you had made any progress with regard to the issue I am
currently experiencing with geo-replication.
For info, the fault remains and effectively requires a restart of the
geo-replication service on a daily basis to reclaim the used memory on the
slave node.
If you […]
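For reference, a daily restart like the one described could be done with
the geo-replication CLI; a sketch with placeholder master volume, slave
host, and slave volume names:

  # Restart the geo-replication session to release memory on the slave.
  # "mastervol", "slavehost", and "slavevol" are placeholders.
  gluster volume geo-replication mastervol slavehost::slavevol stop
  gluster volume geo-replication mastervol slavehost::slavevol start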