Re: [Gluster-users] Gluster 3.12.12: performance during heal and in general

2018-08-20 Thread Pranith Kumar Karampuri
On Tue, Aug 21, 2018 at 10:13 AM Pranith Kumar Karampuri <pkara...@redhat.com> wrote: > > > On Mon, Aug 20, 2018 at 3:20 PM Hu Bert wrote: > >> Regarding hardware the machines are identical. Intel Xeon E5-1650 v3 >> Hexa-Core; 64 GB DDR4 ECC; Dell PERC H330 8 Port SAS/SATA 12 GBit/s >> RAID

Re: [Gluster-users] Gluster 3.12.12: performance during heal and in general

2018-08-20 Thread Pranith Kumar Karampuri
On Mon, Aug 20, 2018 at 3:20 PM Hu Bert wrote: > Regarding hardware the machines are identical. Intel Xeon E5-1650 v3 > Hexa-Core; 64 GB DDR4 ECC; Dell PERC H330 8 Port SAS/SATA 12 GBit/s > RAID Controller; operating system running on a raid1, then 4 disks > (JBOD) as bricks. > > Ok, i ran perf

Re: [Gluster-users] Disconnected peers after reboot

2018-08-20 Thread Atin Mukherjee
On Mon, 20 Aug 2018 at 13:08, wrote: > Hi, > > To add to the problematic memory leak, I've been seeing another strange > behavior on the 3.12 servers. When I reboot a node, it seems like often > (but not always) the other nodes mark it as disconnected and won't > accept it back until I restart

Re: [Gluster-users] KVM lockups on Gluster 4.1.1

2018-08-20 Thread Amar Tumballi
On Mon, Aug 20, 2018 at 6:20 PM, Walter Deignan wrote: > I upgraded late last week to 4.1.2. Since then I've seen several posix > health checks fail and bricks drop offline but I'm not sure if that's > related or a different root issue. > > I haven't seen the issue described below re-occur on

Re: [Gluster-users] KVM lockups on Gluster 4.1.1

2018-08-20 Thread Walter Deignan
I upgraded late last week to 4.1.2. Since then I've seen several posix health checks fail and bricks drop offline but I'm not sure if that's related or a different root issue. I haven't seen the issue described below re-occur on 4.1.2 yet but it was intermittent to begin with so I'll probably
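
One way to confirm which bricks have dropped offline and to locate the posix health-check failures mentioned above; a minimal sketch, assuming the default brick log location and using VOLNAME as a placeholder volume name:

  gluster volume status VOLNAME detail                      # the Online field shows Y/N per brick
  grep -i "health-check" /var/log/glusterfs/bricks/*.log    # posix health-check failures are logged per brick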

Re: [Gluster-users] Gluster 3.12.12: performance during heal and in general

2018-08-20 Thread Hu Bert
Regarding hardware, the machines are identical: Intel Xeon E5-1650 v3 Hexa-Core; 64 GB DDR4 ECC; Dell PERC H330 8 Port SAS/SATA 12 GBit/s RAID Controller; operating system running on a raid1, then 4 disks (JBOD) as bricks. Ok, I ran perf for a few seconds. perf record
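
The perf invocation is cut off above; a minimal sketch of sampling a brick process for a few seconds (the sampling rate, duration, and process selection are assumptions, not the options actually used in the thread):

  perf record -F 99 -g -p $(pidof glusterfsd | tr ' ' ',') -- sleep 10   # sample all brick processes for 10s
  perf report --stdio | head -50                                         # show the hottest call stacks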

Re: [Gluster-users] KVM lockups on Gluster 4.1.1

2018-08-20 Thread Amar Tumballi
On Wed, Aug 15, 2018 at 2:54 AM, Walter Deignan wrote: > I am using gluster to host KVM/QEMU images. I am seeing an intermittent > issue where access to an image will hang. I have to do a lazy dismount of > the gluster volume in order to break the lock and then reset the impacted > virtual
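
For context, a lazy dismount and remount of a FUSE-mounted gluster volume typically looks like the following; the mount point and volume name are placeholders, not taken from the thread:

  umount -l /mnt/vmstore                             # lazy unmount: detach now, clean up once no longer busy
  mount -t glusterfs server1:/VOLNAME /mnt/vmstore   # remount the volume before resetting the affected guests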

Re: [Gluster-users] Gluster 3.12.12: performance during heal and in general

2018-08-20 Thread Hu Bert
gluster volume heal shared info | grep -i number Number of entries: 0 Number of entries: 0 Number of entries: 0 Number of entries: 0 Number of entries: 0 Number of entries: 0 Number of entries: 0 Number of entries: 0 Number of entries: 0 Number of entries: 0 Number of entries: 0 Number of entries:

Re: [Gluster-users] Gluster 3.12.12: performance during heal and in general

2018-08-20 Thread Pranith Kumar Karampuri
There are a lot of Lookup operations in the system, but I am not able to find out why. Could you check the output of "# gluster volume heal info | grep -i number"? It should print all zeros. On Fri, Aug 17, 2018 at 1:49 PM Hu Bert wrote: > I don't know what you exactly mean with workload, but the
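
Spelled out, the requested check looks like this (the volume name "shared" is taken from the follow-up message; on a healthy volume every line is zero):

  gluster volume heal shared info | grep -i number
  Number of entries: 0
  Number of entries: 0
  ...

One "Number of entries" line is printed per brick; any non-zero value means entries are still pending heal.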

Re: [Gluster-users] KVM lockups on Gluster 4.1.1

2018-08-20 Thread Amar Tumballi
Thanks for this report! We will look into this. This is something new we are seeing, and we are not aware of an RCA yet! -Amar On Mon, Aug 20, 2018 at 1:08 PM, Claus Jeppesen wrote: > I think I have seen this also on our CentOS 7.5 systems using GlusterFS > 4.1.1 (*) - has an upgrade to 4.1.2 helped

Re: [Gluster-users] KVM lockups on Gluster 4.1.1

2018-08-20 Thread Claus Jeppesen
I think I have seen this also on our CentOS 7.5 systems using GlusterFS 4.1.1 (*) - has an upgrade to 4.1.2 helped out? I'm trying this now. Thanx, Claus. (*) libvirt/qemu log: [2018-08-19 16:45:54.275830] E [MSGID: 114031] [client-rpc-fops_v2.c:1352:client4_0_finodelk_cbk]
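
To check whether other hosts hit the same finodelk errors, the libvirt/qemu guest logs can be searched directly; the path below is the libvirt default and an assumption here, not quoted from the thread:

  grep -l "client4_0_finodelk_cbk" /var/log/libvirt/qemu/*.log   # list guest logs containing the error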

[Gluster-users] Disconnected peers after reboot

2018-08-20 Thread lemonnierk
Hi, To add to the problematic memory leak, I've been seeing another strange behavior on the 3.12 servers. When I reboot a node, it seems like often (but not always) the other nodes mark it as disconnected and won't accept it back until I restart them. Sometimes I need to restart the glusterd on
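
A minimal sketch of the recovery sequence described above; it assumes a systemd-based distribution and is run on the surviving nodes:

  gluster peer status            # the rebooted peer shows up as Disconnected
  systemctl restart glusterd     # on each node that refuses to accept the peer back
  gluster peer status            # the peer should return as "Peer in Cluster (Connected)"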