On Mon, Aug 20, 2018 at 3:20 PM Hu Bert wrote:
> Regarding hardware the machines are identical: Intel Xeon E5-1650 v3
> Hexa-Core; 64 GB DDR4 ECC; Dell PERC H330 8-Port SAS/SATA 12 GBit/s
> RAID Controller; operating system running on a RAID 1, then 4 disks
> (JBOD) as bricks.
>
> Ok, I ran perf for a few seconds:
> perf record
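
For reference, a fuller perf workflow against a brick process could look
like this; the process selection and the 10-second window are
illustrative, not taken from the thread:

  # find the PID of a busy glusterfsd brick process
  pgrep -fa glusterfsd
  # sample its call stacks for about 10 seconds (replace <pid>)
  perf record -g -p <pid> -- sleep 10
  # summarize where the CPU time went
  perf report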
On Mon, 20 Aug 2018 at 13:08, wrote:
> Hi,
>
> To add to the problematic memory leak, I've been seeing another strange
> behavior on the 3.12 servers. When I reboot a node, it seems like often
> (but not always) the other nodes mark it as disconnected and won't
> accept it back until I restart them. Sometimes I need to restart the
> glusterd on
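
A typical way to check and recover from that state, assuming
systemd-based hosts, would be the following (commands illustrative, not
from the thread):

  # on a surviving node: is the rebooted peer stuck in "Disconnected"?
  gluster peer status
  # on the nodes that refuse the peer, restart the management daemon
  systemctl restart glusterd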
On Mon, Aug 20, 2018 at 6:20 PM, Walter Deignan wrote:
> I upgraded late last week to 4.1.2. Since then I've seen several posix
> health checks fail and bricks drop offline, but I'm not sure if that's
> related or a different root issue.
>
> I haven't seen the issue described below re-occur on 4.1.2 yet, but it
> was intermittent to begin with, so I'll probably
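
To investigate failing posix health checks, the brick logs and the
health-check interval are the usual starting points; the paths and the
<volname> placeholder below are illustrative:

  # look for health-check messages in the brick logs
  grep -i "health" /var/log/glusterfs/bricks/*.log
  # show the configured health-check interval (in seconds)
  gluster volume get <volname> storage.health-check-interval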
On Wed, Aug 15, 2018 at 2:54 AM, Walter Deignan wrote:
> I am using gluster to host KVM/QEMU images. I am seeing an intermittent
> issue where access to an image will hang. I have to do a lazy dismount of
> the gluster volume in order to break the lock and then reset the impacted
> virtual machine.
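
The lazy dismount described above corresponds to "umount -l"; a
statedump can also show which locks are outstanding before breaking
them. The mount point and <volname> here are illustrative:

  # detach the hung mount right away; references are cleaned up later
  umount -l /mnt/gluster
  # dump brick state (including lock tables) on the servers, written
  # typically under /var/run/gluster/
  gluster volume statedump <volname>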
gluster volume heal shared info | grep -i number
Number of entries: 0
Number of entries: 0
Number of entries: 0
Number of entries: 0
Number of entries: 0
Number of entries: 0
Number of entries: 0
Number of entries: 0
Number of entries: 0
Number of entries: 0
Number of entries: 0
Number of entries: 0
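
A quick way to spot a brick with pending heals in that output is to
filter out the zero counts; empty output means all bricks are clean
(pipeline illustrative):

  gluster volume heal shared info | grep -i "number" | grep -v ": 0$"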
There are a lot of Lookup operations in the system, but I am not able to
find out why. Could you check the output of

# gluster volume heal <volname> info | grep -i number

It should print all zeros.
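
One way to quantify the Lookup load, assuming enabling profiling on the
volume is acceptable, is gluster's built-in profiler (<volname>
illustrative):

  # start collecting per-FOP counters and latencies on the bricks
  gluster volume profile <volname> start
  # let the workload run for a while, then inspect the FOP breakdown
  gluster volume profile <volname> info
  # stop collecting when done
  gluster volume profile <volname> stop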
On Fri, Aug 17, 2018 at 1:49 PM Hu Bert wrote:
> I don't know exactly what you mean by workload, but the
Thanks for this report! We will look into this. This is something new we
are seeing, and we are not aware of an RCA yet!

-Amar
On Mon, Aug 20, 2018 at 1:08 PM, Claus Jeppesen wrote:
> I think I have seen this also on our CentOS 7.5 systems using GlusterFS
> 4.1.1 (*) - has an upgrade to 4.1.2 helped out? I'm trying this now.
>
> Thanx,
> Claus.
>
> (*) libvirt/qemu log:
> [2018-08-19 16:45:54.275830] E [MSGID: 114031]
> [client-rpc-fops_v2.c:1352:client4_0_finodelk_cbk]
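
To see whether the same finodelk errors appear elsewhere, grepping the
FUSE client logs and the libvirt/qemu domain logs is a quick check; the
paths are typical defaults, not taken from the thread:

  grep -i "finodelk" /var/log/glusterfs/*.log
  grep -i "finodelk" /var/log/libvirt/qemu/*.log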