Hello,

The kernel currently running on the nodes is 4.11.1-1.el7.elrepo.x86_64.

Use case:

3 compute nodes in a corosync/pacemaker cluster.

Resources:

dlm
clvm
GFS2 volume 1
GFS2 volume 2

Journal size on each volume: 256 MB
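
For reference, a minimal sketch of how such a dlm/clvm/GFS2 resource stack is
typically wired up with pcs on RHEL/CentOS 7 (the resource names, device path
and mount point are placeholders, not the actual configuration):

    # dlm and clvmd run as ordered, interleaved clones on every node
    pcs resource create dlm ocf:pacemaker:controld \
        op monitor interval=30s on-fail=fence clone interleave=true ordered=true
    pcs resource create clvmd ocf:heartbeat:clvm \
        op monitor interval=30s on-fail=fence clone interleave=true ordered=true
    pcs constraint order start dlm-clone then clvmd-clone
    pcs constraint colocation add clvmd-clone with dlm-clone

    # one clustered Filesystem resource per GFS2 volume
    pcs resource create gfs2vol1 ocf:heartbeat:Filesystem \
        device="/dev/vg_cluster/lv_vol1" directory="/mnt/vol1" fstype="gfs2" \
        op monitor interval=10s on-fail=fence clone interleave=true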

Please tell me which commands I should run to provide more information.
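
As a starting point, a minimal sketch of commands that should surface the
details asked about below (assumes the pcs/corosync tooling and gfs2-utils;
the device path is a placeholder):

    # exact kernel and cluster membership
    uname -r
    corosync-quorumtool
    pcs status

    # GFS2 superblock details (lock table name, block size, ...)
    tunegfs2 -l /dev/vg_cluster/lv_vol1

    # any further gfs2/dlm messages in the kernel log
    dmesg -T | grep -iE 'gfs2|dlm'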

Thank you!

Best regards, Kristián Feldsam
Tel.: +420 773 303 353, +421 944 137 535
E-mail: supp...@feldhost.cz

www.feldhost.cz - FeldHost™ – professional hosting and server services at 
reasonable prices.

FELDSAM s.r.o.
V rohu 434/3
Praha 4 – Libuš, 142 00
Company ID: 290 60 958, VAT ID: CZ290 60 958
File No. C 200350, kept at the Municipal Court in Prague

Bank: Fio banka a.s.
Account No.: 2400330446/2010
BIC: FIOBCZPPXX
IBAN: CZ82 2010 0000 0024 0033 0446

> On 19 Jul 2017, at 11:09, Steven Whitehouse <swhit...@redhat.com> wrote:
> 
> Hi,
> 
> 
> On 19/07/17 00:39, Digimer wrote:
>> On 2017-07-18 07:25 PM, Kristián Feldsam wrote:
>>> Hello, today I am seeing GFS2 errors in the log, and there is nothing 
>>> about them on the net, so I am writing to this mailing list.
>>> 
>>> node2       19.07.2017 01:11:55     kernel  kern    err     vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-4549568322848002755
>>> node2       19.07.2017 01:10:56     kernel  kern    err     vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-8191295421473926116
>>> node2       19.07.2017 01:10:48     kernel  kern    err     vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-8225402411152149004
>>> node2       19.07.2017 01:10:47     kernel  kern    err     vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-8230186816585019317
>>> node2       19.07.2017 01:10:45     kernel  kern    err     vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-8242007238441787628
>>> node2       19.07.2017 01:10:39     kernel  kern    err     vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-8250926852732428536
>>> node3       19.07.2017 00:16:02     kernel  kern    err     vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-5150933278940354602
>>> node3       19.07.2017 00:16:02     kernel  kern    err     vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-64
>>> node3       19.07.2017 00:16:02     kernel  kern    err     vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-64
>>> 
>>> Would somebody explain these errors? The cluster looks like it is working 
>>> normally. I have enabled vm.zone_reclaim_mode = 1 on the nodes...
>>> 
>>> Thank you!
>> Please post this to the Clusterlabs - Users list. This ML is deprecated.
>> 
>> 
> 
> cluster-devel is the right list for GFS2 issues. This looks like a bug to 
> me, since the object count should never be negative. The glock shrinker is 
> not (yet) zone aware, although the quota shrinker is. I am not sure whether 
> that is related, but perhaps... it is certainly something we would like to 
> investigate further. That said, the messages are harmless in themselves, 
> but they likely indicate a less than optimal use of memory. Any details 
> that can be shared about the use case, and about how to reproduce it, would 
> be very helpful for us to know. Also, what kernel version was this?
> 
> Steve.
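
To help with the investigation Steve mentions above, a minimal sketch of how
the glock and slab state could be captured on an affected node (assumes
debugfs is available; <clustername:fsname> stands in for the real lock table
name):

    # make the GFS2 debug files visible (no-op if debugfs is already mounted)
    mount -t debugfs none /sys/kernel/debug 2>/dev/null

    # dump the glock state of one filesystem to a file that can be shared
    cat /sys/kernel/debug/gfs2/<clustername:fsname>/glocks > /tmp/glocks.txt

    # show the gfs2 slab caches that the shrinker operates on
    grep gfs2 /proc/slabinfo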
