0:0:(ldlm_lockd.c:334:waiting_locks_callback()) Skipped 1 previous similar message
Regards.
--
Jacek Tomaka
Geophysical Software Developer
DownUnder GeoSolutions
76 Kings Park Road
West Perth 6005 WA, Australia
tel +61 8 9287 4143
jac...@dug.com
www.dug.com
Interesting article. While memory fragmentation makes it more difficult to use
huge pages, it is not directly related to the problem of Lustre kernel memory
allocation accounting. It will be good to see movable slabs, though.
Also I am not sure how the high signal_cache can be explained and if anything ca
And we are running in nohz_full, so it is going to be an interesting problem
to diagnose...
But this seems to be going off on a tangent. Still, thank you for the useful
hints and analysis.
Jacek Tomaka
On Tue, Apr 16, 2019 at 7:17 AM NeilBrown wrote:
> On Mon, Apr 15 2019, Jacek Tomaka wrote:
>
Actually I think it is just a bug with the way the slab caches are created.
Some of them should be passed a flag marking them as reclaimable,
i.e. something like:
https://patchwork.kernel.org/patch/9360819/
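For illustration only (a minimal sketch, not the actual Lustre patch above):
SLAB_RECLAIM_ACCOUNT at cache creation is what makes the allocator account a
cache's pages under SReclaimable rather than SUnreclaim. The cache name and
object size here are made up.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>

static struct kmem_cache *example_inode_cachep;

static int __init example_cache_init(void)
{
        /* SLAB_RECLAIM_ACCOUNT marks this cache's pages as reclaimable, so
         * they show up under SReclaimable in /proc/meminfo instead of
         * SUnreclaim.  "example_inode_cache" and 1024 are illustrative. */
        example_inode_cachep = kmem_cache_create("example_inode_cache",
                                                 1024, 0,
                                                 SLAB_RECLAIM_ACCOUNT,
                                                 NULL);
        return example_inode_cachep ? 0 : -ENOMEM;
}

static void __exit example_cache_exit(void)
{
        kmem_cache_destroy(example_inode_cachep);
}

module_init(example_cache_init);
module_exit(example_cache_exit);
MODULE_LICENSE("GPL");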
Regards.
Jacek Tomaka
On Sun, Apr 14, 2019 at 3:27 PM Jacek Tomaka wrote:
> Hello,
>
is that they are marked as SUnreclaim vs. SReclaimable.
So I do not think there is a memory leak per se.
Regards.
Jacek Tomaka
On Mon, Apr 29, 2019 at 1:39 PM NeilBrown wrote:
>
> Thanks Jacek,
> so lustre_inode_cache is the real culprit when signal_cache appears to
> be large.
cribed before, but the kernel version is the same, so I am assuming the cache
merges are the same.
Looks like signal_cache points to lustre_inode_cache.
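As a hedged aside on why one cache can "point to" another: with SLUB,
kmem_cache_create() is allowed to return an alias of an existing cache when
the object size and flags are compatible, so allocations from both end up
under a single /proc/slabinfo name. A minimal sketch (names and sizes made
up):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>

static struct kmem_cache *cache_a, *cache_b;

static int __init merge_demo_init(void)
{
        /* Two caches created with compatible object size and flags: with
         * slab merging enabled (the SLUB default) they may be the very same
         * underlying cache, and /proc/slabinfo reports only one name with
         * the other's usage folded into it. */
        cache_a = kmem_cache_create("demo_cache_a", 1152, 0, 0, NULL);
        cache_b = kmem_cache_create("demo_cache_b", 1152, 0, 0, NULL);
        return (cache_a && cache_b) ? 0 : -ENOMEM;
}

static void __exit merge_demo_exit(void)
{
        kmem_cache_destroy(cache_b);
        kmem_cache_destroy(cache_a);
}

module_init(merge_demo_init);
module_exit(merge_demo_exit);
MODULE_LICENSE("GPL");

Booting with slub_nomerge, or looking at the alias symlinks under
/sys/kernel/slab/, shows which caches have actually been merged.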
Regards.
Jacek Tomaka
On Thu, Apr 25, 2019 at 7:42 AM NeilBrown wrote:
>
> Hi,
> you seem to be able to reproduce this fairly easily.
> If
abinfo |grep vvp
vvp_object_kmem    32982  33212    176   46    2 : tunables    0    0    0 : slabdata    722    722      0
Regards.
Jacek Tomaka
On Tue, Apr 16, 2019 at 11:18 AM Jacek Tomaka wrote:
> > That would be interesting. About a dozen copies of
> > cat /proc/$PID/stack