I have the same issues with a variety of kernel clients running 4.6.3
and 4.4.12, and FUSE clients from 10.2.2.

-Mykola

-----Original Message-----
From: xiaoxi chen <[email protected]>
To: João Castro <[email protected]>, [email protected] <[email protected]>
Subject: Re: [ceph-users] CephFS mds cache pressure
Date: Wed, 29 Jun 2016 01:00:40 +0000

Hmm, I asked about this on the ML a few days ago. :) You likely hit the
kernel bug fixed by commit 5e804ac482 ("ceph: don't invalidate page
cache when inode is no longer used"). This fix is in 4.4 but not in
4.2. I haven't had a chance to play with 4.4 yet, so it would be great
if you could give it a try.

As for the MDS OOM issue: we ran an MDS RSS vs. #inodes scaling test,
and the result showed around 4 MB per 1000 inodes, so your MDS can
likely hold up to 2~3 million inodes. But yes, even with the fix, if a
client misbehaves (opens and holds a lot of inodes and doesn't respond
to cache pressure messages), the MDS can exceed the throttling limit
and then get killed by the OOM killer.
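The back-of-envelope math above can be sketched as follows (the 10 GB RSS budget is a hypothetical example value, not something from this thread):

```python
# Rough estimate of MDS inode capacity from the ~4 MB per 1000 inodes
# figure quoted above. The RSS budget passed in is a hypothetical
# example, not a measured limit.
def max_inodes(mds_rss_budget_mb, mb_per_1000_inodes=4.0):
    """Approximate number of inodes an MDS can cache within an RSS budget."""
    return int(mds_rss_budget_mb / mb_per_1000_inodes * 1000)

# e.g. an MDS allowed ~10 GB RSS before the OOM killer steps in:
print(max_inodes(10 * 1024))  # -> 2560000, i.e. ~2.5 million inodes
```

which lands in the same 2~3 million inode range mentioned above.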


> To: [email protected]
> From: [email protected]
> Date: Tue, 28 Jun 2016 21:34:03 +0000
> Subject: Re: [ceph-users] CephFS mds cache pressure
> 
> Hey John,
> 
> ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
> 4.2.0-36-generic
> 
> Thanks!
> 
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

