My cache is correctly capped at 5 GB currently.
Here are some stats (the MDS was restarted yesterday; it is using around 8.8 GB,
with the cache capped at 5 GB).
I'll try to send some more stats in 1 or 2 weeks, when memory usage should be at 20 GB.
# while sleep 1; do ceph daemon mds.ceph4-2.odiso.net perf dump | jq '.mds_mem.rss'; ceph daemon mds.ceph4-2.odiso.net dump_mempools | jq -c '.mds_co'; done
8821728
{"items":44512173,"bytes":5346723108}
8821728
{"items":44647862,"bytes":5356139145}
8821728
{"items":43644205,"bytes":5129276043}
8821728
{"items":44134481,"bytes":5260485627}
8821728
{"items":44418491,"bytes":5338308734}
8821728
{"items":45091444,"bytes":5404019118}
8821728
{"items":44714180,"bytes":5322182878}
8821728
{"items":43853828,"bytes":5221597919}
8821728
{"items":44518074,"bytes":5323670444}
8821728
{"items":44679829,"bytes":5367219523}
8821728
{"items":44809929,"bytes":5382383166}
8821728
{"items":43441538,"bytes":5180408997}
8821728
{"items":44239001,"bytes":5349655543}
8821728
{"items":44558135,"bytes":5414566237}
8821728
{"items":44664773,"bytes":5433279976}
8821728
{"items":43433859,"bytes":5148008705}
8821728
{"items":43683053,"bytes":5236668693}
8821728
{"items":44248833,"bytes":5310420155}
8821728
{"items":45013698,"bytes":5381693077}
8821728
{"items":44928825,"bytes":5313048602}
8821728
{"items":43828630,"bytes":5146482155}
8821728
{"items":44005515,"bytes":5167930294}
8821728
{"items":44412223,"bytes":5182643376}
8821728
{"items":44842966,"bytes":5198073066}
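A rough sketch of comparing the two figures above, assuming .mds_mem.rss is reported in KiB (as /proc-derived RSS values usually are) while .mds_co.bytes is in bytes, using the first sample pasted above:

```shell
# Convert both counters to GiB so they can be compared directly.
rss_kib=8821728          # .mds_mem.rss from "perf dump"
pool_bytes=5346723108    # .mds_co.bytes from "dump_mempools"
awk -v r="$rss_kib" -v p="$pool_bytes" 'BEGIN {
    printf "rss_gib=%.2f\n",  r / (1024 * 1024)        # process RSS
    printf "pool_gib=%.2f\n", p / (1024 * 1024 * 1024) # mds_co mempool
}'
```

So the process is holding roughly 3.4 GiB beyond the capped mds_co pool, which is the gap this thread is about.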
----- Original Message -----
From: "aderumier" <[email protected]>
To: "Webert de Souza Lima" <[email protected]>
Cc: "ceph-users" <[email protected]>
Sent: Saturday, May 12, 2018 08:11:04
Subject: Re: [ceph-users] ceph mds memory usage 20GB : is it normal ?
Hi
>>You could use "mds_cache_size" to limit the number of caps until you have this
>>fixed, but I'd say for your number of caps and inodes, 20 GB is normal.
The documentation (luminous) says:
"
mds cache size
Description: The number of inodes to cache. A value of 0 indicates an unlimited
number. It is recommended to use mds_cache_memory_limit to limit the amount of
memory the MDS cache uses.
Type: 32-bit Integer
Default: 0
"
and my mds_cache_memory_limit is currently set to 5 GB.
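For anyone following along: mds_cache_memory_limit is expressed in bytes. A hedged sketch of checking it on a live daemon (daemon name taken from earlier in this thread; adjust to your MDS):

```shell
# The 5 GB cap mentioned above, in bytes:
echo $((5 * 1024 * 1024 * 1024))   # 5368709120
# Query the live value through the admin socket (needs a running MDS and
# access to its asok; skipped here if the ceph CLI is not installed):
command -v ceph >/dev/null &&
    ceph daemon mds.ceph4-2.odiso.net config get mds_cache_memory_limit || true
```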
----- Original Message -----
From: "Webert de Souza Lima" <[email protected]>
To: "ceph-users" <[email protected]>
Sent: Friday, May 11, 2018 20:18:27
Subject: Re: [ceph-users] ceph mds memory usage 20GB : is it normal ?
You could use "mds_cache_size" to limit the number of caps until you have this
fixed, but I'd say for your number of caps and inodes, 20 GB is normal.
This MDS (Jewel) here is consuming 24 GB of RAM:
{
    "mds": {
        "request": 7194867047,
        "reply": 7194866688,
        "reply_latency": {
            "avgcount": 7194866688,
            "sum": 27779142.611775008
        },
        "forward": 0,
        "dir_fetch": 179223482,
        "dir_commit": 1529387896,
        "dir_split": 0,
        "inode_max": 3000000,
        "inodes": 3001264,
        "inodes_top": 160517,
        "inodes_bottom": 226577,
        "inodes_pin_tail": 2614170,
        "inodes_pinned": 2770689,
        "inodes_expired": 2920014835,
        "inodes_with_caps": 2743194,
        "caps": 2803568,
        "subtrees": 2,
        "traverse": 8255083028,
        "traverse_hit": 7452972311,
        "traverse_forward": 0,
        "traverse_discover": 0,
        "traverse_dir_fetch": 180547123,
        "traverse_remote_ino": 122257,
        "traverse_lock": 5957156,
        "load_cent": 18446743934203149911,
        "q": 54,
        "exported": 0,
        "exported_inodes": 0,
        "imported": 0,
        "imported_inodes": 0
    }
}
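A back-of-the-envelope figure from the dump above (24 GB of RSS spread over the reported inode count; a rough estimate, not an official sizing rule):

```shell
# ~24 GiB of RSS over the 3,001,264 cached inodes reported above:
awk 'BEGIN { printf "%.1f KiB per inode\n", (24 * 1024 * 1024) / 3001264 }'
```

That comes out to roughly 8 KiB per cached inode, which is why a few million inodes plus caps can plausibly account for this much memory.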
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
IRC NICK - WebertRLZ
On Fri, May 11, 2018 at 3:13 PM Alexandre DERUMIER <[email protected]> wrote:
Hi,
I'm still seeing a memory leak with 12.2.5.
It seems to leak some MB every 5 minutes.
I'll try to resend some stats next weekend.
----- Original Message -----
From: "Patrick Donnelly" <[email protected]>
To: "Brady Deetz" <[email protected]>
Cc: "Alexandre Derumier" <[email protected]>, "ceph-users" <[email protected]>
Sent: Thursday, May 10, 2018 21:11:19
Subject: Re: [ceph-users] ceph mds memory usage 20GB : is it normal ?
On Thu, May 10, 2018 at 12:00 PM, Brady Deetz <[email protected]> wrote:
> [ceph-admin@mds0 ~]$ ps aux | grep ceph-mds
> ceph 1841 3.5 94.3 133703308 124425384 ? Ssl Apr04 1808:32
> /usr/bin/ceph-mds -f --cluster ceph --id mds0 --setuser ceph --setgroup ceph
>
>
> [ceph-admin@mds0 ~]$ sudo ceph daemon mds.mds0 cache status
> {
> "pool": {
> "items": 173261056,
> "bytes": 76504108600
> }
> }
>
> So, 80GB is my configured limit for the cache and it appears the mds is
> following that limit. But, the mds process is using over 100GB RAM in my
> 128GB host. I thought I was playing it safe by configuring at 80. What other
> things consume a lot of RAM for this process?
>
> Let me know if I need to create a new thread.
The cache size measurement is imprecise pre-12.2.5 [1]. You should upgrade ASAP.
[1] https://tracker.ceph.com/issues/22972
--
Patrick Donnelly
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com