You could use "mds_cache_size" (an inode count) to limit the cache, and with it the number of client caps, until you have this fixed, but I'd say that for your number of caps and inodes, 20GB is normal.
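For example, something like this in ceph.conf (Jewel-era option; the value is an inode count, and 1000000 here is just an illustrative number):

    [mds]
    mds cache size = 1000000

or injected at runtime (mds.0 being a placeholder for your daemon ID):

    ceph tell mds.0 injectargs '--mds_cache_size 1000000'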

This MDS (Jewel) here is consuming 24GB of RAM:
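(The counters below come from a perf dump, i.e. something like "ceph daemon mds.<id> perf dump", trimmed here to the "mds" section.)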

{
    "mds": {
        "request": 7194867047,
        "reply": 7194866688,
        "reply_latency": {
            "avgcount": 7194866688,
            "sum": 27779142.611775008
        },
        "forward": 0,
        "dir_fetch": 179223482,
        "dir_commit": 1529387896,
        "dir_split": 0,
        "inode_max": 3000000,
        "inodes": 3001264,
        "inodes_top": 160517,
        "inodes_bottom": 226577,
        "inodes_pin_tail": 2614170,
        "inodes_pinned": 2770689,
        "inodes_expired": 2920014835,
        "inodes_with_caps": 2743194,
        "caps": 2803568,
        "subtrees": 2,
        "traverse": 8255083028,
        "traverse_hit": 7452972311,
        "traverse_forward": 0,
        "traverse_discover": 0,
        "traverse_dir_fetch": 180547123,
        "traverse_remote_ino": 122257,
        "traverse_lock": 5957156,
        "load_cent": 18446743934203149911,
        "q": 54,
        "exported": 0,
        "exported_inodes": 0,
        "imported": 0,
        "imported_inodes": 0
    }
}
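Back-of-envelope from the numbers above: 24GB across ~3.0M cached inodes (and ~2.8M caps) is roughly 8KB of resident memory per inode. Note that "inodes" sits right at "inode_max", i.e. the cache is full, which is why I'd say 20GB is unsurprising at your scale.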


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*


On Fri, May 11, 2018 at 3:13 PM Alexandre DERUMIER <aderum...@odiso.com>
wrote:

> Hi,
>
> I'm still seeing a memory leak with 12.2.5.
>
> It seems to leak a few MB every 5 minutes.
>
> I'll try to resend some stats next weekend.
>
>
> ----- Original Message -----
> From: "Patrick Donnelly" <pdonn...@redhat.com>
> To: "Brady Deetz" <bde...@gmail.com>
> Cc: "Alexandre Derumier" <aderum...@odiso.com>, "ceph-users" <ceph-users@lists.ceph.com>
> Sent: Thursday, May 10, 2018 21:11:19
> Subject: Re: [ceph-users] ceph mds memory usage 20GB : is it normal ?
>
> On Thu, May 10, 2018 at 12:00 PM, Brady Deetz <bde...@gmail.com> wrote:
> > [ceph-admin@mds0 ~]$ ps aux | grep ceph-mds
> > ceph 1841 3.5 94.3 133703308 124425384 ? Ssl Apr04 1808:32 /usr/bin/ceph-mds -f --cluster ceph --id mds0 --setuser ceph --setgroup ceph
> >
> >
> > [ceph-admin@mds0 ~]$ sudo ceph daemon mds.mds0 cache status
> > {
> >     "pool": {
> >         "items": 173261056,
> >         "bytes": 76504108600
> >     }
> > }
> >
> > So, 80GB is my configured limit for the cache, and it appears the MDS is
> > following that limit. But the mds process is using over 100GB of RAM on my
> > 128GB host. I thought I was playing it safe by configuring it at 80. What
> > other things consume a lot of RAM for this process?
> >
> > Let me know if I need to create a new thread.
>
> The cache size measurement is imprecise pre-12.2.5 [1]. You should upgrade
> ASAP.
>
> [1] https://tracker.ceph.com/issues/22972
>
> --
> Patrick Donnelly
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
