Hi Samuel,

It can be a few things. A good place to start is to dump the mempools of one
of those bloated OSDs:

`ceph daemon osd.123 dump_mempools`
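
If it helps, here is a rough sketch for ranking the pools by bytes from that
output (osd.123 and the script name are placeholders; newer releases wrap the
pools under "mempool"/"by_pool" while older ones may put them at the top
level, so this tries to handle both):

    #!/usr/bin/env python3
    # Sketch: rank mempools by bytes.
    # Usage: ceph daemon osd.123 dump_mempools | python3 rank_mempools.py
    import json
    import sys

    data = json.load(sys.stdin)
    # Fall back to the top level if there is no "mempool"/"by_pool" wrapper.
    pools = data.get("mempool", {}).get("by_pool", data)

    rows = []
    for name, stats in pools.items():
        if isinstance(stats, dict) and "bytes" in stats:
            rows.append((stats["bytes"], stats.get("items", 0), name))

    for nbytes, items, name in sorted(rows, reverse=True):
        print(f"{nbytes / 2**30:8.2f} GiB  {items:>12} items  {name}")

Whichever pool dominates the total is usually where to dig further.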

Cheers, Dan


--
Dan van der Ster
CTO

Clyso GmbH
p: +49 89 215252722 | a: Vancouver, Canada
w: https://clyso.com | e: dan.vanders...@clyso.com

We are hiring: https://www.clyso.com/jobs/



On Wed, Jan 10, 2024 at 10:20 AM huxia...@horebdata.cn <
huxia...@horebdata.cn> wrote:

> Dear Ceph folks,
>
> I am responsible for two Ceph clusters running Nautilus 14.2.22, one with
> replication 3 and the other with EC 4+2. After around 400 days of running
> quietly and smoothly, both clusters recently ran into similar problems:
> some of the OSDs consume ca. 18 GB while the memory target is set to 2 GB.
>
> What could be going wrong in the background? Does this point to a slow OSD
> memory leak in 14.2.22 that I am not aware of?
>
> I would highly appreciate it if someone could provide any clues, ideas, or
> comments.
>
> best regards,
>
> Samuel
>
>
>
> huxia...@horebdata.cn
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io