With cephadm you're able to set these values cluster-wide via OS tuning profiles.
See the host-management section of the docs:
https://docs.ceph.com/en/reef/cephadm/host-management/#os-tuning-profiles
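
For example, something along these lines should raise the open-file sysctls on
the mon host (a rough sketch only; the profile name and the values are
placeholders I made up, so adjust them for your environment):

  profile_name: mon-open-files
  placement:
    hosts:
      - ceph-mon01
  settings:
    fs.file-max: 1000000
    fs.nr_open: 1048576

Save that as e.g. mon-open-files.yaml and apply it with:

  ceph orch tuned-profile apply -i mon-open-files.yaml

Cephadm writes the settings to /etc/sysctl.d/ on the hosts matching the
placement and runs sysctl --system, so they persist across reboots.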

On Fri, 19 Apr 2024 at 12:40, Konstantin Shalygin <k0...@k0ste.ru> wrote:

> Hi,
>
> > On 19 Apr 2024, at 10:39, Pardhiv Karri <meher4in...@gmail.com> wrote:
> >
> > Thank you for the reply. I tried setting ulimit to 32768 when I saw 25726
> > open files in the lsof output, but after deleting two more disks it hit
> > the error again, and lsof now shows more than 35000. I'm not sure how to
> > handle it. I rebooted the monitor node, but the open file count kept
> > growing.
> >
> > root@ceph-mon01 ~# lsof | wc -l
> > 49296
> > root@ceph-mon01 ~#
>
> This means it is not a Ceph problem; it is a problem with this system in
> general.
>
>
> k
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
