It looks like I have solved the issue.
I tried setting it in ceph.conf:

[osd]
osd_memory_target = 1073741824

and restarting the OSDs with:

systemctl restart ceph-osd.target
When I run

ceph config get osd.40 osd_memory_target

it returns 4294967296 (the 4 GiB default), so this did not work.
Next I tried:

ceph tell osd.* injectargs '--osd_memory_target 1073741824'

but ceph config get osd.40 osd_memory_target still returns 4294967296, so this also did not work in 14.2.20.
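A note on why the checks above may have kept showing the default (this is my assumption about the mechanism): ceph config get reads the centralized config database on the monitors, so values that only live in ceph.conf or were injected at runtime do not necessarily appear there. To see what a running daemon is actually using, ceph config show asks the daemon itself:

```shell
# Value recorded in the mon config database (shows the compiled-in
# default if nothing was ever "ceph config set" for this daemon):
ceph config get osd.40 osd_memory_target

# Value the running osd.40 daemon is actually using right now
# (this also reflects ceph.conf and injectargs changes):
ceph config show osd.40 osd_memory_target
```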
Next I tried:
ceph config set osd/class:hdd osd_memory_target 1073741824
and that finally worked.
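For anyone else trying this: as I understand config masks, the osd/class:hdd mask only applies to OSDs whose CRUSH device class is hdd, so it is worth checking which classes your OSDs actually carry and then verifying the effective value:

```shell
# List the CRUSH device classes present in the cluster:
ceph osd crush class ls

# Show each OSD together with its device class in the CRUSH tree:
ceph osd tree

# Verify that the masked setting now applies to a specific OSD:
ceph config get osd.40 osd_memory_target
```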
Since then I have slowly increased the target again, and for now I use:

ceph config set osd/class:hdd osd_memory_target 2147483648
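For reference, osd_memory_target takes a value in bytes; the numbers above are whole-GiB amounts, which plain shell arithmetic confirms:

```shell
# osd_memory_target is expressed in bytes; these are whole-GiB values:
echo $((1 * 1024 * 1024 * 1024))   # 1073741824 (first attempt)
echo $((2 * 1024 * 1024 * 1024))   # 2147483648 (current setting)
echo $((4 * 1024 * 1024 * 1024))   # 4294967296 (the default we kept seeing)
```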
Thanks
Christoph
On Wed, May 05, 2021 at 04:30:17PM +0200, Christoph Adomeit wrote:
> I manage a historical cluster of several Ceph nodes, each with 128 GB RAM
> and 36 OSDs of 8 TB each.
>
> The cluster is just for archive purposes, and performance is not so important.
>
> The cluster was running fine for a long time on Ceph Luminous.
>
> Last week I updated it to Debian 10 and Ceph Nautilus.
>
> Now I can see that the memory usage of each OSD slowly grows to 4 GB, and
> once the system has no memory left it oom-kills processes.
>
> I have already configured osd_memory_target = 1073741824.
> This helps for some hours, but then memory usage grows from 1 GB to 4 GB
> per OSD.
>
> Any ideas what I can do to further limit OSD memory usage?
>
> It would be good to keep the hardware running some more time without
> upgrading the RAM in all OSD machines.
>
> Any ideas?
>
> Thanks
> Christoph
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]