It does indeed look like that is the bug I hit.

Thanks.

Luis Domingues
Proton AG


------- Original Message -------
On Monday, July 17th, 2023 at 07:45, Sridhar Seshasayee <ssesh...@redhat.com> 
wrote:


> Hello Luis,
> 
> Please see my response below:
> 
> > But when I took a look at the memory usage of my OSDs, I was below that
> > value by quite a bit. Looking at the OSDs themselves, I have:
> > 
> > "bluestore-pricache": {
> > "target_bytes": 4294967296,
> > "mapped_bytes": 1343455232,
> > "unmapped_bytes": 16973824,
> > "heap_bytes": 1360429056,
> > "cache_bytes": 2845415832
> > },
> > 
> > And if I get the running config:
> > "osd_memory_target": "4294967296",
> > "osd_memory_target_autotune": "true",
> > "osd_memory_target_cgroup_limit_ratio": "0.800000",
> > 
> > This is not the value I expected: I see 4294967296 instead of something
> > around 7219293672. Did I miss something?
> 
> This is very likely due to https://tracker.ceph.com/issues/48750. The fix
> was recently merged into the main branch and should soon be backported all
> the way back to Pacific.
> 
> Until then, the workaround would be to set osd_memory_target on each OSD
> individually to the desired value.
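> 
> For example, something like the following for each OSD (the OSD id and
> byte value here are illustrative):
> 
>     ceph config set osd.0 osd_memory_target 7219293672
> 
> and the running value can then be confirmed with:
> 
>     ceph config get osd.0 osd_memory_target
> 
> (Depending on the setup, osd_memory_target_autotune may also need to be
> disabled for the manual value to stick.)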
> 
> -Sridhar
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
