Hi,
Thanks for your hints. I tried to play around a little with the configs, and now I
want to set the 0.7 ratio as the default.
So I configured ceph:
mgr  advanced  mgr/cephadm/autotune_memory_target_ratio  0.700000  *
osd  advanced  osd_memory_target_autotune                true
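(For reference, I set these with something like:

    ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.7
    ceph config set osd osd_memory_target_autotune true

and then let the cephadm module recompute the per-host targets.)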
And I ended up with these configs:
osd  host:st10-cbosd-001  basic  osd_memory_target  7219293672
osd  host:st10-cbosd-002  basic  osd_memory_target  7219293672
osd  host:st10-cbosd-004  basic  osd_memory_target  7219293672
osd  host:st10-cbosd-005  basic  osd_memory_target  7219293451
osd  host:st10-cbosd-006  basic  osd_memory_target  7219293451
osd  host:st11-cbosd-007  basic  osd_memory_target  7216821484
osd  host:st11-cbosd-008  basic  osd_memory_target  7216825454
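To double-check what the mon hands out to a given daemon, something like
this should work (osd.1 lives on st10-cbosd-001):

    ceph config get osd.1 osd_memory_target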
And running "ceph orch ps" gave me:
osd.0    st11-cbosd-007.plabs.ch  running (2d)   10m ago  10d  25.8G  6882M  16.2.13  327f301eff51  29a075f2f925
osd.1    st10-cbosd-001.plabs.ch  running (19m)  8m ago   10d  2115M  6884M  16.2.13  327f301eff51  df5067bde5ce
osd.10   st10-cbosd-005.plabs.ch  running (2d)   10m ago  10d  5524M  6884M  16.2.13  327f301eff51  f7bc0641ee46
osd.100  st11-cbosd-008.plabs.ch  running (2d)   10m ago  10d  5234M  6882M  16.2.13  327f301eff51  74efa243b953
osd.101  st11-cbosd-008.plabs.ch  running (2d)   10m ago  10d  4741M  6882M  16.2.13  327f301eff51  209671007c65
osd.102  st11-cbosd-008.plabs.ch  running (2d)   10m ago  10d  5174M  6882M  16.2.13  327f301eff51  63691d557732
So far so good: 7219293672 bytes is about 6885 MiB, which matches the
6884M MEM LIMIT shown above. But when I took a look at the memory usage
of my OSDs, it was below that value by quite a bit. Looking at the OSDs
themselves, I have:
"bluestore-pricache": {
"target_bytes": 4294967296,
"mapped_bytes": 1343455232,
"unmapped_bytes": 16973824,
"heap_bytes": 1360429056,
"cache_bytes": 2845415832
},
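For reference, I pulled that from the daemon's admin socket, with
something along the lines of (assuming jq is installed):

    ceph daemon osd.<id> perf dump | jq '."bluestore-pricache"'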
And if I query the running config:
"osd_memory_target": "4294967296",
"osd_memory_target_autotune": "true",
"osd_memory_target_cgroup_limit_ratio": "0.800000",
That is not the value I see in the cluster config: I have 4294967296
(the 4 GiB default for osd_memory_target) instead of something around
7219293672. Did I miss something?
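For what it's worth, I was going to cross-check the effective value with
"ceph config show" against the daemon, if I recall the syntax right:

    ceph config show osd.1 osd_memory_target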
Luis Domingues
Proton AG
------- Original Message -------
On Tuesday, July 11th, 2023 at 18:10, Mark Nelson <[email protected]> wrote:
> On 7/11/23 09:44, Luis Domingues wrote:
>
> > "bluestore-pricache": {
> > "target_bytes": 6713193267,
> > "mapped_bytes": 6718742528,
> > "unmapped_bytes": 467025920,
> > "heap_bytes": 7185768448,
> > "cache_bytes": 4161537138
> > },
>
>
> Hi Luis,
>
>
> Looks like the mapped bytes for this OSD process are very close to (just
> a little over) the target bytes that was set when you did the perf
> dump. There is some unmapped memory that can be reclaimed by the kernel,
> but we can't force the kernel to reclaim it. It could be that the
> kernel is being a little lazy if there isn't memory pressure.
>
> The way the memory autotuning works in Ceph is that periodically the
> prioritycache system will look at the mapped memory usage of the
> process, then grow/shrink the aggregate size of the in-memory caches to
> try and stay near the target. It's reactive in nature, meaning that it
> can't completely control for spikes. It also can't shrink the caches
> below a small minimum size, so if there is a memory leak it will help to
> an extent but can't completely fix it. Once the aggregate memory size
> is decided on, it goes through a process of looking at how hot the
> different caches are and assigns memory based on where it thinks the
> memory would be most useful. Again this is based on mapped memory
> though. It can't force the kernel to reclaim memory that has already
> been released.
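>
> In case it's useful: the small minimum I mentioned is controlled by
> osd_memory_cache_min, and the tuning period by
> osd_memory_cache_resize_interval (check "ceph config help <option>"
> for the exact semantics). For example:
>
>     ceph config get osd osd_memory_cache_min
>     ceph config get osd osd_memory_cache_resize_interval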
>
> Thanks,
>
> Mark
>
> --
> Best Regards,
> Mark Nelson
> Head of R&D (USA)
>
> Clyso GmbH
> p: +49 89 21552391 12
> a: Loristraße 8 | 80335 München | Germany
> w: https://clyso.com | e: [email protected]
>
> We are hiring: https://www.clyso.com/jobs/
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]