> Unfortunately they cannot. You'll want to set them in centralized conf
> and then restart OSDs for them to take effect.
>
Got it, thank you Josh! Will put it in the config of the affected OSDs and
restart them.
Just curious, can decreasing rocksdb_cf_compact_on_deletion_trigger from
16384 to 4096 hurt performance?
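A sketch of what Josh's suggestion could look like in practice. These commands assume a systemd-managed (non-cephadm) deployment and use OSD id 12 purely as a placeholder; they require a running cluster.

```shell
# Set the option in the centralized config (MON config database) for all OSDs.
ceph config set osd rocksdb_cf_compact_on_deletion_trigger 4096

# Restart OSDs one at a time so the new value takes effect,
# checking cluster health before moving on to the next one.
systemctl restart ceph-osd@12
ceph -s
```

Restarting OSDs serially and waiting for HEALTH_OK in between avoids taking too many PGs offline at once.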
Hi Mark,
>> In v17.2.7 we enabled a feature that automatically performs a compaction
>> if too many tombstones are present during iteration in RocksDB. It
>> might be worth upgrading to see if it helps (you might have to try
>> tweaking the settings if the defaults aren't helping enough). The PR is
>
> Hi Mark, thank you for the prompt answer.
> The fact that changing the pg_num for the index pool drops the latency
> back down might be a clue. Do you have a lot of deletes happening on
> this cluster? If you have a lot of deletes and long pauses between
> writes, you could be accumulating tombstones.
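To make the tombstone discussion concrete, here is a minimal sketch (not Ceph or RocksDB code) of the compact-on-deletion idea: count delete markers seen within a sliding window of recently iterated entries, and flag a compaction once the count exceeds a trigger. The window and trigger defaults below are illustrative, not Ceph's actual defaults.

```python
from collections import deque

def should_compact(entries, window=32768, trigger=16384):
    """entries: iterable of booleans, True = tombstone (delete marker).

    Returns True if, at any point, more than `trigger` tombstones
    fall within the last `window` entries seen.
    """
    recent = deque(maxlen=window)
    deletions = 0
    for is_tombstone in entries:
        if len(recent) == window:
            deletions -= recent[0]  # oldest entry is about to slide out
        recent.append(is_tombstone)
        deletions += is_tombstone
        if deletions > trigger:
            return True
    return False
```

This illustrates why lowering the trigger makes compactions fire earlier: iterators skip fewer accumulated tombstones, at the cost of more frequent compaction work.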
Hi Stefan,
> Do you make use of a separate db partition as well? And if so, where is
> it stored?
>
No, only the WAL partition is on a separate NVMe partition. Not sure if
ceph-ansible can install Ceph with the db partition on a separate device on
v17.6.2.
Do you only see the latency increase in reads, and not writes?
Hi Eugen,
> How is the data growth in your cluster? Is the pool size rather stable or
> is it constantly growing?
>
Pool size is fairly constant with a tiny upward trend. Its growth doesn't
correlate with the increase in OSD read latency. I've combined pool usage
with OSD read latency on one graph to provide a side-by-side comparison.
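For readers following along, per-OSD latency can also be checked directly from the CLI without external graphing; these commands require a running cluster.

```shell
# Per-OSD commit/apply latency in milliseconds.
ceph osd perf

# Per-pool client I/O rates, to correlate load with the latency above.
ceph osd pool stats
```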
Hello Ceph users,
we see a strange issue on a recent Ceph installation, v17.6.2. We store
data on an HDD pool; the index pool is on SSD. Each OSD stores its WAL on an
NVMe partition. Benchmarks didn't expose any issues with the cluster, but
since we placed production load on it we see constantly growing OSD latency.
Hi owners of the ceph-users list, I've been trying to post a new message for the first time. The first one bounced because I had registered but not subscribed to the list. Then I subscribed and sent a message with a picture, which was larger than the allowed 500KB and got quarantined as well. I've decided to