Cache tiering is deprecated; I strongly advise finding a way to factor it out of 
your deployment.
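
For a writeback tier, the removal sequence is roughly: stop new writes landing 
in the cache, flush and evict everything, then detach the overlay and the tier. 
A sketch only, assuming the base pool is named volumes and the tier is 
volumes_cache as in your message; the exact cache-mode name varies a bit by 
release (older ones use forward or proxy), so check the docs for yours first:

   # take the tier out of writeback so no new dirty objects land in it
   ceph osd tier cache-mode volumes_cache readproxy
   # flush and evict all objects from the cache pool (can take a long time)
   rados -p volumes_cache cache-flush-evict-all
   # detach the overlay and remove the tier relationship
   ceph osd tier remove-overlay volumes
   ceph osd tier remove volumes volumes_cache

The flush/evict step adds client-visible load, so plan a window for it.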

There was some discussion of this in the past, e.g. 

https://ceph-users.ceph.narkive.com/0LxBSHEQ/changing-pg-num-on-cache-pool

The PG-to-RADOS-object ratio isn't necessarily a problem in and of itself; the 
appropriate pg_num for a pool depends on the data and the access pattern. 
When multiple pools share the same OSDs, there are tradeoffs that depend on 
what each pool is for and how much data it stores.  RGW index pools, for 
example, need more PGs than their data volume alone would suggest.
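
To see how the numbers actually line up, something like this gives a quick 
picture (pool name taken from your message):

   # objects and stored data per pool
   ceph df detail
   # pg_num per pool
   ceph osd pool ls detail
   # per-PG object counts for the cache pool
   ceph pg ls-by-pool volumes_cache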

Is your cache pool definitely limited to *only* the SSDs you expect, or does 
its CRUSH rule also land on HDDs?
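
One way to check (a sketch, using the pool name from your message; the rule 
name is a placeholder):

   # which CRUSH rule the cache pool uses
   ceph osd pool get volumes_cache crush_rule
   # inspect that rule to see which root / device class it selects
   ceph osd crush rule dump <rule-name>

If the rule doesn't pin placement to an SSD device class or an SSD-only root, 
the "cache" may well be spilling onto HDDs.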

-- the former Cepher known as Anthony

> On Aug 13, 2025, at 12:15 AM, Vishnu Bhaskar <vishn...@acceleronlabs.com> 
> wrote:
> 
> Hi Team,
> 
> I have an OpenStack Ceph cluster where the Cinder volumes pool uses a cache
> pool named *volumes_cache*.
> 
> While investigating performance issues, I observed that the cache pool has
> too few PGs and a high PG-to-object ratio, approximately 10x higher than
> normal. For instance, one PG contains around 35,000 objects. I am planning to
> increase the pg_num of the cache pool.
> 
> For the base pool, I was able to increase the PG count without any issues.
> However, for the cache pool, I encountered a warning indicating that a
> force argument is required. In my lab environment, I found that changing
> the cache pool mode to *none* allowed me to modify the PG number.
> 
> Since this is my production setup, I would like to know if there is a safe
> and recommended procedure to increase the PG number of a *cache pool*
> without impacting the environment.
> 
> Kindly advise on the best approach.
> 
> Thanks and Regards
> Vishnu Bhaskar
> Acceleron Labs Pvt Ltd
> Bangalore, India

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
