On 28 May 2021, at 08:18, Jeremy Hansen <[email protected]> wrote:


I’m very new to Ceph, so if this question makes no sense, I apologize.  
I’m continuing to study, but I thought an answer to this question would help me 
understand Ceph a bit more.

Using cephadm, I set up a cluster.  Cephadm automatically creates a pool for 
Ceph metrics.  It looks like one of my SSD OSDs was allocated for the PG.  I’d 
like to understand how to remap this PG so it’s not using the SSD OSDs.

ceph pg map 1.0
osdmap e205 pg 1.0 (1.0) -> up [28,33,10] acting [28,33,10]

OSD 28 is the SSD.

Is this possible?  Does this make any sense?  I’d like to reserve the SSDs for 
their own pool.

Yes, you can refer to the doc [1]. You need to create a new CRUSH rule that uses 
the HDD device class, and assign that new rule to the pool.

[1]: https://docs.ceph.com/en/latest/rados/operations/crush-map/#device-classes
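
For example, something like this should work (a rough sketch; the rule name 
"replicated_hdd" is arbitrary, and <metrics-pool> stands for whatever pool 
PG 1.0 belongs to on your cluster):

# create a replicated rule that only chooses HDD-class OSDs
ceph osd crush rule create-replicated replicated_hdd default host hdd

# verify the rule exists
ceph osd crush rule ls

# point the pool at the new rule; its PGs should then be remapped off the SSD OSDs
ceph osd pool set <metrics-pool> crush_rule replicated_hdd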

Weiwen Hu

Thank you!
-jeremy
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]