On Sun, 30 Dec 2018, David C wrote:
> Hi All
> 
> I'm trying to set the existing pools in a Luminous cluster to use the hdd
> device-class but without moving data around. If I just create a new rule
> using the hdd class and set my pools to use that new rule it will cause a
> huge amount of data movement even though the pgs are all already on HDDs.
> 
> There is a thread on ceph-large [1] which appears to have the solution but
> I can't get my head around what I need to do. I'm not too clear on which
> IDs I need to swap. Could someone give me some pointers on this please?
> 
> [1]
> http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-April/000109.html
> 

This is a new feature in crushtool in master that will be included in 
nautilus. You can either wait, or build nautilus from source (or grab a 
recent package from shaman.ceph.com) and use the crushtool CLI to 
update your crushmap.

See 
http://docs.ceph.com/docs/master/rados/operations/crush-map-edits/#crush-reclassify
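
Roughly, the workflow looks like the sketch below (check the doc above 
for the exact flags; this assumes your tree is rooted at the usual 
'default' root and that all of the OSDs really are HDDs -- the file 
names are just examples):

  # dump the current crushmap
  ceph osd getcrushmap -o original
  # rewrite it so the existing root is mapped onto the hdd device
  # class, keeping the same IDs so PGs don't move
  crushtool -i original --reclassify \
      --reclassify-root default hdd \
      -o adjusted
  # verify that (almost) no mappings change before injecting it
  crushtool -i original --compare adjusted
  ceph osd setcrushmap -i adjusted

The --compare step is the important one: it tells you how many PG 
mappings would change before you commit to the new map.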

s