My CRUSH map is already set up and the rules exist for the various roots.
I just tried altering the CRUSH rule on a test pool and the data migrated
successfully... I didn't know you could do that!
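
For the record, this is roughly what I ran (the pool name and rule id
below are illustrative; on this release the pool setting is crush_ruleset
with the rule's numeric id):

    # dump the compiled CRUSH map and decompile it to inspect the rules
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # point the test pool at the cold-storage rule and let it rebalance
    ceph osd pool set test-pool crush_ruleset 2

    # watch the backfill progress
    ceph -s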

Thanks, Maxime

On 16 August 2016 at 12:13, Maxime Guyot <[email protected]> wrote:

> Hi Simon,
>
>
>
> If everything is in the same Ceph cluster and you want to move the whole
> “.rgw.buckets” pool (I assume your RBD traffic is targeted at a “data” or
> “rbd” pool) onto your cold storage OSDs, maybe you could edit the CRUSH
> map; then it’s just a matter of rebalancing.
>
> You can check the ssd/platter example in the docs:
> http://docs.ceph.com/docs/master/rados/operations/crush-map/ or this
> article detailing different maps:
> http://cephnotes.ksperis.com/blog/2015/02/02/crushmap-example-of-a-hierarchical-cluster-map
>
>
>
> Cheers,
>
> Maxime
>
> From: ceph-users <[email protected]> on behalf of Simon Murray
> <[email protected]>
> Date: Tuesday 16 August 2016 12:25
> To: "[email protected]" <[email protected]>
> Subject: [ceph-users] rados cppool slooooooowness
>
>
>
> Morning guys,
>
> I've got about 8 million objects sat in .rgw.buckets that want moving out
> of the way of OpenStack RBD traffic onto their own (admittedly small) cold
> storage pool on separate OSDs.
>
> I attempted this over the weekend during a 12h scheduled downtime;
> however, my estimates had the copy completing in a rather
> un-customer-friendly (think no backups...) 7 days.
>
> Has anyone had any experience doing this quicker? Any obvious reasons why
> I can't hack do_copy_pool() to spawn a bunch of threads and bang this off
> in a few hours?
>
> Cheers
>
> Si
>
>

-- 
DataCentred Limited registered in England and Wales no. 05611763