On Thu, Apr 23, 2015 at 11:18 AM, Jake Grimmett <[email protected]> wrote:
> Dear All,
>
> I have multiple disk types (15k & 7k) on each ceph node, which I assign to
> different pools, but have a problem as whenever I reboot a node, the OSD's
> move in the CRUSH map.

I just found out that you can customize the way OSDs are automatically
added to the crushmap using a hook script.

I have in ceph.conf:

    osd crush location hook = /usr/local/sbin/sc-ceph-crush-location

This script returns the correct bucket and root for the given OSD.
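For reference, a minimal sketch of such a hook (the path, the fast-disk
list file, and the bucket names are assumptions — adapt them to your
own 15k/7k layout). Ceph invokes the hook with --cluster/--id/--type
arguments and expects a CRUSH location as key=value pairs on stdout:

```shell
#!/bin/bash
# Hypothetical sketch of /usr/local/sbin/sc-ceph-crush-location.
# Ceph calls it as:
#   sc-ceph-crush-location --cluster <name> --id <osd-id> --type osd
# and reads the CRUSH location from stdout, e.g. "root=sas15k host=node1-sas15k".

crush_location() {
    local id=""
    while [ $# -gt 0 ]; do
        case "$1" in
            --id) id="$2"; shift 2 ;;      # OSD id passed in by Ceph
            *)    shift ;;                 # ignore --cluster / --type here
        esac
    done

    # Assumption: a local file lists the OSD ids that sit on 15k disks;
    # replace this test with whatever identifies your fast devices.
    local root="sata7k"
    if grep -qx "$id" /etc/ceph/fast-osds.txt 2>/dev/null; then
        root="sas15k"
    fi

    # One host bucket per disk class keeps the two pools separated in CRUSH.
    echo "root=$root host=$(hostname -s)-$root"
}

crush_location "$@"
```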

I also have

    osd crush update on start = true

which should be the default.

This way, whenever an OSD starts, it's automatically added to the correct bucket.

ref: http://ceph.com/docs/master/rados/operations/crush-map/#crush-location

.a.

-- 
[email protected]
[email protected]                     +41 (0)44 635 42 22
S3IT: Service and Support for Science IT   http://www.s3it.uzh.ch/
University of Zurich
Winterthurerstrasse 190
CH-8057 Zurich Switzerland
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
