Hi Greg
On Thu, Sep 06, 2012 at 11:19:12AM -0700, Gregory Farnum wrote:
> You always need to end up with "devices" (the OSDs, generally) and
> then emit those from your CRUSH rule. You can do so hierarchically:
> rule data {
> ruleset 0
> type replicated
> min_size 1
> max_size 10
> step take default
> step choose firstn 0 type host
> step choose firstn 1 type osd
> step emit
> }
> In this case (with n being your replication count), this rule chooses
> n hosts, and then chooses 1 OSD from each chosen host.
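(For context, a rule like this assumes the compiled map declares a matching hierarchy: host buckets containing the OSD devices, under whatever bucket "step take default" names. A minimal sketch, with made-up bucket names, IDs and weights:

    host node1 {
        id -2
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 1.000
        item osd.1 weight 1.000
    }
    root default {
        id -1
        alg straw
        hash 0  # rjenkins1
        item node1 weight 2.000
    }

Here "step take default" starts at the top bucket, the host step picks among node1 and its peers, and the osd step descends to the devices.)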
>
> You can also use "chooseleaf", which is a bit more robust in the
> presence of failed OSDs:
> rule data {
> ruleset 0
> type replicated
> min_size 1
> max_size 10
> step take default
> step chooseleaf firstn 0 type host
> step emit
> }
> This rule will choose n hosts and an OSD from each chosen host, and if
> it fails on any host then it will restart with a different host (the
> previous rule would stick with the chosen hosts, and so it can't cope
> if, e.g., an entire host's OSDs are down).
> -Greg
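Either behaviour can be sanity-checked offline with crushtool's test mode. A sketch, assuming the compiled map is in crush.bin (the file name is just a placeholder) and the rule is in ruleset 0:

    crushtool -i crush.bin --test --rule 0 --num-rep 3 --show-mappings

    # reweight osd.2 to 0 to simulate it being out, then compare mappings
    crushtool -i crush.bin --test --rule 0 --num-rep 3 --weight 2 0 --show-mappings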
That explanation has certainly cleared things up for me; based on the documentation that I could find on the website and the old ceph wiki, I had not realised that a rule needs to end by emitting "devices".

Also, the "ceph osd setcrushmap..." command doesn't show up when "ceph --help" is run in the 0.51 release, although it is documented on the wiki as far as I recall. It would be really nice if the applications listed all of their available commands; it would make experimenting much nicer and more fun.
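For reference, the full round-trip for pulling, editing, and injecting a crushmap looks like this (file names are placeholders):

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # edit crush.txt (e.g. the rules above), then recompile and inject it
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new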
Thanks,
Jimmy

--
Jimmy Tang
Trinity Centre for High Performance Computing,
Lloyd Building, Trinity College Dublin, Dublin 2, Ireland.
http://www.tchpc.tcd.ie/
