On Fri, 9 May 2014, Jeff Bachtel wrote:
> I'm working on http://tracker.ceph.com/issues/8310 , basically by bringing
> osds down and up I've come to a state where on-disk I have pgs, osds seem to
> scan the directories on boot, but the crush map isn't mapping the objects
> properly.
>
> In addition to that ticket, I've got a decompile of my crushmap at
> https://github.com/jeffb-bt/nova_confs/blob/master/crushmap.txt and a raw dump
> (in case anything is missing) at
> https://github.com/jeffb-bt/nova_confs/blob/master/crushmap
>
> # ceph osd tree
> # id  weight  type name            up/down  reweight
> -1    5       root default
> -2    1         host compute1
> 0     1           osd.0            up       0.24
> -3    1         host compute2
> 1     1           osd.1            up       0.26
> -4    3         host compute3
> 3     1           osd.3            up       0.02325
> 2     1           osd.2            up       0.01993
> 5     1           osd.5            up       0.15
The weights on the right should all be set to 1:
ceph osd reweight $OSD 1
or
ceph osd in $OSD
Those reweight values are meant for exceptional cases. In general they
should be 1 ("in") or 0 ("out"), unless you are making small corrections
to placement.
To adjust the relative weights on the disks, you want to adjust the CRUSH
weights on the left (second column):
ceph osd crush reweight $OSD $WEIGHT
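For reference, a concrete sketch of both steps (the OSD ids match the tree
above; the CRUSH weight values are illustrative assumptions, since by
convention the CRUSH weight reflects the disk's capacity in TiB). The
commands are echoed as a dry run; drop the echo to apply them to a live
cluster:

```shell
# Reset the in/out reweight of every OSD back to 1 ("in").
# Echoed as a dry run so nothing touches a cluster.
for osd in 0 1 2 3 5; do
  echo ceph osd reweight osd.$osd 1
done

# Then express relative disk sizes via CRUSH weights instead.
# 0.5 here is a hypothetical value for a ~500 GB disk.
echo ceph osd crush reweight osd.0 0.5
```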
sage
>
> Does anyone see where the error is in the crushmap that I can fix it? I don't
> have very many pgs/pools, if I need to add extra mapping to the crushmap to
> get my pgs visible I will do so, I just don't have any examples of how.
>
> Thanks for any help,
>
> Jeff
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com