We need a bit more information. If you can run "ceph osd dump" and
"ceph osd tree", and paste your ceph.conf, we might get a bit further.
The CRUSH hierarchy looks okay, but I can't see the pool's replica
size from the crushmap alone.
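
One thing worth checking: to get re-replication within the surviving
rack, the rule needs to pick racks first and then a host within each
rack, rather than descending straight to hosts. A sketch of the
pattern I mean (the rule name, ruleset number, and "default" root are
guesses — adjust them to match your map):

    rule rbd {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 0 type rack
        step chooseleaf firstn 1 type host
        step emit
    }

With "choose ... type rack" followed by "chooseleaf ... type host",
CRUSH can fall back to another host in the same rack when one host
goes down, instead of leaving the PG degraded.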

Have you followed this procedure to see if your object is getting
remapped? 
http://ceph.com/docs/master/rados/operations/monitoring-osd-pg/#finding-an-object-location
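
For example, with a throwaway test object (pool and object names here
are just placeholders):

    rados -p rbd put test-object /tmp/testfile
    ceph osd map rbd test-object

The second command prints the object's PG and its acting OSD set. If
you run it while the host is down (and its OSDs are marked out), the
acting set should show whether the replicas were remapped to the
remaining hosts in that rack.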

On Thu, Mar 21, 2013 at 12:02 PM, Martin Mailand <[email protected]> wrote:
> Hi,
>
> I want to change my crushmap to reflect my setup: I have two racks
> with three hosts each. For the rbd pool I want to use a replication
> size of 2. The failure domain should be the rack, so there should be
> one replica in each rack. That works so far.
> But if I shut down a host, the cluster stays degraded; I want the
> now-missing replicas to be re-replicated to the two remaining hosts
> in that rack.
>
> Here is my crushmap:
> http://pastebin.com/UaB6LfKs
>
> Any idea what I did wrong?
>
> -martin
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
John Wilkins
Senior Technical Writer
Inktank
[email protected]
(415) 425-9599
http://inktank.com
