Hi,

here is the config: http://pastebin.com/2JzABSYt
ceph osd dump: http://pastebin.com/GSCGKL1k
ceph osd tree: http://pastebin.com/VSgPFRYv

As far as I can tell they are not mapped right.

osdmap e133 pool 'rbd' (2) object '2.31a' -> pg 2.f3caaf00 (2.300) -> up
[13,23] acting [13,23]
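That line is from the object-location procedure John linked below; the mapping can be re-checked at any time with, for example (using the object name from above):

    ceph osd map rbd 2.31a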
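For clarity, the placement I am trying to get would look roughly like the rule below. This is only a sketch, not my actual map (that is in the pastebin above); the bucket name "default", the rule name and the ruleset number are placeholders:

    rule rbd_rack {
            ruleset 2
            type replicated
            min_size 1
            max_size 10
            step take default
            # pick one OSD under each of the (pool size) racks; with
            # chooseleaf, if a host in a rack is down, CRUSH can fall
            # back to another host in the same rack
            step chooseleaf firstn 0 type rack
            step emit
    }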

-martin

On 28.03.2013 01:09, John Wilkins wrote:
> We need a bit more information. If you can do: "ceph osd dump", "ceph
> osd tree", and paste your ceph conf, we might get a bit further. The
> CRUSH hierarchy looks okay. I can't see the replica size from this
> though.
> 
> Have you followed this procedure to see if your object is getting
> remapped? 
> http://ceph.com/docs/master/rados/operations/monitoring-osd-pg/#finding-an-object-location
> 
> On Thu, Mar 21, 2013 at 12:02 PM, Martin Mailand <[email protected]> wrote:
>> Hi,
>>
>> I want to change my crushmap to reflect my setup: I have two racks with
>> 3 hosts each. For the rbd pool I want to use a replication size of 2.
>> The failure domain should be the rack, so each replica should end up in
>> a different rack. That works so far.
>> But if I shut down a host, the cluster stays degraded; I want the now
>> missing replicas to be re-replicated to the two remaining hosts in that
>> rack.
>>
>> Here is crushmap.
>> http://pastebin.com/UaB6LfKs
>>
>> Any idea what I did wrong?
>>
>> -martin
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 
> 