Hi,
October 15 2014 7:05 PM, "Chad Seys" <[email protected]> wrote:
> Hi Dan,
> I'm using Emperor (0.72). Though I would think CRUSH maps haven't changed
> that much between versions?
I'm using dumpling, with the hashpspool flag enabled, which I believe could
have been the only difference.
>> That sounds bizarre to me, and I can't reproduce it. I added an osd (which
>> was previously not in the crush map) to a fake host=test:
>>
>> ceph osd crush create-or-move osd.52 1.0 rack=RJ45 host=test
>
> I have a flatter failure domain with only servers/drives. It looks like you
> would have at least rack/server/drive. Would that make the difference?
Could be. Now I just tried adding testrack/testhost, then removing the osd. So I
have:
-30 0 rack testrack
-23 0 host testhost
Then I removed testhost and testrack, and there was still no data movement
afterwards. Our crush rule is
rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type rack
        step emit
}
in case that makes a difference.
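For completeness, the sequence I'm describing was roughly the following (the
osd id carries over from the earlier example; the bucket names are just the
test names I used):

```shell
# Add an OSD under a brand-new rack/host pair:
ceph osd crush create-or-move osd.52 1.0 rack=testrack host=testhost

# Take the osd back out, then remove the now-empty (weight 0) buckets:
ceph osd crush remove osd.52
ceph osd crush remove testhost
ceph osd crush remove testrack

# Check that placements are unchanged:
ceph osd tree
```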
>
>> As far as I've experienced, an entry in the crush map with a _crush_ weight
>> of zero is equivalent to that entry not being in the map. (In fact, I use
>> this to drain OSDs ... I just ceph osd crush reweight osd.X 0, then
>> sometime later I crush rm the osd, without incurring any secondary data
>> movement).
>
> Is the crush weight the second column of ceph osd tree ?
Yes, that's the one I'm talking about. The reweight (0-1 value in the rightmost
column) is another thing altogether.
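In command form, the drain I described goes something like this (osd.X is a
placeholder for the real id):

```shell
# Set the CRUSH weight to zero; this triggers the one-time data
# movement off the OSD:
ceph osd crush reweight osd.X 0

# ...wait for backfill to finish, then remove it from the crush map --
# since its crush weight is already 0, this causes no further movement:
ceph osd crush remove osd.X
ceph osd rm osd.X
ceph auth del osd.X
```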
Cheers, Dan
> I'll have to pay attention to that next time I drain a node.
>
> Thanks for investigating!
> Chad.
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com