If you have OSDs that are close to full, you may be hitting bug #9626.
I pushed a branch based on v0.80.7 with the fix: wip-v0.80.7-9626.
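
If you want to check, something like

  $ ceph health detail | grep -i full

should list any OSDs over the nearfull/full thresholds (the
mon_osd_nearfull_ratio / mon_osd_full_ratio settings).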
-Sam

On Mon, Nov 3, 2014 at 2:09 PM, Chad Seys <[email protected]> wrote:
>>
>> No, it is a change, I just want to make sure I understand the
>> scenario. So you're reducing CRUSH weights on full OSDs, and then
>> *other* OSDs are crashing on these bad state machine events?
>
> That is right.  The other OSDs shut down some time later.  (Not immediately.)
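>
> (For the record, the weight changes are along the lines of
>
>   $ ceph osd crush reweight osd.<id> <weight>
>
> run against the full OSDs; osd.<id> and <weight> are just placeholders
> here.)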
>
> I haven't really tested whether the OSDs will stay up if there are no
> manipulations.  I need to wait a while for the PGs to settle, which I
> haven't done yet.
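>
> (To see when things have settled, I'll just watch
>
>   $ ceph -s
>
> and wait until all PGs report active+clean before drawing any
> conclusions.)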
>
>>
>> >> I don't think it should matter, although I confess I'm not sure how
>> >> much monitor load the scrubbing adds. (It's a monitor check; doesn't
>> >> hit the OSDs at all.)
>> >
>> > $ ceph scrub
>> > No output.
>>
>> Oh, yeah, I think that output goes to the central log at a later time.
>> (It will show up in ceph -w if you're watching, or it can be accessed
>> from the monitor nodes; in their data directory, I think?)
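>>
>> If you want to dig for it after the fact, something like
>>
>>   $ grep scrub /var/log/ceph/ceph.log
>>
>> on one of the monitor hosts should turn it up (that's the default
>> cluster log location; adjust if you've moved it).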
>
> OK.  Will running ceph scrub again produce the same output?  If so, I'll
> run it again and watch for the output in ceph -w once the migrations have
> stopped.
>
> Thanks!
> Chad.
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com