On Tue, Mar 31, 2015 at 7:50 AM, Quentin Hartman
qhart...@direwolfdigital.com wrote:
I'm working on redeploying a 14-node cluster. I'm running Giant 0.87.1. Last
Friday I got everything deployed and all was working well, and I set noout
and shut all the OSD nodes down over the weekend.
Thanks for the extra info Gregory. I did not also set nodown.
I expect that I will be very rarely shutting everything down in the normal
course of things, but it has come up a couple times when having to do some
physical re-organizing of racks. Little irritants like this aren't a big deal if
On Tue, Mar 31, 2015 at 2:05 PM, Gregory Farnum g...@gregs42.com wrote:
GitHub pull requests. :)
Ah, well that's easy:
https://github.com/ceph/ceph/pull/4237
QH
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
On Tue, Mar 31, 2015 at 3:05 PM, Gregory Farnum g...@gregs42.com wrote:
On Tue, Mar 31, 2015 at 12:56 PM, Quentin Hartman
qhart...@direwolfdigital.com wrote:
My understanding is that the right method to take an entire cluster
offline is to set noout and then shut everything down. Is there a
better way?
That's basically right, but you also want to set nodown before shutting
down, so the OSDs don't mark each other down as everything comes back up.
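For reference, the procedure being discussed is just a pair of cluster-wide flags set around the maintenance window. A minimal sketch using the standard `ceph` CLI (how you actually power the nodes off and on depends on your environment, so that step is left as a comment):

```shell
# Before taking the cluster down: prevent OSDs from being marked out
# (which would trigger rebalancing) or down (which would trigger
# peering churn as nodes disappear one by one).
ceph osd set noout
ceph osd set nodown

# ... shut down / power off the OSD nodes, do the maintenance,
#     then bring everything back up ...

# Once 'ceph -s' shows all OSDs up again, clear the flags so
# normal failure handling resumes.
ceph osd unset nodown
ceph osd unset noout
```

Note that leaving nodown set during normal operation is dangerous, since a genuinely dead OSD would never be marked down; the flags are only meant to cover the maintenance window.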
I'm working on redeploying a 14-node cluster. I'm running Giant 0.87.1.
Last Friday I got everything deployed and all was working well, and I set
noout and shut all the OSD nodes down over the weekend. Yesterday when I
spun it back up, the OSDs were behaving very strangely, incorrectly marking
each other down.
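When OSDs are flapping like this after a restart, it helps to confirm which cluster flags were actually in force and watch who is marking whom down. A quick check, again assuming the standard `ceph` CLI:

```shell
# Show the cluster-wide OSD flags currently set (e.g. noout,nodown).
ceph osd dump | grep flags

# Overall cluster state; flapping shows up as OSDs repeatedly
# counted down in the health summary.
ceph -s

# Watch cluster events live to see which OSDs are being marked
# down, and when.
ceph -w
```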
On Tue, Mar 31, 2015 at 12:56 PM, Quentin Hartman
qhart...@direwolfdigital.com wrote:
Thanks for the extra info Gregory. I did not also set nodown.
I expect that I will be very rarely shutting everything down in the normal
course of things, but it has come up a couple times when having to do
some physical re-organizing of racks.