Re: [ceph-users] osd fails to start, rbd hangs

2015-11-07 Thread Philipp Schwaha
Hi Iban,

On 11/06/2015 10:59 PM, Iban Cabrillo wrote:
> Hi Philipp,
> I see you only have 2 osds, have you checked that your "osd pool get
> size" is 2, and min_size=1?

yes, the default and the active values are as you describe (size = 2,
min_size = 1). My idea was to start with a really sma
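For reference, the check Iban suggests can be run per pool roughly like
this; the pool name "rbd" is only a placeholder, substitute your own:

    ceph osd pool get rbd size        # should report: size: 2
    ceph osd pool get rbd min_size    # should report: min_size: 1

    # adjust if needed, e.g. so I/O can continue with a single replica
    ceph osd pool set rbd min_size 1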

Re: [ceph-users] osd fails to start, rbd hangs

2015-11-06 Thread Iban Cabrillo
Hi Philipp,

I see you only have 2 osds, have you checked that your "osd pool get
size" is 2, and min_size=1?

Cheers, I

2015-11-06 22:05 GMT+01:00 Philipp Schwaha :
> On 11/06/2015 09:25 PM, Gregory Farnum wrote:
> >
> > http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/
> >

Re: [ceph-users] osd fails to start, rbd hangs

2015-11-06 Thread Philipp Schwaha
On 11/06/2015 09:25 PM, Gregory Farnum wrote:
> http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/
>
> :)

Thanks, I tried to follow the advice to "... start that ceph-osd and
things will recover." for the better part of the last two days, but did
not succeed in reviving
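For context, reviving a failed OSD usually comes down to something like
the following; the OSD id 0 and the init commands are assumptions, not
taken from the thread (deployments of this era may use sysvinit,
upstart, or systemd):

    # try to start the failed daemon (osd.0 is an example id)
    sudo systemctl start ceph-osd@0      # on systemd hosts
    sudo /etc/init.d/ceph start osd.0    # on sysvinit hosts

    # if it crashes again, the reason is normally in its log
    tail -n 100 /var/log/ceph/ceph-osd.0.log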

Re: [ceph-users] osd fails to start, rbd hangs

2015-11-06 Thread Gregory Farnum
http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/

:)

On Friday, November 6, 2015, Philipp Schwaha wrote:
> Hi,
>
> I have an issue with my (small) ceph cluster after an osd failed.
> ceph -s reports the following:
>     cluster 2752438a-a33e-4df4-b9ec-beae32d00aad
>
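In short, that page walks through finding the stuck PGs and querying one
of them to see what it is blocked on; a minimal sketch (the pg id 1.5 is
a placeholder, not one from this cluster):

    # list the PGs that are stuck
    ceph pg dump_stuck inactive
    ceph pg dump_stuck unclean

    # ask a specific PG why it is not active (1.5 is an example id)
    ceph pg 1.5 query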

[ceph-users] osd fails to start, rbd hangs

2015-11-06 Thread Philipp Schwaha
Hi,

I have an issue with my (small) ceph cluster after an osd failed.
ceph -s reports the following:
    cluster 2752438a-a33e-4df4-b9ec-beae32d00aad
     health HEALTH_WARN
            31 pgs down
            31 pgs peering
            31 pgs stuck inactive
            31 pgs stuck unclean
     m
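When a single OSD failure leaves PGs down and peering like this, a first
pass at diagnosis typically looks like the following (a generic sketch,
not commands from the thread):

    ceph health detail   # names the down PGs and the OSD(s) blocking them
    ceph osd tree        # shows which OSDs are marked up/down and where they live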