Hi Iban,
On 11/06/2015 10:59 PM, Iban Cabrillo wrote:
> Hi Philipp,
> I see you only have 2 osds, have you checked that your "osd pool get
> size" is 2 and min_size=1?
Yes, the default and the active values are as you describe (size = 2,
min_size = 1).
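(For reference, the check looked roughly like this; "rbd" is just the pool
I happened to query, substitute your own pool name:)

  $ ceph osd pool get rbd size       # "rbd" is only an example pool name
  size: 2
  $ ceph osd pool get rbd min_size
  min_size: 1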
My idea was to start with a really small cluster.
Hi Philipp,
I see you only have 2 osds, have you checked that your "osd pool get size"
is 2 and min_size=1?
Cheers, I
2015-11-06 22:05 GMT+01:00 Philipp Schwaha :
> On 11/06/2015 09:25 PM, Gregory Farnum wrote:
> >
> http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/
> >
On 11/06/2015 09:25 PM, Gregory Farnum wrote:
> http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/
>
> :)
>
Thanks, I tried to follow the advice to "... start that ceph-osd and
things will recover." for the better part of the last two days, but did
not succeed in reviving it.
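(What I tried looked roughly like the following; osd.1 is only a stand-in
for the failed daemon's id, and on sysvinit installs the init script form
applies instead:)

  $ systemctl start ceph-osd@1       # or: /etc/init.d/ceph start osd.1
  $ ceph osd tree                    # check whether the osd comes back up/in
  # as a last resort the troubleshooting page suggests marking an
  # unrecoverable osd as lost (which may mean losing data):
  $ ceph osd lost 1 --yes-i-really-mean-it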
http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/
:)
On Friday, November 6, 2015, Philipp Schwaha wrote:
> Hi,
>
> I have an issue with my (small) ceph cluster after an osd failed.
> ceph -s reports the following:
> cluster 2752438a-a33e-4df4-b9ec-beae32d00aad
>
Hi,
I have an issue with my (small) ceph cluster after an osd failed.
ceph -s reports the following:
     cluster 2752438a-a33e-4df4-b9ec-beae32d00aad
      health HEALTH_WARN
             31 pgs down
             31 pgs peering
             31 pgs stuck inactive
             31 pgs stuck unclean
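(To narrow down which pgs are affected and what they are waiting on, the
usual checks look roughly like this; 0.5 is only an example pg id:)

  $ ceph health detail               # lists the down/peering pgs and the osds involved
  $ ceph pg dump_stuck inactive      # stuck pgs with their acting osd sets
  $ ceph pg 0.5 query                # the recovery_state section shows what the pg is blocked on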