Hi all,
I have a Ceph cluster + RGW. Now I have a problem with one of the OSDs: it is down. I checked the Ceph status and see this information:
[root@node-1 ceph-0]# ceph -s
cluster fc8c3ecc-ccb8-4065-876c-dc9fc992d62d
health HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
Hi!
I had a very similar issue a few days ago.
For me it wasn't too much of a problem since the cluster was new and held no data, so I could force-recreate the PGs. I really hope that in your case it won't be necessary to do the same thing.
As a first step, try to reduce the min_size from 2 to 1.
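If it helps, min_size can be changed at runtime with ceph osd pool set (a sketch on my part, assuming the affected pool is .rgw.buckets, as the 'incomplete' log line elsewhere in this thread suggests):

    ceph osd pool set .rgw.buckets min_size 1

Once recovery has finished, the same command can raise it back to 2.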
On 01/12/2014 15:09, Butkeev Stas wrote:
> pg 13.2 is incomplete, acting [1,3] (reducing pool .rgw.buckets min_size from 2 may help; search ceph.com/docs for 'incomplete')
The answer is in the logs: your .rgw.buckets pool is using min_size = 2, so it doesn't have enough valid PG replicas to go active.
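For reference, the pool's current replication settings can be checked like this (pool name taken from the log line above):

    ceph osd pool get .rgw.buckets size
    ceph osd pool get .rgw.buckets min_size

With size = 2 and min_size = 2, losing a single copy leaves a PG with fewer valid replicas than min_size, so it stays incomplete.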
On 01/12/2014 17:08, Lionel Bouton wrote:
> I may be wrong here (I'm surprised you only have 4 incomplete pgs, I'd
> expect ~1/3rd of your pgs to be incomplete given your ceph osd tree
> output) but reducing min_size to 1 should be harmless and should
> unfreeze the recovering process.
Ignore this
Thank you, Lionel,
Indeed, I had forgotten about size/min_size. I have set min_size to 1 and my cluster is up now. I have removed the crashed OSD and set size to 3 and min_size to 2.
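For anyone hitting the same problem, the steps were roughly as follows (a sketch; osd.0 stands in for the crashed OSD's id, and the pool name is taken from the log output above):

    ceph osd out 0                # mark the dead OSD out
    ceph osd crush remove osd.0   # remove it from the CRUSH map
    ceph auth del osd.0           # delete its authentication key
    ceph osd rm 0                 # remove it from the OSD map
    ceph osd pool set .rgw.buckets size 3
    ceph osd pool set .rgw.buckets min_size 2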
---
With regards,
Stanislav
01.12.2014, 19:15, Lionel Bouton lionel-subscript...@bouton.name: