Re: [ceph-users] 1 pgs inconsistent 2 scrub errors

2017-01-26 Thread Eugen Block
Glad I could help! :-) Quoting Mio Vlahović: From: Eugen Block [mailto:ebl...@nde.ag] From what I understand, with a rep size of 2 the cluster can't decide which object is intact if one is broken, so the repair fails. If you had a size of 3, the cluster would see 2 intact objects and repair …

Re: [ceph-users] 1 pgs inconsistent 2 scrub errors

2017-01-26 Thread Mio Vlahović
> From: Eugen Block [mailto:ebl...@nde.ag]
>
> From what I understand, with a rep size of 2 the cluster can't decide
> which object is intact if one is broken, so the repair fails. If you
> had a size of 3, the cluster would see 2 intact objects and repair the
> broken one (I guess). At least we didn't …

Re: [ceph-users] 1 pgs inconsistent 2 scrub errors

2017-01-26 Thread Eugen Block
> Yes, we have replication size of 2 also
From what I understand, with a rep size of 2 the cluster can't decide which object is intact if one is broken, so the repair fails. If you had a size of 3, the cluster would see 2 intact objects and repair the broken one (I guess). At least we didn't …
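
A minimal sketch of the size change mentioned above, assuming a pool named "rbd" as a stand-in for the real pool name; raising size from 2 to 3 triggers backfill while the extra replicas are created, and min_size 2 keeps the pool from accepting I/O with fewer than two copies available:

# ceph osd pool set rbd size 3
# ceph osd pool set rbd min_size 2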

Re: [ceph-users] 1 pgs inconsistent 2 scrub errors

2017-01-26 Thread Mio Vlahović
Hello,

> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Eugen Block
> I had a similar issue recently, where I had a replication size of 2 (I
> changed that to 3 after the recovery).

Yes, we have replication size of 2 also...

> ceph health detail
> HEALTH_ERR 16 pgs inconsistent …
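
One way to confirm which pools are still running with 2 replicas (pool names and numbers are whatever the cluster reports; both commands should show the replicated size and min_size per pool):

# ceph osd pool ls detail
# ceph osd dump | grep 'replicated size'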

Re: [ceph-users] 1 pgs inconsistent 2 scrub errors

2017-01-26 Thread Eugen Block
I had a similar issue recently, where I had a replication size of 2 (I changed that to 3 after the recovery).

ceph health detail
HEALTH_ERR 16 pgs inconsistent; 261 scrub errors
pg 1.bb1 is active+clean+inconsistent, acting [15,5]

zgrep 1.bb1 /var/log/ceph/ceph.log*
[...] cluster [INF] 1.bb1 d…
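
For reference, the usual follow-up on Jewel once the inconsistent PG is known (1.bb1 in the output above) is roughly the following; list-inconsistent-obj reports which object and which OSD produced the scrub error, which helps judge whether a plain repair is safe, and with a replication size of 2 the repair essentially trusts the primary copy, which is the limitation discussed in this thread:

# rados list-inconsistent-obj 1.bb1 --format=json-pretty
# ceph pg deep-scrub 1.bb1
# ceph pg repair 1.bb1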

[ceph-users] 1 pgs inconsistent 2 scrub errors

2017-01-26 Thread Mio Vlahović
Hello,

We have some problems with 1 pg from this morning, this is what we found so far...

# ceph --version
ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)

# ceph -s
    cluster 2bf80721-fceb-4b63-89ee-1a5faa278493
     health HEALTH_ERR
            1 pgs inconsistent …
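
A sketch of the first diagnostic steps for a single inconsistent PG, assuming the default log locations; <id> is a placeholder for the primary OSD of the PG that "ceph health detail" reports:

# ceph health detail
# grep ERR /var/log/ceph/ceph-osd.<id>.log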