I have one placement group that is stuck inconsistent.
$ ceph health detail
HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
pg 8.e82 is active+clean+inconsistent, acting [15,43]
1 scrub errors
I tried to run "ceph pg repair 8.e82" but it will not repair it. In the
OSD log with debugging
The cluster version is 0.94.3.
On 2015-10-17 2:25 am, Chris Taylor wrote:
> I have one placement group that is stuck inconsistent.
>
> $ ceph health detail
> HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
> pg 8.e82 is active+clean+inconsistent, acting [15,43]
> 1 scrub errors
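For a thread like the one above, the first step is usually pulling the inconsistent PG ids out of "ceph health detail" so each can be handed to "ceph pg repair". A minimal sketch of that parsing in Python; the `inconsistent_pgs` helper name is mine, and the sample text is the output quoted above:

```python
import re

# Hypothetical helper (not from the thread): extract the ids of PGs that
# `ceph health detail` reports as inconsistent, so they can be fed one at
# a time to `ceph pg repair <pgid>`.
PG_LINE = re.compile(r"^pg (\S+) is .*\binconsistent\b", re.MULTILINE)

def inconsistent_pgs(health_detail: str) -> list:
    """Return the PG ids flagged inconsistent in `ceph health detail` output."""
    return PG_LINE.findall(health_detail)

sample = """\
HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
pg 8.e82 is active+clean+inconsistent, acting [15,43]
1 scrub errors
"""

print(inconsistent_pgs(sample))  # ['8.e82']
```

On a healthy cluster the function simply returns an empty list, so it is safe to run from cron and loop over the result.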
Hi,
I have a machine doing rbd backups with the rbd export command, around 10 TB
each day. After some days it gets very slow and the rbd commands appear to do
nothing. If I run "echo 3 > /proc/sys/vm/drop_caches" they immediately start
moving traffic and dumping data again.
What's wrong here?
Kernel
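The symptom described above (everything stalls until caches are dropped) is consistent with the page cache filling up with the 10 TB/day of exported data. Rather than globally dropping all caches, the process writing the backup can advise the kernel that it will not reread what it just wrote. A minimal sketch, assuming Linux and that the export lands in a regular file; the `copy_dropping_cache` name and the chunk size are my own choices, not anything rbd provides:

```python
import os

CHUNK = 8 * 1024 * 1024  # copy in 8 MiB chunks (arbitrary choice)

def copy_dropping_cache(src_path: str, dst_path: str) -> None:
    """Copy src to dst while evicting the written pages from the page cache.

    After each chunk is flushed and fsync'ed, posix_fadvise(POSIX_FADV_DONTNEED)
    asks the kernel to drop those pages, so a multi-TB backup run does not
    push everything else out of memory (Linux-only).
    """
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        offset = 0
        while True:
            buf = src.read(CHUNK)
            if not buf:
                break
            dst.write(buf)
            dst.flush()
            # Pages must be clean (written back) before DONTNEED can evict them.
            os.fsync(dst.fileno())
            os.posix_fadvise(dst.fileno(), offset, len(buf),
                             os.POSIX_FADV_DONTNEED)
            offset += len(buf)
```

The same idea can be applied when piping "rbd export ... -" into a writer process, or externally with tools such as nocache or "dd oflag=direct".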
I think I figured it out: on my install the admin token is broken for v2 auth,
and I needed to use user:password with the admin role instead. That is the more
correct way to do things, but it is conspicuously missing from
http://docs.ceph.com/docs/master/radosgw/keystone/
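For reference, the user/password alternative to the shared admin token looks roughly like this in ceph.conf. The option names come from the radosgw keystone documentation linked above; the section name, host, user, and password values are placeholders, and availability of these options should be checked against your Ceph version:

```ini
[client.radosgw.gateway]
rgw keystone url = http://keystone.example.com:35357
; Service-account credentials instead of the shared admin token:
rgw keystone admin user = ceph-rgw        ; placeholder user name
rgw keystone admin password = secret      ; placeholder password
rgw keystone admin tenant = service
rgw keystone accepted roles = admin, Member
```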