Thanks for your suggestions, but I managed it without removing OSDs.
Coming back to the office today I found ceph still in error state, but
the number of inconsistent PGs seemed to be stable at 22. So I started
all over with the manual repair (grepped log files for PG, searched
the respective ...
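
For the list, the per-PG routine I'm running looks roughly like this
(<pgid> is a placeholder, default log paths assumed; note that rados
list-inconsistent-obj only exists on Jewel and later):

  # list the PGs currently flagged inconsistent
  ceph health detail | grep inconsistent

  # find the broken object for a given PG in the OSD logs
  grep <pgid> /var/log/ceph/ceph-osd.*.log

  # on Jewel and later, ask the cluster directly instead
  rados list-inconsistent-obj <pgid> --format=json-pretty

  # after moving the bad copy out of the way as described in [1]
  ceph pg repair <pgid>
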
> On 26 Sep 2016, at 22:44, Eugen Block wrote:
>
> And the number of scrub errors is increasing, although I started with more
> than 400 scrub errors.
> What I have tried is to manually repair single PGs as described in [1]. But
> some of the broken PGs have no entries in the log file so I don't have
> anything to look at.
> In case there is one object in one OSD but is missing in the other, how do I
Please try:

  ceph pg repair <pgid>

Most of the time this will be enough. Good luck!
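
If many PGs are inconsistent, a small loop helps; this assumes the
'ceph health detail' output format of Jewel, one line per PG starting
with "pg":

  for pg in $(ceph health detail | awk '$1 == "pg" && /inconsistent/ {print $2}'); do
      ceph pg repair "$pg"
  done
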
> On 26 Sep 2016, at 22:44, Eugen Block wrote:
>
> (Sorry, sometimes I use the wrong shortcuts too quickly)
>
> Hi experts,
>
> I need your help. I have a running cluster with 19 OSDs and 3 MONs. I created
> a separate LVM for /var/lib/ceph on one of the nodes. [...]
(Sorry, sometimes I use the wrong shortcuts too quickly)

Hi experts,

I need your help. I have a running cluster with 19 OSDs and 3 MONs. I
created a separate LVM for /var/lib/ceph on one of the nodes. I
stopped the mon service on that node, rsynced the content to the newly
created LVM and re...
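
In shell terms the move looked roughly like this (the LVM device,
mount point and mon name are placeholders, and I'm assuming a systemd
setup):

  # stop the monitor before touching its data
  systemctl stop ceph-mon@<mon-name>

  # copy the current state onto the new LVM volume
  mount /dev/<vg>/<lv> /mnt/newceph
  rsync -aX /var/lib/ceph/ /mnt/newceph/

  # mount the new volume in place and restart the mon
  umount /mnt/newceph
  mount /dev/<vg>/<lv> /var/lib/ceph
  systemctl start ceph-mon@<mon-name>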