So it seems that there is more than one PG with problems and something
abnormal occurred to the cluster. Taking for granted that your
underlying storage/filesystems/networking work as expected, you should
check the timestamps/md5sums/attrs of the PGs' objects across the
cluster and if you conclude that
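Comparing object checksums across replicas can be scripted roughly like
this. A hedged sketch only: the function name compare_pg_dirs is made up
for this example, and it assumes you have two local copies of the same
PG directory (e.g. mounted or rsync'd from each OSD host):

```shell
# Sketch: list objects whose md5sums differ between two copies of a PG
# directory. Empty output means the copies match.
compare_pg_dirs() {
  diff <(cd "$1" && find . -type f -exec md5sum {} + | sort -k2) \
       <(cd "$2" && find . -type f -exec md5sum {} + | sort -k2)
}
```

Timestamps and xattrs would need a similar pass with stat and getfattr.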
I added debug journal = 20 and got some new lines in the log, which I
have appended to the end of this email.
Can any of you make something out of them?
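For reference, a debug level like this can also be raised at runtime
without a restart; a sketch, where osd.0 is a placeholder id:

```shell
# Inject the journal debug level into a running OSD (placeholder id 0).
ceph tell osd.0 injectargs '--debug-journal 20'
# For a daemon that will not start, set it in ceph.conf under [osd]:
#   debug journal = 20
```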
kind regards
Ronny Aasen
On 18.09.2016 18:59, Kostis Fardelas wrote:
If you are aware of the problematic PGs and they are exportable, then
ceph-objectstore-tool is a viable solution. If not, then running gdb
and/or raising the osd debug log level may prove useful (to understand
more about the problem, or to collect info to ask for help on ceph-devel).
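The export/import path could look roughly like this. A sketch under
assumptions: the OSD ids (12, 34), data/journal paths, and the PG id
0.2a are placeholders, and both OSD daemons must be stopped, since the
tool needs exclusive access to the object store:

```shell
# Export a PG from a stopped OSD's store to a file.
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
    --journal-path /var/lib/ceph/osd/ceph-12/journal \
    --pgid 0.2a --op export --file /tmp/pg.0.2a.export

# Import that PG into another stopped OSD's store.
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-34 \
    --journal-path /var/lib/ceph/osd/ceph-34/journal \
    --op import --file /tmp/pg.0.2a.export
```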
On 16-09-13 11:13, Ronny Aasen wrote:
I suspect this must be a difficult question, since there have been no
replies on IRC or the mailing list.
Assuming it's impossible to get these OSDs running again:
is there a way to recover objects from the disks? They are mounted and
the data is readable. I have PGs down since they want to probe th