If this is really the last copy of important data: consider making a
full raw clone of the disk before running any ceph-objectstore-tool
commands on it, and consider getting professional help if you are not
familiar with the inner workings of Ceph.
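(For the raw clone, something along these lines, with the OSD stopped;
/dev/sdX and the image path are placeholders:)

    # bit-for-bit copy of the whole OSD disk; conv=noerror keeps going past
    # read errors, sync pads bad blocks so offsets stay aligned
    dd if=/dev/sdX of=/mnt/backup/osd-disk.img bs=1M conv=sync,noerror status=progress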
That being said, it's basically just:
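(A sketch of that sequence, assuming the broken OSD is osd.0 mounted at
/var/lib/ceph/osd/ceph-0, the replacement is osd.1, and /mnt/backup has
enough room; adjust IDs and paths to your cluster, and keep both OSD
daemons stopped while the tool runs:)

    # list the PGs held by the broken OSD
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list-pgs

    # export every PG to a file
    for pg in $(ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list-pgs); do
        ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
            --pgid "$pg" --op export --file "/mnt/backup/$pg.export"
    done

    # import the exports into the new OSD
    for f in /mnt/backup/*.export; do
        ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-1 --op import --file "$f"
    done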
Yeah, that may be the way.
Preferably, disable compaction during this procedure, though.
To do that, please set
bluestore rocksdb options = "disable_auto_compactions=true"
in the [osd] section of ceph.conf.
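(i.e. the fragment in ceph.conf on that node would look like this; it
takes effect when the OSD is next started:)

    [osd]
    bluestore rocksdb options = "disable_auto_compactions=true"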
Thanks,
Igor
On 11/29/2018 4:54 PM, Paul Emmerich wrote:
does objectstore-tool still work? If yes:
export all the PGs on the OSD with objectstore-tool and import them
into a new OSD.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89
'ceph-bluestore-tool repair' checks and repairs BlueStore metadata
consistency, not RocksDB's.
It looks like you're observing a CRC mismatch during DB compaction,
which is probably not triggered during the repair.
The good point is that it looks like BlueStore's metadata are consistent,
and hence the object data itself is probably still intact.
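(To make the distinction concrete, with the OSD stopped and
/var/lib/ceph/osd/ceph-0 as a placeholder path: the first command below
is the metadata repair that reports OK, while the second forces a
RocksDB compaction and should therefore reproduce the checksum error:)

    # BlueStore metadata check/repair only; this does not compact RocksDB
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0

    # open the OSD's embedded RocksDB and trigger a manual compaction
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 compact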
The only strange thing is that ceph-bluestore-tool says that the repair
was done, no errors were found, and all is ok.
I wonder what that tool really does.
Mario
On Thu, Nov 29, 2018 at 11:03 AM Wido den Hollander wrote:
On 11/29/18 10:45 AM, Mario Giammarco wrote:
> I have only that copy; it is a showroom system, but someone put a
> production VM on it.
>
I have a feeling this won't be easy to fix or actually fixable:
- Compaction error: Corruption: block checksum mismatch
- submit_transaction error:
I have only that copy; it is a showroom system, but someone put a
production VM on it.
On Thu, Nov 29, 2018 at 10:43 AM Wido den Hollander wrote:
On 11/29/18 10:28 AM, Mario Giammarco wrote:
> Hello,
> I have a Ceph installation in a Proxmox cluster.
> Due to a temporary hardware glitch, I now get this error on OSD startup:
>
> -6> 2018-11-26 18:02:33.179327 7fa1d784be00 0 osd.0 1033 crush map
> has features 1009089991638532096, adjusting msgr requires for osds
Hello,
I have a Ceph installation in a Proxmox cluster.
Due to a temporary hardware glitch, I now get this error on OSD startup:
-6> 2018-11-26 18:02:33.179327 7fa1d784be00 0 osd.0 1033 crush map has
features 1009089991638532096, adjusting msgr requires for osds
-5> 2018-11-26