So - here is the feedback. After a long night...
The plain copying did not help... it then complains about the snapshots
of another VM (also with old snapshots).
I remembered a thread I read saying that the problem could be solved by
converting back to FileStore, because you then have access to the data
Alright, good luck!
The results would be interesting. :-)
Quoting Karsten Becker:
Hi Eugen,
yes, I also see the rbd_data prefix changing. This may have been caused
by me deleting snapshots and trying to move VMs over to another pool
which is not affected.
Currently I'm trying to move the Finance VM, a very old VM that was
created as one of the first and is still
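For reference, moving an image off the affected pool could look like the
sketch below. Note that "rbd cp" copies only the image head, not its
snapshots; the target pool and image names here are only examples.

    # Sketch only - "otherPool" and "finance-disk" are example names.
    # "rbd cp" copies the head of the image but NOT its snapshots.
    rbd cp cpVirtualMachines/finance-disk otherPool/finance-disk
    # after verifying the copy, remove the original
    rbd rm cpVirtualMachines/finance-disk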
I'm not quite sure how to interpret this, but there are different
objects referenced. From the first log output you pasted:
2018-02-19 11:00:23.183695 osd.29 [ERR] repair 10.7b9
10:9defb021:::rbd_data.2313975238e1f29.0002cbb5:head
expected clone
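To tie such an rbd_data prefix back to an image, "rbd info" prints each
image's block_name_prefix; a small sketch:

    # Sketch: print the block_name_prefix of every image in the pool,
    # then match it against the rbd_data.<prefix> from the scrub errors.
    rbd -p cpVirtualMachines list | while read IMG; do
        echo -n "$IMG: "
        rbd -p cpVirtualMachines info "$IMG" | grep block_name_prefix
    done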
Nope:
> Write #10:9df3943b:::rbd_data.e57feb238e1f29.0003c2e1:head#
> snapset 0=[]:{}
> Write #10:9df399dd:::rbd_data.4401c7238e1f29.050d:19#
> Write #10:9df399dd:::rbd_data.4401c7238e1f29.050d:23#
> Write
And does the re-import of the PG work? From the logs I assumed that
the snapshot(s) prevented a successful import, but now that they are
deleted it could work.
Quoting Karsten Becker:
Hi Eugen,
hmmm, that should be:
> rbd -p cpVirtualMachines list | while read LINE; do osdmaptool
> --test-map-object $LINE --pool 10 osdmap 2>&1; rbd snap ls
> cpVirtualMachines/$LINE | grep -v SNAPID | awk '{ print $2 }' | while read
> LINE2; do echo "$LINE"; osdmaptool --test-map-object
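The same one-liner, reformatted as a sketch for readability. The inner
osdmaptool call was cut off above, so its arguments (mapping
image@snapshot) are an assumption; "osdmap" is a map file dumped
beforehand.

    # Hedged reconstruction - the inner osdmaptool arguments are assumed,
    # since the original command was truncated. Pool 10 is the affected pool.
    rbd -p cpVirtualMachines list | while read LINE; do
        osdmaptool --test-map-object "$LINE" --pool 10 osdmap 2>&1
        rbd snap ls "cpVirtualMachines/$LINE" | grep -v SNAPID | awk '{ print $2 }' |
            while read LINE2; do
                echo "$LINE"
                osdmaptool --test-map-object "$LINE@$LINE2" --pool 10 osdmap 2>&1
            done
    done

Note that this maps the image names themselves; the objects actually
stored in the PG are named rbd_data.<prefix>.<index>, so matching
block_name_prefix values (see above) may be more precise.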
On Mon, Feb 19, 2018 at 10:17:55PM +0100, Karsten Becker wrote:
> BTW - how can I find out which RBDs are affected by this problem? Maybe
> a copy/remove of the affected RBDs could help? But how can I find out
> which RBDs have data in this PG?
In this case rbd_data.966489238e1f29.250b
BTW - how can I find out which RBDs are affected by this problem? Maybe
a copy/remove of the affected RBDs could help? But how can I find out
which RBDs have data in this PG?
Depending on how many PGs your cluster/pool has, you could dump your
osdmap and then run the osdmaptool [1] for every
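Something along these lines - a sketch; the object name placeholder has
to be filled in:

    # Sketch: dump the current osdmap, then check which PG an object maps to.
    ceph osd getmap -o /tmp/osdmap
    osdmaptool /tmp/osdmap --test-map-object <object-name> --pool 10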
Best
Karsten
On 19.02.2018 19:26, Karsten Becker wrote:
Hi.
Thank you for the tip. I just tried... but unfortunately the import aborts:
> Write #10:9de96eca:::rbd_data.f5b8603d1b58ba.1d82:head#
> snapset 0=[]:{}
> Write #10:9de973fe:::rbd_data.966489238e1f29.250b:18#
> Write
Could [1] be of interest?
Exporting the intact PG and importing it back to the respective OSD
sounds promising.
[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-July/019673.html
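For reference, the export/import from [1] would be done with
ceph-objectstore-tool, roughly as sketched below. OSD IDs and paths are
examples, the OSDs must be stopped first, and any existing damaged copy
of the PG on the target OSD may have to be removed before the import.

    # Sketch only - OSD IDs and paths are examples.
    systemctl stop ceph-osd@29
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-29 \
        --pgid 10.7b9 --op export --file /tmp/pg.10.7b9.export
    # on the target OSD (also stopped), import the exported PG:
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --pgid 10.7b9 --op import --file /tmp/pg.10.7b9.export
    # then restart the OSDs
    systemctl start ceph-osd@29
    systemctl start ceph-osd@12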
When we ran our test cluster with size 2 I experienced a similar
issue, but that was in Hammer. There I could find the corresponding PG
data in the filesystem and copy it to the damaged PG. But now that we
also run BlueStore on Luminous, I don't know yet how to fix this kind
of issue, maybe
Hi.
We have size=3 min_size=2.
But this "upgrade" was only done this weekend. We had size=2
min_size=1 before.
Best
Karsten
On 19.02.2018 13:02, Eugen Block wrote:
Hi,
just to rule out the obvious, which size does the pool have? You
aren't running it with size = 2, are you?
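Checking - and raising - the replication settings is done per pool,
e.g. with the pool name from this thread:

    # Check the current replication settings of the pool:
    ceph osd pool get cpVirtualMachines size
    ceph osd pool get cpVirtualMachines min_size
    # Raise them (as was done over the weekend, per the reply above):
    ceph osd pool set cpVirtualMachines size 3
    ceph osd pool set cpVirtualMachines min_size 2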
Quoting Karsten Becker:
Hi,
I have one damaged PG in my cluster. All OSDs are BlueStore. How do I
fix this?
> 2018-02-19 11:00:23.183695 osd.29 [ERR] repair 10.7b9
> 10:9defb021:::rbd_data.2313975238e1f29.0002cbb5:head expected clone
> 10:9defb021:::rbd_data.2313975238e1f29.0002cbb5:64e 1 missing
>
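As a first diagnostic step, Luminous can list the inconsistent objects
and snapsets of a PG directly; a sketch:

    # Sketch: show details of the scrub errors in PG 10.7b9.
    rados list-inconsistent-obj 10.7b9 --format=json-pretty
    rados list-inconsistent-snapset 10.7b9 --format=json-pretty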