$ rbd map xxx

rbd: sysfs write failed
2023-04-21 11:29:13.786418 7fca1bfff700 -1 librbd::image::OpenRequest: failed to retrieve image id: (5) Input/output error
2023-04-21 11:29:13.786456 7fca1b7fe700 -1 librbd::ImageState: 0x55a60108a040 failed to open image: (5) Input/output error
rbd: error opening image xxx: (5) Input/output error
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (5) Input/output error

The 'dmesg | tail' command did not show anything useful.

The same error shows up with another command:
$ rbd info xxx

2023-04-21 11:33:10.223701 7f3547fff700 -1 librbd::image::OpenRequest: failed to retrieve image id: (5) Input/output error
2023-04-21 11:33:10.223739 7f35477fe700 -1 librbd::ImageState: 0x5647d5cfeeb0 failed to open image: (5) Input/output error
rbd: error opening image xxx: (5) Input/output error

I know the header id is c2c061579478fe, so I retrieved the omap values of the header object:

$ rados -p rbd listomapvals rbd_header.c2c061579478fe

features
value (8 bytes) :
00000000  01 00 00 00 00 00 00 00                           |........|
00000008

object_prefix
value (27 bytes) :
00000000  17 00 00 00 72 62 64 5f  64 61 74 61 2e 63 32 63  |....rbd_data.c2c|
00000010  30 36 31 35 37 39 34 37  38 66 65                 |061579478fe|
0000001b

order
value (1 bytes) :
00000000  16                                                |.|
00000001

size
value (8 bytes) :
00000000  00 00 00 00 71 02 00 00                           |....q...|
00000008

snap_seq
value (8 bytes) :
00000000  00 00 00 00 00 00 00 00                           |........|
00000008
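
For what it's worth, the rbd_header omap above looks intact. The error itself is "failed to retrieve image id", which as far as I understand refers to the image name-to-id lookup rather than the header read: for a format-2 image that mapping is kept in the rbd_id.<image_name> object and in the rbd_directory omap. Assuming the image really is named xxx, that should be checkable with something like:

$ rados -p rbd stat rbd_id.xxx                  # does the name->id object still exist?
$ rados -p rbd get rbd_id.xxx /tmp/rbd_id && hexdump -C /tmp/rbd_id    # should contain the id c2c061579478fe
$ rados -p rbd listomapvals rbd_directory       # should list name_xxx / id_c2c061579478fe entries

If rbd_id.xxx or the rbd_directory entries are gone, that would explain why 'rbd info' and 'rbd map' fail even though the header object is still readable.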

I scanned the existing data objects with 'rados -p rbd list | grep c2c061579478fe' and found many 'rbd_data' objects. I suspect that the loss of some important objects caused this situation. My current plan is to fetch all of the 'rbd_data' objects, concatenate them back into a single file containing the XFS filesystem, and then run a repair on it (a rough sketch is included after the object listing below). However, since the amount of data is huge, I am still experimenting with this approach. Are there any other feasible ways to repair this data?

rbd_data:

rbd_data.c2c061579478fe.0000000000000000
rbd_data.c2c061579478fe.0000000000000001
rbd_data.c2c061579478fe.0000000000000002
rbd_data.c2c061579478fe.0000000000000003
rbd_data.c2c061579478fe.0000000000000004
rbd_data.c2c061579478fe.0000000000000005
rbd_data.c2c061579478fe.0000000000000006
rbd_data.c2c061579478fe.0000000000000007
rbd_data.c2c061579478fe.0000000000000008
rbd_data.c2c061579478fe.0000000000000009
rbd_data.c2c061579478fe.000000000000000a
rbd_data.c2c061579478fe.000000000000000b
rbd_data.c2c061579478fe.000000000000000c
rbd_data.c2c061579478fe.000000000000000d
rbd_data.c2c061579478fe.000000000000000e
rbd_data.c2c061579478fe.000000000000000f
rbd_data.c2c061579478fe.0000000000000010
rbd_data.c2c061579478fe.0000000000000011
...
rbd_data.c2c061579478fe.000000000009c3f6
rbd_data.c2c061579478fe.000000000009c3f7
rbd_data.c2c061579478fe.000000000009c3fa
rbd_data.c2c061579478fe.000000000009c3fb
rbd_data.c2c061579478fe.000000000009c3fc
rbd_data.c2c061579478fe.000000000009c3fd
rbd_data.c2c061579478fe.000000000009c3fe
rbd_data.c2c061579478fe.000000000009c3ff
rbd_header.c2c061579478fe
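
For reference, this is roughly how I intend to reassemble the image, based on the omapvals above: order = 0x16 = 22, so each rbd_data object covers 4 MiB, and size = 0x27100000000 bytes (about 2.5 TB), i.e. object indexes 0x0 through 0x9c3ff. A rough sketch (the output path is just an example; missing objects are simply left as holes in the sparse file):

#!/bin/bash
# Reassemble rbd_data.c2c061579478fe.* into a sparse image file.
PREFIX=rbd_data.c2c061579478fe
OBJ_SIZE=$((4 * 1024 * 1024))        # order 22 -> 4 MiB per object
OUT=/mnt/recovery/xxx.img

# Pre-size the sparse file to the image size from the header (0x27100000000 bytes).
truncate -s 2684354560000 "$OUT"

rados -p rbd ls | grep "^${PREFIX}\." | while read -r obj; do
    idx=$((16#${obj##*.}))                       # hex suffix -> object index
    rados -p rbd get "$obj" /tmp/chunk           # fetch one object (up to 4 MiB)
    dd if=/tmp/chunk of="$OUT" bs=$OBJ_SIZE seek=$idx conv=notrunc,sparse status=none
done

After that I would check the result read-only first, e.g. 'xfs_repair -n -f /mnt/recovery/xxx.img' (or attach it with losetup), before letting xfs_repair actually modify anything.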


thanks