Re: [ovirt-users] [Qemu-block] qcow2 images corruption
https://framadrop.org/r/Lvvr392QZo#/wOeYUUlHQAtkUw1E+x2YdqTqq21Pbic6OPBIH0TjZE=

On 14/02/2018 at 00:01, John Snow wrote:
> [quote of John's full message trimmed; it is reproduced as the next message
> in this thread]
Re: [ovirt-users] [Qemu-block] qcow2 images corruption
On 02/13/2018 04:41 AM, Kevin Wolf wrote:
> On 07.02.2018 at 18:06, Nicolas Ecarnot wrote:
>> TL;DR: qcow2 images keep getting corrupted. Any workaround?
>
> Not without knowing the cause.
>
> The first thing to make sure is that the image isn't touched by a second
> process while QEMU is running a VM. The classic one is using 'qemu-img
> snapshot' on the image of a running VM, which is instant corruption (and
> newer QEMU versions have locking in place to prevent this), but we have
> seen more absurd cases of things outside QEMU tampering with the image
> when we were investigating previous corruption reports.
>
> This covers the majority of all reports; we haven't had a real
> corruption caused by a QEMU bug in ages.
>
>> After having found (https://access.redhat.com/solutions/1173623) the right
>> logical volume hosting the qcow2 image, I can run qemu-img check on it.
>> - On 80% of my VMs, I find no errors.
>> - On 15% of them, I find leaked-cluster errors that I can correct using
>>   "qemu-img check -r all".
>> - On 5% of them, I find leaked-cluster errors and further fatal errors,
>>   which cannot be corrected with qemu-img. In rare cases, qemu-img can
>>   correct them but destroys large parts of the image (it becomes
>>   unusable), and in other cases it cannot correct them at all.
>
> It would be good if you could make the 'qemu-img check' output available
> somewhere.
>
> It would be even better if we could have a look at the respective image.
> I seem to remember that John (CCed) had a few scripts to analyse
> corrupted qcow2 images; maybe we would be able to see something there.

Hi! I did write a pretty simplistic tool for trying to tell the shape of a
corruption at a glance. It seems to work pretty similarly to the other tool
you already found, but it won't hurt anything to run it:

https://github.com/jnsnow/qcheck

(Actually, that other tool looks like it has an awful lot of options; I'll
have to check it out.)

It can print a really upsetting amount of data (especially for very corrupt
images), but in the default case the simple setting should do the trick just
fine. You could always put the output from this tool in a pastebin too; it
might help me visualize the problem a bit more -- I find seeing the exact
offsets and locations of all the various tables and things pretty helpful.

You can also always use the "deluge" option and compress the output if you
want; just don't let it print to your terminal:

jsnow@probe (dev) ~/s/qcheck> ./qcheck -xd /home/bos/jsnow/src/qemu/bin/git/install_test_f26.qcow2 > deluge.log; and ls -sh deluge.log
4.3M deluge.log

but it compresses down very well:

jsnow@probe (dev) ~/s/qcheck> 7z a -t7z -m0=ppmd deluge.ppmd.7z deluge.log
jsnow@probe (dev) ~/s/qcheck> ls -s deluge.ppmd.7z
316 deluge.ppmd.7z

So I suppose if you want to send along:

(1) the basic output without any flags, in a pastebin, and
(2) the zipped deluge output, just in case,

then I will try my hand at guessing what went wrong.

(Also, maybe my tool will totally choke on your image, who knows. It hasn't
received an overwhelming amount of testing apart from when I go to use it
personally and inevitably wind up displeased with how it handles certain
situations, so...)
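Concretely, the hand-off John describes could look like this (a sketch; the
flag-less invocation for the "basic" output is an assumption based on his
description, while the -xd and 7z commands are the ones shown above):

  ./qcheck /path/to/image.qcow2 > basic.log       # (1) default output, small enough for a pastebin
  ./qcheck -xd /path/to/image.qcow2 > deluge.log  # (2) the full "deluge" output
  7z a -t7z -m0=ppmd deluge.ppmd.7z deluge.log    # PPMd-compress the deluge before attaching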
>> What I read similar to my case is:
>> - usage of qcow2
>> - heavy disk I/O
>> - using the virtio-blk driver
>>
>> In the proxmox thread, they tend to say that using virtio-scsi is the
>> solution. Having asked this question to oVirt experts
>> (https://lists.ovirt.org/pipermail/users/2018-February/086753.html), it's
>> not clear the driver is to blame.
>
> This seems very unlikely. The corruption you're seeing is in the qcow2
> metadata, not only in the guest data. If anything, virtio-scsi exercises
> more qcow2 code paths than virtio-blk, so any potential bug that affects
> virtio-blk should also affect virtio-scsi, but not the other way around.
>
>> I agree with the answer Yaniv Kaul gave to me, saying I have to properly
>> report the issue, so I'm longing to know which particular information I
>> can give you now.
>
> To be honest, debugging corruption after the fact is pretty hard. We'd
> need the 'qemu-img check' output and ideally the image to do anything,
> but I can't promise that anything would come out of this.
>
> Best would be a reproducer, or at least some operation that you can link
> to the appearance of the corruption. Then we could take a more targeted
> look at the respective code.
>
>> As you can imagine, all this setup is in production, and for most of the
>> VMs, I cannot "play" with them. Moreover, we launched a campaign of
>> nightly stopping every VM, running qemu-img check on them one by one,
>> then booting them again. So it might take some time before I find another
>> corrupted image (which I'll preciously store for debugging).
>>
>> Other information: we very rarely do snapshots, but I could imagine that
>> automated migrations of VMs trigger similar behaviors on qcow2 images.
>
> To my knowledge, oVirt only uses external snapshots and creates them with
> QMP.
Re: [ovirt-users] [Qemu-block] qcow2 images corruption
On 13/02/2018 at 16:26, Nicolas Ecarnot wrote:
>> It would be good if you could make the 'qemu-img check' output available
>> somewhere.

I found this: https://github.com/ShijunDeng/qcow2-dump

The transcript (beautiful colors when viewed with "more") is attached:

--
Nicolas ECARNOT

Script started on Tue 13 Feb 2018 17:31:05 CET
root@serv-hv-adm13:/home# /root/qcow2-dump -m check serv-term-adm4-corr.qcow2.img

File: serv-term-adm4-corr.qcow2.img

magic: 0x514649fb
version: 2
backing_file_offset: 0x0
backing_file_size: 0
fs_type: xfs
virtual_size: 64424509440 / 61440M / 60G
disk_size: 36507222016 / 34816M / 34G
seek_end: 36507222016 [0x88000] / 34816M / 34G
cluster_bits: 16
cluster_size: 65536
crypt_method: 0
csize_shift: 54
csize_mask: 255
cluster_offset_mask: 0x3f
l1_table_offset: 0x76a46
l1_size: 120
l1_vm_state_index: 120
l2_size: 8192
refcount_order: 4
refcount_bits: 16
refcount_block_bits: 15
refcount_block_size: 32768
refcount_table_offset: 0x1
refcount_table_clusters: 1
snapshots_offset: 0x0
nb_snapshots: 0
incompatible_features:
compatible_features:
autoclear_features:

Active Snapshot:
  L1 Table: [offset: 0x76a46, len: 120]
  Result:
    L1 Table: unaligned: 0, invalid: 0, unused: 53, used: 67
    L2 Table: unaligned: 0, invalid: 0, unused: 20304, used: 528560

Refcount Table:
  Refcount Table: [offset: 0x1, len: 8192]
  Result:
    Refcount Table: unaligned: 0, invalid: 0, unused: 8175, used: 17
    Refcount: error: 4342, leak: 0, unused: 28426, used: 524288

COPIED OFLAG:
  Result:
    L1 Table ERROR OFLAG_COPIED: 1
    L2 Table ERROR OFLAG_COPIED: 4323
    Active L2 COPIED: 528560 [34639708160 / 33035M / 32G]

Active Cluster:
  Result:
    Active Cluster: reuse: 17

Summary:
  preallocation: off
  Active Cluster: reuse: 17
  Refcount Table: unaligned: 0, invalid: 0, unused: 8175, used: 17
  Refcount: error: 4342, leak: 0, rebuild: 4325, unused: 28426, used: 524288
  L1 Table: unaligned: 0, invalid: 0, unused: 53, used: 67, oflag copied: 1
  L2 Table: unaligned: 0, invalid: 0, unused: 20304, used: 528560, oflag copied: 4323

### qcow2 image has refcount errors! (=_=#) ###
### and qcow2 image has copied errors! (o_0)? ###
### Sadly: refcount error cause active cluster reused! Orz ###
### Please backup this image and contact the author! ###

root@serv-hv-adm13:/home# exit
Script ended on Tue 13 Feb 2018 17:31:13 CET
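As a cross-check of the header fields qcow2-dump prints, the same values can
be read straight off the image. A minimal sketch, assuming a locally
accessible copy of the image, the standard qcow2 version-2 header layout
(byte offsets from the qcow2 specification, all fields big-endian, not taken
from this thread), and that xxd is installed:

  img=serv-term-adm4-corr.qcow2.img   # local copy of the image
  field() { dd if="$img" bs=1 skip="$1" count="$2" 2>/dev/null | xxd -p; }
  echo "magic:                 0x$(field  0 4)"   # expect 514649fb ("QFI\xfb")
  echo "version:               0x$(field  4 4)"
  echo "cluster_bits:          0x$(field 20 4)"
  echo "virtual_size:          0x$(field 24 8)"
  echo "l1_size:               0x$(field 36 4)"
  echo "l1_table_offset:       0x$(field 40 8)"
  echo "refcount_table_offset: 0x$(field 48 8)"
  echo "nb_snapshots:          0x$(field 60 4)"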
Re: [ovirt-users] [Qemu-block] qcow2 images corruption
Hello Kevin,

On 13/02/2018 at 10:41, Kevin Wolf wrote:
> On 07.02.2018 at 18:06, Nicolas Ecarnot wrote:
>> TL;DR: qcow2 images keep getting corrupted. Any workaround?
>
> Not without knowing the cause.

Actually, my main concern is mostly about finding the cause rather than
correcting my corrupted VMs. Another way to say it: I prefer to help oVirt
than to help myself.

> The first thing to make sure is that the image isn't touched by a second
> process while QEMU is running a VM.

Indeed, I read some BZs about this issue: they were raised by a user who ran
some qemu-img commands on a "mounted" image, thus leading to some corruption.
In my case, I'm not playing with this, and the corrupted VMs were only
touched by classical oVirt actions.

> The classic one is using 'qemu-img snapshot' on the image of a running VM,
> which is instant corruption (and newer QEMU versions have locking in place
> to prevent this), but we have seen more absurd cases of things outside QEMU
> tampering with the image when we were investigating previous corruption
> reports.
>
> This covers the majority of all reports; we haven't had a real corruption
> caused by a QEMU bug in ages.

May I ask in which QEMU version this kind of locking was added? As I wrote,
our oVirt setup is 3.6, so not recent.

>> After having found (https://access.redhat.com/solutions/1173623) the right
>> logical volume hosting the qcow2 image, I can run qemu-img check on it.
>> - On 80% of my VMs, I find no errors.
>> - On 15% of them, I find leaked-cluster errors that I can correct using
>>   "qemu-img check -r all".
>> - On 5% of them, I find leaked-cluster errors and further fatal errors,
>>   which cannot be corrected with qemu-img.
>
> It would be good if you could make the 'qemu-img check' output available
> somewhere.

See attachment.

> It would be even better if we could have a look at the respective image.
> I seem to remember that John (CCed) had a few scripts to analyse corrupted
> qcow2 images; maybe we would be able to see something there.

I just exported it like this:

qemu-img convert /dev/the_correct_path /home/blablah.qcow2.img

The resulting file is 32G, and I need an idea for transferring this image to
you.
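A note on this export step: qemu-img convert rewrites the image through
fresh metadata (and converts to raw output by default when no -O format is
given), so the corrupt qcow2 structures being debugged would not survive it.
For analysis, a byte-for-byte copy of the logical volume keeps the container
intact. A sketch reusing the placeholder paths from the message (the copy
may include trailing unused space from the LV):

  dd if=/dev/the_correct_path of=/home/blablah.qcow2.img bs=1M   # preserves the (corrupt) qcow2 metadata verbatim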
>> What I read similar to my case is:
>> - usage of qcow2
>> - heavy disk I/O
>> - using the virtio-blk driver
>>
>> In the proxmox thread, they tend to say that using virtio-scsi is the
>> solution. Having asked this question to oVirt experts
>> (https://lists.ovirt.org/pipermail/users/2018-February/086753.html), it's
>> not clear the driver is to blame.
>
> This seems very unlikely. The corruption you're seeing is in the qcow2
> metadata, not only in the guest data.

Are you saying:
- the corruption is in the metadata and in the guest data, OR
- the corruption is only in the metadata?

> If anything, virtio-scsi exercises more qcow2 code paths than virtio-blk,
> so any potential bug that affects virtio-blk should also affect
> virtio-scsi, but not the other way around.

I get that.

>> I agree with the answer Yaniv Kaul gave to me, saying I have to properly
>> report the issue, so I'm longing to know which particular information I
>> can give you now.
>
> To be honest, debugging corruption after the fact is pretty hard. We'd
> need the 'qemu-img check' output

Done.

> and ideally the image to do anything,

I remember some Red Hat people once gave me temporary access to put a heavy
file on some dedicated server. Is that still possible?

> but I can't promise that anything would come out of this.
>
> Best would be a reproducer, or at least some operation that you can link
> to the appearance of the corruption. Then we could take a more targeted
> look at the respective code.

Sure. Alas, I find no obvious pattern leading to corruption: from the guest
side, it appeared with Windows 2003, 2008, and 2012, and with Linux CentOS 6
and 7. It appeared with virtio-blk; I changed some VMs to use virtio-scsi,
but it's too soon to see corruption appear in that case.

As I said, I use snapshots VERY rarely, and our versions are too old, so we
do them the cold way only (VM shutdown). So very safely. The "weirdest"
thing we do is to migrate VMs: you see how conservative we are!

>> As you can imagine, all this setup is in production, and for most of the
>> VMs, I cannot "play" with them. Moreover, we launched a campaign of
>> nightly stopping every VM, running qemu-img check on them one by one,
>> then booting them again. So it might take some time before I find another
>> corrupted image (which I'll preciously store for debugging).
>>
>> Other information: we very rarely do snapshots, but I could imagine that
>> automated migrations of VMs trigger similar behaviors on qcow2 images.
>
> To my knowledge, oVirt only uses external snapshots and creates them with
> QMP. This should be perfectly safe, because from the perspective of the
> qcow2 image being snapshotted, it just means that it gets no new write
> requests.
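For reference, one iteration of the nightly campaign quoted above might look
roughly like the sketch below. The LV path is a hypothetical placeholder
(found per the Red Hat solution linked earlier), the VM is assumed to be
already shut down, and the -r all step is the risky one Nicolas mentions:

  lv=/dev/<storage-domain-vg>/<image-lv>   # hypothetical placeholder path
  lvchange -ay "$lv"                       # activate the LV holding the qcow2 volume
  qemu-img check "$lv"                     # report only: clean / leaked clusters / fatal errors
  qemu-img check -r leaks "$lv"            # repair leaked clusters only
  qemu-img check -r all "$lv"              # also attempt fatal errors; may leave the image unusable
  lvchange -an "$lv"                       # deactivate again before booting the VM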
Re: [ovirt-users] [Qemu-block] qcow2 images corruption
On 07.02.2018 at 18:06, Nicolas Ecarnot wrote:
> TL;DR: qcow2 images keep getting corrupted. Any workaround?

Not without knowing the cause.

The first thing to make sure is that the image isn't touched by a second
process while QEMU is running a VM. The classic one is using 'qemu-img
snapshot' on the image of a running VM, which is instant corruption (and
newer QEMU versions have locking in place to prevent this), but we have seen
more absurd cases of things outside QEMU tampering with the image when we
were investigating previous corruption reports.

This covers the majority of all reports; we haven't had a real corruption
caused by a QEMU bug in ages.

> After having found (https://access.redhat.com/solutions/1173623) the right
> logical volume hosting the qcow2 image, I can run qemu-img check on it.
> - On 80% of my VMs, I find no errors.
> - On 15% of them, I find leaked-cluster errors that I can correct using
>   "qemu-img check -r all".
> - On 5% of them, I find leaked-cluster errors and further fatal errors,
>   which cannot be corrected with qemu-img. In rare cases, qemu-img can
>   correct them but destroys large parts of the image (it becomes unusable),
>   and in other cases it cannot correct them at all.

It would be good if you could make the 'qemu-img check' output available
somewhere.

It would be even better if we could have a look at the respective image. I
seem to remember that John (CCed) had a few scripts to analyse corrupted
qcow2 images; maybe we would be able to see something there.

> What I read similar to my case is:
> - usage of qcow2
> - heavy disk I/O
> - using the virtio-blk driver
>
> In the proxmox thread, they tend to say that using virtio-scsi is the
> solution. Having asked this question to oVirt experts
> (https://lists.ovirt.org/pipermail/users/2018-February/086753.html), it's
> not clear the driver is to blame.

This seems very unlikely. The corruption you're seeing is in the qcow2
metadata, not only in the guest data. If anything, virtio-scsi exercises more
qcow2 code paths than virtio-blk, so any potential bug that affects
virtio-blk should also affect virtio-scsi, but not the other way around.

> I agree with the answer Yaniv Kaul gave to me, saying I have to properly
> report the issue, so I'm longing to know which particular information I can
> give you now.

To be honest, debugging corruption after the fact is pretty hard. We'd need
the 'qemu-img check' output and ideally the image to do anything, but I can't
promise that anything would come out of this.

Best would be a reproducer, or at least some operation that you can link to
the appearance of the corruption. Then we could take a more targeted look at
the respective code.

> As you can imagine, all this setup is in production, and for most of the
> VMs, I cannot "play" with them. Moreover, we launched a campaign of nightly
> stopping every VM, running qemu-img check on them one by one, then booting
> them again. So it might take some time before I find another corrupted
> image (which I'll preciously store for debugging).
>
> Other information: we very rarely do snapshots, but I could imagine that
> automated migrations of VMs trigger similar behaviors on qcow2 images.

To my knowledge, oVirt only uses external snapshots and creates them with
QMP. This should be perfectly safe, because from the perspective of the qcow2
image being snapshotted, it just means that it gets no new write requests.
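For reference, an external snapshot created over QMP looks roughly like the
sketch below. This is an illustration only: oVirt drives the operation
through VDSM and libvirt rather than raw QMP, and the domain name, device
alias, and overlay path here are hypothetical.

  virsh qemu-monitor-command guest01 \
    '{ "execute": "blockdev-snapshot-sync",
       "arguments": { "device": "drive-virtio-disk0",
                      "snapshot-file": "/var/lib/images/guest01-overlay.qcow2",
                      "format": "qcow2" } }'

Once this returns, all new writes go to the overlay and the original image
is only read from, which is why the operation is safe for the snapshotted
qcow2 file.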
Migration is something more involved, and if you could relate the problem to
migration, that would certainly be something to look into. In that case, it
would be important to know more about the setup, e.g. is it migration with
shared or non-shared storage?

> Last point about the versions we use: yes, that's old; yes, we're planning
> to upgrade, but we don't know when.

That would be helpful, too. Nothing is more frustrating than debugging a bug
in an old version only to find that it's already fixed in the current version
(well, except maybe debugging and finding nothing).

Kevin