[Bug 1926231] Re: SCSI passthrough of SATA cdrom -> errors & performance issues
[Expired for QEMU because there has been no activity for 60 days.]

** Changed in: qemu
   Status: Incomplete => Expired

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1926231

Title:
  SCSI passthrough of SATA cdrom -> errors & performance issues

Status in QEMU:
  Expired

Bug description:
  qemu 5.0, compiled from git

  I am passing through a SATA cdrom via SCSI passthrough, with this libvirt XML:

  [libvirt XML not preserved in the archived message]

  It seems to mostly work -- I have written discs with it -- except that under certain circumstances I get errors that make reads take about 5x as long as they should. The trigger appears to be the guest's read block size: if I run, say, `dd if=$some_large_file bs=262144 | pv > /dev/null` on the guest, `iostat` and `pv` disagree about how much is being read by a factor of about 2, and the guest kernel logs many messages like:

  [  190.919684] sr 0:0:0:0: [sr0] tag#160 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
  [  190.919687] sr 0:0:0:0: [sr0] tag#160 Sense Key : Aborted Command [current]
  [  190.919689] sr 0:0:0:0: [sr0] tag#160 Add. Sense: I/O process terminated
  [  190.919691] sr 0:0:0:0: [sr0] tag#160 CDB: Read(10) 28 00 00 18 a5 5a 00 00 80 00
  [  190.919694] blk_update_request: I/O error, dev sr0, sector 6460776 op 0x0:(READ) flags 0x80700 phys_seg 5 prio class 0

  If I change to bs=131072 the errors stop and performance is normal. (262144 happens to be the block size ultimately used by md5sum, which is how I got here.)

  I also ran strace on the qemu process while this was happening and noticed SG_IO calls like these:

  21748 10:06:29.330910 ioctl(22, SG_IO, {interface_id='S', dxfer_direction=SG_DXFER_FROM_DEV, cmd_len=10, cmdp="\x28\x00\x00\x12\x95\x5a\x00\x00\x80\x00", mx_sb_len=252, iovec_count=0, dxfer_len=262144, timeout=4294967295, flags=SG_FLAG_DIRECT_IO
  21751 10:06:29.330976 ioctl(22, SG_IO, {interface_id='S', dxfer_direction=SG_DXFER_FROM_DEV, cmd_len=10, cmdp="\x28\x00\x00\x12\x94\xda\x00\x00\x02\x00", mx_sb_len=252, iovec_count=0, dxfer_len=4096, timeout=4294967295, flags=SG_FLAG_DIRECT_IO
  21749 10:06:29.331586 ioctl(22, SG_IO, {interface_id='S', dxfer_direction=SG_DXFER_FROM_DEV, cmd_len=10, cmdp="\x28\x00\x00\x12\x94\xdc\x00\x00\x02\x00", mx_sb_len=252, iovec_count=0, dxfer_len=4096, timeout=4294967295, flags=SG_FLAG_DIRECT_IO
  [etc]

  I suspect qemu is the culprit because I have tried a 4.19 guest kernel as well as a 5.9 one, with the same result.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1926231/+subscriptions
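A note for readers decoding the trace above: the first CDB is READ(10) at LBA 0x0012955a for 0x80 = 128 blocks of 2048 bytes, i.e. the full 262144-byte guest request; the follow-up commands then re-read only 2 blocks (4096 bytes) at a time, which would explain the roughly 2x disagreement between `pv` and `iostat`. Below is a minimal sketch (not from the report; the device path, LBA, and timeout are placeholders) of issuing the same kind of READ(10) through SG_IO and checking for the Aborted Command sense key the guest logs:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>

int main(void)
{
    /* Placeholder device node; the report passes through a SATA cdrom. */
    int fd = open("/dev/sr0", O_RDONLY | O_NONBLOCK);
    if (fd < 0) { perror("open"); return 1; }

    unsigned int lba = 0x0012955a;   /* LBA from the first strace line */
    unsigned int nblocks = 128;      /* 128 x 2048-byte sectors = 262144 bytes */
    unsigned char cdb[10] = {
        0x28, 0x00,                                   /* READ(10) */
        (unsigned char)(lba >> 24), (unsigned char)(lba >> 16),
        (unsigned char)(lba >> 8),  (unsigned char)lba,
        0x00,
        (unsigned char)(nblocks >> 8), (unsigned char)nblocks,
        0x00
    };
    static unsigned char buf[128 * 2048];
    unsigned char sense[252];        /* same mx_sb_len as in the strace */

    struct sg_io_hdr io;
    memset(&io, 0, sizeof(io));
    io.interface_id    = 'S';
    io.dxfer_direction = SG_DXFER_FROM_DEV;
    io.cmd_len         = sizeof(cdb);
    io.cmdp            = cdb;
    io.dxfer_len       = sizeof(buf);
    io.dxferp          = buf;
    io.mx_sb_len       = sizeof(sense);
    io.sbp             = sense;
    io.timeout         = 30000;      /* ms; qemu used 0xffffffff (no timeout) */

    if (ioctl(fd, SG_IO, &io) < 0) { perror("SG_IO"); return 1; }

    if (io.status != 0 || (io.info & SG_INFO_OK_MASK) != SG_INFO_OK) {
        /* Fixed-format sense data: low nibble of byte 2 is the sense key;
         * 0x0b is ABORTED COMMAND, matching the guest's sr0 messages. */
        fprintf(stderr, "status 0x%x, sense key 0x%x\n",
                io.status, sense[2] & 0x0f);
        return 1;
    }
    printf("read %d bytes\n", (int)io.dxfer_len - io.resid);
    return 0;
}

Compiled with e.g. `gcc -O2 -o sg_read10 sg_read10.c` and run with sufficient privileges against the host drive, a sketch like this could show whether the drive itself rejects the 256 KiB transfer, independent of qemu.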
[Bug 1926231] Re: SCSI passthrough of SATA cdrom -> errors & performance issues
The QEMU project is currently moving its bug tracking to another system. For this we need to know which bugs are still valid and which could be closed already. Thus we are setting the bug state to "Incomplete" now.

If the bug has already been fixed in the latest upstream version of QEMU, then please close this ticket as "Fix released".

If it is not fixed yet and you think that this bug report is still valid, then you have two options:

1) If you already have an account on gitlab.com, please open a new ticket for this problem in our new tracker here:
https://gitlab.com/qemu-project/qemu/-/issues
and then close this ticket here on Launchpad (or let it expire automatically after 60 days). Please mention the URL of this bug ticket on Launchpad in the new ticket on GitLab.

2) If you don't have an account on gitlab.com and don't intend to get one, but still would like to keep this ticket open, then please switch the state back to "New" or "Confirmed" within the next 60 days (otherwise it will get closed as "Expired"). We will then eventually migrate the ticket automatically to the new system (but you won't be the reporter of the bug in the new system and thus you won't get notified of changes anymore).

Thank you and sorry for the inconvenience.

** Changed in: qemu
   Status: New => Incomplete
[Bug 1926231] Re: SCSI passthrough of SATA cdrom -> errors & performance issues
Just discovered that `hdparm -a 128` works around the issue (it was 256 at boot). `hdparm -a` is measured in 512-byte sectors, so this halves the maximum read-ahead from 128 KiB to 64 KiB, which presumably keeps the merged read requests below the size that triggers the aborts.
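If it helps anyone reproduce or script this: as far as I know, `hdparm -a` is just a front end for the generic block-layer read-ahead ioctls, so the workaround can also be applied programmatically. A minimal sketch, with /dev/sr0 as a placeholder device:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/sr0", O_RDONLY | O_NONBLOCK);
    if (fd < 0) { perror("open"); return 1; }

    /* Read the current read-ahead; the value is in 512-byte sectors. */
    long ra = 0;
    if (ioctl(fd, BLKRAGET, &ra) < 0) { perror("BLKRAGET"); return 1; }
    printf("read-ahead: %ld sectors (%ld KiB)\n", ra, ra / 2);

    /* Equivalent of `hdparm -a 128`: cap read-ahead at 128 sectors
     * (64 KiB). Needs CAP_SYS_ADMIN. */
    if (ioctl(fd, BLKRASET, 128UL) < 0) { perror("BLKRASET"); return 1; }

    close(fd);
    return 0;
}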