On 15.02.23 at 20:26, Miroslav Lachman wrote:
On 10/02/2023 22:22, Stefan Esser wrote:
On 09.02.23 at 20:04, Miroslav Lachman wrote:
I have a FreeBSD 12.4 virtual machine installed inside KVM. This machine has 2 disks. One is 30GB, connected as VirtIO vtbd0; the system is installed on it. The second is a 20TB iSCSI volume connected to KVM that should be available inside the VM as SCSI disk da0, but it is not.

This is an emulated SCSI controller in KVM instead of a physical
controller on a PCI bus?

As far as I know it is emulated by KVM. I searched the net and found this, for example:
https://pve.proxmox.com/wiki/Qemu/KVM_Virtual_Machines#qm_hard_disk_bus

"the SCSI controller, designed in 1985, is commonly found on server grade hardware, and can connect up to 14 storage devices. Proxmox VE emulates by default a LSI 53C895A controller.

The issue with this emulation is that the 53c8xx family of devices is
not an "intelligent" controller that gets a command written into some
register and then executes the transfer by itself; instead, it is a very
simple CPU that executes a limited, domain-specific command set.

The early NCR 53c8xx devices did not offer any indirect addressing modes:
to fetch from a variable address, you had to store the address of a
calculated memory access into the next instruction (i.e. it depended on
self-modifying code). And if KVM emulates the NCR 53c895 (which, by the
way, does have indirect addressing modes), then any trivial operation
requires the emulation of hundreds of simple NCR instructions, at a cost
of thousands or tens of thousands of CPU instructions.

A SCSI controller of type VirtIO SCSI is the recommended setting if you aim for performance and is automatically selected for newly created Linux VMs since Proxmox VE 4.3. Linux distributions have support for this controller since 2012, and FreeBSD since 2014."

If possible use VirtIO! These "virtual" devices are implemented in a
way that makes them very efficient. The parameters of a request and
the data to be transferred are simply passed to the hypervisor via
shared memory areas, much like a system call into the kernel.
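
Roughly, and only as an illustration (the struct and field names below
are my own shorthand modeled on the virtio specification, not FreeBSD's
vtblk/vtscsi code), a block request sitting in that shared memory looks
like this:

#include <stdint.h>

/*
 * Simplified sketch of a virtio block request header as placed in
 * guest/host shared memory; the shared virtqueue ring additionally
 * holds descriptors pointing at the data buffer and at a one-byte
 * status field that the host writes back when the request is done.
 */
struct vblk_req_hdr {
        uint32_t type;          /* read or write */
        uint32_t reserved;
        uint64_t sector;        /* starting 512-byte sector */
};

The guest fills in such a header, links it into the ring and notifies
("kicks") the host once, so no per-byte register emulation is needed.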

The device emulation may be incomplete and may therefore violate
assumptions made in the driver. The 53c8xx devices were complex,
and the driver contains a "firmware" blob that the emulated Symbios
device has to execute using main CPU instructions. This is extremely
inefficient compared to other emulations (it takes hundreds of
emulated controller instructions to execute trivial SCSI transfers).
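
To illustrate the shape of that cost with a toy (this is in no way the
actual QEMU 53c895a code; all names and opcodes below are made up), the
hypervisor ends up running an interpreter loop, one full
fetch/decode/execute round trip on the host per emulated chip
instruction:

#include <stdint.h>
#include <stdio.h>

/* Toy interpreter only, not the real emulation: each "chip" instruction
 * costs a complete fetch/decode/execute iteration on the host CPU. */
enum op { OP_LOAD, OP_ADD, OP_JUMP_NZ, OP_HALT };
struct insn { enum op op; uint32_t arg; };

int
main(void)
{
        struct insn script[] = {        /* trivial script: count down from 3 */
                { OP_LOAD, 3 }, { OP_ADD, (uint32_t)-1 },
                { OP_JUMP_NZ, 1 }, { OP_HALT, 0 },
        };
        uint32_t acc = 0, pc = 0, steps = 0;

        for (;;) {
                struct insn i = script[pc++];           /* fetch */
                steps++;
                switch (i.op) {                         /* decode + execute */
                case OP_LOAD:    acc = i.arg; break;
                case OP_ADD:     acc += i.arg; break;
                case OP_JUMP_NZ: if (acc != 0) pc = i.arg; break;
                case OP_HALT:    printf("%u emulated steps\n", steps); return (0);
                }
        }
}

Multiply that loop by the hundreds of script instructions needed per
SCSI transfer and you get the overhead described above.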

Why can't you use the iSCSI driver to connect to an iSCSI device?

Because my VM does not have access to the network where the iSCSI target is (for security reasons).

But could VirtIO be offered instead?

Regarding the history of the ncr and sym drivers:

I'm the co-author of the ncr driver, which originally supported only
the NCR53c810 chip. Later additions to this device family brought WIDE
SCSI support and faster synchronous transfers, and support for them was
added to the ncr driver.

[..]

The ncr driver has been removed from later FreeBSD releases, since sym
covers all devices (it is an extension of the ncr driver).

Oh, thank you for the clarification, I thought ncr was newer than sym. So building ncr into the kernel does not make any sense.

No, definitely not. The NCR driver used only the initial command set,
without the indirect memory access modes offered by later NCR chips.

The sym driver attached correctly, so it should work. It has been more than
10 years since I last used the driver and had a machine with SCSI drives.

In the meantime Marius Strobl made a few changes to the driver, but I do
not know whether he still has access to a system with that controller.

The error messages:

(probe1:sym0:0:1:0): INQUIRY. CDB: 12 00 00 00 24 00
(probe1:sym0:0:1:0): CAM status: Command timeout
(probe1:sym0:0:1:0): Retrying command, 3 more tries remain
sym0:1: message c sent on bad reselection.
sym0:1:control msgout: 80 6.

The device should execute the INQUIRY command, but no reply was received
(the device did not reselect the controller). Several attempts are made,
but the retries are exhausted and the device is ignored.
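
Decoded, that CDB simply asks the target for the 36 bytes of standard
INQUIRY data. The following throw-away program is only an illustration
(it restates the byte layout of a 6-byte INQUIRY CDB per SPC and uses
the bytes from the log above, nothing here comes from the driver):

#include <stdint.h>
#include <stdio.h>

/* Throw-away decoder for the CDB from the log: 12 00 00 00 24 00 */
int
main(void)
{
        uint8_t cdb[6] = { 0x12, 0x00, 0x00, 0x00, 0x24, 0x00 };

        printf("opcode        0x%02x (INQUIRY)\n", cdb[0]);
        printf("EVPD          %d\n", cdb[1] & 1);
        printf("page code     0x%02x\n", cdb[2]);
        printf("alloc length  %d bytes\n", (cdb[3] << 8) | cdb[4]);
        printf("control       0x%02x\n", cdb[5]);
        return (0);
}

The command itself is as ordinary as it gets; the failure is in what
happens afterwards.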

It seems that selection/reselection and the sending of SCSI messages are
not perfectly emulated.

You may want to contact the author of the 53c895a emulation, since you'll
need debug output from that emulation in order to understand what's wrong.

The administrator of the KVM hypervisor changed some settings on the translation layer iSCSI / LSI emulation / VirtIO. Even though he told me, I do not understand it well, but now I see the disk correctly:

# diskinfo -v da0
da0
         512             # sectorsize
         23747947921408  # mediasize in bytes (22T)
         46382710784     # mediasize in sectors
         0               # stripesize
         0               # stripeoffset
         2887190         # Cylinders according to firmware.
         255             # Heads according to firmware.
         63              # Sectors according to firmware.
         QNAP iSCSI Storage      # Disk descr.
         194d2dd6-683a-42e1-9985-acd5580b119d    # Disk ident.
         vtscsi0         # Attachment
         No              # TRIM/UNMAP support
         Unknown         # Rotation rate in RPM
         Not_Zoned       # Zone Mode
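
(Sanity check on those numbers: 46382710784 sectors x 512 bytes =
23747947921408 bytes, i.e. about 21.6 TiB, which diskinfo rounds to the
22T shown above and which matches the 22647808MB reported by da0 below.)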


# pciconf -l -v

sym0@pci0:0:7:0:        class=0x010000 rev=0x00 hdr=0x00 vendor=0x1000 device=0x0012 subvendor=0x0000 subdevice=0x1000
     vendor     = 'Broadcom / LSI'
     device     = '53c895a'
     class      = mass storage
     subclass   = SCSI
virtio_pci2@pci0:0:8:0: class=0x010000 rev=0x00 hdr=0x00 vendor=0x1af4 device=0x1004 subvendor=0x1af4 subdevice=0x0008
     vendor     = 'Red Hat, Inc.'
     device     = 'Virtio SCSI'
     class      = mass storage
     subclass   = SCSI

Yes, that's the device you want to use!

virtio_pci3@pci0:0:9:0: class=0x010000 rev=0x00 hdr=0x00 vendor=0x1af4 device=0x1001 subvendor=0x1af4 subdevice=0x0002
     vendor     = 'Red Hat, Inc.'
     device     = 'Virtio block device'
     class      = mass storage
     subclass   = SCSI

virtio_pci0: <VirtIO PCI (legacy) Block adapter> port 0xc100-0xc17f mem 0xfc0b5000-0xfc0b5fff, 0xfebf0000-0xfebf3fff irq 10 at device 5.0 on pci0
vtblk0: <VirtIO Block Adapter> on virtio_pci0
vtblk0: 30720MB (62914560 512 byte sectors)
virtio_pci1: <VirtIO PCI (legacy) Balloon adapter> port 0xc240-0xc27f mem 0xfebf4000-0xfebf7fff irq 10 at device 6.0 on pci0
vtballoon0: <VirtIO Balloon Adapter> on virtio_pci1
sym0: <895a> port 0xc000-0xc0ff mem 0xfc0b6000-0xfc0b63ff,0xfc0b2000-0xfc0b3fff irq 11 at device 7.0 on pci0
sym0: No NVRAM, ID 7, Fast-40, LVD, parity checking
virtio_pci2: <VirtIO PCI (legacy) SCSI adapter> port 0xc280-0xc2bf mem 0xfc0b7000-0xfc0b7fff,0xfebf8000-0xfebfbfff irq 11 at device 8.0 on pci0
vtscsi0: <VirtIO SCSI Adapter> on virtio_pci2
virtio_pci3: <VirtIO PCI (legacy) Block adapter> port 0xc180-0xc1ff mem 0xfc0b8000-0xfc0b8fff,0xfebfc000-0xfebfffff irq 10 at device 9.0 on pci0
vtblk1: <VirtIO Block Adapter> on virtio_pci3
vtblk1: 102400MB (209715200 512 byte sectors)

da0 at vtscsi0 bus 0 scbus3 target 0 lun 0
da0: <QNAP iSCSI Storage 4.0> Fixed Direct Access SPC-3 SCSI device
da0: Serial Number 194d2dd6-683a-42e1-9985-acd5580b119d
da0: 300.000MB/s transfers
da0: Command Queueing enabled
da0: 22647808MB (46382710784 512 byte sectors)

So I assume the iSCSI device is now connected as a VirtIO SCSI device on vtscsi0 / virtio_pci2.

Yes, "da0 at vtscsi0" is exactly what you wanted ...

You can remove the sym driver from your kernel or disable it. It will
only consume (some) memory, but is of no use in your VM.
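
(In case it is useful, and only as the usual approach rather than advice
tested on your exact setup: for a custom kernel you would drop the
"device sym" line from the kernel configuration (GENERIC includes it);
on the running system the attached instance can also be detached with
"devctl detach sym0", see devctl(8).)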

The important thing is that it works!

And not only does it work this way: it is also the most efficient way to
connect to a device offered by the hypervisor.

I greatly appreciated your insightful reply and of course your work on ncr / 
sym.

Thanks, and I'm glad my knowledge about this long obsolete hardware (for
lack of physical PCI slots in current systems, and due to the limit of at
most 40 MB/s of the fastest parallel SCSI transfer mode) was still of use ...

Regards, STefan
