On 2019/10/15 1:49 AM, Stefan Hajnoczi wrote:
On Fri, Oct 11, 2019 at 04:15:50PM +0800, Jason Wang wrote:
There is hardware that can do virtio datapath offloading while having
its own control path. This series tries to implement a mdev based
unified API to support using kernel virtio drivers to drive those
devices. This is done by introducing a new mdev transport for virtio
(virtio_mdev) which registers itself as a new kind of mdev driver. It
then provides a unified way for kernel virtio drivers to talk with
the mdev device implementation.
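
For reference, the registration side of the transport looks roughly
like this (a minimal sketch against the mdev driver API of this
period; the probe/remove bodies are placeholders rather than the
actual transport code):

#include <linux/mdev.h>
#include <linux/module.h>
#include <linux/virtio.h>

/* Sketch of a probe: build a virtio_device whose config ops forward
 * to the mdev parent's callbacks, then hand it to the virtio core so
 * stock drivers (e.g. virtio-net) can bind on top. */
static int virtio_mdev_probe(struct device *dev)
{
	/* ... allocate a virtio_device, wire up its config ops,
	 * then register_virtio_device() ... */
	return 0;
}

static void virtio_mdev_remove(struct device *dev)
{
	/* ... unregister_virtio_device() ... */
}

static struct mdev_driver virtio_mdev_driver = {
	.name	= "virtio_mdev",
	.probe	= virtio_mdev_probe,
	.remove	= virtio_mdev_remove,
};

static int __init virtio_mdev_init(void)
{
	return mdev_register_driver(&virtio_mdev_driver, THIS_MODULE);
}
module_init(virtio_mdev_init);

static void __exit virtio_mdev_exit(void)
{
	mdev_unregister_driver(&virtio_mdev_driver);
}
module_exit(virtio_mdev_exit);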

Though the series only contains kernel driver support, the goal is to
make the transport generic enough to support userspace drivers. This
means vhost-mdev[1] could be built on top as well by reusing the
transport.

A sample driver is also implemented which simulates a virtio-net
loopback ethernet device on top of vringh + workqueue. This could be
used as a reference implementation for real hardware drivers.
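
The simulated datapath is essentially a work item that drains the TX
vring through the vringh kernel API. A sketch of the idea (struct
loopback_sim and the elided copy step are illustrative, not the
sample driver's actual code):

#include <linux/vringh.h>
#include <linux/workqueue.h>

struct loopback_sim {		/* illustrative device state */
	struct vringh txvrh;	/* host-side view of the TX vring */
	struct work_struct work;
};

/* Pop each buffer the driver made available on TX and complete it as
 * if it were transmitted; a real loopback would first copy the
 * payload into an RX buffer. */
static void loopback_work(struct work_struct *work)
{
	struct loopback_sim *sim =
		container_of(work, struct loopback_sim, work);
	struct vringh_kiov riov;
	u16 head;
	int err;

	vringh_kiov_init(&riov, NULL, 0);
	for (;;) {
		err = vringh_getdesc_kern(&sim->txvrh, &riov, NULL,
					  &head, GFP_ATOMIC);
		if (err <= 0)
			break;	/* 0: ring empty, <0: error */
		/* ... copy the data out of riov into an RX buffer ... */
		vringh_complete_kern(&sim->txvrh, head, 0);
	}
	vringh_kiov_cleanup(&riov);
}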

Considering the mdev framework only supports VFIO devices and drivers
right now, this series also extends it to support other types. This
is done by introducing a class id to the device and pairing it with
the id_table claimed by the driver. On top of that, this series also
decouples the device specific parent ops from the common ones.
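
Concretely, the pairing works along these lines (a sketch;
mdev_set_class(), struct mdev_class_id and MDEV_CLASS_ID_VIRTIO
reflect the approach taken by this series but are shown here
illustratively):

/* The parent tags each mdev with a class when it is created ... */
static int my_parent_create(struct kobject *kobj, struct mdev_device *mdev)
{
	mdev_set_class(mdev, MDEV_CLASS_ID_VIRTIO);
	return 0;
}

/* ... and a driver declares which classes it can handle, so the mdev
 * core only binds it to matching devices. */
static const struct mdev_class_id virtio_mdev_match[] = {
	{ MDEV_CLASS_ID_VIRTIO },
	{ 0 },
};

The virtio_mdev driver then points its id_table at virtio_mdev_match,
while the existing VFIO driver keeps matching the VFIO class, so both
kinds of drivers can coexist on the same framework.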
I was curious, so I took a quick look and posted comments.

I guess this driver runs inside the guest since it registers virtio
devices?


It could run in either the guest or the host, but the main focus is to run it in the host so that we can use virtio drivers in containers.



If this is used with physical PCI devices that support datapath
offloading, how are physical devices presented to the guest without
SR-IOV?


We will do control path mediation through vhost-mdev[1] and vhost-vfio[2]. Then we will present a fully virtio compatible ethernet device to the guest.

SR-IOV is not a must; any mdev device that implements the API defined in patch 5 can be used by this framework.

Thanks

[1] https://lkml.org/lkml/2019/9/26/15

[2] https://patchwork.ozlabs.org/cover/984763/



Stefan
