On 2018年04月10日 17:23, Liang, Cunming wrote:
From: Paolo Bonzini [mailto:pbonz...@redhat.com]
Sent: Tuesday, April 10, 2018 3:52 PM
To: Bie, Tiwei <tiwei....@intel.com>; Jason Wang <jasow...@redhat.com>
Cc: m...@redhat.com; alex.william...@redhat.com; ddut...@redhat.com;
Duyck, Alexander H <alexander.h.du...@intel.com>; email@example.com-
open.org; linux-ker...@vger.kernel.org; k...@vger.kernel.org;
firstname.lastname@example.org; net...@vger.kernel.org; Daly, Dan
<dan.d...@intel.com>; Liang, Cunming <cunming.li...@intel.com>; Wang,
Zhihong <zhihong.w...@intel.com>; Tan, Jianfeng <jianfeng....@intel.com>;
Wang, Xiao W <xiao.w.w...@intel.com>
Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware
On 10/04/2018 06:57, Tiwei Bie wrote:
So you just move the abstraction layer from qemu to kernel, and you
still need different drivers in kernel for different device
interfaces of accelerators. This looks even more complex than leaving
it in qemu. As you said, another idea is to implement a userspace vhost
backend for accelerators, which seems easier and could co-work with
other parts of qemu without inventing new types of messages.
I'm not quite sure. Do you think it's acceptable to add various vendor
specific hardware drivers in QEMU?
I think so. We have vendor-specific quirks, and at some point there was an
idea of using quirks to implement (vendor-specific) live migration support for
Vendor-specific quirks for accessing VGA are a small portion. The other
major portions are still handled by the guest driver.
In this case, however, "various vendor specific drivers in QEMU" means that
QEMU takes over and provides the entire userspace device driver. Some
parts aren't even relevant to vhost; they're basic device function enabling.
Moreover, there could be different kinds of devices (network/block/...) under
vhost. Whatever the number of vendors or device types, the total LOC is not small.
The idea is to keep this extra complexity out of QEMU, keeping the
vhost adapter simple. As the vhost protocol is a de facto standard, it can
leverage kernel device drivers to provide the diversity. Change QEMU once,
and it then supports multi-vendor devices whose vendors naturally provide
kernel drivers.
Let me clarify my question: it's not qemu vs. kernel, but userspace vs.
kernel. It could be a library which is linked into qemu. Doing it in
userspace has the following obvious advantages:
- the attack surface is limited to userspace
- easier to maintain (compared to a kernel driver)
- easier to extend without introducing new userspace/kernel interfaces
- not tied to a specific operating system
If we want to do it in the kernel, we need to consider unifying code
between the mdev device driver and the generic driver. For a net driver,
maybe we can even consider doing it on top of existing drivers.
If QEMU is going to build a userspace driver framework there, we're
open-minded about that, even leveraging DPDK as the underlying library.
Looking forward to more comments from the community.
I'm working on this now by implementing vhost inside qemu IOThreads. I hope
I can post an RFC in a few months.