On 05/20/2017 04:44 AM, Michael S. Tsirkin wrote:
> On Fri, May 19, 2017 at 05:00:37PM +0800, Wei Wang wrote:
>>>> That being said, we compared to vhost-user, instead of vhost_net,
>>>> because vhost-user is the one used in NFV, which we think is a major
>>>> use case for vhost-pci.
>>> If this is true, why not draft a pmd driver instead of a kernel one?
>> Yes, that's right. There are actually two directions for the vhost-pci
>> driver implementation - a kernel driver and a dpdk pmd. The QEMU-side
>> device patches are posted first for discussion; once the device part is
>> ready, we will be able to have the related team work on the pmd driver
>> as well. As usual, the pmd driver would give much better throughput.
> For PMD to work though, the protocol will need to support vIOMMU.
> Not asking you to add it right now since it's work in progress
> for vhost user at this point, but something you will have to
> keep in mind. Further, reviewing vhost user iommu patches might be
> a good idea for you.


For the dpdk pmd case, I'm not sure vIOMMU is necessary. Since we only
need to share one piece of memory between the two VMs, we can send just
that region's info for sharing, instead of sending the entire VM's memory
and using vIOMMU to expose the accessible portion.
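
To make it concrete, below is a rough sketch (not code from this patch
series; the struct and field names are placeholders, loosely modeled on
vhost-user's memory region message). The point is that the peer only
receives a descriptor for that one region - its GPA, size and an offset
into the shared fd - rather than a table covering all of guest memory:

/* Sketch only, not from the vhost-pci patches: the names here are
 * placeholders, loosely modeled on vhost-user's memory region layout. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Describes the single piece of guest memory shared with the peer VM,
 * instead of a table covering the whole guest address space. */
struct shared_region_desc {
    uint64_t guest_phys_addr;   /* GPA of the shared region in the source VM */
    uint64_t size;              /* length of the shared region in bytes */
    uint64_t mmap_offset;       /* offset into the fd sent over the socket */
};

int main(void)
{
    /* Example: advertise a 16MB region starting at GPA 0x100000000. */
    struct shared_region_desc desc = {
        .guest_phys_addr = 0x100000000ULL,
        .size            = 16ULL << 20,
        .mmap_offset     = 0,
    };

    printf("share GPA 0x%" PRIx64 ", %" PRIu64 " bytes\n",
           desc.guest_phys_addr, desc.size);
    return 0;
}

The peer would then mmap only that range, so nothing outside the shared
region is ever exposed.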

Best,
Wei
