[dpdk-dev] [PATCH 0/6] vhost: add Tx zero copy support

2016-10-14 Thread linhaifeng
On 2016/10/10 16:03, Yuanhan Liu wrote:
> On Sun, Oct 09, 2016 at 06:46:44PM +0800, linhaifeng wrote:
>> On 2016/8/23 16:10, Yuanhan Liu wrote:
>>> The basic idea of Tx zero copy is, instead of copying data from the
>>> desc buf, here we let the mbuf reference the desc buf addr directly.
>>
>> Is there a problem when pushing a VLAN tag into an mbuf that references the
>> desc buf addr directly?
> 
> Yes, you can't do that when zero copy is enabled, due to following code
> piece:
> 
> +   if (unlikely(dev->dequeue_zero_copy && (hpa = gpa_to_hpa(dev,
> +                   desc->addr + desc_offset, cpy_len)))) {
> +           cur->data_len = cpy_len;
> ==> +        cur->data_off = 0;
> +           cur->buf_addr = (void *)(uintptr_t)desc_addr;
> +           cur->buf_physaddr = hpa;
> 
> The marked line basically leaves the mbuf with no headroom to use.
> 
>   --yliu
> 
>> We know that if the guest uses virtio_net (kernel), the skb may have no headroom.
> 

It is OK to set data_off to zero.
But we can also use the 128-byte headroom when the guest uses the virtio-net
PMD, though not with the virtio-net kernel driver.

I think it would be better to add a headroom size field to the desc, and have
the kernel driver support setting the headroom size.
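To make the headroom point concrete, here is a minimal, self-contained sketch (not DPDK code; `toy_mbuf` and `toy_vlan_insert` are simplified stand-ins for `rte_mbuf` and VLAN insertion) of why pushing a VLAN tag fails once data_off is forced to 0:

```c
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for struct rte_mbuf (illustrative only). */
struct toy_mbuf {
    uint8_t  buf[256];   /* backing buffer (the desc buf in zero-copy) */
    uint16_t data_off;   /* headroom size: where the frame starts      */
    uint16_t data_len;   /* frame length                               */
};

/*
 * VLAN insertion needs 4 bytes of headroom: the 12-byte dst/src MAC
 * pair is shifted 4 bytes toward the buffer start and the TPID/TCI
 * pair is written into the gap. With data_off == 0 (the zero-copy
 * dequeue case above) there is nowhere to shift to.
 */
static int toy_vlan_insert(struct toy_mbuf *m, uint16_t tci)
{
    if (m->data_off < 4)
        return -1;                        /* no headroom */

    uint8_t *new_hdr = m->buf + m->data_off - 4;

    memmove(new_hdr, new_hdr + 4, 12);    /* shift dst + src MAC */
    new_hdr[12] = 0x81;                   /* TPID 0x8100 */
    new_hdr[13] = 0x00;
    new_hdr[14] = (uint8_t)(tci >> 8);    /* TCI, network byte order */
    new_hdr[15] = (uint8_t)tci;

    m->data_off -= 4;
    m->data_len += 4;
    return 0;
}
```

With the usual 128-byte headroom the insert succeeds; with data_off = 0 it can only fail, which is exactly the limitation discussed above.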




[dpdk-dev] [PATCH 0/6] vhost: add Tx zero copy support

2016-10-10 Thread Yuanhan Liu
On Sun, Oct 09, 2016 at 06:46:44PM +0800, linhaifeng wrote:
> On 2016/8/23 16:10, Yuanhan Liu wrote:
> > The basic idea of Tx zero copy is, instead of copying data from the
> > desc buf, here we let the mbuf reference the desc buf addr directly.
> 
> Is there a problem when pushing a VLAN tag into an mbuf that references the
> desc buf addr directly?

Yes, you can't do that when zero copy is enabled, due to following code
piece:

+   if (unlikely(dev->dequeue_zero_copy && (hpa = gpa_to_hpa(dev,
+                   desc->addr + desc_offset, cpy_len)))) {
+           cur->data_len = cpy_len;
==> +        cur->data_off = 0;
+           cur->buf_addr = (void *)(uintptr_t)desc_addr;
+           cur->buf_physaddr = hpa;

The marked line basically leaves the mbuf with no headroom to use.

--yliu

> We know that if the guest uses virtio_net (kernel), the skb may have no headroom.
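For reference, gpa_to_hpa() in the snippet above performs a guest-physical to host-physical translation. A minimal sketch of the idea (illustrative names and layout, not the actual DPDK implementation) is a lookup over a table of contiguous regions, returning 0 when the range is not fully covered by one region; that is why the code above falls back to copying when the call yields 0:

```c
#include <stdint.h>
#include <stddef.h>

/* One contiguous guest-physical -> host-physical mapping (illustrative). */
struct guest_page {
    uint64_t gpa;    /* guest physical start */
    uint64_t hpa;    /* host physical start  */
    uint64_t size;
};

/*
 * Translate [gpa, gpa + len) to a host physical address. Returns 0
 * when the range is unmapped or spans two regions; the caller then
 * takes the copy path (cf. the desc-buf-across-two-pages TODO in
 * the cover letter).
 */
static uint64_t toy_gpa_to_hpa(const struct guest_page *tab, size_t n,
                               uint64_t gpa, uint64_t len)
{
    for (size_t i = 0; i < n; i++) {
        const struct guest_page *p = &tab[i];

        /* the whole range must fit inside a single region */
        if (gpa >= p->gpa && gpa + len <= p->gpa + p->size)
            return p->hpa + (gpa - p->gpa);
    }
    return 0;
}
```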


[dpdk-dev] [PATCH 0/6] vhost: add Tx zero copy support

2016-10-09 Thread Yuanhan Liu
On Mon, Aug 29, 2016 at 08:32:55AM +, Xu, Qian Q wrote:
> I just ran a PVP test: the NIC receives packets, then forwards them to the
> vhost PMD and a virtio-user interface. I didn't see any performance gains in
> this scenario. No packet size from 64B to 1518B benefited from this patchset;
> in fact, performance dropped a lot below 1280B and was about the same at 1518B.

40G nic?

> The TX/RX desc setting is "txd=64, rxd=128"

Try it with "txd=128"; you should be able to set that value now that the
vhost Tx indirect patch is merged.

--yliu

> for the TX-zero-copy enabled case. For the TX-zero-copy disabled case, I just
> ran the default testpmd (txd=512, rxd=128) without the patch.
> Could you help check the NIC2VM case?
> 
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Yuanhan Liu
> Sent: Tuesday, August 23, 2016 4:11 PM
> To: dev at dpdk.org
> Cc: Maxime Coquelin ; Yuanhan Liu  at linux.intel.com>
> Subject: [dpdk-dev] [PATCH 0/6] vhost: add Tx zero copy support
> 
> This patch set enables vhost Tx zero copy. The majority of the work goes to
> patch 4: vhost: add Tx zero copy.
> 
> The basic idea of Tx zero copy is that, instead of copying data from the desc
> buf, we let the mbuf reference the desc buf addr directly.
> 
> The major issue behind that is how and when to update the used ring.
> You could check the commit log of patch 4 for more details.
> 
> Patch 5 introduces a new flag, RTE_VHOST_USER_TX_ZERO_COPY, to enable Tx zero
> copy, which is disabled by default.
> 
> A few more TODOs are left, including handling a desc buf that spans two
> physical pages, updating the release note, etc. Those will be fixed in a later
> version. For now, here is a simple version that hopefully shows the idea
> clearly.
> 
> I did some quick tests; the performance gain is quite impressive.
> 
> For a simple dequeue workload (running rxonly in vhost-pmd and running txonly
> in guest testpmd), it yields a 40+% performance boost for packet size 1400B.
> 
> For the VM2VM iperf test case, it's even better: about a 70% boost.
> 
> ---
> Yuanhan Liu (6):
>   vhost: simplify memory regions handling
>   vhost: get guest/host physical address mappings
>   vhost: introduce last avail idx for Tx
>   vhost: add Tx zero copy
>   vhost: add a flag to enable Tx zero copy
>   examples/vhost: add an option to enable Tx zero copy
> 
>  doc/guides/prog_guide/vhost_lib.rst |   7 +-
>  examples/vhost/main.c   |  19 ++-
>  lib/librte_vhost/rte_virtio_net.h   |   1 +
>  lib/librte_vhost/socket.c   |   5 +
>  lib/librte_vhost/vhost.c|  12 ++
>  lib/librte_vhost/vhost.h| 103 +
>  lib/librte_vhost/vhost_user.c   | 297 +++-
>  lib/librte_vhost/virtio_net.c   | 188 +++
>  8 files changed, 472 insertions(+), 160 deletions(-)
> 
> --
> 1.9.0


[dpdk-dev] [PATCH 0/6] vhost: add Tx zero copy support

2016-10-09 Thread linhaifeng
On 2016/8/23 16:10, Yuanhan Liu wrote:
> The basic idea of Tx zero copy is, instead of copying data from the
> desc buf, here we let the mbuf reference the desc buf addr directly.

Is there a problem when pushing a VLAN tag into an mbuf that references the
desc buf addr directly?
We know that if the guest uses virtio_net (kernel), the skb may have no headroom.



[dpdk-dev] [PATCH 0/6] vhost: add Tx zero copy support

2016-09-23 Thread Yuanhan Liu
On Mon, Aug 29, 2016 at 08:57:52AM +, Xu, Qian Q wrote:
> Btw, some good news: if I run a simple dequeue workload (running rxonly in
> vhost-pmd and running txonly in guest testpmd), it yields a ~50% performance
> boost for packet size 1518B, but this case is without a NIC.
> And in a similar case, vhost<-->virtio loopback, we can see ~10% performance
> gains at 1518B without a NIC.
> 
> Some bad news: with the patch, I noticed a 3%-7% performance drop with
> zero-copy=0 compared with current DPDK (e.g. 16.07) in vhost/virtio loopback
> and vhost RX only + virtio TX only. It seems the patch impacts the
> zero-copy=0 performance a little.

There was some follow-up discussion internally; the 3%-7% drop reported
by Qian when zero-copy is not enabled is actually due to fluctuation.
So, a false alarm.

--yliu


[dpdk-dev] [PATCH 0/6] vhost: add Tx zero copy support

2016-08-29 Thread Xu, Qian Q
Btw, some good news: if I run a simple dequeue workload (running rxonly in
vhost-pmd and running txonly in guest testpmd), it yields a ~50% performance
boost for packet size 1518B, but this case is without a NIC.
And in a similar case, vhost<-->virtio loopback, we can see ~10% performance
gains at 1518B without a NIC.

Some bad news: with the patch, I noticed a 3%-7% performance drop with
zero-copy=0 compared with current DPDK (e.g. 16.07) in vhost/virtio loopback
and vhost RX only + virtio TX only. It seems the patch impacts the
zero-copy=0 performance a little.

-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Xu, Qian Q
Sent: Monday, August 29, 2016 4:33 PM
To: Yuanhan Liu ; dev at dpdk.org
Cc: Maxime Coquelin 
Subject: Re: [dpdk-dev] [PATCH 0/6] vhost: add Tx zero copy support

I just ran a PVP test: the NIC receives packets, then forwards them to the
vhost PMD and a virtio-user interface. I didn't see any performance gains in
this scenario. No packet size from 64B to 1518B benefited from this patchset;
in fact, performance dropped a lot below 1280B and was about the same at 1518B.
The TX/RX desc setting is "txd=64, rxd=128" for the TX-zero-copy enabled case.
For the TX-zero-copy disabled case, I just ran the default testpmd (txd=512,
rxd=128) without the patch.
Could you help check the NIC2VM case?

-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Yuanhan Liu
Sent: Tuesday, August 23, 2016 4:11 PM
To: dev at dpdk.org
Cc: Maxime Coquelin ; Yuanhan Liu 
Subject: [dpdk-dev] [PATCH 0/6] vhost: add Tx zero copy support

This patch set enables vhost Tx zero copy. The majority of the work goes to
patch 4: vhost: add Tx zero copy.

The basic idea of Tx zero copy is that, instead of copying data from the desc
buf, we let the mbuf reference the desc buf addr directly.

The major issue behind that is how and when to update the used ring.
You could check the commit log of patch 4 for more details.

Patch 5 introduces a new flag, RTE_VHOST_USER_TX_ZERO_COPY, to enable Tx zero
copy, which is disabled by default.

A few more TODOs are left, including handling a desc buf that spans two
physical pages, updating the release note, etc. Those will be fixed in a later
version. For now, here is a simple version that hopefully shows the idea clearly.

I did some quick tests; the performance gain is quite impressive.

For a simple dequeue workload (running rxonly in vhost-pmd and running txonly in
guest testpmd), it yields a 40+% performance boost for packet size 1400B.

For the VM2VM iperf test case, it's even better: about a 70% boost.

---
Yuanhan Liu (6):
  vhost: simplify memory regions handling
  vhost: get guest/host physical address mappings
  vhost: introduce last avail idx for Tx
  vhost: add Tx zero copy
  vhost: add a flag to enable Tx zero copy
  examples/vhost: add an option to enable Tx zero copy

 doc/guides/prog_guide/vhost_lib.rst |   7 +-
 examples/vhost/main.c   |  19 ++-
 lib/librte_vhost/rte_virtio_net.h   |   1 +
 lib/librte_vhost/socket.c   |   5 +
 lib/librte_vhost/vhost.c|  12 ++
 lib/librte_vhost/vhost.h| 103 +
 lib/librte_vhost/vhost_user.c   | 297 +++-
 lib/librte_vhost/virtio_net.c   | 188 +++
 8 files changed, 472 insertions(+), 160 deletions(-)

--
1.9.0



[dpdk-dev] [PATCH 0/6] vhost: add Tx zero copy support

2016-08-29 Thread Xu, Qian Q
I just ran a PVP test: the NIC receives packets, then forwards them to the
vhost PMD and a virtio-user interface. I didn't see any performance gains in
this scenario. No packet size from 64B to 1518B benefited from this patchset;
in fact, performance dropped a lot below 1280B and was about the same at 1518B.
The TX/RX desc setting is "txd=64, rxd=128" for the TX-zero-copy enabled case.
For the TX-zero-copy disabled case, I just ran the default testpmd (txd=512,
rxd=128) without the patch.
Could you help check the NIC2VM case?

-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Yuanhan Liu
Sent: Tuesday, August 23, 2016 4:11 PM
To: dev at dpdk.org
Cc: Maxime Coquelin ; Yuanhan Liu 
Subject: [dpdk-dev] [PATCH 0/6] vhost: add Tx zero copy support

This patch set enables vhost Tx zero copy. The majority of the work goes to
patch 4: vhost: add Tx zero copy.

The basic idea of Tx zero copy is that, instead of copying data from the desc
buf, we let the mbuf reference the desc buf addr directly.

The major issue behind that is how and when to update the used ring.
You could check the commit log of patch 4 for more details.

Patch 5 introduces a new flag, RTE_VHOST_USER_TX_ZERO_COPY, to enable Tx zero
copy, which is disabled by default.

A few more TODOs are left, including handling a desc buf that spans two
physical pages, updating the release note, etc. Those will be fixed in a later
version. For now, here is a simple version that hopefully shows the idea clearly.

I did some quick tests; the performance gain is quite impressive.

For a simple dequeue workload (running rxonly in vhost-pmd and running txonly in
guest testpmd), it yields a 40+% performance boost for packet size 1400B.

For the VM2VM iperf test case, it's even better: about a 70% boost.

---
Yuanhan Liu (6):
  vhost: simplify memory regions handling
  vhost: get guest/host physical address mappings
  vhost: introduce last avail idx for Tx
  vhost: add Tx zero copy
  vhost: add a flag to enable Tx zero copy
  examples/vhost: add an option to enable Tx zero copy

 doc/guides/prog_guide/vhost_lib.rst |   7 +-
 examples/vhost/main.c   |  19 ++-
 lib/librte_vhost/rte_virtio_net.h   |   1 +
 lib/librte_vhost/socket.c   |   5 +
 lib/librte_vhost/vhost.c|  12 ++
 lib/librte_vhost/vhost.h| 103 +
 lib/librte_vhost/vhost_user.c   | 297 +++-
 lib/librte_vhost/virtio_net.c   | 188 +++
 8 files changed, 472 insertions(+), 160 deletions(-)

--
1.9.0



[dpdk-dev] [PATCH 0/6] vhost: add Tx zero copy support

2016-08-23 Thread Yuanhan Liu
BTW, I really appreciate your efforts in reviewing this patchset.

It would be great if you could take some time to review another
patchset of mine :)

[PATCH 0/7] vhost: vhost-cuse removal and code path refactoring

It touches a lot of the code base, and I wish I could apply it ASAP
so that the chance of a later patch introducing conflicts is small.

--yliu

On Tue, Aug 23, 2016 at 10:42:11PM +0800, Yuanhan Liu wrote:
> On Tue, Aug 23, 2016 at 04:18:40PM +0200, Maxime Coquelin wrote:
> > 
> > 
> > On 08/23/2016 10:10 AM, Yuanhan Liu wrote:
> > >This patch set enables vhost Tx zero copy. The majority of the work goes
> > >to patch 4: vhost: add Tx zero copy.
> > >
> > >The basic idea of Tx zero copy is that, instead of copying data from the
> > >desc buf, we let the mbuf reference the desc buf addr directly.
> > >
> > >The major issue behind that is how and when to update the used ring.
> > >You could check the commit log of patch 4 for more details.
> > >
> > >Patch 5 introduces a new flag, RTE_VHOST_USER_TX_ZERO_COPY, to enable
> > >Tx zero copy, which is disabled by default.
> > >
> > >A few more TODOs are left, including handling a desc buf that spans
> > >two physical pages, updating the release note, etc. Those will be fixed
> > >in a later version. For now, here is a simple version that hopefully
> > >shows the idea clearly.
> > >
> > >I did some quick tests; the performance gain is quite impressive.
> > >
> > >For a simple dequeue workload (running rxonly in vhost-pmd and running
> > >txonly in guest testpmd), it yields a 40+% performance boost for packet
> > >size 1400B.
> > >
> > >For the VM2VM iperf test case, it's even better: about a 70% boost.
> > 
> > This is indeed impressive.
> > Somewhere else, you mention that there is a small regression with small
> > packets. Do you have some figures to share?
> 
> It could be a 15% drop for the PVP case with 64B packets. The test topo is:
> 
>nic 0 --> VM Rx --> VM Tx --> nic 0
> 
> Put simply, I run vhost-switch example in the host and run testpmd in
> the guest.
> 
> Though the number looks big, I don't think it's an issue. First of all,
> it's disabled by default. Secondly, if you want to enable it, you should
> be certain that the packet size is normally big, otherwise, you should
> not bother to try with zero copy.
> 
> > Also, with this feature OFF, do you see some regressions for both small
> > and bigger packets?
> 
> Good question. I didn't check it on purpose, but I did try when it's
> disabled, and the number I got is pretty much the same as the one I got
> without this feature. So, I would say I don't see regressions. Anyway, I
> could do more tests to make sure.
>   
>   --yliu


[dpdk-dev] [PATCH 0/6] vhost: add Tx zero copy support

2016-08-23 Thread Yuanhan Liu
On Tue, Aug 23, 2016 at 04:18:40PM +0200, Maxime Coquelin wrote:
> 
> 
> On 08/23/2016 10:10 AM, Yuanhan Liu wrote:
> >This patch set enables vhost Tx zero copy. The majority of the work goes
> >to patch 4: vhost: add Tx zero copy.
> >
> >The basic idea of Tx zero copy is that, instead of copying data from the
> >desc buf, we let the mbuf reference the desc buf addr directly.
> >
> >The major issue behind that is how and when to update the used ring.
> >You could check the commit log of patch 4 for more details.
> >
> >Patch 5 introduces a new flag, RTE_VHOST_USER_TX_ZERO_COPY, to enable
> >Tx zero copy, which is disabled by default.
> >
> >A few more TODOs are left, including handling a desc buf that spans
> >two physical pages, updating the release note, etc. Those will be fixed
> >in a later version. For now, here is a simple version that hopefully
> >shows the idea clearly.
> >
> >I did some quick tests; the performance gain is quite impressive.
> >
> >For a simple dequeue workload (running rxonly in vhost-pmd and running
> >txonly in guest testpmd), it yields a 40+% performance boost for packet
> >size 1400B.
> >
> >For the VM2VM iperf test case, it's even better: about a 70% boost.
> 
> This is indeed impressive.
> Somewhere else, you mention that there is a small regression with small
> packets. Do you have some figures to share?

It could be a 15% drop for the PVP case with 64B packets. The test topo is:

 nic 0 --> VM Rx --> VM Tx --> nic 0

Put simply, I run vhost-switch example in the host and run testpmd in
the guest.

Though the number looks big, I don't think it's an issue. First of all,
it's disabled by default. Secondly, if you want to enable it, you should
be certain that the packet size is normally big, otherwise, you should
not bother to try with zero copy.

> Also, with this feature OFF, do you see some regressions for both small
> and bigger packets?

Good question. I didn't check it on purpose, but I did try when it's
disabled, and the number I got is pretty much the same as the one I got
without this feature. So, I would say I don't see regressions. Anyway, I
could do more tests to make sure.

--yliu


[dpdk-dev] [PATCH 0/6] vhost: add Tx zero copy support

2016-08-23 Thread Maxime Coquelin


On 08/23/2016 04:53 PM, Yuanhan Liu wrote:
> BTW, I really appreciate your efforts in reviewing this patchset.
>
> It would be great if you could take some time to review another
> patchset of mine :)
>
> [PATCH 0/7] vhost: vhost-cuse removal and code path refactoring
>
> It touches a lot of the code base, and I wish I could apply it ASAP
> so that the chance of a later patch introducing conflicts is small.

Sure, I will try to review it by tomorrow morning (CET).

Regards,
Maxime

>
>   --yliu
>
> On Tue, Aug 23, 2016 at 10:42:11PM +0800, Yuanhan Liu wrote:
>> On Tue, Aug 23, 2016 at 04:18:40PM +0200, Maxime Coquelin wrote:
>>>
>>>
>>> On 08/23/2016 10:10 AM, Yuanhan Liu wrote:
 This patch set enables vhost Tx zero copy. The majority of the work goes to
 patch 4: vhost: add Tx zero copy.

 The basic idea of Tx zero copy is that, instead of copying data from the
 desc buf, we let the mbuf reference the desc buf addr directly.

 The major issue behind that is how and when to update the used ring.
 You could check the commit log of patch 4 for more details.

 Patch 5 introduces a new flag, RTE_VHOST_USER_TX_ZERO_COPY, to enable
 Tx zero copy, which is disabled by default.

 A few more TODOs are left, including handling a desc buf that spans
 two physical pages, updating the release note, etc. Those will be fixed
 in a later version. For now, here is a simple version that hopefully
 shows the idea clearly.

 I did some quick tests; the performance gain is quite impressive.

 For a simple dequeue workload (running rxonly in vhost-pmd and running
 txonly in guest testpmd), it yields a 40+% performance boost for packet
 size 1400B.

 For the VM2VM iperf test case, it's even better: about a 70% boost.
>>>
>>> This is indeed impressive.
>>> Somewhere else, you mention that there is a small regression with small
>>> packets. Do you have some figures to share?
>>
>> It could be a 15% drop for the PVP case with 64B packets. The test topo is:
>>
>>   nic 0 --> VM Rx --> VM Tx --> nic 0
>>
>> Put simply, I run vhost-switch example in the host and run testpmd in
>> the guest.
>>
>> Though the number looks big, I don't think it's an issue. First of all,
>> it's disabled by default. Secondly, if you want to enable it, you should
>> be certain that the packet size is normally big, otherwise, you should
>> not bother to try with zero copy.
>>
>>> Also, with this feature OFF, do you see some regressions for both small
>>> and bigger packets?
>>
>> Good question. I didn't check it on purpose, but I did try when it's
>> disabled, and the number I got is pretty much the same as the one I got
>> without this feature. So, I would say I don't see regressions. Anyway, I
>> could do more tests to make sure.
>>  
>>  --yliu


[dpdk-dev] [PATCH 0/6] vhost: add Tx zero copy support

2016-08-23 Thread Maxime Coquelin


On 08/23/2016 10:10 AM, Yuanhan Liu wrote:
> This patch set enables vhost Tx zero copy. The majority of the work goes to
> patch 4: vhost: add Tx zero copy.
>
> The basic idea of Tx zero copy is that, instead of copying data from the
> desc buf, we let the mbuf reference the desc buf addr directly.
>
> The major issue behind that is how and when to update the used ring.
> You could check the commit log of patch 4 for more details.
>
> Patch 5 introduces a new flag, RTE_VHOST_USER_TX_ZERO_COPY, to enable
> Tx zero copy, which is disabled by default.
>
> A few more TODOs are left, including handling a desc buf that spans
> two physical pages, updating the release note, etc. Those will be fixed
> in a later version. For now, here is a simple version that hopefully
> shows the idea clearly.
>
> I did some quick tests; the performance gain is quite impressive.
>
> For a simple dequeue workload (running rxonly in vhost-pmd and running
> txonly in guest testpmd), it yields a 40+% performance boost for packet
> size 1400B.
>
> For the VM2VM iperf test case, it's even better: about a 70% boost.

This is indeed impressive.
Somewhere else, you mention that there is a small regression with small
packets. Do you have some figures to share?

Also, with this feature OFF, do you see some regressions for both small
and bigger packets?

Thanks,
Maxime
>
> ---
> Yuanhan Liu (6):
>   vhost: simplify memory regions handling
>   vhost: get guest/host physical address mappings
>   vhost: introduce last avail idx for Tx
>   vhost: add Tx zero copy
>   vhost: add a flag to enable Tx zero copy
>   examples/vhost: add an option to enable Tx zero copy
>
>  doc/guides/prog_guide/vhost_lib.rst |   7 +-
>  examples/vhost/main.c   |  19 ++-
>  lib/librte_vhost/rte_virtio_net.h   |   1 +
>  lib/librte_vhost/socket.c   |   5 +
>  lib/librte_vhost/vhost.c|  12 ++
>  lib/librte_vhost/vhost.h| 103 +
>  lib/librte_vhost/vhost_user.c   | 297 +++-
>  lib/librte_vhost/virtio_net.c   | 188 +++
>  8 files changed, 472 insertions(+), 160 deletions(-)
>


[dpdk-dev] [PATCH 0/6] vhost: add Tx zero copy support

2016-08-23 Thread Yuanhan Liu
This patch set enables vhost Tx zero copy. The majority of the work goes to
patch 4: vhost: add Tx zero copy.

The basic idea of Tx zero copy is that, instead of copying data from the
desc buf, we let the mbuf reference the desc buf addr directly.

The major issue behind that is how and when to update the used ring.
You could check the commit log of patch 4 for more details.

Patch 5 introduces a new flag, RTE_VHOST_USER_TX_ZERO_COPY, to enable
Tx zero copy, which is disabled by default.

A few more TODOs are left, including handling a desc buf that spans
two physical pages, updating the release note, etc. Those will be fixed
in a later version. For now, here is a simple version that hopefully
shows the idea clearly.

I did some quick tests; the performance gain is quite impressive.

For a simple dequeue workload (running rxonly in vhost-pmd and running
txonly in guest testpmd), it yields a 40+% performance boost for packet
size 1400B.

For the VM2VM iperf test case, it's even better: about a 70% boost.
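The "how and when to update the used ring" issue can be sketched as follows (a toy model with illustrative names, not the patch's actual code): in copy mode the desc goes back to the used ring as soon as the data is copied out, while in zero-copy mode it must be parked until the mbuf that references its buffer is freed, since returning it earlier would let the guest reuse the buffer underneath us.

```c
#include <stdint.h>

#define RING_SIZE 8

/* Toy model of a vring used ring plus a zero-copy in-flight list. */
struct toy_vq {
    uint16_t used[RING_SIZE];       /* desc indexes returned to the guest */
    uint16_t used_idx;
    uint16_t inflight[RING_SIZE];   /* descs still referenced by mbufs    */
    int      n_inflight;
};

static void used_ring_add(struct toy_vq *vq, uint16_t desc_idx)
{
    vq->used[vq->used_idx % RING_SIZE] = desc_idx;
    vq->used_idx++;
}

/* Copy mode: the data was copied out, so the desc is free right away. */
static void dequeue_copy_done(struct toy_vq *vq, uint16_t desc_idx)
{
    used_ring_add(vq, desc_idx);
}

/* Zero-copy mode: the mbuf still points into the guest buffer; park
 * the desc instead of returning it. */
static void dequeue_zcopy_done(struct toy_vq *vq, uint16_t desc_idx)
{
    vq->inflight[vq->n_inflight++] = desc_idx;
}

/* Called when the referencing mbuf is finally freed (e.g. after the
 * NIC reports Tx completion): only now is the desc given back. */
static void zcopy_mbuf_freed(struct toy_vq *vq, uint16_t desc_idx)
{
    for (int i = 0; i < vq->n_inflight; i++) {
        if (vq->inflight[i] == desc_idx) {
            vq->inflight[i] = vq->inflight[--vq->n_inflight];
            used_ring_add(vq, desc_idx);
            return;
        }
    }
}
```

This deferral is also why a small Tx ring (e.g. txd=64) can starve the avail ring under zero copy: descs stay in flight as long as the NIC holds the mbufs.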

---
Yuanhan Liu (6):
  vhost: simplify memory regions handling
  vhost: get guest/host physical address mappings
  vhost: introduce last avail idx for Tx
  vhost: add Tx zero copy
  vhost: add a flag to enable Tx zero copy
  examples/vhost: add an option to enable Tx zero copy

 doc/guides/prog_guide/vhost_lib.rst |   7 +-
 examples/vhost/main.c   |  19 ++-
 lib/librte_vhost/rte_virtio_net.h   |   1 +
 lib/librte_vhost/socket.c   |   5 +
 lib/librte_vhost/vhost.c|  12 ++
 lib/librte_vhost/vhost.h| 103 +
 lib/librte_vhost/vhost_user.c   | 297 +++-
 lib/librte_vhost/virtio_net.c   | 188 +++
 8 files changed, 472 insertions(+), 160 deletions(-)

-- 
1.9.0