Michael,
Yes, I think this packet split mode probably maps well to mergeable buffer
support. Note that:
1. Not all devices support large packets in this way; others might map
better to indirect buffers.
Can the indirect buffers accommodate the skb frag_list?
So we have to figure out ...
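For context on how the two feature paths above differ on the receive side:
with VIRTIO_NET_F_MRG_RXBUF, the first buffer of a received packet carries a
header whose num_buffers field says how many guest buffers the packet was
scattered across, so a host skb built from page fragments (the packet split
case) can be spread over several posted buffers; with indirect descriptors, a
single descriptor instead points at a table of scatter-gather entries. Below
is only a minimal user-space sketch of the mergeable-buffer reassembly, not
vhost or guest driver code: the header layout follows the virtio spec, while
struct mrg_buf, gather_packet() and the array of posted buffers are invented
for illustration.

/*
 * Minimal user-space sketch (not vhost-net code): reassemble one packet
 * that the host has split across several mergeable receive buffers.
 * The header layout mirrors virtio_net_hdr_mrg_rxbuf from the virtio
 * spec; struct mrg_buf and gather_packet() are invented for illustration.
 */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

struct virtio_net_hdr_mrg {
        uint8_t  flags;
        uint8_t  gso_type;
        uint16_t hdr_len;
        uint16_t gso_size;
        uint16_t csum_start;
        uint16_t csum_offset;
        uint16_t num_buffers;   /* buffers used by this packet, incl. this one */
};

struct mrg_buf {                /* one guest-posted receive buffer */
        uint8_t *data;
        size_t   len;           /* bytes the device wrote into it */
};

/* Copy one merged packet out of 'bufs' into 'out'; returns payload length. */
static size_t gather_packet(const struct mrg_buf *bufs,
                            uint8_t *out, size_t out_sz)
{
        const struct virtio_net_hdr_mrg *hdr =
                (const struct virtio_net_hdr_mrg *)bufs[0].data;
        size_t copied = 0;

        for (uint16_t i = 0; i < hdr->num_buffers; i++) {
                const uint8_t *src = bufs[i].data;
                size_t len = bufs[i].len;

                if (i == 0) {   /* the first buffer also carries the header */
                        src += sizeof(*hdr);
                        len -= sizeof(*hdr);
                }
                if (copied + len > out_sz)
                        break;
                memcpy(out + copied, src, len);
                copied += len;
        }
        return copied;
}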
Michael,
The idea is simple: pin the guest VM user space and then
let the host NIC driver have the chance to DMA directly into it.
The patches are based on the vhost-net backend driver. We add a device
which provides proto_ops such as sendmsg/recvmsg to vhost-net to
send/recv directly to/from ...
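To make the direct-DMA idea concrete: before the NIC can write into guest
memory, the host side has to pin the guest's receive buffers so the pages
stay resident while the device uses them. The fragment below is only a
sketch under stated assumptions, not the posted mp device code; it uses
get_user_pages_fast() and put_page() with the signatures of the 2.6.3x
kernels this series targets, and the mp_* names and the fixed page count are
invented for illustration.

/*
 * Sketch only, not the posted mp device code: pin a guest user-space
 * buffer so the host NIC can DMA straight into it, and release it again.
 * get_user_pages_fast()/put_page() follow the 2.6.3x signatures; the
 * mp_* names and the fixed page count are invented for illustration.
 */
#include <linux/errno.h>
#include <linux/mm.h>

#define GUEST_PAGES 16

static struct page *guest_pages[GUEST_PAGES];

/* Pin 'nr' pages of the guest buffer at 'uaddr' for the NIC to write into. */
static int mp_pin_guest_buffer(unsigned long uaddr, int nr)
{
        int pinned;

        if (nr > GUEST_PAGES)
                return -EINVAL;

        /* write = 1: the device stores received frames into these pages */
        pinned = get_user_pages_fast(uaddr, nr, 1, guest_pages);
        if (pinned < nr) {
                while (pinned > 0)
                        put_page(guest_pages[--pinned]);
                return -EFAULT;
        }
        return 0;
}

/* Drop the page references once the DMA has completed. */
static void mp_unpin_guest_buffer(int nr)
{
        while (nr > 0)
                put_page(guest_pages[--nr]);
}

The pinned pages would then be fed to the NIC driver's receive path, and the
completed frames handed back to vhost-net through the device's recvmsg hook
mentioned above.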
Michael,
What we have not done yet:
packet split support
What does this mean, exactly?
We can support a 1500 MTU, but for jumbo frames, since the vhost driver
did not support mergeable buffers before, we cannot try it with multiple sg.
I do not see why; vhost currently supports 64K buffers with ...