On Mon, 2010-03-22 at 20:16 +0200, Michael S. Tsirkin wrote:
On Sun, Mar 21, 2010 at 01:58:29PM +0200, Avi Kivity wrote:
On 03/21/2010 01:34 PM, Michael S. Tsirkin wrote:
On Sun, Mar 21, 2010 at 12:29:31PM +0200, Avi Kivity wrote:
On 03/21/2010 12:15 PM, Michael S. Tsirkin wrote:
-by: Sridhar Samudrala s...@us.ibm.com
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index a3fd0f9..7fb48d3 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -54,7 +54,7 @@ extern struct kmem_cache *kvm_vcpu_cache;
*/
struct kvm_io_bus {
int
On Wed, 2010-03-31 at 12:51 +0300, Michael S. Tsirkin wrote:
On Tue, Mar 30, 2010 at 04:48:25PM -0700, Sridhar Samudrala wrote:
This patch increases the current hardcoded limit of NR_IOBUS_DEVS
from 6 to 200. We are hitting this limit when creating a guest with more
than 1 virtio-net device
in parallel
with this patch.
Signed-off-by: Sridhar Samudrala s...@us.ibm.com
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index a6a88df..29aa80f 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -339,8 +339,10 @@ static int vhost_net_open(struct inode *inode, struct file
*f
On Fri, 2010-04-02 at 15:25 +0800, xiaohui@intel.com wrote:
The idea is simple: just pin the guest VM user space and then
let the host NIC driver have the chance to DMA directly to it.
The patches are based on vhost-net backend driver. We add a device
which provides proto_ops as
On Sun, 2010-04-04 at 14:14 +0300, Michael S. Tsirkin wrote:
On Fri, Apr 02, 2010 at 10:31:20AM -0700, Sridhar Samudrala wrote:
Make vhost scalable by creating a separate vhost thread per vhost
device. This provides better scaling across multiple guests and with
multiple interfaces
On Mon, 2010-04-05 at 10:35 -0700, Sridhar Samudrala wrote:
On Sun, 2010-04-04 at 14:14 +0300, Michael S. Tsirkin wrote:
On Fri, Apr 02, 2010 at 10:31:20AM -0700, Sridhar Samudrala wrote:
Make vhost scalable by creating a separate vhost thread per vhost
device. This provides better
On Thu, 2010-04-08 at 17:14 -0700, Rick Jones wrote:
Here are the results with netperf TCP_STREAM 64K guest to host on a
8-cpu Nehalem system.
I presume you mean 8 core Nehalem-EP, or did you mean 8 processor Nehalem-EX?
Yes. It is a 2-socket quad-core Nehalem, so I guess it is an 8 core
On Sun, 2010-04-11 at 18:47 +0300, Michael S. Tsirkin wrote:
On Thu, Apr 08, 2010 at 05:05:42PM -0700, Sridhar Samudrala wrote:
On Mon, 2010-04-05 at 10:35 -0700, Sridhar Samudrala wrote:
On Sun, 2010-04-04 at 14:14 +0300, Michael S. Tsirkin wrote:
On Fri, Apr 02, 2010 at 10:31:20AM
Add a new kernel API to attach a task to current task's cgroup
in all the active hierarchies.
Signed-off-by: Sridhar Samudrala s...@us.ibm.com
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -570,6 +570,7 @@ struct
Make vhost more scalable by creating a separate vhost thread per
vhost device. This provides better scaling across virtio-net interfaces
in multiple guests.
Also attach each vhost thread to the cgroup and cpumask of the
associated guest(qemu or libvirt).
Signed-off-by: Sridhar Samudrala s
On 5/20/2010 3:22 PM, Paul Menage wrote:
On Tue, May 18, 2010 at 5:04 PM, Sridhar Samudrala
samudrala.srid...@gmail.com wrote:
Add a new kernel API to attach a task to current task's cgroup
in all the active hierarchies.
Signed-off-by: Sridhar Samudrala s...@us.ibm.com
Reviewed
On Thu, 2010-05-27 at 14:44 +0200, Oleg Nesterov wrote:
On 05/27, Michael S. Tsirkin wrote:
On Tue, May 18, 2010 at 05:04:51PM -0700, Sridhar Samudrala wrote:
Add a new kernel API to create a singlethread workqueue and attach it's
task to current task's cgroup and cpumask.
Signed
. Tsirkin m...@redhat.com
Cc: Sridhar Samudrala samudrala.srid...@gmail.com
Cc: Li Zefan l...@cn.fujitsu.com
---
drivers/vhost/vhost.c | 34 ++
1 file changed, 30 insertions(+), 4 deletions(-)
Index: work/drivers/vhost/vhost.c
patch fixes the bugs in handling msix_is_masked() condition
in msix_set/unset_mask_notifier() routines.
Signed-off-by: Sridhar Samudrala s...@us.ibm.com
diff --git a/hw/msix.c b/hw/msix.c
index 1398680..a191df1 100644
--- a/hw/msix.c
+++ b/hw/msix.c
@@ -609,7 +609,7 @@ void msix_unuse_all_vectors
On Wed, 2010-06-02 at 20:49 +0300, Michael S. Tsirkin wrote:
Sridhar Samudrala reported hitting the following assertions
in msix.c when doing a guest reboot or live migration using vhost.
qemu-kvm/hw/msix.c:375: msix_mask_all: Assertion `r >= 0' failed.
qemu-kvm/hw/msix.c:640
added to 2.6.32 kernel header file
include/linux/if_tun.h. Until this updated header file gets into distro
releases, I think we need to have this defined in qemu.
Signed-off-by: Sridhar Samudrala s...@us.ibm.com
diff --git a/hw/virtio-net.c b/hw/virtio-net.c
index ce8e6cb..c73487d 100644
--- a/hw
On Thu, 2009-10-08 at 11:07 +0100, Mark McLoughlin wrote:
On Wed, 2009-10-07 at 14:50 -0700, Sridhar Samudrala wrote:
linux 2.6.32 includes UDP fragmentation offload support in software.
So we can enable UFO on the host tap device if supported and allow setting
UFO on virtio-net
On Wed, 2009-10-14 at 17:50 +0200, Michael S. Tsirkin wrote:
On Wed, Oct 14, 2009 at 04:19:17PM +0100, Jamie Lokier wrote:
Michael S. Tsirkin wrote:
On Wed, Oct 14, 2009 at 09:17:15AM -0500, Anthony Liguori wrote:
Michael S. Tsirkin wrote:
Looks like Or has abandoned it. I have an
On Sun, 2009-10-18 at 19:32 +0200, Michael S. Tsirkin wrote:
On Sun, Oct 18, 2009 at 12:53:56PM +0200, Michael S. Tsirkin wrote:
On Fri, Oct 16, 2009 at 12:29:29PM -0700, Sridhar Samudrala wrote:
Hi Michael,
We are trying out your vhost-net patches from your git trees
On Thu, 2009-10-22 at 19:43 +0200, Michael S. Tsirkin wrote:
Possibly we'll have to debug this in vhost in host kernel.
I would debug this directly, it's just that my setup is somehow
different and I do not see this issue, otherwise I would not
waste your time.
Can we add some printks?
With the latest upstream qemu-kvm git tree, all the offloads are disabled
on virtio-net.
peer_has_vnet_hdr(n) in virtio_net_get_features() is failing because
n->vc->peer is NULL. Could not figure out yet why the peer field is not initialized.
Do I need any new options to be specified with qemu
On Thu, 2011-01-20 at 17:35 +0200, Michael S. Tsirkin wrote:
When MSI is off, each interrupt needs to be bounced through the io
thread when it's set/cleared, so vhost-net causes more context switches and
higher CPU utilization than userspace virtio which handles networking in
the same thread.
On Thu, 2011-01-20 at 19:47 +0200, Michael S. Tsirkin wrote:
On Thu, Jan 20, 2011 at 08:31:53AM -0800, Sridhar Samudrala wrote:
On Thu, 2011-01-20 at 17:35 +0200, Michael S. Tsirkin wrote:
When MSI is off, each interrupt needs to be bounced through the io
thread when it's set/cleared, so
On Mon, 2011-01-31 at 18:24 -0600, Steve Dobbelstein wrote:
Michael S. Tsirkin m...@redhat.com wrote on 01/28/2011 06:16:16 AM:
OK, so thinking about it more, maybe the issue is this:
tx becomes full. We process one request and interrupt the guest,
then it adds one request and the queue
On Tue, 2011-02-01 at 17:52 +0200, Michael S. Tsirkin wrote:
OK, so thinking about it more, maybe the issue is this:
tx becomes full. We process one request and interrupt the guest,
then it adds one request and the queue is full again.
Maybe the following will help it stabilize? By default
: Sridhar Samudrala samudrala.srid...@gmail.com
I wanted to apply this, but modpost fails:
ERROR: sched_setaffinity [drivers/vhost/vhost_net.ko] undefined!
ERROR: sched_getaffinity [drivers/vhost/vhost_net.ko] undefined!
Did you try building as a module?
In my original implementation, I
a negative error code and stop polling.
One minor comment on error return below. With that change,
Acked-by: Sridhar Samudrala s...@us.ibm.com
Signed-off-by: Michael S. Tsirkin m...@redhat.com
---
Dave, I'm sending this out so it can get reviewed.
I'll put this on my vhost tree
so no need
On Mon, 2010-06-28 at 13:08 +0300, Michael S. Tsirkin wrote:
Userspace virtio server has the following hack
so guests rely on it, and we have to replicate it, too:
Use port number to detect incoming IPv4 DHCP response packets,
and fill in the checksum for these.
The issue we are solving
On 7/1/2010 7:55 AM, Peter Zijlstra wrote:
On Thu, 2010-07-01 at 16:53 +0200, Tejun Heo wrote:
Hello,
On 07/01/2010 04:46 PM, Oleg Nesterov wrote:
It might be a good idea to make the function take extra clone flags
but anyways once created cloned task can be treated the same way as
On 7/4/2010 2:00 AM, Michael S. Tsirkin wrote:
On Fri, Jul 02, 2010 at 11:06:37PM +0200, Oleg Nesterov wrote:
On 07/02, Peter Zijlstra wrote:
On Fri, 2010-07-02 at 11:01 -0700, Sridhar Samudrala wrote:
Does it (Tejun's kthread_clone() patch) also inherit the
cgroup
On Tue, 2010-07-13 at 14:09 +0300, Michael S. Tsirkin wrote:
On Mon, Jul 12, 2010 at 11:59:08PM -0700, Sridhar Samudrala wrote:
On 7/4/2010 2:00 AM, Michael S. Tsirkin wrote:
On Fri, Jul 02, 2010 at 11:06:37PM +0200, Oleg Nesterov wrote:
On 07/02, Peter Zijlstra wrote:
On Fri, 2010-07-02
On 7/14/2010 5:05 PM, Oleg Nesterov wrote:
On 07/14, Sridhar Samudrala wrote:
OK. So we want to create a thread that is a child of kthreadd, but inherits the
cgroup/cpumask
from the caller. How about an exported kthread function
kthread_create_in_current_cg()
that does this?
Well
On Thu, 2010-07-15 at 15:19 +0300, Michael S. Tsirkin wrote:
We flush under vq mutex when changing backends.
This creates a deadlock as workqueue being flushed
needs this lock as well.
https://bugzilla.redhat.com/show_bug.cgi?id=612421
Drop the vq mutex before flush: we have the device
On Thu, 2010-07-22 at 19:53 -0400, Balachandar wrote:
I am resending this email as Freddie didn't use 'reply to all' when
replying to this message. I am also updating to answer Freddie's
questions..
I can see that virtio network performance is poorer than the emulated
e1000 NIC. I did some
On Mon, 2010-07-26 at 20:12 +0300, Michael S. Tsirkin wrote:
On Fri, Jul 02, 2010 at 11:06:37PM +0200, Oleg Nesterov wrote:
On 07/02, Peter Zijlstra wrote:
On Fri, 2010-07-02 at 11:01 -0700, Sridhar Samudrala wrote:
Does it (Tejun's kthread_clone() patch) also inherit
On Tue, 2010-07-27 at 23:42 +0300, Michael S. Tsirkin wrote:
Sridhar,
I pushed a patchset with all known issues fixed,
on my vhost-net-next branch.
For now this ignores the cpu mask issue, addressing
only the cgroups issue.
Would appreciate testing and reports.
I had to apply the
On 8/5/2010 3:59 PM, Michael S. Tsirkin wrote:
cgroup_attach_task_current_cg API that we have upstream is backwards: we
really need an API to attach to the cgroups from another process A to
the current one.
In our case (vhost), a privileged user wants to attach its task to cgroups
from a less
.
http://thread.gmane.org/gmane.linux.network/150308
Signed-off-by: Sridhar Samudrala s...@us.ibm.com
diff --git a/Makefile.objs b/Makefile.objs
index 357d305..4468124 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -34,6 +34,8 @@ net-nested-$(CONFIG_SOLARIS) += tap-solaris.o
net-nested
This patch adds generic peer routines for the remaining tap-specific
routines (using_vnet_hdr, set_offload). This makes it easier to add
new backends like raw (packet sockets) that support gso/checksum-offload.
Signed-off-by: Sridhar Samudrala s...@us.ibm.com
diff --git a/hw/virtio-net.c b/hw
On Tue, 2010-01-26 at 14:47 -0600, Anthony Liguori wrote:
On 01/26/2010 02:40 PM, Sridhar Samudrala wrote:
This patch adds raw socket backend to qemu and is based on Or Gerlitz's
patch re-factored and ported to the latest qemu-kvm git tree.
It also includes support for vnet_hdr option
On Tue, 2010-01-26 at 14:50 -0600, Anthony Liguori wrote:
On 01/26/2010 02:47 PM, Anthony Liguori wrote:
On 01/26/2010 02:40 PM, Sridhar Samudrala wrote:
This patch adds raw socket backend to qemu and is based on Or Gerlitz's
patch re-factored and ported to the latest qemu-kvm git tree
On Wed, 2010-01-27 at 22:39 +0100, Arnd Bergmann wrote:
On Wednesday 27 January 2010, Anthony Liguori wrote:
I think -net socket,fd should just be (trivially) extended to work with
raw
sockets out of the box, with no support for opening it. Then you can have
libvirt or some wrapper
On 2/1/2010 11:50 AM, Bernhard Schmidt wrote:
Hi,
I have a really weird issue all of the sudden on _one_ of my two KVM
hosts. The other one, while running on a different hardware in a
different network, is configured in a very similar way and does not show
these issues (so far).
Host:
- AMD
On Fri, 2010-02-26 at 10:51 +0800, David V. Cloud wrote:
Hi,
I read some kernel source. My basic understanding is that, in
net/8021q/vlan_dev.c, vlan_dev_init, the dev->features of the vconfig
created interface is defined to be
dev->features |= real_dev->features & real_dev->vlan_features;
On 11/23/2010 5:41 AM, Michael S. Tsirkin wrote:
On Tue, Nov 23, 2010 at 09:23:41PM +0800, lidong chen wrote:
At this point, I'd suggest testing vhost-net on the upstream kernel,
not on rhel kernels. The change that introduced per-device threads is:
c23f3445e68e1db0e74099f264bc5ff5d55ebdeb
i
On Fri, 2010-09-03 at 13:14 +0300, Michael S. Tsirkin wrote:
On Tue, Aug 10, 2010 at 06:23:24PM -0700, Shirley Ma wrote:
Hello Xiaohui,
On Fri, 2010-08-06 at 17:23 +0800, xiaohui@intel.com wrote:
Our goal is to improve the bandwidth and reduce the CPU usage.
Exact performance
On 9/9/2010 2:45 AM, Krishna Kumar2 wrote:
Krishna Kumar2/India/IBM wrote on 09/08/2010 10:17:49 PM:
Some more results and likely cause for single netperf
degradation below.
Guest -> Host (single netperf):
I am getting a drop of almost 20%. I am trying to figure out
why.
Host -> guest
On 9/28/2010 8:18 AM, Arnd Bergmann wrote:
On Tuesday 28 September 2010, Michael S. Tsirkin wrote:
On Tue, Sep 28, 2010 at 04:39:59PM +0200, Arnd Bergmann wrote:
Can you be more specific what the problem is? Do you think
it breaks when a guest sends VLAN tagged frames or when macvtap
is
Support a new 'passthru' mode with macvlan and 'mode' parameter
with macvtap devices.
Signed-off-by: Sridhar Samudrala s...@us.ibm.com
diff --git a/include/linux/if_link.h b/include/linux/if_link.h
index f5bb2dc..23de79e 100644
--- a/include/linux/if_link.h
+++ b/include/linux/if_link.h
With the current default macvtap mode, a KVM guest using virtio with
macvtap backend has the following limitations.
- cannot change/add a mac address on the guest virtio-net
- cannot create a vlan device on the guest virtio-net
- cannot enable promiscuous mode on guest virtio-net
This patch
On Thu, 2010-10-28 at 13:13 -0700, Shirley Ma wrote:
On Thu, 2010-10-28 at 12:32 -0700, Shirley Ma wrote:
Also I found a big TX regression for old guest and new guest. For old
guest, I am able to get almost 11Gb/s for 2K message size, but for the
new guest kernel, I can only get 3.5 Gb/s
Add support for 'mode' parameter when creating a macvtap device.
This allows a macvtap device to be created in bridge, private or
the default vepa modes.
Signed-off-by: Sridhar Samudrala s...@us.ibm.com
---
diff --git a/ip/Makefile
and sets it in promiscuous
mode to receive and forward all the packets.
Signed-off-by: Sridhar Samudrala s...@us.ibm.com
---
diff --git a/include/linux/if_link.h b/include/linux/if_link.h
index f5bb2dc..23de79e 100644
--- a/include
On 10/29/2010 6:45 AM, Arnd Bergmann wrote:
On Friday 29 October 2010, Sridhar Samudrala wrote:
With the current default 'vepa' mode, a KVM guest using virtio with
macvtap backend has the following limitations.
- cannot change/add a mac address on the guest virtio-net
I believe this could
On Fri, 2011-08-12 at 09:54 +0800, Jason Wang wrote:
As multi-queue NICs are commonly used for high-end servers,
the current single-queue tap cannot satisfy the
requirement of scaling guest network performance as the
number of vcpus increases. So the following series
implements multiple
On Thu, 2011-09-08 at 09:19 -0700, Roopa Prabhu wrote:
On 9/8/11 4:08 AM, Michael S. Tsirkin m...@redhat.com wrote:
On Wed, Sep 07, 2011 at 10:20:28PM -0700, Roopa Prabhu wrote:
On 9/7/11 5:34 AM, Michael S. Tsirkin m...@redhat.com wrote:
On Tue, Sep 06, 2011 at 03:35:40PM -0700,
On 9/8/2011 8:00 PM, Roopa Prabhu wrote:
On 9/8/11 12:33 PM, Michael S. Tsirkin m...@redhat.com wrote:
On Thu, Sep 08, 2011 at 12:23:56PM -0700, Roopa Prabhu wrote:
I think the main usecase for passthru mode is to assign a SR-IOV VF to
a single guest.
Yes and for the passthru usecase this
On 9/11/2011 6:18 AM, Roopa Prabhu wrote:
On 9/11/11 2:44 AM, Michael S. Tsirkin m...@redhat.com wrote:
AFAIK, though it might maintain a single filter table space in hw, hw does
know which filter belongs to which VF. And the OS driver does not need to do
anything special. The VF driver
When I moved to the latest qemu-kvm git tree from kvm-85, I noticed that
networking stopped working between the host and the guest.
It started working when I put the device in promiscuous mode by running
tcpdump in the background on the guest.
After browsing through the recent patches, i found that
Add HW checksum support for outgoing large UDP/IPv6 packets destined to a UFO
enabled device.
Signed-off-by: Sridhar Samudrala s...@us.ibm.com
---
net/ipv6/udp.c | 51 ++-
1 files changed, 50 insertions(+), 1 deletions(-)
diff --git a/net
Setup partial checksum and add gso checks to handle large UFO packets from
untrusted sources.
Signed-off-by: Sridhar Samudrala s...@us.ibm.com
---
include/net/udp.h |3 +++
net/ipv4/af_inet.c |2 ++
net/ipv4/udp.c | 60
net
Allow UFO feature to be set on virtio_net device.
Signed-off-by: Sridhar Samudrala s...@us.ibm.com
---
drivers/net/virtio_net.c |3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 0c9ca67..d24ede0 100644
Handle send/receive of UFO packets in tun/tap driver.
Signed-off-by: Sridhar Samudrala s...@us.ibm.com
---
drivers/net/tun.c | 13 -
include/linux/if_tun.h |1 +
2 files changed, 13 insertions(+), 1 deletions(-)
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index
Enable UFO on the host tap device if supported and allow setting UFO
on virtio-net in the guest.
Signed-off-by: Sridhar Samudrala s...@us.ibm.com
diff --git a/hw/virtio-net.c b/hw/virtio-net.c
index 3c77b99..8a53e27 100644
--- a/hw/virtio-net.c
+++ b/hw/virtio-net.c
@@ -134,7 +134,8 @@ static
On Sun, 2009-06-07 at 09:21 +0300, Avi Kivity wrote:
Sridhar Samudrala wrote:
Enable UFO on the host tap device if supported and allow setting UFO
on virtio-net in the guest.
Signed-off-by: Sridhar Samudrala s...@us.ibm.com
diff --git a/hw/virtio-net.c b/hw/virtio-net.c
index
On Mon, 2009-06-08 at 15:16 +1000, Herbert Xu wrote:
On Fri, Jun 05, 2009 at 05:16:31PM -0700, Sridhar Samudrala wrote:
+ /* Software UFO is not yet supported */
+ segs = ERR_PTR(-EPROTONOSUPPORT);
Hmm, we need to fill this in before you start using it for virt.
After all, it's very
Rusty Russell wrote:
On Mon, 8 Jun 2009 02:46:08 pm Herbert Xu wrote:
On Fri, Jun 05, 2009 at 05:16:31PM -0700, Sridhar Samudrala wrote:
+ /* Software UFO is not yet supported */
+ segs = ERR_PTR(-EPROTONOSUPPORT);
Hmm, we need to fill this in before you start using
Herbert Xu wrote:
On Mon, Jun 08, 2009 at 10:04:47AM -0700, Sridhar Samudrala wrote:
OK. Can we use skb_segment() to do IP fragmentation of UDP packets?
It should be able to.
Unfortunately, this doesn't work for UDP without any changes.
skb_segment() currently adds transport
On 6/25/2012 2:16 AM, Jason Wang wrote:
Hello All:
This series is an update version of multiqueue virtio-net driver based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues to do
packet reception and transmission. Please review and comment.
Test Environment:
- Intel(R)
On Wed, 2012-02-08 at 19:22 -0800, John Fastabend wrote:
Propagate software FDB table into hardware uc, mc lists when
the NETIF_F_HW_FDB is set.
This resolves the case below where an embedded switch is used
in hardware to do inter-VF or VF-PF switching. This patch
pushes the FDB entry
On Thu, 2012-02-09 at 12:30 -0800, John Fastabend wrote:
On 2/9/2012 10:14 AM, Sridhar Samudrala wrote:
On Wed, 2012-02-08 at 19:22 -0800, John Fastabend wrote:
Propagate software FDB table into hardware uc, mc lists when
the NETIF_F_HW_FDB is set.
This resolves the case below where
On 11/30/2011 3:00 PM, Chris Wright wrote:
* Ben Hutchings (bhutchi...@solarflare.com) wrote:
On Wed, 2011-11-30 at 13:04 -0800, Chris Wright wrote:
I agree that it's confusing. Couldn't you simplify your ascii art
(hopefully removing hw assumptions about receive processing, and
completely
On 12/6/2011 5:15 AM, Stefan Hajnoczi wrote:
On Tue, Dec 6, 2011 at 10:21 AM, Jason Wang jasow...@redhat.com wrote:
On 12/06/2011 05:18 PM, Stefan Hajnoczi wrote:
On Tue, Dec 6, 2011 at 6:33 AM, Jason Wang jasow...@redhat.com wrote:
On 12/05/2011 06:55 PM, Stefan Hajnoczi wrote:
On Mon,
On 12/6/2011 8:14 AM, Michael S. Tsirkin wrote:
On Tue, Dec 06, 2011 at 07:42:54AM -0800, Sridhar Samudrala wrote:
On 12/6/2011 5:15 AM, Stefan Hajnoczi wrote:
On Tue, Dec 6, 2011 at 10:21 AM, Jason Wang jasow...@redhat.com wrote:
On 12/06/2011 05:18 PM, Stefan Hajnoczi wrote:
On Tue, Dec 6
On 12/7/2011 3:02 AM, Jason Wang wrote:
On 12/06/2011 11:42 PM, Sridhar Samudrala wrote:
On 12/6/2011 5:15 AM, Stefan Hajnoczi wrote:
On Tue, Dec 6, 2011 at 10:21 AM, Jason Wang jasow...@redhat.com
wrote:
On 12/06/2011 05:18 PM, Stefan Hajnoczi wrote:
On Tue, Dec 6, 2011 at 6:33 AM, Jason
On Thu, 2011-06-02 at 18:43 +0300, Michael S. Tsirkin wrote:
Current code might introduce a lot of latency variation
if there are many pending bufs at the time we
attempt to transmit a new one. This is bad for
real-time applications and can't be good for TCP either.
Free up just enough to