On Fri, Oct 23, 2009 at 09:23:40AM -0700, Shirley Ma wrote:
> Hello Michael,
> 
> Some initial vhost netperf test results from the working tap device on
> my T61 laptop are below. Latency has decreased significantly, but
> throughput from guest to host shows a huge regression. I also hit a
> guest skb_xmit panic.
> 
> netperf TCP_STREAM, default setup, 60 secs run
> 
> guest->host drops from 3XXXMb/s to 1XXXMb/s (regression)
> host->guest increases from 3XXXMb/s to 4XXXXMb/s 
> 
> TCP_RR, 60 secs run (very impressive)
> 
> guest->host trans/s increases from 2XXX/s to 13XXX/s
> host->guest trans/s increases from 2XXX/s to 13XXX/s
> 
> Thanks
> Shirley

Shirley, could you please test the following patch?
It is surprising to me that it should improve
performance, but it seems to do so in my setup.
Please comment.
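
For context, my reading of the two hunks below (this is not part of the
patch description): the vhost.c hunk switches the vhost workqueue from one
worker per CPU to a single worker thread, and the net.c hunk replaces the
sock_writeable() test with an explicit comparison of sk_wmem_alloc against
sk_sndbuf. sock_writeable() only reports a socket as writeable while less
than half of the send buffer is committed, so the old check could make
handle_tx() bail out even though there was still room to queue packets.
A minimal sketch of the difference, assuming the 2.6.3x definition of
sock_writeable() in include/net/sock.h (the _sketch names are mine, for
illustration only):

/* Roughly what sock_writeable() checks: the socket counts as writeable
 * only while less than half of the send buffer is in use. */
static inline int sock_writeable_sketch(const struct sock *sk)
{
	return atomic_read(&sk->sk_wmem_alloc) < (sk->sk_sndbuf >> 1);
}

/* The check used by the patched handle_tx(): keep transmitting until
 * the send buffer is actually full, not merely half full. */
static inline int vhost_tx_has_room_sketch(const struct sock *sk)
{
	return atomic_read(&sk->sk_wmem_alloc) < sk->sk_sndbuf;
}

If that reading is right, guest->host TCP_STREAM gets more chances to
fill the ring per wakeup, which may be part of the throughput change.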


diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 30708c6..67bfc08 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -775,7 +775,7 @@ void vhost_no_notify(struct vhost_virtqueue *vq)
 
 int vhost_init(void)
 {
-       vhost_workqueue = create_workqueue("vhost");
+       vhost_workqueue = create_singlethread_workqueue("vhost");
        if (!vhost_workqueue)
                return -ENOMEM;
        return 0;
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index a140dad..49026bb 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -106,10 +106,14 @@ static void handle_tx(struct vhost_net *net)
                .msg_flags = MSG_DONTWAIT,
        };
        size_t len, total_len = 0;
-       int err;
+       int err, wmem;
        size_t hdr_size;
        struct socket *sock = rcu_dereference(vq->private_data);
-       if (!sock || !sock_writeable(sock->sk))
+       if (!sock)
+               return;
+
+       wmem = atomic_read(&sock->sk->sk_wmem_alloc);
+       if (wmem >= sock->sk->sk_sndbuf)
                return;
 
        use_mm(net->dev.mm);
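
On the vhost.c hunk: create_singlethread_workqueue() gives vhost a single
worker thread instead of one per online CPU, so all virtqueue work is
serialized on one kthread rather than bouncing between CPUs. A minimal
sketch of how work reaches that thread, assuming the vhost_poll path in
this tree (illustrative only; names and layout follow my reading of
vhost.c, not necessarily the exact code):

/* Sketch: each vhost_poll wraps a work_struct whose handler ends up
 * calling handle_tx()/handle_rx(); queueing it on the (now
 * single-threaded) vhost workqueue means every handler runs on the
 * same worker thread. */
static void vhost_poll_queue_sketch(struct vhost_poll *poll)
{
	queue_work(vhost_workqueue, &poll->work);
}

Whether the win comes from cache locality or from avoiding contention
between per-CPU workers is something the netperf numbers above might
help confirm.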