Commit-ID: 1de72c706488b7be664a601cf3843bd01e327e58
Gitweb: https://git.kernel.org/tip/1de72c706488b7be664a601cf3843bd01e327e58
Author: Michael Kelley
AuthorDate: Sun, 4 Nov 2018 03:48:57 +
Committer: Thomas Gleixner
CommitDate: Sun, 4 Nov 2018 11:04:46 +0100
x86/hyper-v: Enable
Commit-ID: 35b69a420bfb56b7b74cb635ea903db05e357bec
Gitweb: https://git.kernel.org/tip/35b69a420bfb56b7b74cb635ea903db05e357bec
Author: Michael Kelley
AuthorDate: Sun, 4 Nov 2018 03:48:54 +
Committer: Thomas Gleixner
CommitDate: Sun, 4 Nov 2018 11:04:46 +0100
Now vsock only supports sending and receiving small packets, so it can't
achieve high performance. As previously discussed with Jason Wang, I
revisited the mergeable rx buffer idea from vhost-net and implemented
mergeable rx buffers in vhost-vsock; this allows a big packet to be
scattered into different rx buffers.
During driver probing, if the device offers the VIRTIO_VSOCK_F_MRG_RXBUF
feature, the driver fills mergeable rx buffers so that the host can send
into them. It fills one page at a time, which accommodates both small
and big packets.
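A minimal user-space sketch of the fill policy described above; the feature-bit
value and the names F_MRG_RXBUF, queue_rx_buffer() and fill_rx_buffers() are
illustrative stand-ins, not the driver's actual symbols:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define RX_PAGE_SIZE	4096
#define F_MRG_RXBUF	(1ULL << 45)	/* assumed feature-bit position */

/* Stand-in for adding one buffer to the rx virtqueue. */
static void queue_rx_buffer(void *buf, size_t len)
{
	printf("queued rx buffer of %zu bytes\n", len);
	free(buf);	/* the real driver frees on packet completion */
}

/* Fill the rx queue at probe time: with the mergeable feature each
 * buffer is a single page, whatever the eventual packet size. */
static void fill_rx_buffers(unsigned long long features, int count)
{
	int i;

	if (!(features & F_MRG_RXBUF)) {
		/* legacy path: header plus fixed-size data buffer per packet */
		return;
	}

	for (i = 0; i < count; i++) {
		void *page = malloc(RX_PAGE_SIZE);

		if (!page)
			break;
		queue_rx_buffer(page, RX_PAGE_SIZE);
	}
}

int main(void)
{
	fill_rx_buffers(F_MRG_RXBUF, 4);
	return 0;
}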
Signed-off-by: Yiwen Jiang
---
include/linux/virtio_vsock.h | 3 ++
When the guest receives into mergeable rx buffers, it can merge the
scattered rx buffers into one big buffer and then copy it to user space.
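As a rough user-space illustration of that receive-side merge (not the patch's
code): num_buffers plays the role of the buffer count the host writes into the
packet header, and memcpy() stands in for the copy_to_user() the real driver
would do:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct rx_frag {
	const uint8_t *data;
	size_t len;
};

/* Merge 'num_buffers' fragments into 'dst'; returns bytes copied. */
static size_t merge_rx_buffers(const struct rx_frag *frags, uint16_t num_buffers,
			       uint8_t *dst, size_t dst_len)
{
	size_t copied = 0;
	uint16_t i;

	for (i = 0; i < num_buffers; i++) {
		size_t chunk = frags[i].len;

		if (copied + chunk > dst_len)
			chunk = dst_len - copied;
		memcpy(dst + copied, frags[i].data, chunk);
		copied += chunk;
		if (copied == dst_len)
			break;
	}
	return copied;
}

int main(void)
{
	uint8_t a[4096] = { 0 }, b[4096] = { 1 }, big[8192];
	struct rx_frag frags[] = { { a, sizeof(a) }, { b, sizeof(b) } };

	printf("merged %zu bytes\n", merge_rx_buffers(frags, 2, big, sizeof(big)));
	return 0;
}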
Signed-off-by: Yiwen Jiang
---
include/linux/virtio_vsock.h     |  9
net/vmw_vsock/virtio_transport.c | 75 +
When vhost supports the VIRTIO_VSOCK_F_MRG_RXBUF feature,
it will merge a big packet into the rx vq.
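The host-side idea can be sketched as follows (illustrative only; it follows the
same scheme virtio-net uses for its mergeable rx buffers): a packet larger than
one guest rx buffer occupies several consecutive buffers, and the count is
written into the header carried in the first one. The header layout and
GUEST_RX_BUF_LEN below are assumptions, not the patch's definitions:

#include <stdint.h>
#include <stdio.h>

#define GUEST_RX_BUF_LEN 4096u

/* Only the fields relevant to the sketch; the real header has more. */
struct vsock_mrg_hdr {
	uint32_t len;
	uint16_t num_buffers;
};

/* Number of guest rx buffers a packet of pkt_len bytes will occupy,
 * counting the header that travels in the first buffer. */
static uint16_t buffers_needed(uint32_t pkt_len)
{
	uint32_t total = pkt_len + (uint32_t)sizeof(struct vsock_mrg_hdr);

	return (uint16_t)((total + GUEST_RX_BUF_LEN - 1) / GUEST_RX_BUF_LEN);
}

int main(void)
{
	struct vsock_mrg_hdr hdr = { .len = 64 * 1024 };

	hdr.num_buffers = buffers_needed(hdr.len);
	printf("64 KiB packet -> %u rx buffers\n", (unsigned)hdr.num_buffers);
	return 0;
}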
Signed-off-by: Yiwen Jiang
---
drivers/vhost/vsock.c | 117 +++---
include/linux/virtio_vsock.h | 1 +
include/uapi/linux/virtio_vsock.h | 5 ++
3 files
> > On Thu, 25 Oct 2018 at 19:38, Robert Foss wrote:
> > >
> > > From: Gustavo Padovan
> > >
> > > Refactor fence creation to remove the potential allocation failure from
> > > the cmd_submit and atomic_commit paths. Now the fence should be allocated
> > > first and just after we should
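For reference, the allocate-first pattern being described looks roughly like
this in isolation; my_fence, fence_alloc(), do_commit() and submit() are
made-up names, not the driver's API:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct my_fence { unsigned long seqno; };

static struct my_fence *fence_alloc(void)
{
	return calloc(1, sizeof(struct my_fence));
}

/* The commit step only consumes the already-allocated fence; it no
 * longer has an allocation-failure exit of its own. */
static void do_commit(struct my_fence *fence)
{
	fence->seqno = 1;
	printf("committed with fence seqno %lu\n", fence->seqno);
}

static int submit(void)
{
	struct my_fence *fence = fence_alloc();	/* allocate up front */

	if (!fence)
		return -ENOMEM;			/* fail before touching commit state */
	do_commit(fence);
	free(fence);
	return 0;
}

int main(void)
{
	return submit() ? EXIT_FAILURE : EXIT_SUCCESS;
}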
Batch sending rx buffers can improve total bandwidth.
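A toy user-space model of that batching (not the vhost code; VHOST_RX_BATCH and
the helpers are invented for illustration): completed buffers are accumulated
and the guest is signalled once per batch rather than once per buffer:

#include <stdio.h>

#define VHOST_RX_BATCH 16

struct batcher {
	unsigned int pending;	/* used buffers not yet signalled to the guest */
};

static void signal_guest(struct batcher *b)
{
	printf("signal guest after %u buffers\n", b->pending);
	b->pending = 0;
}

/* Called once per completed rx buffer. */
static void add_used_buffer(struct batcher *b)
{
	if (++b->pending >= VHOST_RX_BATCH)
		signal_guest(b);
}

int main(void)
{
	struct batcher b = { 0 };
	int i;

	for (i = 0; i < 40; i++)
		add_used_buffer(&b);
	if (b.pending)
		signal_guest(&b);	/* flush the tail of the batch */
	return 0;
}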
Signed-off-by: Yiwen Jiang
---
drivers/vhost/vsock.c | 24 +---
1 file changed, 17 insertions(+), 7 deletions(-)
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 648be39..a587ddc 100644
---
On 2018/11/3 12:07 AM, Vitaly Mayatskikh wrote:
Hi,
I stumbled across poor performance of virtio-blk while working on a
high-performance network storage protocol. Moving virtio-blk's host
side to kernel did increase single queue IOPS, but multiqueue disk
still was not scaling well. It turned
On 2018/11/3 12:07 AM, Vitaly Mayatskikh wrote:
+
+static int vhost_vq_poll_start(struct vhost_virtqueue *vq)
+{
+	if (!vq->worker) {
+		vq->worker = kthread_create(vhost_vq_worker, vq, "vhost-%d/%i",
+					    vq->dev->pid, vq->index);
+
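The pattern in that hunk, modelled in user space with pthreads instead of
kthreads (structure and function names are illustrative; build with -pthread):
each queue lazily gets its own worker the first time polling starts on it:

#include <pthread.h>
#include <stdio.h>

struct vq {
	int index;
	pthread_t worker;
	int has_worker;
};

static void *vq_worker(void *arg)
{
	struct vq *vq = arg;

	printf("worker for vq %d running\n", vq->index);
	return NULL;	/* the real worker loops, servicing queued requests */
}

/* Create the per-queue worker on first poll start only. */
static int vq_poll_start(struct vq *vq)
{
	if (!vq->has_worker) {
		if (pthread_create(&vq->worker, NULL, vq_worker, vq))
			return -1;
		vq->has_worker = 1;
	}
	return 0;
}

int main(void)
{
	struct vq queues[2] = { { .index = 0 }, { .index = 1 } };
	int i;

	for (i = 0; i < 2; i++)
		vq_poll_start(&queues[i]);
	for (i = 0; i < 2; i++)
		pthread_join(queues[i].worker, NULL);
	return 0;
}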
On 2018/11/3 2:21 AM, Vitaly Mayatskikh wrote:
vhost_blk is a host-side kernel-mode accelerator for virtio-blk. The
driver allows a VM to reach near bare-metal disk performance. See the IOPS
numbers below (fio --rw=randread --bs=4k).
This implementation uses the kiocb interface. It is slightly slower