Re: [PATCH net 3/4] vhost: vsock: add weight support

2019-05-16 Thread Jason Wang
On 2019/5/16 5:33 PM, Stefan Hajnoczi wrote: On Thu, May 16, 2019 at 03:47:41AM -0400, Jason Wang wrote: @@ -183,7 +184,8 @@ static struct vhost_vsock *vhost_vsock_get(u32 guest_cid) virtio_transport_deliver_tap_pkt(pkt); virtio_transport_free_pkt(pkt); - } +

Re: [PATCH v9 2/7] virtio-pmem: Add virtio pmem driver

2019-05-16 Thread Jakub Staroń
On 5/14/19 7:54 AM, Pankaj Gupta wrote: > + if (!list_empty(&vpmem->req_list)) { > + req_buf = list_first_entry(&vpmem->req_list, > + struct virtio_pmem_request, list); > + req_buf->wq_buf_avail = true; > +

[PATCH V2 1/4] vhost: introduce vhost_exceeds_weight()

2019-05-16 Thread Jason Wang
We used to have vhost_exceeds_weight() for vhost-net to:
- prevent the vhost kthread from hogging the CPU
- balance the time spent between TX and RX
This function could be useful for vsock and scsi as well. So move it to vhost.c. A device must specify a weight which counts the number of requests, or
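For readers skimming the archive, a minimal sketch of what such a shared helper could look like, modeled on the vhost-net logic the description mentions. The field names (weight, byte_weight) and the vhost_poll_queue() requeue are assumptions drawn from the description, not a copy of the patch:

```c
/* Sketch only: returns true when the handler has done enough work and
 * should yield; vhost_poll_queue() reschedules the handler so the
 * remaining requests are still processed later. */
static bool vhost_exceeds_weight(struct vhost_virtqueue *vq,
				 int pkts, int total_len)
{
	struct vhost_dev *dev = vq->dev;

	if (unlikely(total_len >= dev->byte_weight) ||
	    unlikely(pkts >= dev->weight)) {
		vhost_poll_queue(&vq->poll);
		return true;
	}
	return false;
}
```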

Re: [Qemu-devel] [PATCH v9 2/7] virtio-pmem: Add virtio pmem driver

2019-05-16 Thread Pankaj Gupta
Hi Jakub, > > On 5/14/19 7:54 AM, Pankaj Gupta wrote: > > + if (!list_empty(&vpmem->req_list)) { > > + req_buf = list_first_entry(&vpmem->req_list, > > + struct virtio_pmem_request, list); > > + req_buf->wq_buf_avail = true; >

[PATCH V2 3/4] vhost: vsock: add weight support

2019-05-16 Thread Jason Wang
This patch checks the weight and exits the loop if we exceed it. This is useful for preventing the vsock kthread from hogging the CPU, which is guest triggerable. The weight helps to avoid starving requests from one direction while the other direction is being processed. The value of weight
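A hedged sketch of how such a weight check typically folds into a virtqueue handler loop; the helpers here are hypothetical placeholders, following the vhost_exceeds_weight() patch earlier in this series rather than the exact diff:

```c
/* Illustrative handler loop: stop after 'weight' worth of requests so
 * one guest cannot monopolize the vhost kthread. */
int pkts = 0, total_len = 0;

do {
	struct virtio_vsock_pkt *pkt = fetch_next_pkt(vq); /* hypothetical */

	if (!pkt)
		break;

	total_len += pkt->len;
	process_pkt(pkt); /* hypothetical: deliver and free the packet */
} while (likely(!vhost_exceeds_weight(vq, ++pkts, total_len)));
```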

[PATCH V2 2/4] vhost_net: fix possible infinite loop

2019-05-16 Thread Jason Wang
When the rx buffer is too small for a packet, we will discard the vq descriptor and retry it for the next packet: while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk, &busyloop_intr))) { ... /* On overrun, truncate and discard */ if
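The loop above is the trouble spot: if every incoming packet overruns the rx buffers, the discard path can keep the loop spinning without ever consuming weight. A sketch of the fixed shape, loosely following vhost_net.c (the discard path now flows into the same weight check as a normal packet; names are approximate):

```c
do {
	sock_len = vhost_net_rx_peek_head_len(net, sock->sk, &busyloop_intr);
	if (!sock_len)
		break;

	/* On overrun, truncate and discard -- but fall through to the
	 * weight check so a stream of oversized packets cannot pin the
	 * vhost kthread forever. */
	if (unlikely(headcount > UIO_MAXIOV)) {
		discard_one_packet(sock); /* hypothetical helper */
		continue; /* in a do-while, continue re-evaluates the condition */
	}

	total_len += sock_len;
} while (likely(!vhost_exceeds_weight(vq, ++recv_pkts, total_len)));
```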

[PATCH V2 0/4] Prevent vhost kthread from hogging CPU

2019-05-16 Thread Jason Wang
Hi: This series tries to prevent guest-triggerable CPU hogging through the vhost kthread. This is done by introducing and checking a weight after each request. The patch has been tested with reproducers for vsock and virtio-net. Only a compile test is done for vhost-scsi. Please review. This

[PATCH V2 4/4] vhost: scsi: add weight support

2019-05-16 Thread Jason Wang
This patch checks the weight and exits the loop if we exceed it. This is useful for preventing the scsi kthread from hogging the CPU, which is guest triggerable. This addresses CVE-2019-3900. Cc: Paolo Bonzini Cc: Stefan Hajnoczi Fixes: 057cbf49a1f0 ("tcm_vhost: Initial merge for vhost
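vhost-scsi requests have no obvious byte length to account, so one plausible reading of the description is that only the request count is weighted (byte weight passed as 0). A sketch under that assumption, with hypothetical helper names:

```c
int c = 0;

do {
	/* hypothetical helpers standing in for the real descriptor
	 * fetch and command submission in vhost_scsi_handle_vq() */
	if (get_next_scsi_desc(vq, &vc))
		break;
	submit_scsi_cmd(vs, vq, &vc);
} while (likely(!vhost_exceeds_weight(vq, ++c, 0)));
```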

Re: [Qemu-devel] [PATCH v9 2/7] virtio-pmem: Add virtio pmem driver

2019-05-16 Thread Pankaj Gupta
> > On Wed, May 15, 2019 at 10:46:00PM +0200, David Hildenbrand wrote: > > > + vpmem->vdev = vdev; > > > + vdev->priv = vpmem; > > > + err = init_vq(vpmem); > > > + if (err) { > > > + dev_err(&vdev->dev, "failed to initialize virtio pmem vq's\n"); > > > + goto out_err; > > > + } > > >

Re: [PATCH 05/10] s390/cio: introduce DMA pools to cio

2019-05-16 Thread Cornelia Huck
On Wed, 15 May 2019 19:12:57 +0200 Halil Pasic wrote: > On Mon, 13 May 2019 15:29:24 +0200 > Cornelia Huck wrote: > > > On Sun, 12 May 2019 20:22:56 +0200 > > Halil Pasic wrote: > > > > > On Fri, 10 May 2019 16:10:13 +0200 > > > Cornelia Huck wrote: > > > > > > > On Fri, 10 May 2019

Re: [PATCH 06/10] s390/cio: add basic protected virtualization support

2019-05-16 Thread Cornelia Huck
On Wed, 15 May 2019 22:51:58 +0200 Halil Pasic wrote: > On Mon, 13 May 2019 11:41:36 +0200 > Cornelia Huck wrote: > > > On Fri, 26 Apr 2019 20:32:41 +0200 > > Halil Pasic wrote: > > > > > As virtio-ccw devices are channel devices, we need to use the dma area > > > for any communication

Re: [PATCH 06/10] s390/cio: add basic protected virtualization support

2019-05-16 Thread Cornelia Huck
On Wed, 15 May 2019 23:08:17 +0200 Halil Pasic wrote: > On Tue, 14 May 2019 10:47:34 -0400 > "Jason J. Herne" wrote: > > Are we > > worried that virtio data structures are going to be a burden on the 31-bit > > address space? > > > > > > That is a good question I can not answer. Since

Re: [PATCH 06/10] s390/cio: add basic protected virtualization support

2019-05-16 Thread Cornelia Huck
On Thu, 16 May 2019 15:42:45 +0200 Halil Pasic wrote: > On Thu, 16 May 2019 08:32:28 +0200 > Cornelia Huck wrote: > > > On Wed, 15 May 2019 23:08:17 +0200 > > Halil Pasic wrote: > > > > > On Tue, 14 May 2019 10:47:34 -0400 > > > "Jason J. Herne" wrote: > > > > > > Are we > > > >

Re: [PATCH 06/10] s390/cio: add basic protected virtualization support

2019-05-16 Thread Halil Pasic
On Thu, 16 May 2019 08:32:28 +0200 Cornelia Huck wrote: > On Wed, 15 May 2019 23:08:17 +0200 > Halil Pasic wrote: > > > On Tue, 14 May 2019 10:47:34 -0400 > > "Jason J. Herne" wrote: > > > > Are we > > > worried that virtio data structures are going to be a burden on the > > > 31-bit

Re: [PATCH v9 2/7] virtio-pmem: Add virtio pmem driver

2019-05-16 Thread Michael S. Tsirkin
On Wed, May 15, 2019 at 10:46:00PM +0200, David Hildenbrand wrote: > > + vpmem->vdev = vdev; > > + vdev->priv = vpmem; > > + err = init_vq(vpmem); > > + if (err) { > > + dev_err(&vdev->dev, "failed to initialize virtio pmem vq's\n"); > > + goto out_err; > > + } > > + > > +

Re: [PATCH 05/10] s390/cio: introduce DMA pools to cio

2019-05-16 Thread Sebastian Ott
On Sun, 12 May 2019, Halil Pasic wrote: > I've also got code that deals with AIRQ_IV_CACHELINE by turning the > kmem_cache into a dma_pool. > > Cornelia, Sebastian which approach do you prefer: > 1) get rid of cio_dma_pool and AIRQ_IV_CACHELINE, and waste a page per > vector, or > 2) go with the
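For context on option 2 above, a minimal sketch of backing cacheline-sized airq vectors with a dma_pool rather than a kmem_cache; only the dma_pool_* calls are real API, the surrounding names are illustrative:

```c
#include <linux/dmapool.h>

static struct dma_pool *airq_pool; /* hypothetical pool for airq vectors */

static void *airq_vector_alloc(struct device *dev, dma_addr_t *dma)
{
	if (!airq_pool)
		airq_pool = dma_pool_create("airq_iv", dev, L1_CACHE_BYTES,
					    L1_CACHE_BYTES, 0);
	if (!airq_pool)
		return NULL;
	/* each allocation is one DMA-addressable cacheline, so nothing
	 * close to a full page is wasted per vector */
	return dma_pool_zalloc(airq_pool, GFP_KERNEL, dma);
}

static void airq_vector_free(void *vector, dma_addr_t dma)
{
	dma_pool_free(airq_pool, vector, dma);
}
```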

Re: [PATCH] vsock/virtio: Initialize core virtio vsock before registering the driver

2019-05-16 Thread Stefan Hajnoczi
On Thu, May 16, 2019 at 09:48:52AM +0200, Stefano Garzarella wrote: > On Wed, May 15, 2019 at 04:24:00PM +0100, Stefan Hajnoczi wrote: > > On Tue, May 07, 2019 at 02:25:43PM +0200, Stefano Garzarella wrote: > > > Hi Jorge, > > > > > > On Mon, May 06, 2019 at 01:19:55PM -0700, Jorge Moreira Broche

Re: [PATCH net 3/4] vhost: vsock: add weight support

2019-05-16 Thread Stefan Hajnoczi
On Thu, May 16, 2019 at 03:47:41AM -0400, Jason Wang wrote: > @@ -183,7 +184,8 @@ static struct vhost_vsock *vhost_vsock_get(u32 guest_cid) > virtio_transport_deliver_tap_pkt(pkt); > > virtio_transport_free_pkt(pkt); > - } > + total_len += pkt->len;

Re: [PATCH] vsock/virtio: Initialize core virtio vsock before registering the driver

2019-05-16 Thread Stefano Garzarella
On Wed, May 15, 2019 at 04:24:00PM +0100, Stefan Hajnoczi wrote: > On Tue, May 07, 2019 at 02:25:43PM +0200, Stefano Garzarella wrote: > > Hi Jorge, > > > > On Mon, May 06, 2019 at 01:19:55PM -0700, Jorge Moreira Broche wrote: > > > > On Wed, May 01, 2019 at 03:08:31PM -0400, Stefan Hajnoczi

[PATCH net 2/4] vhost_net: fix possible infinite loop

2019-05-16 Thread Jason Wang
When the rx buffer is too small for a packet, we will discard the vq descriptor and retry it for the next packet: while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk, &busyloop_intr))) { ... /* On overrun, truncate and discard */ if

[PATCH net 4/4] vhost: scsi: add weight support

2019-05-16 Thread Jason Wang
This patch checks the weight and exits the loop if we exceed it. This is useful for preventing the scsi kthread from hogging the CPU, which is guest triggerable. This addresses CVE-2019-3900. Cc: Paolo Bonzini Cc: Stefan Hajnoczi Fixes: 057cbf49a1f0 ("tcm_vhost: Initial merge for vhost

Re: [PATCH v9 2/7] virtio-pmem: Add virtio pmem driver

2019-05-16 Thread Pankaj Gupta
> > > + vpmem->vdev = vdev; > > + vdev->priv = vpmem; > > + err = init_vq(vpmem); > > + if (err) { > > + dev_err(&vdev->dev, "failed to initialize virtio pmem vq's\n"); > > + goto out_err; > > + } > > + > > + virtio_cread(vpmem->vdev, struct virtio_pmem_config, > > +

[PATCH net 1/4] vhost: introduce vhost_exceeds_weight()

2019-05-16 Thread Jason Wang
We used to have vhost_exceeds_weight() for vhost-net to:
- prevent the vhost kthread from hogging the CPU
- balance the time spent between TX and RX
This function could be useful for vsock and scsi as well. So move it to vhost.c. A device must specify a weight which counts the number of requests, or

[PATCH net 0/4] Prevent vhost kthread from hogging CPU

2019-05-16 Thread Jason Wang
Hi: This series tries to prevent guest-triggerable CPU hogging through the vhost kthread. This is done by introducing and checking a weight after each request. The patch has been tested with reproducers for vsock and virtio-net. Only a compile test is done for vhost-scsi. Please review. This

[PATCH net 3/4] vhost: vsock: add weight support

2019-05-16 Thread Jason Wang
This patch checks the weight and exits the loop if we exceed it. This is useful for preventing the vsock kthread from hogging the CPU, which is guest triggerable. The weight helps to avoid starving requests from one direction while the other direction is being processed. The value of weight

Re: [PATCH] vsock/virtio: Initialize core virtio vsock before registering the driver

2019-05-16 Thread Stefan Hajnoczi
On Tue, Apr 30, 2019 at 05:30:01PM -0700, Jorge E. Moreira wrote: > Avoid a race in which static variables in net/vmw_vsock/af_vsock.c are > accessed (while handling interrupts) before they are initialized. > > [4.201410] BUG: unable to handle kernel paging request at ffe8 > [
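The fix described is an ordering change in module init: bring up the core vsock transport before registering the virtio driver, so an interrupt arriving right after probe can no longer touch uninitialized af_vsock state. A sketch of that ordering (the error labels and workqueue setup are illustrative):

```c
static int __init virtio_vsock_init(void)
{
	int ret;

	virtio_vsock_workqueue = alloc_workqueue("virtio_vsock", 0, 0);
	if (!virtio_vsock_workqueue)
		return -ENOMEM;

	/* initialize the core first ... */
	ret = vsock_core_init(&virtio_transport.transport);
	if (ret)
		goto out_wq;

	/* ... and only then allow the device to probe and raise IRQs */
	ret = register_virtio_driver(&virtio_vsock_driver);
	if (ret)
		goto out_core;

	return 0;

out_core:
	vsock_core_exit();
out_wq:
	destroy_workqueue(virtio_vsock_workqueue);
	return ret;
}
```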

Re: [PATCH net 0/4] Prevent vhost kthread from hogging CPU

2019-05-16 Thread Stefan Hajnoczi
On Thu, May 16, 2019 at 03:47:38AM -0400, Jason Wang wrote: > Hi: > > This series tries to prevent guest-triggerable CPU hogging through > vhost kthread. This is done by introducing and checking the weight > after each request. The patch has been tested with reproducer of > vsock and virtio-net.

Re: [PATCH v9 2/7] virtio-pmem: Add virtio pmem driver

2019-05-16 Thread Pankaj Gupta
> >> + vpmem->vdev = vdev; > >> + vdev->priv = vpmem; > >> + err = init_vq(vpmem); > >> + if (err) { > >> + dev_err(&vdev->dev, "failed to initialize virtio pmem vq's\n"); > >> + goto out_err; > >> + } > >> + > >> + virtio_cread(vpmem->vdev, struct virtio_pmem_config, > >> +

Re: [PATCH v2 2/8] vsock/virtio: free packets during the socket release

2019-05-16 Thread Stefan Hajnoczi
On Fri, May 10, 2019 at 02:58:37PM +0200, Stefano Garzarella wrote: > When the socket is released, we should free all packets > queued in the per-socket list in order to avoid a memory > leak. > > Signed-off-by: Stefano Garzarella > --- > net/vmw_vsock/virtio_transport_common.c | 8 >
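A sketch of what draining the per-socket list on release could look like; the lock and list names follow virtio_transport_common.c conventions but are assumptions here, not the patch itself:

```c
struct virtio_vsock_sock *vvs = vsk->trans;
struct virtio_vsock_pkt *pkt, *tmp;

/* free everything still queued for this socket so nothing leaks */
spin_lock_bh(&vvs->rx_lock);
list_for_each_entry_safe(pkt, tmp, &vvs->rx_queue, list) {
	list_del(&pkt->list);
	virtio_transport_free_pkt(pkt);
}
spin_unlock_bh(&vvs->rx_lock);
```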

Re: [PATCH v2 1/8] vsock/virtio: limit the memory used per-socket

2019-05-16 Thread Stefan Hajnoczi
On Fri, May 10, 2019 at 02:58:36PM +0200, Stefano Garzarella wrote: > +struct virtio_vsock_buf { Please add a comment describing the purpose of this struct and how its use differs from struct virtio_vsock_pkt. > +static struct virtio_vsock_buf * > +virtio_transport_alloc_buf(struct

[PATCH 2/2] drm: Reserve/unreserve GEM VRAM BOs from within pin/unpin functions

2019-05-16 Thread Thomas Zimmermann
The original bochs and vbox implementations of the pin and unpin functions automatically reserved BOs during validation. This functionality was lost when the code was converted to a generic implementation, which may result in validating unlocked TTM BOs. Adding the reserve and unreserve operations to
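A sketch of the resulting pin path, with the reserve/unreserve pair wrapped around validation as the description suggests; error handling is trimmed and the function name is illustrative:

```c
static int drm_gem_vram_pin_sketch(struct drm_gem_vram_object *gbo)
{
	struct ttm_operation_ctx ctx = { false, false };
	int ret;

	/* take the BO lock before validating */
	ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
	if (ret)
		return ret;

	/* validation now always sees a locked (reserved) TTM BO */
	ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx);

	ttm_bo_unreserve(&gbo->bo);
	return ret;
}
```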

[PATCH 1/2] drm: Add drm_gem_vram_{pin/unpin}_reserved() and convert mgag200

2019-05-16 Thread Thomas Zimmermann
The new interfaces drm_gem_vram_{pin/unpin}_reserved() are variants of the GEM VRAM pin/unpin functions that do not reserve the BO during validation. The mgag200 driver requires this behavior for its cursor handling. The patch also converts the driver to use the new interfaces. Signed-off-by:
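Plausible signatures for the new interfaces, inferred from the description (the caller holds the BO reservation, so these variants skip the reserve/unreserve pair); treat them as assumptions rather than the patch contents:

```c
/* caller must already hold the BO reservation */
int drm_gem_vram_pin_reserved(struct drm_gem_vram_object *gbo,
			      unsigned long pl_flag);
int drm_gem_vram_unpin_reserved(struct drm_gem_vram_object *gbo);
```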

[PATCH 0/2] Add BO reservation to GEM VRAM pin/unpin/push_to_system

2019-05-16 Thread Thomas Zimmermann
A kernel test bot reported a problem with the locktorture testcase that was triggered by the GEM VRAM helpers. ... [ 10.004734] RIP: 0010:ttm_bo_validate+0x41/0x141 [ttm] ... [ 10.015669] ? kvm_sched_clock_read+0x5/0xd [ 10.016157] ? get_lock_stats+0x11/0x3f [ 10.016607]