On 2019/5/16 5:33 PM, Stefan Hajnoczi wrote:
On Thu, May 16, 2019 at 03:47:41AM -0400, Jason Wang wrote:
@@ -183,7 +184,8 @@ static struct vhost_vsock *vhost_vsock_get(u32 guest_cid)
virtio_transport_deliver_tap_pkt(pkt);
virtio_transport_free_pkt(pkt);
- }
+
On 5/14/19 7:54 AM, Pankaj Gupta wrote:
> + if (!list_empty(&vpmem->req_list)) {
> + req_buf = list_first_entry(&vpmem->req_list,
> + struct virtio_pmem_request, list);
> + req_buf->wq_buf_avail = true;
> +
We used to have vhost_exceeds_weight() for vhost-net to:
- prevent vhost kthread from hogging the cpu
- balance the time spent between TX and RX
This function could be useful for vsock and scsi as well. So move it
to vhost.c. Device must specify a weight which counts the number of
requests, or
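The per-invocation weight check described above can be modeled in plain userspace C. This is only an illustrative sketch of the idea, not the kernel implementation; the names (`fake_dev`, `exceeds_weight`, `handle_requests`) are hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model of the vhost_exceeds_weight() idea: stop
 * processing once the work done in one invocation reaches the
 * device's configured weight, so the kthread yields instead of
 * hogging the CPU. All names here are hypothetical. */
struct fake_dev {
    int weight;   /* max requests handled per invocation */
};

static bool exceeds_weight(const struct fake_dev *dev, int done)
{
    return done >= dev->weight;
}

/* Returns the number of requests actually handled this pass;
 * anything left over would be rescheduled. */
int handle_requests(const struct fake_dev *dev, int pending)
{
    int done = 0;

    while (pending > 0) {
        /* ...process one request here... */
        pending--;
        done++;
        if (exceeds_weight(dev, done))
            break;   /* yield; remaining work runs later */
    }
    return done;
}
```

With a weight of 4 and 10 pending requests, a single pass handles only 4 and leaves the rest for the next scheduling round, which is the property that defeats the guest-triggerable hog.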
Hi Jakub,
>
> On 5/14/19 7:54 AM, Pankaj Gupta wrote:
> > + if (!list_empty(&vpmem->req_list)) {
> > + req_buf = list_first_entry(&vpmem->req_list,
> > + struct virtio_pmem_request, list);
> > + req_buf->wq_buf_avail = true;
>
This patch will check the weight and exit the loop if we exceed the
weight. This is useful for preventing the vsock kthread from hogging the
cpu, which is guest triggerable. The weight can help to avoid starving
requests from one direction while another direction is being processed.
The value of weight
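The starvation-avoidance point can be illustrated with a small userspace model: cap the total work per pass and alternate between the two directions. This is a sketch of the idea only, with hypothetical names (`two_queues`, `process_fairly`), not the vhost code:

```c
#include <assert.h>

/* Illustrative model (not kernel code): cap total work per pass
 * and alternate between the TX and RX queues so a long backlog in
 * one direction cannot starve the other. Names are hypothetical. */
struct two_queues {
    int tx_pending;
    int rx_pending;
};

/* Process up to `weight` requests total, alternating directions.
 * Returns the number processed in this pass. */
int process_fairly(struct two_queues *q, int weight)
{
    int done = 0;

    while (done < weight && (q->tx_pending > 0 || q->rx_pending > 0)) {
        if (q->tx_pending > 0) { q->tx_pending--; done++; }
        if (done >= weight)
            break;
        if (q->rx_pending > 0) { q->rx_pending--; done++; }
    }
    return done;
}
```

Even with 100 TX requests queued against 3 RX requests, a weight of 8 lets all RX work complete in the same pass instead of waiting behind the TX backlog.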
When the rx buffer is too small for a packet, we will discard the vq
descriptor and retry it for the next packet:
while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk,
&busyloop_intr))) {
...
/* On overrun, truncate and discard */
if
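The overrun handling quoted above can be modeled outside the kernel: a packet larger than the receive buffer is truncated, the excess discarded, and processing moves on to the next packet. A minimal sketch under assumed names (`deliver_packets`, a fixed buffer size); it is not the vhost-net implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative model (not the vhost-net code): walk a queue of
 * packet lengths; a packet larger than the receive buffer is
 * truncated to the buffer size and its excess discarded, then we
 * continue with the next packet. Names are hypothetical. */
int deliver_packets(const int *pkt_lens, size_t n, int buf_len,
                    int *truncated_out)
{
    int delivered = 0, truncated = 0;

    for (size_t i = 0; i < n; i++) {
        if (pkt_lens[i] > buf_len)
            truncated++;   /* on overrun: truncate and discard excess */
        delivered++;       /* the (possibly truncated) packet is consumed */
    }
    if (truncated_out)
        *truncated_out = truncated;
    return delivered;
}
```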
Hi:
This series tries to prevent guest-triggerable CPU hogging through the
vhost kthread. This is done by introducing and checking a weight
after each request. The patches have been tested with reproducers for
vsock and virtio-net. Only a compile test was done for vhost-scsi.
Please review.
This
This patch will check the weight and exit the loop if we exceed the
weight. This is useful for preventing the scsi kthread from hogging the
cpu, which is guest triggerable.
This addresses CVE-2019-3900.
Cc: Paolo Bonzini
Cc: Stefan Hajnoczi
Fixes: 057cbf49a1f0 ("tcm_vhost: Initial merge for vhost
>
> On Wed, May 15, 2019 at 10:46:00PM +0200, David Hildenbrand wrote:
> > > + vpmem->vdev = vdev;
> > > + vdev->priv = vpmem;
> > > + err = init_vq(vpmem);
> > > + if (err) {
> > > + dev_err(&vdev->dev, "failed to initialize virtio pmem vq's\n");
> > > + goto out_err;
> > > + }
> > >
On Wed, 15 May 2019 19:12:57 +0200
Halil Pasic wrote:
> On Mon, 13 May 2019 15:29:24 +0200
> Cornelia Huck wrote:
>
> > On Sun, 12 May 2019 20:22:56 +0200
> > Halil Pasic wrote:
> >
> > > On Fri, 10 May 2019 16:10:13 +0200
> > > Cornelia Huck wrote:
> > >
> > > > On Fri, 10 May 2019
On Wed, 15 May 2019 22:51:58 +0200
Halil Pasic wrote:
> On Mon, 13 May 2019 11:41:36 +0200
> Cornelia Huck wrote:
>
> > On Fri, 26 Apr 2019 20:32:41 +0200
> > Halil Pasic wrote:
> >
> > > As virtio-ccw devices are channel devices, we need to use the dma area
> > > for any communication
On Wed, 15 May 2019 23:08:17 +0200
Halil Pasic wrote:
> On Tue, 14 May 2019 10:47:34 -0400
> "Jason J. Herne" wrote:
> > Are we
> > worried that virtio data structures are going to be a burden on the 31-bit
> > address space?
> >
> >
>
> That is a good question I can not answer. Since
On Thu, 16 May 2019 15:42:45 +0200
Halil Pasic wrote:
> On Thu, 16 May 2019 08:32:28 +0200
> Cornelia Huck wrote:
>
> > On Wed, 15 May 2019 23:08:17 +0200
> > Halil Pasic wrote:
> >
> > > On Tue, 14 May 2019 10:47:34 -0400
> > > "Jason J. Herne" wrote:
> >
> > > > Are we
> > > >
On Thu, 16 May 2019 08:32:28 +0200
Cornelia Huck wrote:
> On Wed, 15 May 2019 23:08:17 +0200
> Halil Pasic wrote:
>
> > On Tue, 14 May 2019 10:47:34 -0400
> > "Jason J. Herne" wrote:
>
> > > Are we
> > > worried that virtio data structures are going to be a burden on the
> > > 31-bit
On Sun, 12 May 2019, Halil Pasic wrote:
> I've also got code that deals with AIRQ_IV_CACHELINE by turning the
> kmem_cache into a dma_pool.
>
> Cornelia, Sebastian which approach do you prefer:
> 1) get rid of cio_dma_pool and AIRQ_IV_CACHELINE, and waste a page per
> vector, or
> 2) go with the
On Thu, May 16, 2019 at 09:48:52AM +0200, Stefano Garzarella wrote:
> On Wed, May 15, 2019 at 04:24:00PM +0100, Stefan Hajnoczi wrote:
> > On Tue, May 07, 2019 at 02:25:43PM +0200, Stefano Garzarella wrote:
> > > Hi Jorge,
> > >
> > > On Mon, May 06, 2019 at 01:19:55PM -0700, Jorge Moreira Broche
On Thu, May 16, 2019 at 03:47:41AM -0400, Jason Wang wrote:
> @@ -183,7 +184,8 @@ static struct vhost_vsock *vhost_vsock_get(u32 guest_cid)
> virtio_transport_deliver_tap_pkt(pkt);
>
> virtio_transport_free_pkt(pkt);
> - }
> + total_len += pkt->len;
On Wed, May 15, 2019 at 04:24:00PM +0100, Stefan Hajnoczi wrote:
> On Tue, May 07, 2019 at 02:25:43PM +0200, Stefano Garzarella wrote:
> > Hi Jorge,
> >
> > On Mon, May 06, 2019 at 01:19:55PM -0700, Jorge Moreira Broche wrote:
> > > > On Wed, May 01, 2019 at 03:08:31PM -0400, Stefan Hajnoczi
On Tue, Apr 30, 2019 at 05:30:01PM -0700, Jorge E. Moreira wrote:
> Avoid a race in which static variables in net/vmw_vsock/af_vsock.c are
> accessed (while handling interrupts) before they are initialized.
>
> [4.201410] BUG: unable to handle kernel paging request at ffe8
> [
On Thu, May 16, 2019 at 03:47:38AM -0400, Jason Wang wrote:
> Hi:
>
> This series tries to prevent guest-triggerable CPU hogging through the
> vhost kthread. This is done by introducing and checking a weight
> after each request. The patches have been tested with reproducers for
> vsock and virtio-net.
On Fri, May 10, 2019 at 02:58:37PM +0200, Stefano Garzarella wrote:
> When the socket is released, we should free all packets
> queued in the per-socket list in order to avoid a memory
> leak.
>
> Signed-off-by: Stefano Garzarella
> ---
> net/vmw_vsock/virtio_transport_common.c | 8
>
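The leak being fixed above has a simple shape: on release, every packet still queued on the per-socket list must be freed. A hedged userspace model with hypothetical stand-in types (`fake_pkt`, `fake_sock`, `release_sock_queue`), not the virtio_transport code:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative model of the fix (not the kernel code): when a
 * socket is released, walk its per-socket packet list and free
 * every entry, otherwise that memory is leaked. The types and
 * names below are hypothetical stand-ins. */
struct fake_pkt {
    struct fake_pkt *next;
};

struct fake_sock {
    struct fake_pkt *queue;   /* singly linked per-socket list */
};

/* Returns the number of packets freed. */
int release_sock_queue(struct fake_sock *sk)
{
    int freed = 0;

    while (sk->queue) {
        struct fake_pkt *pkt = sk->queue;

        sk->queue = pkt->next;
        free(pkt);
        freed++;
    }
    return freed;
}
```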
On Fri, May 10, 2019 at 02:58:36PM +0200, Stefano Garzarella wrote:
> +struct virtio_vsock_buf {
Please add a comment describing the purpose of this struct and to
differentiate its use from struct virtio_vsock_pkt.
> +static struct virtio_vsock_buf *
> +virtio_transport_alloc_buf(struct
The original bochs and vbox implementations of pin and unpin functions
automatically reserved BOs during validation. This functionality got lost
while converting the code to a generic implementation. This may result
in validating unlocked TTM BOs.
Adding the reserve and unreserve operations to
The new interfaces drm_gem_vram_{pin/unpin}_reserved() are variants of the
GEM VRAM pin/unpin functions that do not reserve the BO during validation.
The mgag200 driver requires this behavior for its cursor handling. The
patch also converts the driver to use the new interfaces.
Signed-off-by:
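The locking contract behind the `*_reserved()` variants can be sketched in userspace: a buffer object must be reserved (locked) before validation/pinning, and the plain helper takes that reservation itself, while the `*_reserved` variant assumes the caller already holds it. This model uses hypothetical names (`fake_bo`, `pin`, `pin_reserved`) and is not the DRM API:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model (not the DRM helpers): a BO must be reserved
 * before it is validated/pinned. The plain pin helper takes the
 * reservation itself; the *_reserved variant trusts the caller to
 * hold it already, as cursor code might. Names are hypothetical. */
struct fake_bo {
    bool reserved;
    int pin_count;
};

static int pin_reserved(struct fake_bo *bo)
{
    if (!bo->reserved)
        return -1;            /* caller broke the locking contract */
    bo->pin_count++;
    return 0;
}

int pin(struct fake_bo *bo)
{
    int ret;

    bo->reserved = true;      /* take the reservation ourselves */
    ret = pin_reserved(bo);
    bo->reserved = false;     /* and drop it again */
    return ret;
}
```

Splitting the API this way lets a caller that already holds the reservation (for example, across several cursor operations) pin without a double-lock, which is the behavior the mgag200 conversion needs.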
A kernel test bot reported a problem with the locktorture testcase that
was triggered by the GEM VRAM helpers.
...
[ 10.004734] RIP: 0010:ttm_bo_validate+0x41/0x141 [ttm]
...
[ 10.015669] ? kvm_sched_clock_read+0x5/0xd
[ 10.016157] ? get_lock_stats+0x11/0x3f
[ 10.016607]