[bug report] virtio_net: rework mergeable buffer handling

2017-04-05 Thread Dan Carpenter
Hello Michael S. Tsirkin,

The patch 6c8e5f3c41c8: "virtio_net: rework mergeable buffer
handling" from Mar 6, 2017, leads to the following static checker
warning:

drivers/net/virtio_net.c:1042 virtnet_receive()
error: uninitialized symbol 'ctx'.

drivers/net/virtio_net.c
  1030  static int virtnet_receive(struct receive_queue *rq, int budget)
  1031  {
  1032  struct virtnet_info *vi = rq->vq->vdev->priv;
  1033  unsigned int len, received = 0, bytes = 0;
  1034  void *buf;
  1035  struct virtnet_stats *stats = this_cpu_ptr(vi->stats);
  1036  
  1037  if (vi->mergeable_rx_bufs) {
  1038  void *ctx;
  ^^^
  1039  
  1040  while (received < budget &&
  1041                 (buf = virtqueue_get_buf_ctx(rq->vq, &len, &ctx))) {

  1042  bytes += receive_buf(vi, rq, buf, len, ctx);
                                                ^^^

It's possible that this code is correct, but I looked at it and wasn't
immediately convinced.  Returning a non-NULL buf is not sufficient to
show that "ctx" is initialized, because if vq->indirect is set then
"ctx" is still uninitialized.  It's also possible that receive_buf()
checks vq->indirect in some roundabout way that I didn't see, so it
never uses the uninitialized value...

I feel like if this is a false positive, that means the rules are too
subtle...  :/

  1043  received++;
  1044  }
  1045  } else {
  1046  while (received < budget &&
  1047                 (buf = virtqueue_get_buf(rq->vq, &len)) != NULL) {
  1048  bytes += receive_buf(vi, rq, buf, len, NULL);
  1049  received++;
  1050  }
  1051  }
  1052  

regards,
dan carpenter
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


Re: [PATCH] drm: virtio: fix virtio_gpu_cursor_formats

2017-04-05 Thread Ville Syrjälä
On Wed, Apr 05, 2017 at 10:09:15AM +0200, Laurent Vivier wrote:
> When we use virtio-vga with a big-endian guest,
> the mouse pointer disappears.
> 
> To fix that, on big-endian use DRM_FORMAT_BGRA8888
> instead of DRM_FORMAT_ARGB8888.
> 
> Signed-off-by: Laurent Vivier 
> ---
>  drivers/gpu/drm/virtio/virtgpu_plane.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
> index 11288ff..3ed7174 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_plane.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
> @@ -39,7 +39,11 @@ static const uint32_t virtio_gpu_formats[] = {
>  };
>  
>  static const uint32_t virtio_gpu_cursor_formats[] = {
> +#ifdef __BIG_ENDIAN
> +	DRM_FORMAT_BGRA8888,
> +#else
> 	DRM_FORMAT_ARGB8888,
> +#endif

DRM formats are supposed to be little endian, so this isn't really
correct.

>  };
>  
>  static void virtio_gpu_plane_destroy(struct drm_plane *plane)
> -- 
> 2.9.3
> 
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Ville Syrjälä
Intel OTC


[PATCH] drm: virtio: fix virtio_gpu_cursor_formats

2017-04-05 Thread Laurent Vivier
When we use virtio-vga with a big-endian guest,
the mouse pointer disappears.

To fix that, on big-endian use DRM_FORMAT_BGRA8888
instead of DRM_FORMAT_ARGB8888.

Signed-off-by: Laurent Vivier 
---
 drivers/gpu/drm/virtio/virtgpu_plane.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index 11288ff..3ed7174 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -39,7 +39,11 @@ static const uint32_t virtio_gpu_formats[] = {
 };
 
 static const uint32_t virtio_gpu_cursor_formats[] = {
+#ifdef __BIG_ENDIAN
+	DRM_FORMAT_BGRA8888,
+#else
	DRM_FORMAT_ARGB8888,
+#endif
 };
 
 static void virtio_gpu_plane_destroy(struct drm_plane *plane)
-- 
2.9.3



RE: [PATCH kernel v8 2/4] virtio-balloon: VIRTIO_BALLOON_F_CHUNK_TRANSFER

2017-04-05 Thread Wang, Wei W
On Wednesday, April 5, 2017 12:31 PM, Wei Wang wrote:
> On Wednesday, April 5, 2017 11:54 AM, Michael S. Tsirkin wrote:
> > On Wed, Apr 05, 2017 at 03:31:36AM +, Wang, Wei W wrote:
> > > On Thursday, March 16, 2017 3:09 PM Wei Wang wrote:
> > > > The implementation of the current virtio-balloon is not very
> > > > efficient, because the ballooned pages are transferred to the host
> > > > one by one. Here is the breakdown of the time in percentage spent
> > > > on each step of the balloon inflating process (inflating 7GB of
> > > > an 8GB idle guest).
> > > >
> > > > 1) allocating pages (6.5%)
> > > > 2) sending PFNs to host (68.3%)
> > > > 3) address translation (6.1%)
> > > > 4) madvise (19%)
> > > >
> > > > It takes about 4126ms for the inflating process to complete.
> > > > The above profiling shows that the bottlenecks are stage 2) and stage 
> > > > 4).
> > > >
> > > > This patch optimizes step 2) by transferring pages to the host in
> > > > chunks. A chunk consists of guest physically contiguous pages, and
> > > > it is offered to the host via a base PFN (i.e. the start PFN of
> > > > those physically contiguous pages) and the size (i.e. the total
> > > > number of the pages). A chunk is formatted as below:
> > > >
> > > > +------------------------------+---------------+
> > > > | Base (52 bit)                | Rsvd (12 bit) |
> > > > +------------------------------+---------------+
> > > > | Size (52 bit)                | Rsvd (12 bit) |
> > > > +------------------------------+---------------+
> > > >
> > > > By doing so, step 4) can also be optimized by doing address
> > > > translation and
> > > > madvise() in chunks rather than page by page.
> > > >
> > > > This optimization requires the negotiation of a new feature bit,
> > > > VIRTIO_BALLOON_F_CHUNK_TRANSFER.
> > > >
> > > > With this new feature, the above ballooning process takes ~590ms
> > > > resulting in an improvement of ~85%.
> > > >
> > > > TODO: optimize stage 1) by allocating/freeing a chunk of pages
> > > > instead of a single page each time.
> > > >
> > > > Signed-off-by: Liang Li 
> > > > Signed-off-by: Wei Wang 
> > > > Suggested-by: Michael S. Tsirkin 
> > > > ---
> > > >  drivers/virtio/virtio_balloon.c     | 371 +-
> > > >  include/uapi/linux/virtio_balloon.h |   9 +
> > > >  2 files changed, 353 insertions(+), 27 deletions(-)
> > > >
> > > > diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> > > > index f59cb4f..3f4a161 100644
> > > > --- a/drivers/virtio/virtio_balloon.c
> > > > +++ b/drivers/virtio/virtio_balloon.c
> > > > @@ -42,6 +42,10 @@
> > > >  #define OOM_VBALLOON_DEFAULT_PAGES 256
> > > >  #define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80
> > > >
> > > > +#define PAGE_BMAP_SIZE (8 * PAGE_SIZE)
> > > > +#define PFNS_PER_PAGE_BMAP (PAGE_BMAP_SIZE * BITS_PER_BYTE)
> > > > +#define PAGE_BMAP_COUNT_MAX	32
> > > > +
> > > >  static int oom_pages = OOM_VBALLOON_DEFAULT_PAGES;
> > > > module_param(oom_pages, int, S_IRUSR | S_IWUSR);
> > > > MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
> > > > @@ -50,6 +54,14 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
> > > >  static struct vfsmount *balloon_mnt;
> > > >  #endif
> > > >
> > > > +#define BALLOON_CHUNK_BASE_SHIFT 12
> > > > +#define BALLOON_CHUNK_SIZE_SHIFT 12
> > > > +struct balloon_page_chunk {
> > > > +	__le64 base;
> > > > +	__le64 size;
> > > > +};
> > > > +
> > > > +typedef __le64 resp_data_t;
> > > >  struct virtio_balloon {
> > > > struct virtio_device *vdev;
> > > > 	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
> > > > @@ -67,6 +79,31 @@ struct virtio_balloon {
> > > >
> > > > 	/* Number of balloon pages we've told the Host we're not using. */
> > > > 	unsigned int num_pages;
> > > > +   /* Pointer to the response header. */
> > > > +   struct virtio_balloon_resp_hdr *resp_hdr;
> > > > +   /* Pointer to the start address of response data. */
> > > > +   resp_data_t *resp_data;
> > >
> > > I think the implementation has an issue here - both the balloon
> > > pages and the unused pages use the same buffer ("resp_data" above)
> > > to store chunks.  It would cause a race in this case: live migration
> > > starts while ballooning is also in progress.  I plan to use separate
> > > buffers for CHUNKS_OF_BALLOON_PAGES and CHUNKS_OF_UNUSED_PAGES.
> > > Please let me know if you have a different suggestion. Thanks.
> > >
> > > Best,
> > > Wei
> >
> > Is only one resp data ever in flight for each kind?
> > If not you want as many buffers as vq allows.
> >
> 
> No, all the kinds were using only one resp_data. I will make it one
> resp_data for each kind.
> 

Just in case it wasn't well explained - it is one resp data in flight for each