Re: [PATCH v2] mic: vop: Fix broken virtqueues

2019-01-30 Thread Greg KH
On Wed, Jan 30, 2019 at 05:28:09PM +0100, Vincent Whitchurch wrote:
> VOP is broken in mainline since commit 1ce9e6055fa0a9043 ("virtio_ring:
> introduce packed ring support"); attempting to use the virtqueues leads
> to various kernel crashes.  I'm testing it with my not-yet-merged
> loopback patches, but even the in-tree MIC hardware cannot work.
> 
> The problem is not in the referenced commit per se, but is due to the
> following hack in vop_find_vq() which depends on the layout of private
> structures in other source files, which that commit happened to change:
> 
>   /*
>* To reassign the used ring here we are directly accessing
>* struct vring_virtqueue which is a private data structure
>* in virtio_ring.c. At the minimum, a BUILD_BUG_ON() in
>* vring_new_virtqueue() would ensure that
>*  (&vq->vring == (struct vring *) (&vq->vq + 1));
>*/
>   vr = (struct vring *)(vq + 1);
>   vr->used = used;
> 
> Fix vop by using __vring_new_virtqueue() to create the needed vring
> layout from the start, instead of attempting to patch in the used ring
> later.  __vring_new_virtqueue() was added way back in commit
> 2a2d1382fe9dcc ("virtio: Add improved queue allocation API") in order to
> address mic's usecase, according to the commit message.
> 
> The removal path also contains the same hack, so we need to save the
> used ring pointer separately in order to free it there.
> 
> Fixes: 1ce9e6055fa0 ("virtio_ring: introduce packed ring support")
> Signed-off-by: Vincent Whitchurch 
> ---
> v2: fix removal path also
> 
>  drivers/misc/mic/vop/vop_main.c | 69 +++--
>  1 file changed, 40 insertions(+), 29 deletions(-)
> 
> diff --git a/drivers/misc/mic/vop/vop_main.c b/drivers/misc/mic/vop/vop_main.c
> index b60b5ec4a28a..c4517703dc1d 100644
> --- a/drivers/misc/mic/vop/vop_main.c
> +++ b/drivers/misc/mic/vop/vop_main.c
> @@ -48,7 +48,8 @@
>   * @dc: Virtio device control
>   * @vpdev: VOP device which is the parent for this virtio device
>   * @vr: Buffer for accessing the VRING
> - * @used: Buffer for used
> + * @used_virt: Virtual address of used ring
> + * @used: DMA address of used ring
>   * @used_size: Size of the used buffer
>   * @reset_done: Track whether VOP reset is complete
>   * @virtio_cookie: Cookie returned upon requesting a interrupt
> @@ -62,6 +63,7 @@ struct _vop_vdev {
>   struct mic_device_ctrl __iomem *dc;
>   struct vop_device *vpdev;
>   void __iomem *vr[VOP_MAX_VRINGS];
> + void *used_virt[VOP_MAX_VRINGS];
>   dma_addr_t used[VOP_MAX_VRINGS];
>   int used_size[VOP_MAX_VRINGS];
>   struct completion reset_done;
> @@ -261,12 +263,12 @@ static bool vop_notify(struct virtqueue *vq)
>  static void vop_del_vq(struct virtqueue *vq, int n)
>  {
>   struct _vop_vdev *vdev = to_vopvdev(vq->vdev);
> - struct vring *vr = (struct vring *)(vq + 1);
>   struct vop_device *vpdev = vdev->vpdev;
>  
>   dma_unmap_single(&vpdev->dev, vdev->used[n],
>vdev->used_size[n], DMA_BIDIRECTIONAL);
> - free_pages((unsigned long)vr->used, get_order(vdev->used_size[n]));
> + free_pages((unsigned long)vdev->used_virt[n],
> +get_order(vdev->used_size[n]));
>   vring_del_virtqueue(vq);
>   vpdev->hw_ops->unmap(vpdev, vdev->vr[n]);
>   vdev->vr[n] = NULL;
> @@ -284,6 +286,26 @@ static void vop_del_vqs(struct virtio_device *dev)
>   vop_del_vq(vq, idx++);
>  }
>  
> +static struct virtqueue *vop_new_virtqueue(unsigned int index,
> +   unsigned int num,
> +   struct virtio_device *vdev,
> +   bool context,
> +   void *pages,
> +   bool (*notify)(struct virtqueue *vq),
> +   void (*callback)(struct virtqueue *vq),
> +   const char *name,
> +   void *used)
> +{
> + bool weak_barriers = false;
> + struct vring vring;
> +
> + vring_init(&vring, num, pages, MIC_VIRTIO_RING_ALIGN);
> + vring.used = used;
> +
> + return __vring_new_virtqueue(index, vring, vdev, weak_barriers, context,
> +  notify, callback, name);
> +}
> +
>  /*
>   * This routine will assign vring's allocated in host/io memory. Code in
>   * virtio_ring.c however continues to access this io memory as if it were local
> @@ -303,7 +325,6 @@ static struct virtqueue *vop_find_vq(struct virtio_device *dev,
>   struct _mic_vring_info __iomem *info;
>   void *used;
>   int vr_size, _vr_size, err, magic;
> - struct vring *vr;
>   u8 type = ioread8(&vdev->desc->type);
> 
>   if (index >= ioread8(&vdev->desc->num_vq))
> @@ -322,17 +343,7 @@ static struct virtqueue *vop_find_vq(struct virtio_device *dev,
>   return ERR_PTR(-ENOMEM);
>   vdev->vr[index] = va;
>   memset_io(va, 0x0, _vr_size);
> - vq = 

Re: [PATCH 0/5 v5] Fix virtio-blk issue with SWIOTLB

2019-01-30 Thread Konrad Rzeszutek Wilk
On Wed, Jan 30, 2019 at 05:40:02PM +0100, Joerg Roedel wrote:
> Hi,
> 
> here is the next version of this patch-set. Previous
> versions can be found here:
> 
>   V1: https://lore.kernel.org/lkml/20190110134433.15672-1-j...@8bytes.org/
> 
>   V2: https://lore.kernel.org/lkml/20190115132257.6426-1-j...@8bytes.org/
> 
>   V3: https://lore.kernel.org/lkml/20190123163049.24863-1-j...@8bytes.org/
> 
>   V4: https://lore.kernel.org/lkml/20190129084342.26030-1-j...@8bytes.org/
> 
> The problem solved here is a limitation of the SWIOTLB implementation,
> which does not support allocations larger than 256kb.  When the
> virtio-blk driver tries to read/write a block larger than that, the
> allocation of the dma-handle fails and an IO error is reported.
> 
> Changes to v4 are:
> 
>   - Added Reviewed-by tags from Christoph
> 
>   - Added missing EXPORT_SYMBOL(_GPL) lines
> 
> Please review.

I can put it in my tree and send it to Linus .. unless folks want
to do it through a different tree?


> 
> Thanks,
> 
>   Joerg
> Joerg Roedel (5):
>   swiotlb: Introduce swiotlb_max_mapping_size()
>   swiotlb: Add is_swiotlb_active() function
>   dma: Introduce dma_max_mapping_size()
>   virtio: Introduce virtio_max_dma_size()
>   virtio-blk: Consider virtio_max_dma_size() for maximum segment size
> 
>  Documentation/DMA-API.txt|  8 
>  drivers/block/virtio_blk.c   | 10 ++
>  drivers/virtio/virtio_ring.c | 11 +++
>  include/linux/dma-mapping.h  | 16 
>  include/linux/swiotlb.h  | 11 +++
>  include/linux/virtio.h   |  2 ++
>  kernel/dma/direct.c  | 12 
>  kernel/dma/swiotlb.c | 14 ++
>  8 files changed, 80 insertions(+), 4 deletions(-)
> 
> -- 
> 2.17.1
> 
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH 1/5] swiotlb: Introduce swiotlb_max_mapping_size()

2019-01-30 Thread Joerg Roedel
From: Joerg Roedel 

The function returns the maximum size that can be remapped
by the SWIOTLB implementation. This function will later be
exposed to users through the DMA-API.

Reviewed-by: Konrad Rzeszutek Wilk 
Reviewed-by: Christoph Hellwig 
Signed-off-by: Joerg Roedel 
---
 include/linux/swiotlb.h | 5 +
 kernel/dma/swiotlb.c| 5 +
 2 files changed, 10 insertions(+)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 7c007ed7505f..1c22d96e1742 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -62,6 +62,7 @@ extern void swiotlb_tbl_sync_single(struct device *hwdev,
 
 extern int
 swiotlb_dma_supported(struct device *hwdev, u64 mask);
+size_t swiotlb_max_mapping_size(struct device *dev);
 
 #ifdef CONFIG_SWIOTLB
 extern enum swiotlb_force swiotlb_force;
@@ -95,6 +96,10 @@ static inline unsigned int swiotlb_max_segment(void)
 {
return 0;
 }
+static inline size_t swiotlb_max_mapping_size(struct device *dev)
+{
+   return SIZE_MAX;
+}
 #endif /* CONFIG_SWIOTLB */
 
 extern void swiotlb_print_info(void);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 1fb6fd68b9c7..9cb21259cb0b 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -662,3 +662,8 @@ swiotlb_dma_supported(struct device *hwdev, u64 mask)
 {
return __phys_to_dma(hwdev, io_tlb_end - 1) <= mask;
 }
+
+size_t swiotlb_max_mapping_size(struct device *dev)
+{
+   return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE;
+}
-- 
2.17.1



[PATCH 2/5] swiotlb: Add is_swiotlb_active() function

2019-01-30 Thread Joerg Roedel
From: Joerg Roedel 

This function will be used from dma_direct code to determine
the maximum segment size of a dma mapping.

Reviewed-by: Konrad Rzeszutek Wilk 
Reviewed-by: Christoph Hellwig 
Signed-off-by: Joerg Roedel 
---
 include/linux/swiotlb.h | 6 ++
 kernel/dma/swiotlb.c| 9 +
 2 files changed, 15 insertions(+)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 1c22d96e1742..e9e786b4b598 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -63,6 +63,7 @@ extern void swiotlb_tbl_sync_single(struct device *hwdev,
 extern int
 swiotlb_dma_supported(struct device *hwdev, u64 mask);
 size_t swiotlb_max_mapping_size(struct device *dev);
+bool is_swiotlb_active(void);
 
 #ifdef CONFIG_SWIOTLB
 extern enum swiotlb_force swiotlb_force;
@@ -100,6 +101,11 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
 {
return SIZE_MAX;
 }
+
+static inline bool is_swiotlb_active(void)
+{
+   return false;
+}
 #endif /* CONFIG_SWIOTLB */
 
 extern void swiotlb_print_info(void);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 9cb21259cb0b..c873f9cc2146 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -667,3 +667,12 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 {
return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE;
 }
+
+bool is_swiotlb_active(void)
+{
+   /*
+* When SWIOTLB is initialized, even if io_tlb_start points to physical
+* address zero, io_tlb_end surely doesn't.
+*/
+   return io_tlb_end != 0;
+}
-- 
2.17.1



[PATCH 0/5 v5] Fix virtio-blk issue with SWIOTLB

2019-01-30 Thread Joerg Roedel
Hi,

here is the next version of this patch-set. Previous
versions can be found here:

V1: https://lore.kernel.org/lkml/20190110134433.15672-1-j...@8bytes.org/

V2: https://lore.kernel.org/lkml/20190115132257.6426-1-j...@8bytes.org/

V3: https://lore.kernel.org/lkml/20190123163049.24863-1-j...@8bytes.org/

V4: https://lore.kernel.org/lkml/20190129084342.26030-1-j...@8bytes.org/

The problem solved here is a limitation of the SWIOTLB implementation,
which does not support allocations larger than 256kb.  When the
virtio-blk driver tries to read/write a block larger than that, the
allocation of the dma-handle fails and an IO error is reported.

Changes to v4 are:

- Added Reviewed-by tags from Christoph

- Added missing EXPORT_SYMBOL(_GPL) lines

Please review.

Thanks,

Joerg
Joerg Roedel (5):
  swiotlb: Introduce swiotlb_max_mapping_size()
  swiotlb: Add is_swiotlb_active() function
  dma: Introduce dma_max_mapping_size()
  virtio: Introduce virtio_max_dma_size()
  virtio-blk: Consider virtio_max_dma_size() for maximum segment size

 Documentation/DMA-API.txt|  8 
 drivers/block/virtio_blk.c   | 10 ++
 drivers/virtio/virtio_ring.c | 11 +++
 include/linux/dma-mapping.h  | 16 
 include/linux/swiotlb.h  | 11 +++
 include/linux/virtio.h   |  2 ++
 kernel/dma/direct.c  | 12 
 kernel/dma/swiotlb.c | 14 ++
 8 files changed, 80 insertions(+), 4 deletions(-)

-- 
2.17.1



Re: [PATCH] virtio_net: Introduce extended RSC feature

2019-01-30 Thread Michael S. Tsirkin
On Wed, Jan 30, 2019 at 09:42:07AM +0200, Yuri Benditovich wrote:
> 
> 
> On Tue, Jan 29, 2019 at 6:07 PM Michael S. Tsirkin  wrote:
> 
> On Tue, Jan 29, 2019 at 02:52:36PM +0200, Yuri Benditovich wrote:
> > VIRTIO_NET_F_RSC_EXT feature bit indicates that the device
> > is able to provide extended RSC information. When the feature
> > is active and 'gso_type' field in received packet is not GSO_NONE,
> > the device reports number of coalesced packets in 'csum_start'
> > field and number of duplicated acks in 'csum_offset' field
> > and sets VIRTIO_NET_HDR_F_RSC_INFO in 'flags' field.
> >
> > Signed-off-by: Yuri Benditovich 
> > ---
> >  include/uapi/linux/virtio_net.h | 15 ++-
> >  1 file changed, 14 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/uapi/linux/virtio_net.h b/include/uapi/linux/
> virtio_net.h
> > index a3715a3224c1..93c71d714475 100644
> > --- a/include/uapi/linux/virtio_net.h
> > +++ b/include/uapi/linux/virtio_net.h
> > @@ -56,7 +56,7 @@
> >  #define VIRTIO_NET_F_MQ      22      /* Device supports Receive Flow
> >                                        * Steering */
> >  #define VIRTIO_NET_F_CTRL_MAC_ADDR 23        /* Set MAC address */
> > -
> > +#define VIRTIO_NET_F_RSC_EXT   61    /* Provides extended RSC info */
> >  #define VIRTIO_NET_F_STANDBY   62    /* Act as standby for another
> device
> >                                        * with the same MAC.
> >                                        */
> > @@ -104,6 +104,7 @@ struct virtio_net_config {
> >  struct virtio_net_hdr_v1 {
> >  #define VIRTIO_NET_HDR_F_NEEDS_CSUM  1       /* Use csum_start,
> csum_offset */
> >  #define VIRTIO_NET_HDR_F_DATA_VALID  2       /* Csum is valid */
> > +#define VIRTIO_NET_HDR_F_RSC_INFO    4   /* rsc_ext data in csum_ fields */
> >       __u8 flags;
> >  #define VIRTIO_NET_HDR_GSO_NONE              0       /* Not a GSO frame
> */
> >  #define VIRTIO_NET_HDR_GSO_TCPV4     1       /* GSO frame, IPv4 TCP
> (TSO) */
> > @@ -140,6 +141,18 @@ struct virtio_net_hdr_mrg_rxbuf {
> >       struct virtio_net_hdr hdr;
> >       __virtio16 num_buffers; /* Number of merged rx buffers */
> >  };
> > +
> > +static inline __virtio16 *virtio_net_rsc_ext_num_packets(
> > +     struct virtio_net_hdr *hdr)
> > +{
> > +     return &hdr->csum_start;
> > +}
> > +
> > +static inline __virtio16 *virtio_net_rsc_ext_num_dupacks(
> > +     struct virtio_net_hdr *hdr)
> > +{
> > +     return &hdr->csum_offset;
> > +}
> >  #endif /* ...VIRTIO_NET_NO_LEGACY */

I also wonder why we want to put this code in the legacy section
and use the legacy virtio_net_hdr as opposed to the new virtio_net_hdr_v1.

> 
> Coding style is off here. But really I don't think these inlines are
> needed here. Put them in qemu or something.
> 
> 
> 
> They are copied from qemu as-is (what exactly is wrong?).


coding style says:

Descendants are always substantially shorter than the parent and
are placed substantially to the right.

placing a line to the left of ( doesn't count as substantially to the
right :)
Maybe start a new line at virtio_net_rsc_ext_num_dupacks.

Lack of documentation is also a problem.

> The reason I place these inlines here is following:
> We pull this include sometimes to virtio-win repo exactly as qemu do.
> If I place them into qemu, then we'll have these inlines in virtio-win and in
> qemu and they are not synchronized.
> 
> So, I suggest keeping them in one common header and fixing style
> problems, if any are present.
> Please respond if you still disagree.

Okay but this assumes specific usage. E.g. someone might want
an offset and not a pointer. Or have a struct instance on stack.

Given all above issues (and also header version issues
described above) I'm inclined to say macros are better:


#define virtio_net_rsc_ext_num_packets csum_start
#define virtio_net_rsc_ext_num_dupacks csum_offset


But please in any case also add documentation same as we have for
fields.

> 
> >  /*
> > --
> > 2.17.1
> 


Re: [PATCH] mic: vop: Fix broken virtqueues

2019-01-30 Thread Sudeep Dutt
On Tue, 2019-01-29 at 11:22 +0100, Vincent Whitchurch wrote:
> VOP is broken in mainline since commit 1ce9e6055fa0a9043 ("virtio_ring:
> introduce packed ring support"); attempting to use the virtqueues leads
> to various kernel crashes.  I'm testing it with my not-yet-merged
> loopback patches, but even the in-tree MIC hardware cannot work.
> 
> The problem is not in the referenced commit per se, but is due to the
> following hack in vop_find_vq() which depends on the layout of private
> structures in other source files, which that commit happened to change:
> 
>   /*
>* To reassign the used ring here we are directly accessing
>* struct vring_virtqueue which is a private data structure
>* in virtio_ring.c. At the minimum, a BUILD_BUG_ON() in
>* vring_new_virtqueue() would ensure that
>*  (&vq->vring == (struct vring *) (&vq->vq + 1));
>*/
>   vr = (struct vring *)(vq + 1);
>   vr->used = used;
> 
> Fix vop by using __vring_new_virtqueue() to create the needed vring
> layout from the start, instead of attempting to patch in the used ring
> later.  __vring_new_virtqueue() was added way back in commit
> 2a2d1382fe9dcc ("virtio: Add improved queue allocation API") in order to
> address mic's usecase, according to the commit message.
> 

Thank you for fixing this up Vincent.

Reviewed-by: Sudeep Dutt 

> Signed-off-by: Vincent Whitchurch 
> ---
>  drivers/misc/mic/vop/vop_main.c | 60 +++--
>  1 file changed, 34 insertions(+), 26 deletions(-)
> 
> diff --git a/drivers/misc/mic/vop/vop_main.c b/drivers/misc/mic/vop/vop_main.c
> index d2b9782eee87..fef45bf6d519 100644
> --- a/drivers/misc/mic/vop/vop_main.c
> +++ b/drivers/misc/mic/vop/vop_main.c
> @@ -284,6 +284,26 @@ static void vop_del_vqs(struct virtio_device *dev)
>   vop_del_vq(vq, idx++);
>  }
>  
> +static struct virtqueue *vop_new_virtqueue(unsigned int index,
> +   unsigned int num,
> +   struct virtio_device *vdev,
> +   bool context,
> +   void *pages,
> +   bool (*notify)(struct virtqueue *vq),
> +   void (*callback)(struct virtqueue *vq),
> +   const char *name,
> +   void *used)
> +{
> + bool weak_barriers = false;
> + struct vring vring;
> +
> + vring_init(&vring, num, pages, MIC_VIRTIO_RING_ALIGN);
> + vring.used = used;
> +
> + return __vring_new_virtqueue(index, vring, vdev, weak_barriers, context,
> +  notify, callback, name);
> +}
> +
>  /*
>   * This routine will assign vring's allocated in host/io memory. Code in
>   * virtio_ring.c however continues to access this io memory as if it were local
> @@ -303,7 +323,6 @@ static struct virtqueue *vop_find_vq(struct virtio_device *dev,
>   struct _mic_vring_info __iomem *info;
>   void *used;
>   int vr_size, _vr_size, err, magic;
> - struct vring *vr;
>   u8 type = ioread8(&vdev->desc->type);
> 
>   if (index >= ioread8(&vdev->desc->num_vq))
> @@ -322,17 +341,7 @@ static struct virtqueue *vop_find_vq(struct virtio_device *dev,
>   return ERR_PTR(-ENOMEM);
>   vdev->vr[index] = va;
>   memset_io(va, 0x0, _vr_size);
> - vq = vring_new_virtqueue(
> - index,
> - le16_to_cpu(config.num), MIC_VIRTIO_RING_ALIGN,
> - dev,
> - false,
> - ctx,
> - (void __force *)va, vop_notify, callback, name);
> - if (!vq) {
> - err = -ENOMEM;
> - goto unmap;
> - }
> +
>   info = va + _vr_size;
>   magic = ioread32(&info->magic);
>  
> @@ -341,7 +350,6 @@ static struct virtqueue *vop_find_vq(struct virtio_device *dev,
>   goto unmap;
>   }
>  
> - /* Allocate and reassign used ring now */
>   vdev->used_size[index] = PAGE_ALIGN(sizeof(__u16) * 3 +
>sizeof(struct vring_used_elem) *
>le16_to_cpu(config.num));
> @@ -351,8 +359,17 @@ static struct virtqueue *vop_find_vq(struct virtio_device *dev,
>   err = -ENOMEM;
>   dev_err(_vop_dev(vdev), "%s %d err %d\n",
>   __func__, __LINE__, err);
> - goto del_vq;
> + goto unmap;
> + }
> +
> + vq = vop_new_virtqueue(index, le16_to_cpu(config.num), dev, ctx,
> +(void __force *)va, vop_notify, callback,
> +name, used);
> + if (!vq) {
> + err = -ENOMEM;
> + goto free_used;
>   }
> +
>   vdev->used[index] = dma_map_single(&vpdev->dev, used,
>   vdev->used_size[index],
>  

Re: [PATCH 4/5] virtio: Introduce virtio_max_dma_size()

2019-01-30 Thread Lendacky, Thomas
On 1/29/19 2:43 AM, Joerg Roedel wrote:
> From: Joerg Roedel 
> 
> This function returns the maximum segment size for a single
> dma transaction of a virtio device. The possible limit comes
> from the SWIOTLB implementation in the Linux kernel, that
> has an upper limit of (currently) 256kb of contiguous
> memory it can map. Other DMA-API implementations might also
> have limits.
> 
> Use the new dma_max_mapping_size() function to determine the
> maximum mapping size when DMA-API is in use for virtio.
> 
> Reviewed-by: Konrad Rzeszutek Wilk 
> Signed-off-by: Joerg Roedel 
> ---
>  drivers/virtio/virtio_ring.c | 10 ++
>  include/linux/virtio.h   |  2 ++
>  2 files changed, 12 insertions(+)
> 
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index cd7e755484e3..9ca3fe6af9fa 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -266,6 +266,16 @@ static bool vring_use_dma_api(struct virtio_device *vdev)
>   return false;
>  }
>  
> +size_t virtio_max_dma_size(struct virtio_device *vdev)
> +{
> + size_t max_segment_size = SIZE_MAX;
> +
> + if (vring_use_dma_api(vdev))
> + max_segment_size = dma_max_mapping_size(&vdev->dev);
> +
> + return max_segment_size;
> +}

When I build with these patches and with the virtio devices as modules I
get a build failure. Looks like this needs an EXPORT_SYMBOL_GPL().

Thanks,
Tom

> +
>  static void *vring_alloc_queue(struct virtio_device *vdev, size_t size,
> dma_addr_t *dma_handle, gfp_t flag)
>  {
> diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> index fa1b5da2804e..673fe3ef3607 100644
> --- a/include/linux/virtio.h
> +++ b/include/linux/virtio.h
> @@ -157,6 +157,8 @@ int virtio_device_freeze(struct virtio_device *dev);
>  int virtio_device_restore(struct virtio_device *dev);
>  #endif
>  
> +size_t virtio_max_dma_size(struct virtio_device *vdev);
> +
>  #define virtio_device_for_each_vq(vdev, vq) \
>   list_for_each_entry(vq, &vdev->vqs, list)
>  
> 


Re: [PATCH 3/5] dma: Introduce dma_max_mapping_size()

2019-01-30 Thread Lendacky, Thomas
On 1/29/19 2:43 AM, Joerg Roedel wrote:
> From: Joerg Roedel 
> 
> The function returns the maximum size that can be mapped
> using DMA-API functions. The patch also adds the
> implementation for direct DMA and a new dma_map_ops pointer
> so that other implementations can expose their limit.
> 
> Reviewed-by: Konrad Rzeszutek Wilk 
> Signed-off-by: Joerg Roedel 
> ---
>  Documentation/DMA-API.txt   |  8 
>  include/linux/dma-mapping.h | 16 
>  kernel/dma/direct.c | 11 +++
>  3 files changed, 35 insertions(+)
> 
> diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
> index e133ccd60228..acfe3d0f78d1 100644
> --- a/Documentation/DMA-API.txt
> +++ b/Documentation/DMA-API.txt
> @@ -195,6 +195,14 @@ Requesting the required mask does not alter the current mask.  If you
>  wish to take advantage of it, you should issue a dma_set_mask()
>  call to set the mask to the value returned.
>  
> +::
> +
> + size_t
> + dma_direct_max_mapping_size(struct device *dev);
> +
> +Returns the maximum size of a mapping for the device. The size parameter
> +of the mapping functions like dma_map_single(), dma_map_page() and
> +others should not be larger than the returned value.
>  
>  Part Id - Streaming DMA mappings
>  
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index f6ded992c183..a3ca8a71a704 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -130,6 +130,7 @@ struct dma_map_ops {
>   enum dma_data_direction direction);
>   int (*dma_supported)(struct device *dev, u64 mask);
>   u64 (*get_required_mask)(struct device *dev);
> + size_t (*max_mapping_size)(struct device *dev);
>  };
>  
>  #define DMA_MAPPING_ERROR(~(dma_addr_t)0)
> @@ -257,6 +258,8 @@ static inline void dma_direct_sync_sg_for_cpu(struct device *dev,
>  }
>  #endif
>  
> +size_t dma_direct_max_mapping_size(struct device *dev);
> +
>  #ifdef CONFIG_HAS_DMA
>  #include 
>  
> @@ -440,6 +443,19 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
>   return 0;
>  }
>  
> +static inline size_t dma_max_mapping_size(struct device *dev)
> +{
> + const struct dma_map_ops *ops = get_dma_ops(dev);
> + size_t size = SIZE_MAX;
> +
> + if (dma_is_direct(ops))
> + size = dma_direct_max_mapping_size(dev);
> + else if (ops && ops->max_mapping_size)
> + size = ops->max_mapping_size(dev);
> +
> + return size;
> +}
> +
>  void *dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
>   gfp_t flag, unsigned long attrs);
>  void dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 355d16acee6d..6310ad01f915 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -380,3 +380,14 @@ int dma_direct_supported(struct device *dev, u64 mask)
>*/
>   return mask >= __phys_to_dma(dev, min_mask);
>  }
> +
> +size_t dma_direct_max_mapping_size(struct device *dev)
> +{
> + size_t size = SIZE_MAX;
> +
> + /* If SWIOTLB is active, use its maximum mapping size */
> + if (is_swiotlb_active())
> + size = swiotlb_max_mapping_size(dev);
> +
> + return size;
> +}

When I build with these patches and with the virtio devices as modules I
get a build failure. Looks like this needs an EXPORT_SYMBOL() (I'm
assuming EXPORT_SYMBOL for DMA functions and not EXPORT_SYMBOL_GPL?).

Thanks,
Tom

> 


Re: [PATCH 0/5 v5] Fix virtio-blk issue with SWIOTLB

2019-01-30 Thread Lendacky, Thomas
On 1/30/19 10:40 AM, Joerg Roedel wrote:
> Hi,
> 
> here is the next version of this patch-set. Previous
> versions can be found here:
> 
>   V1: https://lore.kernel.org/lkml/20190110134433.15672-1-j...@8bytes.org/
> 
>   V2: https://lore.kernel.org/lkml/20190115132257.6426-1-j...@8bytes.org/
> 
>   V3: https://lore.kernel.org/lkml/20190123163049.24863-1-j...@8bytes.org/
> 
>   V4: https://lore.kernel.org/lkml/20190129084342.26030-1-j...@8bytes.org/
> 
> The problem solved here is a limitation of the SWIOTLB implementation,
> which does not support allocations larger than 256kb.  When the
> virtio-blk driver tries to read/write a block larger than that, the
> allocation of the dma-handle fails and an IO error is reported.
> 
> Changes to v4 are:
> 
>   - Added Reviewed-by tags from Christoph
> 
>   - Added missing EXPORT_SYMBOL(_GPL) lines
> 
> Please review.

Looks good. Booted and tested using an SEV guest without any issues.

Tested-by: Tom Lendacky 

Thanks,
Tom

> 
> Thanks,
> 
>   Joerg
> Joerg Roedel (5):
>   swiotlb: Introduce swiotlb_max_mapping_size()
>   swiotlb: Add is_swiotlb_active() function
>   dma: Introduce dma_max_mapping_size()
>   virtio: Introduce virtio_max_dma_size()
>   virtio-blk: Consider virtio_max_dma_size() for maximum segment size
> 
>  Documentation/DMA-API.txt|  8 
>  drivers/block/virtio_blk.c   | 10 ++
>  drivers/virtio/virtio_ring.c | 11 +++
>  include/linux/dma-mapping.h  | 16 
>  include/linux/swiotlb.h  | 11 +++
>  include/linux/virtio.h   |  2 ++
>  kernel/dma/direct.c  | 12 
>  kernel/dma/swiotlb.c | 14 ++
>  8 files changed, 80 insertions(+), 4 deletions(-)
> 


[PATCH 5/5] virtio-blk: Consider virtio_max_dma_size() for maximum segment size

2019-01-30 Thread Joerg Roedel
From: Joerg Roedel 

Segments can't be larger than the maximum DMA mapping size
supported on the platform. Take that into account when
setting the maximum segment size for a block device.

Reviewed-by: Konrad Rzeszutek Wilk 
Reviewed-by: Christoph Hellwig 
Signed-off-by: Joerg Roedel 
---
 drivers/block/virtio_blk.c | 10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index b16a887bbd02..4bc083b7c9b5 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -723,7 +723,7 @@ static int virtblk_probe(struct virtio_device *vdev)
struct request_queue *q;
int err, index;
 
-   u32 v, blk_size, sg_elems, opt_io_size;
+   u32 v, blk_size, max_size, sg_elems, opt_io_size;
u16 min_io_size;
u8 physical_block_exp, alignment_offset;
 
@@ -826,14 +826,16 @@ static int virtblk_probe(struct virtio_device *vdev)
/* No real sector limit. */
blk_queue_max_hw_sectors(q, -1U);
 
+   max_size = virtio_max_dma_size(vdev);
+
/* Host can optionally specify maximum segment size and number of
 * segments. */
err = virtio_cread_feature(vdev, VIRTIO_BLK_F_SIZE_MAX,
   struct virtio_blk_config, size_max, &v);
if (!err)
-   blk_queue_max_segment_size(q, v);
-   else
-   blk_queue_max_segment_size(q, -1U);
+   max_size = min(max_size, v);
+
+   blk_queue_max_segment_size(q, max_size);
 
/* Host can optionally specify the block size of the device */
err = virtio_cread_feature(vdev, VIRTIO_BLK_F_BLK_SIZE,
-- 
2.17.1



[PATCH 4/5] virtio: Introduce virtio_max_dma_size()

2019-01-30 Thread Joerg Roedel
From: Joerg Roedel 

This function returns the maximum segment size for a single
dma transaction of a virtio device. The possible limit comes
from the SWIOTLB implementation in the Linux kernel, that
has an upper limit of (currently) 256kb of contiguous
memory it can map. Other DMA-API implementations might also
have limits.

Use the new dma_max_mapping_size() function to determine the
maximum mapping size when DMA-API is in use for virtio.

Reviewed-by: Konrad Rzeszutek Wilk 
Reviewed-by: Christoph Hellwig 
Signed-off-by: Joerg Roedel 
---
 drivers/virtio/virtio_ring.c | 11 +++
 include/linux/virtio.h   |  2 ++
 2 files changed, 13 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index cd7e755484e3..8a31c6862b2b 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -266,6 +266,17 @@ static bool vring_use_dma_api(struct virtio_device *vdev)
return false;
 }
 
+size_t virtio_max_dma_size(struct virtio_device *vdev)
+{
+   size_t max_segment_size = SIZE_MAX;
+
+   if (vring_use_dma_api(vdev))
+   max_segment_size = dma_max_mapping_size(&vdev->dev);
+
+   return max_segment_size;
+}
+EXPORT_SYMBOL_GPL(virtio_max_dma_size);
+
 static void *vring_alloc_queue(struct virtio_device *vdev, size_t size,
  dma_addr_t *dma_handle, gfp_t flag)
 {
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index fa1b5da2804e..673fe3ef3607 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -157,6 +157,8 @@ int virtio_device_freeze(struct virtio_device *dev);
 int virtio_device_restore(struct virtio_device *dev);
 #endif
 
+size_t virtio_max_dma_size(struct virtio_device *vdev);
+
 #define virtio_device_for_each_vq(vdev, vq) \
	list_for_each_entry(vq, &vdev->vqs, list)
 
-- 
2.17.1



[PATCH 3/5] dma: Introduce dma_max_mapping_size()

2019-01-30 Thread Joerg Roedel
From: Joerg Roedel 

The function returns the maximum size that can be mapped
using DMA-API functions. The patch also adds the
implementation for direct DMA and a new dma_map_ops pointer
so that other implementations can expose their limit.

Reviewed-by: Konrad Rzeszutek Wilk 
Reviewed-by: Christoph Hellwig 
Signed-off-by: Joerg Roedel 
---
 Documentation/DMA-API.txt   |  8 
 include/linux/dma-mapping.h | 16 
 kernel/dma/direct.c | 12 
 3 files changed, 36 insertions(+)

diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
index e133ccd60228..acfe3d0f78d1 100644
--- a/Documentation/DMA-API.txt
+++ b/Documentation/DMA-API.txt
@@ -195,6 +195,14 @@ Requesting the required mask does not alter the current mask.  If you
 wish to take advantage of it, you should issue a dma_set_mask()
 call to set the mask to the value returned.
 
+::
+
+   size_t
+   dma_direct_max_mapping_size(struct device *dev);
+
+Returns the maximum size of a mapping for the device. The size parameter
+of the mapping functions like dma_map_single(), dma_map_page() and
+others should not be larger than the returned value.
 
 Part Id - Streaming DMA mappings
 
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index f6ded992c183..a3ca8a71a704 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -130,6 +130,7 @@ struct dma_map_ops {
enum dma_data_direction direction);
int (*dma_supported)(struct device *dev, u64 mask);
u64 (*get_required_mask)(struct device *dev);
+   size_t (*max_mapping_size)(struct device *dev);
 };
 
 #define DMA_MAPPING_ERROR  (~(dma_addr_t)0)
@@ -257,6 +258,8 @@ static inline void dma_direct_sync_sg_for_cpu(struct device *dev,
 }
 #endif
 
+size_t dma_direct_max_mapping_size(struct device *dev);
+
 #ifdef CONFIG_HAS_DMA
 #include 
 
@@ -440,6 +443,19 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
return 0;
 }
 
+static inline size_t dma_max_mapping_size(struct device *dev)
+{
+   const struct dma_map_ops *ops = get_dma_ops(dev);
+   size_t size = SIZE_MAX;
+
+   if (dma_is_direct(ops))
+   size = dma_direct_max_mapping_size(dev);
+   else if (ops && ops->max_mapping_size)
+   size = ops->max_mapping_size(dev);
+
+   return size;
+}
+
 void *dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
gfp_t flag, unsigned long attrs);
 void dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 355d16acee6d..81ca8170b928 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -380,3 +380,15 @@ int dma_direct_supported(struct device *dev, u64 mask)
 */
return mask >= __phys_to_dma(dev, min_mask);
 }
+
+size_t dma_direct_max_mapping_size(struct device *dev)
+{
+   size_t size = SIZE_MAX;
+
+   /* If SWIOTLB is active, use its maximum mapping size */
+   if (is_swiotlb_active())
+   size = swiotlb_max_mapping_size(dev);
+
+   return size;
+}
+EXPORT_SYMBOL(dma_direct_max_mapping_size);
-- 
2.17.1
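The dispatch in dma_max_mapping_size() is a small pattern worth seeing on its own: an optional ops callback with a SIZE_MAX fallback. This userspace model mirrors that logic; the struct and function names are illustrative stand-ins, not kernel APIs.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for struct dma_map_ops with only the new callback. */
struct fake_dma_map_ops {
	size_t (*max_mapping_size)(void *dev);
};

/* Example backend limit, e.g. a bounce-buffer slot size. */
static size_t swiotlb_like_limit(void *dev)
{
	(void)dev;
	return 256 * 1024;
}

/*
 * Model of dma_max_mapping_size(): ask the ops if they expose a
 * limit, otherwise report SIZE_MAX ("no limit").
 */
static size_t model_dma_max_mapping_size(const struct fake_dma_map_ops *ops,
					 void *dev)
{
	size_t size = SIZE_MAX;

	if (ops && ops->max_mapping_size)
		size = ops->max_mapping_size(dev);

	return size;
}
```

Callers then compare their intended mapping length against the returned value before calling the mapping functions.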



[PATCH v2 6/6] drm/virtio: move virtio_gpu_cmd_create_resource call into virtio_gpu_object_create

2019-01-30 Thread Gerd Hoffmann
Specifically, call virtio_gpu_cmd_create_resource() before ttm_bo_init(),
so the host-side resource is already created when ttm calls the
virtio_gpu_ttm_tt_bind() callback (which in turn calls
virtio_gpu_object_attach()).

With that in place, virtio_gpu_object_attach() will never be called with
an object which has not yet been created, so the extra
virtio_gpu_object_attach() calls done after
virtio_gpu_cmd_create_resource() are no longer needed.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/virtio/virtgpu_drv.h|  2 ++
 drivers/gpu/drm/virtio/virtgpu_gem.c| 12 +---
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  | 10 +-
 drivers/gpu/drm/virtio/virtgpu_object.c | 10 --
 drivers/gpu/drm/virtio/virtgpu_vq.c |  4 ++--
 5 files changed, 14 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 3265e62725..52f3950f82 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -56,7 +56,9 @@ struct virtio_gpu_object_params {
uint32_t height;
unsigned long size;
bool pinned;
+   bool dumb;
/* 3d */
+   bool virgl;
uint32_t target;
uint32_t bind;
uint32_t depth;
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 3a63ffcd4b..b5d7df17ac 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -82,9 +82,7 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
struct drm_device *dev,
struct drm_mode_create_dumb *args)
 {
-   struct virtio_gpu_device *vgdev = dev->dev_private;
struct drm_gem_object *gobj;
-   struct virtio_gpu_object *obj;
struct virtio_gpu_object_params params = { 0 };
int ret;
uint32_t pitch;
@@ -101,20 +99,12 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
params.width = args->width;
params.height = args->height;
params.size = args->size;
+   params.dumb = true;
ret = virtio_gpu_gem_create(file_priv, dev, &params, &gobj,
&args->handle);
if (ret)
goto fail;
 
-   obj = gem_to_virtio_gpu_obj(gobj);
-   virtio_gpu_cmd_create_resource(vgdev, obj, &params);
-
-   /* attach the object to the resource */
-   ret = virtio_gpu_object_attach(vgdev, obj, NULL);
-   if (ret)
-   goto fail;
-
-   obj->dumb = true;
args->pitch = pitch;
return ret;
 
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index da06ebbb3a..3a1c447098 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -300,6 +300,7 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
params.height = rc->height;
params.size = rc->size;
if (vgdev->has_virgl_3d) {
+   params.virgl = true;
params.target = rc->target;
params.bind = rc->bind;
params.depth = rc->depth;
@@ -317,15 +318,6 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
return PTR_ERR(qobj);
obj = &qobj->gem_base;
 
-   if (!vgdev->has_virgl_3d) {
-   virtio_gpu_cmd_create_resource(vgdev, qobj, &params);
-
-   ret = virtio_gpu_object_attach(vgdev, qobj, NULL);
-   } else {
-   virtio_gpu_cmd_resource_create_3d(vgdev, qobj, &params);
-   ret = virtio_gpu_object_attach(vgdev, qobj, NULL);
-   }
-
ret = drm_gem_handle_create(file_priv, obj, );
if (ret) {
drm_gem_object_release(obj);
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 62367e3f80..94da9e68d2 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -106,9 +106,15 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
kfree(bo);
return ret;
}
-   bo->dumb = false;
+   bo->dumb = params->dumb;
+
+   if (params->virgl) {
+   virtio_gpu_cmd_resource_create_3d(vgdev, bo, params);
+   } else {
+   virtio_gpu_cmd_create_resource(vgdev, bo, params);
+   }
+
virtio_gpu_init_ttm_placement(bo, params->pinned);
-
ret = ttm_bo_init(&vgdev->mman.bdev, &bo->tbo, params->size,
  ttm_bo_type_device, &bo->placement, 0,
  true, acc_size, NULL, NULL,
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index ca93ec6ca3..292663c192 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -932,8 +932,8 @@ int virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
struct scatterlist *sg;
int 

[PATCH v2 2/6] drm/virtio: use struct to pass params to virtio_gpu_object_create()

2019-01-30 Thread Gerd Hoffmann
Create virtio_gpu_object_params, use that to pass object parameters to
virtio_gpu_object_create.  This is just the first step, followup patches
will add more parameters to the struct.  The plan is to use the struct
for all object parameters.

Also drop the "kernel" parameter of virtio_gpu_alloc_object(); it is
unused and always false.
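The refactoring pattern the series builds on can be shown in a few lines of plain C. The names below are illustrative stand-ins, not the driver's code: the point is that once parameters live in a struct, later patches can add fields without touching every caller's signature.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Parameter struct, mirroring virtio_gpu_object_params; the extra
 * fields model what the follow-up patches add. */
struct obj_params {
	unsigned long size;
	bool pinned;
	uint32_t format, width, height;	/* added incrementally later */
};

/* Before: every new attribute changes this signature and all callers. */
static int create_obj_old(unsigned long size, bool pinned)
{
	(void)pinned;
	return size ? 0 : -1;
}

/* After: new attributes only touch the struct and the code using them. */
static int create_obj_new(const struct obj_params *p)
{
	return p->size ? 0 : -1;
}
```

Callers fill the struct with designated initializers, so unset fields default to zero, which also removes "always false" arguments like "kernel".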

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/virtio/virtgpu_drv.h| 15 ++-
 drivers/gpu/drm/virtio/virtgpu_gem.c| 17 ++---
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  | 11 ++-
 drivers/gpu/drm/virtio/virtgpu_object.c | 22 +-
 4 files changed, 35 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index d577cb76f5..40928980a2 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -50,6 +50,11 @@
 #define DRIVER_MINOR 1
 #define DRIVER_PATCHLEVEL 0
 
+struct virtio_gpu_object_params {
+   unsigned long size;
+   bool pinned;
+};
+
 struct virtio_gpu_object {
struct drm_gem_object gem_base;
uint32_t hw_res_handle;
@@ -217,16 +222,16 @@ int virtio_gpu_gem_init(struct virtio_gpu_device *vgdev);
 void virtio_gpu_gem_fini(struct virtio_gpu_device *vgdev);
 int virtio_gpu_gem_create(struct drm_file *file,
  struct drm_device *dev,
- uint64_t size,
+ struct virtio_gpu_object_params *params,
  struct drm_gem_object **obj_p,
  uint32_t *handle_p);
 int virtio_gpu_gem_object_open(struct drm_gem_object *obj,
   struct drm_file *file);
 void virtio_gpu_gem_object_close(struct drm_gem_object *obj,
 struct drm_file *file);
-struct virtio_gpu_object *virtio_gpu_alloc_object(struct drm_device *dev,
- size_t size, bool kernel,
- bool pinned);
+struct virtio_gpu_object*
+virtio_gpu_alloc_object(struct drm_device *dev,
+   struct virtio_gpu_object_params *params);
 int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
struct drm_device *dev,
struct drm_mode_create_dumb *args);
@@ -342,7 +347,7 @@ void virtio_gpu_fence_event_process(struct virtio_gpu_device *vdev,
 
 /* virtio_gpu_object */
 int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
-unsigned long size, bool kernel, bool pinned,
+struct virtio_gpu_object_params *params,
 struct virtio_gpu_object **bo_ptr);
 void virtio_gpu_object_kunmap(struct virtio_gpu_object *bo);
 int virtio_gpu_object_kmap(struct virtio_gpu_object *bo);
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index f065863939..b5f2d94ce5 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -34,15 +34,15 @@ void virtio_gpu_gem_free_object(struct drm_gem_object *gem_obj)
virtio_gpu_object_unref(&obj);
 }
 
-struct virtio_gpu_object *virtio_gpu_alloc_object(struct drm_device *dev,
- size_t size, bool kernel,
- bool pinned)
+struct virtio_gpu_object*
+virtio_gpu_alloc_object(struct drm_device *dev,
+   struct virtio_gpu_object_params *params)
 {
struct virtio_gpu_device *vgdev = dev->dev_private;
struct virtio_gpu_object *obj;
int ret;
 
-   ret = virtio_gpu_object_create(vgdev, size, kernel, pinned, &obj);
+   ret = virtio_gpu_object_create(vgdev, params, &obj);
if (ret)
return ERR_PTR(ret);
 
@@ -51,7 +51,7 @@ struct virtio_gpu_object *virtio_gpu_alloc_object(struct drm_device *dev,
 
 int virtio_gpu_gem_create(struct drm_file *file,
  struct drm_device *dev,
- uint64_t size,
+ struct virtio_gpu_object_params *params,
  struct drm_gem_object **obj_p,
  uint32_t *handle_p)
 {
@@ -59,7 +59,7 @@ int virtio_gpu_gem_create(struct drm_file *file,
int ret;
u32 handle;
 
-   obj = virtio_gpu_alloc_object(dev, size, false, false);
+   obj = virtio_gpu_alloc_object(dev, params);
if (IS_ERR(obj))
return PTR_ERR(obj);
 
@@ -85,6 +85,7 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
struct virtio_gpu_device *vgdev = dev->dev_private;
struct drm_gem_object *gobj;
struct virtio_gpu_object *obj;
+   struct virtio_gpu_object_params params = { 0 };
int ret;
uint32_t pitch;
uint32_t format;
@@ -96,7 +97,9 @@ int virtio_gpu_mode_dumb_create(struct drm_file 

[PATCH v2 5/6] drm/virtio: drop fencing in virtio_gpu_resource_create_ioctl

2019-01-30 Thread Gerd Hoffmann
There is no need to wait for completion here.

The host will process commands in submit order, so commands can
reference the new resource just fine even when queued up before
completion.

On the guest side there is no need to wait for completion either.  This
is different from resource destroy, where we have to make sure the
host has seen the destroy, and thus no longer uses the resource, before
releasing the pages backing it.
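The in-order argument above can be made concrete with a toy model: the host consumes the control queue strictly in submission order, so a CREATE queued before any USE of the same resource is always processed first, and no completion fence is required. This sketch is illustrative only, not driver code.

```c
#include <assert.h>
#include <stdbool.h>

/* Two command kinds, standing in for resource create and any command
 * that references the resource. */
enum cmd { CMD_CREATE, CMD_USE };

/*
 * Replay a queue in submission order; a USE is only valid if a
 * CREATE was submitted (not necessarily completed) before it.
 */
static bool replay_in_order(const enum cmd *q, int n)
{
	bool created = false;
	int i;

	for (i = 0; i < n; i++) {
		if (q[i] == CMD_CREATE)
			created = true;
		else if (!created)
			return false;	/* USE saw a missing resource */
	}
	return true;
}
```

Destroy is the asymmetric case: there, the guest must wait, because freeing the backing pages before the host processes the destroy would let the host touch freed memory.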

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/virtio/virtgpu_ioctl.c | 51 +-
 1 file changed, 1 insertion(+), 50 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 431e5d767e..da06ebbb3a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -279,10 +279,6 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
struct virtio_gpu_object *qobj;
struct drm_gem_object *obj;
uint32_t handle = 0;
-   struct list_head validate_list;
-   struct ttm_validate_buffer mainbuf;
-   struct virtio_gpu_fence *fence = NULL;
-   struct ww_acquire_ctx ticket;
struct virtio_gpu_object_params params = { 0 };
 
if (vgdev->has_virgl_3d == false) {
@@ -298,9 +294,6 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
return -EINVAL;
}
 
-   INIT_LIST_HEAD(&validate_list);
-   memset(&mainbuf, 0, sizeof(struct ttm_validate_buffer));
-
params.pinned = false,
params.format = rc->format;
params.width = rc->width;
@@ -329,62 +322,20 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
 
ret = virtio_gpu_object_attach(vgdev, qobj, NULL);
} else {
-   /* use a gem reference since unref list undoes them */
-   drm_gem_object_get(&qobj->gem_base);
-   mainbuf.bo = &qobj->tbo;
-   list_add(&mainbuf.head, &validate_list);
-
-   ret = virtio_gpu_object_list_validate(&ticket, &validate_list);
-   if (ret) {
-   DRM_DEBUG("failed to validate\n");
-   goto fail_unref;
-   }
-
-   fence = virtio_gpu_fence_alloc(vgdev);
-   if (!fence) {
-   ret = -ENOMEM;
-   goto fail_backoff;
-   }
-
virtio_gpu_cmd_resource_create_3d(vgdev, qobj, );
-   ret = virtio_gpu_object_attach(vgdev, qobj, fence);
-   if (ret) {
-   dma_fence_put(&fence->f);
-   goto fail_backoff;
-   }
-   ttm_eu_fence_buffer_objects(&ticket, &validate_list, &fence->f);
+   ret = virtio_gpu_object_attach(vgdev, qobj, NULL);
}
 
ret = drm_gem_handle_create(file_priv, obj, );
if (ret) {
-
drm_gem_object_release(obj);
-   if (vgdev->has_virgl_3d) {
-   virtio_gpu_unref_list(&validate_list);
-   dma_fence_put(&fence->f);
-   }
return ret;
}
drm_gem_object_put_unlocked(obj);
 
rc->res_handle = qobj->hw_res_handle; /* similiar to a VM address */
rc->bo_handle = handle;
-
-   if (vgdev->has_virgl_3d) {
-   virtio_gpu_unref_list(&validate_list);
-   dma_fence_put(&fence->f);
-   }
return 0;
-fail_backoff:
-   ttm_eu_backoff_reservation(&ticket, &validate_list);
-fail_unref:
-   if (vgdev->has_virgl_3d) {
-   virtio_gpu_unref_list(&validate_list);
-   dma_fence_put(&fence->f);
-   }
-//fail_obj:
-// drm_gem_object_handle_unreference_unlocked(obj);
-   return ret;
 }
 
 static int virtio_gpu_resource_info_ioctl(struct drm_device *dev, void *data,
-- 
2.9.3



[PATCH v2 1/6] drm/virtio: move virtio_gpu_object_{attach, detach} calls.

2019-01-30 Thread Gerd Hoffmann
Drop the dummy ttm backend implementation, add a real one for
TTM_PL_FLAG_TT objects.  The bind/unbind callbacks will call
virtio_gpu_object_{attach,detach}, to update the object state
on the host side, instead of invoking those calls from the
move_notify() callback.

With that in place the move and move_notify callbacks are not
needed any more, so drop them.
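The new bind/unbind callbacks recover the wrapping object from the embedded ttm member via container_of(), instead of stashing a vgdev pointer. A minimal userspace model of that idiom, with illustrative stand-in types:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal container_of, as used by virtio_gpu_ttm_tt_bind() to get
 * from the embedded ttm member back to the wrapping struct. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct inner { int x; };		/* stands in for ttm_dma_tt */
struct outer {				/* stands in for virtio_gpu_ttm_tt */
	int tag;
	struct inner member;
};

/* Given only the embedded member, recover the outer object. */
static struct outer *outer_from_inner(struct inner *p)
{
	return container_of(p, struct outer, member);
}
```

This works because the offset of the member within the outer struct is a compile-time constant, so no back-pointer needs to be stored.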

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/virtio/virtgpu_ttm.c | 92 ++--
 1 file changed, 24 insertions(+), 68 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_ttm.c b/drivers/gpu/drm/virtio/virtgpu_ttm.c
index 4bfbf25fab..77407976c7 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ttm.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ttm.c
@@ -194,42 +194,45 @@ static void virtio_gpu_ttm_io_mem_free(struct ttm_bo_device *bdev,
  */
 struct virtio_gpu_ttm_tt {
struct ttm_dma_tt   ttm;
-   struct virtio_gpu_device*vgdev;
-   u64 offset;
+   struct virtio_gpu_object*obj;
 };
 
-static int virtio_gpu_ttm_backend_bind(struct ttm_tt *ttm,
-  struct ttm_mem_reg *bo_mem)
+static int virtio_gpu_ttm_tt_bind(struct ttm_tt *ttm,
+ struct ttm_mem_reg *bo_mem)
 {
-   struct virtio_gpu_ttm_tt *gtt = (void *)ttm;
+   struct virtio_gpu_ttm_tt *gtt =
+   container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm);
+   struct virtio_gpu_device *vgdev =
+   virtio_gpu_get_vgdev(gtt->obj->tbo.bdev);
 
-   gtt->offset = (unsigned long)(bo_mem->start << PAGE_SHIFT);
-   if (!ttm->num_pages)
-   WARN(1, "nothing to bind %lu pages for mreg %p back %p!\n",
-ttm->num_pages, bo_mem, ttm);
-
-   /* Not implemented */
+   virtio_gpu_object_attach(vgdev, gtt->obj, NULL);
return 0;
 }
 
-static int virtio_gpu_ttm_backend_unbind(struct ttm_tt *ttm)
+static int virtio_gpu_ttm_tt_unbind(struct ttm_tt *ttm)
 {
-   /* Not implemented */
+   struct virtio_gpu_ttm_tt *gtt =
+   container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm);
+   struct virtio_gpu_device *vgdev =
+   virtio_gpu_get_vgdev(gtt->obj->tbo.bdev);
+
+   virtio_gpu_object_detach(vgdev, gtt->obj);
return 0;
 }
 
-static void virtio_gpu_ttm_backend_destroy(struct ttm_tt *ttm)
+static void virtio_gpu_ttm_tt_destroy(struct ttm_tt *ttm)
 {
-   struct virtio_gpu_ttm_tt *gtt = (void *)ttm;
+   struct virtio_gpu_ttm_tt *gtt =
+   container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm);
 
ttm_dma_tt_fini(&gtt->ttm);
kfree(gtt);
 }
 
-static struct ttm_backend_func virtio_gpu_backend_func = {
-   .bind = &virtio_gpu_ttm_backend_bind,
-   .unbind = &virtio_gpu_ttm_backend_unbind,
-   .destroy = &virtio_gpu_ttm_backend_destroy,
+static struct ttm_backend_func virtio_gpu_tt_func = {
+   .bind = &virtio_gpu_ttm_tt_bind,
+   .unbind = &virtio_gpu_ttm_tt_unbind,
+   .destroy = &virtio_gpu_ttm_tt_destroy,
 };
 
 static struct ttm_tt *virtio_gpu_ttm_tt_create(struct ttm_buffer_object *bo,
@@ -242,8 +245,8 @@ static struct ttm_tt *virtio_gpu_ttm_tt_create(struct ttm_buffer_object *bo,
gtt = kzalloc(sizeof(struct virtio_gpu_ttm_tt), GFP_KERNEL);
if (gtt == NULL)
return NULL;
-   gtt->ttm.ttm.func = &virtio_gpu_backend_func;
-   gtt->vgdev = vgdev;
+   gtt->ttm.ttm.func = &virtio_gpu_tt_func;
+   gtt->obj = container_of(bo, struct virtio_gpu_object, tbo);
if (ttm_dma_tt_init(&gtt->ttm, bo, page_flags)) {
kfree(gtt);
return NULL;
@@ -251,51 +254,6 @@ static struct ttm_tt *virtio_gpu_ttm_tt_create(struct ttm_buffer_object *bo,
return &gtt->ttm.ttm;
 }
 
-static void virtio_gpu_move_null(struct ttm_buffer_object *bo,
-struct ttm_mem_reg *new_mem)
-{
-   struct ttm_mem_reg *old_mem = &bo->mem;
-
-   BUG_ON(old_mem->mm_node != NULL);
-   *old_mem = *new_mem;
-   new_mem->mm_node = NULL;
-}
-
-static int virtio_gpu_bo_move(struct ttm_buffer_object *bo, bool evict,
- struct ttm_operation_ctx *ctx,
- struct ttm_mem_reg *new_mem)
-{
-   int ret;
-
-   ret = ttm_bo_wait(bo, ctx->interruptible, ctx->no_wait_gpu);
-   if (ret)
-   return ret;
-
-   virtio_gpu_move_null(bo, new_mem);
-   return 0;
-}
-
-static void virtio_gpu_bo_move_notify(struct ttm_buffer_object *tbo,
- bool evict,
- struct ttm_mem_reg *new_mem)
-{
-   struct virtio_gpu_object *bo;
-   struct virtio_gpu_device *vgdev;
-
-   bo = container_of(tbo, struct virtio_gpu_object, tbo);
-   vgdev = (struct virtio_gpu_device *)bo->gem_base.dev->dev_private;
-
-   if (!new_mem || (new_mem->placement & TTM_PL_FLAG_SYSTEM)) {
-   if (bo->hw_res_handle)
-   

[PATCH v2 3/6] drm/virtio: params struct for virtio_gpu_cmd_create_resource()

2019-01-30 Thread Gerd Hoffmann
Add format, width and height fields to the virtio_gpu_object_params
struct.  With that in place we can use the parameter struct for
virtio_gpu_cmd_create_resource() calls too.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/virtio/virtgpu_drv.h   |  7 ---
 drivers/gpu/drm/virtio/virtgpu_gem.c   |  8 
 drivers/gpu/drm/virtio/virtgpu_ioctl.c |  6 --
 drivers/gpu/drm/virtio/virtgpu_vq.c| 10 --
 4 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 40928980a2..a40215c10e 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -51,6 +51,9 @@
 #define DRIVER_PATCHLEVEL 0
 
 struct virtio_gpu_object_params {
+   uint32_t format;
+   uint32_t width;
+   uint32_t height;
unsigned long size;
bool pinned;
 };
@@ -248,9 +251,7 @@ int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev);
 void virtio_gpu_free_vbufs(struct virtio_gpu_device *vgdev);
 void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
struct virtio_gpu_object *bo,
-   uint32_t format,
-   uint32_t width,
-   uint32_t height);
+   struct virtio_gpu_object_params *params);
 void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
   uint32_t resource_id);
 void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index b5f2d94ce5..3a63ffcd4b 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -88,7 +88,6 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
struct virtio_gpu_object_params params = { 0 };
int ret;
uint32_t pitch;
-   uint32_t format;
 
if (args->bpp != 32)
return -EINVAL;
@@ -98,16 +97,17 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
args->size = ALIGN(args->size, PAGE_SIZE);
 
params.pinned = false,
+   params.format = virtio_gpu_translate_format(DRM_FORMAT_HOST_XRGB);
+   params.width = args->width;
+   params.height = args->height;
params.size = args->size;
ret = virtio_gpu_gem_create(file_priv, dev, &params, &gobj,
&args->handle);
if (ret)
goto fail;
 
-   format = virtio_gpu_translate_format(DRM_FORMAT_HOST_XRGB);
obj = gem_to_virtio_gpu_obj(gobj);
-   virtio_gpu_cmd_create_resource(vgdev, obj, format,
-  args->width, args->height);
+   virtio_gpu_cmd_create_resource(vgdev, obj, &params);
 
/* attach the object to the resource */
ret = virtio_gpu_object_attach(vgdev, obj, NULL);
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index fa7b958ca2..84c2216fd4 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -303,6 +303,9 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
memset(&mainbuf, 0, sizeof(struct ttm_validate_buffer));
 
params.pinned = false,
+   params.format = rc->format;
+   params.width = rc->width;
+   params.height = rc->height;
params.size = rc->size;
 
/* allocate a single page size object */
@@ -315,8 +318,7 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
obj = &qobj->gem_base;
 
if (!vgdev->has_virgl_3d) {
-   virtio_gpu_cmd_create_resource(vgdev, qobj, rc->format,
-  rc->width, rc->height);
+   virtio_gpu_cmd_create_resource(vgdev, qobj, &params);
 
ret = virtio_gpu_object_attach(vgdev, qobj, NULL);
} else {
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 6bc2008b0d..363b8b8577 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -376,9 +376,7 @@ static int virtio_gpu_queue_cursor(struct virtio_gpu_device *vgdev,
 /* create a basic resource */
 void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
struct virtio_gpu_object *bo,
-   uint32_t format,
-   uint32_t width,
-   uint32_t height)
+   struct virtio_gpu_object_params *params)
 {
struct virtio_gpu_resource_create_2d *cmd_p;
struct virtio_gpu_vbuffer *vbuf;
@@ -388,9 +386,9 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
 
cmd_p->hdr.type = 

Re: INFO: task hung in vhost_init_device_iotlb

2019-01-30 Thread Dmitry Vyukov via Virtualization
On Tue, Jan 29, 2019 at 5:06 PM Michael S. Tsirkin  wrote:
>
> On Tue, Jan 29, 2019 at 01:22:02AM -0800, syzbot wrote:
> > Hello,
> >
> > syzbot found the following crash on:
> >
> > HEAD commit:983542434e6b Merge tag 'edac_fix_for_5.0' of git://git.ker..
> > git tree:   upstream
> > console output: https://syzkaller.appspot.com/x/log.txt?x=17476498c0
> > kernel config:  https://syzkaller.appspot.com/x/.config?x=505743eba4e4f68
> > dashboard link: https://syzkaller.appspot.com/bug?extid=40e28a8bd59d10ed0c42
> > compiler:   gcc (GCC) 9.0.0 20181231 (experimental)
> >
> > Unfortunately, I don't have any reproducer for this crash yet.
>
> Hmm nothing obvious below. Generic corruption elsewhere?

Hard to say, a silent memory corruption is definitely possible.
If there is nothing obvious let's wait, maybe syzbot will come up with
a repro or we get more such hangs so that it will be possible to rule
out flakes/corruptions.


> > IMPORTANT: if you fix the bug, please add the following tag to the commit:
> > Reported-by: syzbot+40e28a8bd59d10ed0...@syzkaller.appspotmail.com
> >
> > protocol 88fb is buggy, dev hsr_slave_1
> > protocol 88fb is buggy, dev hsr_slave_0
> > protocol 88fb is buggy, dev hsr_slave_1
> > protocol 88fb is buggy, dev hsr_slave_0
> > protocol 88fb is buggy, dev hsr_slave_1
> > INFO: task syz-executor5:9417 blocked for more than 140 seconds.
> >   Not tainted 5.0.0-rc3+ #48
> > "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > syz-executor5   D27576  9417   8469 0x0004
> > Call Trace:
> >  context_switch kernel/sched/core.c:2831 [inline]
> >  __schedule+0x897/0x1e60 kernel/sched/core.c:3472
> >  schedule+0xfe/0x350 kernel/sched/core.c:3516
> > protocol 88fb is buggy, dev hsr_slave_0
> > protocol 88fb is buggy, dev hsr_slave_1
> >  schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:3574
> >  __mutex_lock_common kernel/locking/mutex.c:1002 [inline]
> >  __mutex_lock+0xa3b/0x1670 kernel/locking/mutex.c:1072
> >  mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1087
> >  vhost_init_device_iotlb+0x124/0x280 drivers/vhost/vhost.c:1606
> >  vhost_net_set_features drivers/vhost/net.c:1674 [inline]
> >  vhost_net_ioctl+0x1282/0x1c00 drivers/vhost/net.c:1739
> >  vfs_ioctl fs/ioctl.c:46 [inline]
> >  file_ioctl fs/ioctl.c:509 [inline]
> >  do_vfs_ioctl+0x107b/0x17d0 fs/ioctl.c:696
> >  ksys_ioctl+0xab/0xd0 fs/ioctl.c:713
> >  __do_sys_ioctl fs/ioctl.c:720 [inline]
> >  __se_sys_ioctl fs/ioctl.c:718 [inline]
> >  __x64_sys_ioctl+0x73/0xb0 fs/ioctl.c:718
> >  do_syscall_64+0x1a3/0x800 arch/x86/entry/common.c:290
> > protocol 88fb is buggy, dev hsr_slave_0
> > protocol 88fb is buggy, dev hsr_slave_1
> >  entry_SYSCALL_64_after_hwframe+0x49/0xbe
> > RIP: 0033:0x458099
> > Code: Bad RIP value.
> > RSP: 002b:7efd7ca9bc78 EFLAGS: 0246 ORIG_RAX: 0010
> > RAX: ffda RBX: 0003 RCX: 00458099
> > RDX: 2080 RSI: 4008af00 RDI: 0003
> > RBP: 0073bfa0 R08:  R09: 
> > R10:  R11: 0246 R12: 7efd7ca9c6d4
> > R13: 004c295b R14: 004d5280 R15: 
> > INFO: task syz-executor5:9418 blocked for more than 140 seconds.
> >   Not tainted 5.0.0-rc3+ #48
> > "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > syz-executor5   D27800  9418   8469 0x0004
> > Call Trace:
> >  context_switch kernel/sched/core.c:2831 [inline]
> >  __schedule+0x897/0x1e60 kernel/sched/core.c:3472
> >  schedule+0xfe/0x350 kernel/sched/core.c:3516
> >  schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:3574
> >  __mutex_lock_common kernel/locking/mutex.c:1002 [inline]
> >  __mutex_lock+0xa3b/0x1670 kernel/locking/mutex.c:1072
> >  mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1087
> >  vhost_net_set_owner drivers/vhost/net.c:1697 [inline]
> >  vhost_net_ioctl+0x426/0x1c00 drivers/vhost/net.c:1754
> >  vfs_ioctl fs/ioctl.c:46 [inline]
> >  file_ioctl fs/ioctl.c:509 [inline]
> >  do_vfs_ioctl+0x107b/0x17d0 fs/ioctl.c:696
> >  ksys_ioctl+0xab/0xd0 fs/ioctl.c:713
> >  __do_sys_ioctl fs/ioctl.c:720 [inline]
> >  __se_sys_ioctl fs/ioctl.c:718 [inline]
> >  __x64_sys_ioctl+0x73/0xb0 fs/ioctl.c:718
> >  do_syscall_64+0x1a3/0x800 arch/x86/entry/common.c:290
> >  entry_SYSCALL_64_after_hwframe+0x49/0xbe
> > RIP: 0033:0x458099
> > Code: Bad RIP value.
> > RSP: 002b:7efd7ca7ac78 EFLAGS: 0246 ORIG_RAX: 0010
> > RAX: ffda RBX: 0003 RCX: 00458099
> > RDX:  RSI: 4001af01 RDI: 0003
> > RBP: 0073c040 R08:  R09: 
> > R10:  R11: 0246 R12: 7efd7ca7b6d4
> > R13: 004c33a4 R14: 004d5e80 R15: 
> >
> > Showing all locks held in the system:
> > 1 lock held by khungtaskd/1040:
> >  #0: b7479fbe 

Re: [RFC PATCH] virtio_ring: Use DMA API if guest memory is encrypted

2019-01-30 Thread Christoph Hellwig
On Tue, Jan 29, 2019 at 09:36:08PM -0500, Michael S. Tsirkin wrote:
> This has been discussed ad nauseum. virtio is all about compatibility.
> Losing a couple of lines of code isn't worth breaking working setups.
> People that want "just use DMA API no tricks" now have the option.
> Setting a flag in a feature bit map is literally a single line
> of code in the hypervisor. So stop pushing for breaking working
> legacy setups and just fix it in the right place.

I agree with the legacy aspect.  What I am missing is an extremely
strong wording that says you SHOULD always set this flag for new
hosts, including an explanation why.