Re: [PATCH v3] virtio_pmem: fix sparse warning

2019-07-16 Thread Dan Williams
On Fri, Jul 12, 2019 at 7:11 AM Michael S. Tsirkin  wrote:
>
> On Fri, Jul 12, 2019 at 10:46:10AM +0530, Pankaj Gupta wrote:
> > This patch fixes the sparse warnings below, related to the __virtio
> > type in the virtio pmem driver, as reported by the Intel test
> > bot on the linux-next tree.
> >
> > nd_virtio.c:56:28: warning: incorrect type in assignment (different base types)
> > nd_virtio.c:56:28:    expected unsigned int [unsigned] [usertype] type
> > nd_virtio.c:56:28:    got restricted __virtio32
> > nd_virtio.c:93:59: warning: incorrect type in argument 2 (different base types)
> > nd_virtio.c:93:59:    expected restricted __virtio32 [usertype] val
> > nd_virtio.c:93:59:    got unsigned int [unsigned] [usertype] ret
> >
> > Reported-by: kbuild test robot 
> > Signed-off-by: Pankaj Gupta 
>
> Acked-by: Michael S. Tsirkin 
>
> Pls merge - I assume nvdimm tree?

Yes, I'll push this with the rest to Linus tomorrow morning.
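
For reference, here is a minimal sketch of the endian-annotation pattern that
silences this class of sparse warning, using the generic virtio byte-order
helpers; the structure, field and variable names below are hypothetical and
are not the actual nd_virtio.c hunk:

#include <linux/errno.h>
#include <linux/virtio.h>
#include <linux/virtio_config.h>

struct example_req {
	__virtio32 type;	/* device-endian on the wire, hence __virtio32 */
};

static int example_fill(struct virtio_device *vdev, struct example_req *req,
			u32 type, __virtio32 wire_ret)
{
	u32 ret;

	/* CPU-endian -> virtio-endian before writing a wire field */
	req->type = cpu_to_virtio32(vdev, type);

	/* virtio-endian -> CPU-endian before using a wire value in C code */
	ret = virtio32_to_cpu(vdev, wire_ret);

	return ret ? -EIO : 0;
}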


[PATCH v3 2/3] drm: plumb attaching dev thru to prime_pin/unpin

2019-07-16 Thread Rob Clark
From: Rob Clark 

Needed in the following patch for cache operations.

Signed-off-by: Rob Clark 
---
v3: rebased on drm-tip

 drivers/gpu/drm/drm_gem.c   | 8 
 drivers/gpu/drm/drm_internal.h  | 4 ++--
 drivers/gpu/drm/drm_prime.c | 4 ++--
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 4 ++--
 drivers/gpu/drm/msm/msm_drv.h   | 4 ++--
 drivers/gpu/drm/msm/msm_gem_prime.c | 4 ++--
 drivers/gpu/drm/nouveau/nouveau_gem.h   | 4 ++--
 drivers/gpu/drm/nouveau/nouveau_prime.c | 4 ++--
 drivers/gpu/drm/qxl/qxl_prime.c | 4 ++--
 drivers/gpu/drm/radeon/radeon_prime.c   | 4 ++--
 drivers/gpu/drm/vgem/vgem_drv.c | 4 ++--
 include/drm/drm_drv.h   | 5 ++---
 12 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 84689ccae885..af2549c45027 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1215,22 +1215,22 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
obj->dev->driver->gem_print_info(p, indent, obj);
 }
 
-int drm_gem_pin(struct drm_gem_object *obj)
+int drm_gem_pin(struct drm_gem_object *obj, struct device *dev)
 {
if (obj->funcs && obj->funcs->pin)
return obj->funcs->pin(obj);
else if (obj->dev->driver->gem_prime_pin)
-   return obj->dev->driver->gem_prime_pin(obj);
+   return obj->dev->driver->gem_prime_pin(obj, dev);
else
return 0;
 }
 
-void drm_gem_unpin(struct drm_gem_object *obj)
+void drm_gem_unpin(struct drm_gem_object *obj, struct device *dev)
 {
if (obj->funcs && obj->funcs->unpin)
obj->funcs->unpin(obj);
else if (obj->dev->driver->gem_prime_unpin)
-   obj->dev->driver->gem_prime_unpin(obj);
+   obj->dev->driver->gem_prime_unpin(obj, dev);
 }
 
 void *drm_gem_vmap(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
index 51a2055c8f18..e64090373e3a 100644
--- a/drivers/gpu/drm/drm_internal.h
+++ b/drivers/gpu/drm/drm_internal.h
@@ -133,8 +133,8 @@ void drm_gem_release(struct drm_device *dev, struct drm_file *file_private);
 void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
const struct drm_gem_object *obj);
 
-int drm_gem_pin(struct drm_gem_object *obj);
-void drm_gem_unpin(struct drm_gem_object *obj);
+int drm_gem_pin(struct drm_gem_object *obj, struct device *dev);
+void drm_gem_unpin(struct drm_gem_object *obj, struct device *dev);
 void *drm_gem_vmap(struct drm_gem_object *obj);
 void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
 
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 189d980402ad..126860432ff9 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -575,7 +575,7 @@ int drm_gem_map_attach(struct dma_buf *dma_buf,
 {
struct drm_gem_object *obj = dma_buf->priv;
 
-   return drm_gem_pin(obj);
+   return drm_gem_pin(obj, attach->dev);
 }
 EXPORT_SYMBOL(drm_gem_map_attach);
 
@@ -593,7 +593,7 @@ void drm_gem_map_detach(struct dma_buf *dma_buf,
 {
struct drm_gem_object *obj = dma_buf->priv;
 
-   drm_gem_unpin(obj);
+   drm_gem_unpin(obj, attach->dev);
 }
 EXPORT_SYMBOL(drm_gem_map_detach);
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index a05292e8ed6f..67e69a5f00f2 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -43,7 +43,7 @@ int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
return etnaviv_obj->ops->mmap(etnaviv_obj, vma);
 }
 
-int etnaviv_gem_prime_pin(struct drm_gem_object *obj)
+int etnaviv_gem_prime_pin(struct drm_gem_object *obj, struct device *dev)
 {
if (!obj->import_attach) {
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
@@ -55,7 +55,7 @@ int etnaviv_gem_prime_pin(struct drm_gem_object *obj)
return 0;
 }
 
-void etnaviv_gem_prime_unpin(struct drm_gem_object *obj)
+void etnaviv_gem_prime_unpin(struct drm_gem_object *obj, struct device *dev)
 {
if (!obj->import_attach) {
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index ee7b512dc158..0eea68618b68 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -288,8 +288,8 @@ void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int msm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach, struct sg_table *sg);
-int msm_gem_prime_pin(struct drm_gem_object *obj);
-void 
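
To make the interface change concrete, here is a hedged sketch of what a
converted driver callback looks like under the new prototype; the driver name
and the empty bodies are hypothetical, the real per-driver conversions are in
the hunks above:

#include <linux/device.h>
#include <drm/drm_drv.h>
#include <drm/drm_gem.h>

/* gem_prime_pin/unpin now also receive the attaching device (attach->dev),
 * which the follow-on patch uses for per-importer cache maintenance. */
static int foo_gem_prime_pin(struct drm_gem_object *obj, struct device *dev)
{
	/* pin the backing pages; 'dev' identifies the importing device */
	return 0;
}

static void foo_gem_prime_unpin(struct drm_gem_object *obj, struct device *dev)
{
	/* drop the pin taken on behalf of 'dev' */
}

static struct drm_driver foo_driver = {
	.gem_prime_pin = foo_gem_prime_pin,
	.gem_prime_unpin = foo_gem_prime_unpin,
};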

Re: [PATCH v2 2/3] drm: plumb attaching dev thru to prime_pin/unpin

2019-07-16 Thread Chris Wilson
Quoting Rob Clark (2019-07-16 18:43:22)
> From: Rob Clark 
> 
> Needed in the following patch for cache operations.

What's the base for this patch? (I'm missing the ancestor for drm_gem.c)
-Chris


[PATCH v2 2/3] drm: plumb attaching dev thru to prime_pin/unpin

2019-07-16 Thread Rob Clark
From: Rob Clark 

Needed in the following patch for cache operations.

Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/drm_gem.c   | 10 ++
 drivers/gpu/drm/drm_gem_vram_helper.c   |  6 --
 drivers/gpu/drm/drm_prime.c |  4 ++--
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |  4 ++--
 drivers/gpu/drm/msm/msm_drv.h   |  4 ++--
 drivers/gpu/drm/msm/msm_gem_prime.c |  4 ++--
 drivers/gpu/drm/nouveau/nouveau_gem.h   |  4 ++--
 drivers/gpu/drm/nouveau/nouveau_prime.c |  4 ++--
 drivers/gpu/drm/qxl/qxl_prime.c |  4 ++--
 drivers/gpu/drm/radeon/radeon_prime.c   |  4 ++--
 drivers/gpu/drm/vboxvideo/vbox_prime.c  |  4 ++--
 drivers/gpu/drm/vgem/vgem_drv.c |  4 ++--
 include/drm/drm_drv.h   |  4 ++--
 include/drm/drm_gem.h   |  4 ++--
 include/drm/drm_gem_vram_helper.h   |  4 ++--
 15 files changed, 36 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 7d6242cc69f2..0a2645769624 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1219,18 +1219,19 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
 /**
  * drm_gem_pin - Pin backing buffer in memory
  * @obj: GEM object
+ * @dev: the device the buffer is being pinned for
  *
  * Make sure the backing buffer is pinned in memory.
  *
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-int drm_gem_pin(struct drm_gem_object *obj)
+int drm_gem_pin(struct drm_gem_object *obj, struct device *dev)
 {
if (obj->funcs && obj->funcs->pin)
return obj->funcs->pin(obj);
else if (obj->dev->driver->gem_prime_pin)
-   return obj->dev->driver->gem_prime_pin(obj);
+   return obj->dev->driver->gem_prime_pin(obj, dev);
else
return 0;
 }
@@ -1239,15 +1240,16 @@ EXPORT_SYMBOL(drm_gem_pin);
 /**
  * drm_gem_unpin - Unpin backing buffer from memory
  * @obj: GEM object
+ * @dev: the device the buffer is being pinned for
  *
  * Relax the requirement that the backing buffer is pinned in memory.
  */
-void drm_gem_unpin(struct drm_gem_object *obj)
+void drm_gem_unpin(struct drm_gem_object *obj, struct device *dev)
 {
if (obj->funcs && obj->funcs->unpin)
obj->funcs->unpin(obj);
else if (obj->dev->driver->gem_prime_unpin)
-   obj->dev->driver->gem_prime_unpin(obj);
+   obj->dev->driver->gem_prime_unpin(obj, dev);
 }
 EXPORT_SYMBOL(drm_gem_unpin);
 
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 4de782ca26b2..62fafec93948 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -548,7 +548,8 @@ EXPORT_SYMBOL(drm_gem_vram_driver_dumb_mmap_offset);
  * 0 on success, or
  * a negative errno code otherwise.
  */
-int drm_gem_vram_driver_gem_prime_pin(struct drm_gem_object *gem)
+int drm_gem_vram_driver_gem_prime_pin(struct drm_gem_object *gem,
+ struct device *dev)
 {
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
 
@@ -569,7 +570,8 @@ EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_pin);
Implements  drm_driver.gem_prime_unpin
  * @gem:   The GEM object to unpin
  */
-void drm_gem_vram_driver_gem_prime_unpin(struct drm_gem_object *gem)
+void drm_gem_vram_driver_gem_prime_unpin(struct drm_gem_object *gem,
+struct device *dev)
 {
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
 
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index d0c01318076b..505893cfac8e 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -196,7 +196,7 @@ int drm_gem_map_attach(struct dma_buf *dma_buf,
 {
struct drm_gem_object *obj = dma_buf->priv;
 
-   return drm_gem_pin(obj);
+   return drm_gem_pin(obj, attach->dev);
 }
 EXPORT_SYMBOL(drm_gem_map_attach);
 
@@ -213,7 +213,7 @@ void drm_gem_map_detach(struct dma_buf *dma_buf,
 {
struct drm_gem_object *obj = dma_buf->priv;
 
-   drm_gem_unpin(obj);
+   drm_gem_unpin(obj, attach->dev);
 }
 EXPORT_SYMBOL(drm_gem_map_detach);
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 00e8b6a817e3..44385d590aa7 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -43,7 +43,7 @@ int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
return etnaviv_obj->ops->mmap(etnaviv_obj, vma);
 }
 
-int etnaviv_gem_prime_pin(struct drm_gem_object *obj)
+int etnaviv_gem_prime_pin(struct drm_gem_object *obj, struct device *dev)
 {
if (!obj->import_attach) {
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
@@ -55,7 +55,7 @@ int etnaviv_gem_prime_pin(struct 
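
For context, a hedged sketch of the import path that supplies the new
argument; the helper below is hypothetical, but it shows where attach->dev
(forwarded by drm_gem_map_attach() to drm_gem_pin() in the hunk above)
comes from:

#include <linux/dma-buf.h>
#include <linux/err.h>

/* Hypothetical importer-side helper: dma_buf_attach() records the importing
 * device in attach->dev; the exporter's drm_gem_map_attach() then calls
 * drm_gem_pin(obj, attach->dev), so the exporting driver learns which
 * device is attaching. */
static int example_import(struct dma_buf *dmabuf, struct device *importer_dev)
{
	struct dma_buf_attachment *attach;

	attach = dma_buf_attach(dmabuf, importer_dev);
	if (IS_ERR(attach))
		return PTR_ERR(attach);

	/* ... map, use, unmap ... */

	dma_buf_detach(dmabuf, attach);
	return 0;
}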

[PULL] virtio, vhost: fixes, features, performance

2019-07-16 Thread Michael S. Tsirkin
The following changes since commit c1ea02f15ab5efb3e93fc3144d895410bf79fcf2:

  vhost: scsi: add weight support (2019-05-27 11:08:23 -0400)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git tags/for_linus

for you to fetch changes up to 5e663f0410fa2f355042209154029842ba1abd43:

  virtio-mmio: add error check for platform_get_irq (2019-07-11 16:22:29 -0400)


virtio, vhost: fixes, features, performance

new iommu device
vhost guest memory access using vmap (just meta-data for now)
minor fixes

Signed-off-by: Michael S. Tsirkin 

Note: due to code changes in the driver-core tree, the following
patch is needed when merging this tree with commit 92ce7e83b4e5
("driver_find_device: Unify the match function with
class_find_device()") in the driver-core tree:

From: Nathan Chancellor 
Subject: [PATCH] iommu/virtio: Constify data parameter in viommu_match_node

After commit 92ce7e83b4e5 ("driver_find_device: Unify the match
function with class_find_device()") in the driver-core tree.

Signed-off-by: Nathan Chancellor 
Signed-off-by: Michael S. Tsirkin 

---
 drivers/iommu/virtio-iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 4620dd221ffd..433f4d2ee956 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -839,7 +839,7 @@ static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
 static struct iommu_ops viommu_ops;
 static struct virtio_driver virtio_iommu_drv;

-static int viommu_match_node(struct device *dev, void *data)
+static int viommu_match_node(struct device *dev, const void *data)
 {
return dev->parent->fwnode == data;
 }
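
For context, a hedged sketch of why the const qualifier is needed: after
commit 92ce7e83b4e5, driver_find_device() passes its data argument to the
match callback as const void *, so a caller shaped roughly like the one
below (hypothetical names) only compiles with the updated prototype:

#include <linux/device.h>

/* Post-92ce7e83b4e5 match callbacks take const void *data. */
static int example_match_node(struct device *dev, const void *data)
{
	return dev->parent->fwnode == data;
}

static struct device *example_lookup(struct device_driver *drv,
				     struct fwnode_handle *fwnode)
{
	/* driver_find_device() forwards 'fwnode' as const void *data. */
	return driver_find_device(drv, NULL, fwnode, example_match_node);
}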


Gustavo A. R. Silva (1):
  scsi: virtio_scsi: Use struct_size() helper

Ihor Matushchak (1):
  virtio-mmio: add error check for platform_get_irq

Jason Wang (6):
  vhost: generalize adding used elem
  vhost: fine grain userspace memory accessors
  vhost: rename vq_iotlb_prefetch() to vq_meta_prefetch()
  vhost: introduce helpers to get the size of metadata area
  vhost: factor out setting vring addr and num
  vhost: access vq metadata through kernel virtual address

Jean-Philippe Brucker (7):
  dt-bindings: virtio-mmio: Add IOMMU description
  dt-bindings: virtio: Add virtio-pci-iommu node
  of: Allow the iommu-map property to omit untranslated devices
  PCI: OF: Initialize dev->fwnode appropriately
  iommu: Add virtio-iommu driver
  iommu/virtio: Add probe request
  iommu/virtio: Add event queue

Michael S. Tsirkin (1):
  vhost: fix clang build warning

 Documentation/devicetree/bindings/virtio/iommu.txt |   66 ++
 Documentation/devicetree/bindings/virtio/mmio.txt  |   30 +
 MAINTAINERS|7 +
 drivers/iommu/Kconfig  |   11 +
 drivers/iommu/Makefile |1 +
 drivers/iommu/virtio-iommu.c   | 1158 
 drivers/of/base.c  |   10 +-
 drivers/pci/of.c   |8 +
 drivers/scsi/virtio_scsi.c |2 +-
 drivers/vhost/net.c|4 +-
 drivers/vhost/vhost.c  |  850 +++---
 drivers/vhost/vhost.h  |   43 +-
 drivers/virtio/virtio_mmio.c   |7 +-
 include/uapi/linux/virtio_ids.h|1 +
 include/uapi/linux/virtio_iommu.h  |  161 +++
 15 files changed, 2228 insertions(+), 131 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/virtio/iommu.txt
 create mode 100644 drivers/iommu/virtio-iommu.c
 create mode 100644 include/uapi/linux/virtio_iommu.h


Re: [RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock

2019-07-16 Thread Stefano Garzarella
On Tue, Jul 16, 2019 at 06:01:33AM -0400, Michael S. Tsirkin wrote:
> On Tue, Jul 16, 2019 at 11:40:24AM +0200, Stefano Garzarella wrote:
> > On Mon, Jul 15, 2019 at 01:50:28PM -0400, Michael S. Tsirkin wrote:
> > > On Mon, Jul 15, 2019 at 09:44:16AM +0200, Stefano Garzarella wrote:
> > > > On Fri, Jul 12, 2019 at 06:14:39PM +0800, Jason Wang wrote:
> > 
> > [...]
> > 
> > > > > 
> > > > > 
> > > > > I think it's just a branch: for ethernet, go to the networking stack,
> > > > > otherwise go to the vsock core?
> > > > > 
> > > > 
> > > > Yes, that should work.
> > > > 
> > > > So, I should refactor the functions that can also be called from the
> > > > vsock core, in order to remove the "struct net_device *dev" parameter.
> > > > Maybe by creating some wrappers for the network stack.
> > > > 
> > > > Otherwise I should create a fake net_device for vsock_core.
> > > > 
> > > > What do you suggest?
> > > 
> > > Neither.
> > > 
> > > I think what Jason was saying all along is this:
> > > 
> > > virtio net doesn't actually lose packets, at least most of the time,
> > > and most of the time it actually passes all packets to the host. So
> > > it's possible to use a virtio net device (possibly with a feature flag
> > > that says "does not lose packets, all packets go to host") and build
> > > vsock on top.
> > 
> > Yes, I got it after Jason's latest reply.
> > 
> > > 
> > > and all of this is nice, but don't expect anything easy,
> > > or any quick results.
> > 
> > I expected this... :-(
> > 
> > > 
> > > Also, in a sense it's a missed opportunity: we could cut out a lot
> > > of fat and see just how fast a protocol that is completely new and
> > > separate from the networking stack can go.
> > 
> > In this case, if we try to do a PoC, which do you think is better?
> > 1. new AF_VSOCK + network-stack + virtio-net modified
> > Maybe it would allow us to reuse a lot of stuff already written,
> > but we would go through the network stack
> > 
> > 2. new AF_VSOCK + glue + virtio-net modified
> > An intermediate approach, similar to Jason's proposal
> > 
> > 3. new AF_VSOCK + new virtio-vsock
> > It can be the thinnest, but we have to rewrite many things, with the
> > risk of making the same mistakes as the current implementation.
> > 
> 
> 1 or 3 imho. I wouldn't expect a lot from 2.  I slightly favor 3 and
> Jason favors 1. So take your pick :)
> 

Yes, I agree :)

Maybe "Jason 1" could be the short term (and an opportunity to study better the
code and sources of overhead) and "new AF_VSOCK + new virtio-vsock" the long
term goal with the multi-transport support in mind.

Thank you so much for your guidance and useful advice,
Stefano


Re: [RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock

2019-07-16 Thread Michael S. Tsirkin
On Tue, Jul 16, 2019 at 11:40:24AM +0200, Stefano Garzarella wrote:
> On Mon, Jul 15, 2019 at 01:50:28PM -0400, Michael S. Tsirkin wrote:
> > On Mon, Jul 15, 2019 at 09:44:16AM +0200, Stefano Garzarella wrote:
> > > On Fri, Jul 12, 2019 at 06:14:39PM +0800, Jason Wang wrote:
> 
> [...]
> 
> > > > 
> > > > 
> > > > I think it's just a branch: for ethernet, go to the networking stack,
> > > > otherwise go to the vsock core?
> > > > 
> > > 
> > > Yes, that should work.
> > > 
> > > So, I should refactor the functions that can also be called from the vsock
> > > core, in order to remove the "struct net_device *dev" parameter.
> > > Maybe by creating some wrappers for the network stack.
> > > 
> > > Otherwise I should create a fake net_device for vsock_core.
> > > 
> > > What do you suggest?
> > 
> > Neither.
> > 
> > I think what Jason was saying all along is this:
> > 
> > virtio net doesn't actually lose packets, at least most of the time,
> > and most of the time it actually passes all packets to the host. So
> > it's possible to use a virtio net device (possibly with a feature flag
> > that says "does not lose packets, all packets go to host") and build
> > vsock on top.
> 
> Yes, I got it after Jason's latest reply.
> 
> > 
> > and all of this is nice, but don't expect anything easy,
> > or any quick results.
> 
> I expected this... :-(
> 
> > 
> > Also, in a sense it's a missed opportunity: we could cut out a lot
> > of fat and see just how fast a protocol that is completely new and
> > separate from the networking stack can go.
> 
> In this case, if we try to do a PoC, which do you think is better?
> 1. new AF_VSOCK + network-stack + virtio-net modified
> Maybe it would allow us to reuse a lot of stuff already written,
> but we would go through the network stack
> 
> 2. new AF_VSOCK + glue + virtio-net modified
> An intermediate approach, similar to Jason's proposal
> 
> 3. new AF_VSOCK + new virtio-vsock
> It can be the thinnest, but we have to rewrite many things, with the risk
> of making the same mistakes as the current implementation.
> 

1 or 3 imho. I wouldn't expect a lot from 2.  I slightly favor 3 and
Jason favors 1. So take your pick :)

> > Instead, the vsock implementation carries so much baggage, both from the
> > networking stack - such as softirq processing - and of its own - such as
> > workqueues, global state and crude locking - to the point where
> > it's actually slower than TCP.
> 
> I agree, and I'm finding new issues while I'm trying to support nested
> VMs, allowing multiple vsock transports (virtio-vsock and vhost-vsock in
> the KVM case) at runtime.
> 
> > 
> 
> [...]
> 
> > > > 
> > > I suggest doing this step by step:
> > > > 
> > > > 1) use virtio-net but keep some protocol logic
> > > > 
> > > 2) separate the protocol logic and merge it into the existing Linux networking stack
> > > 
> > Makes sense, thanks for the suggestions, I'll try to do these steps!
> > > 
> > > Thanks,
> > > Stefano
> > 
> > 
> > An alternative is to look at the sources of overhead in vsock and get rid
> > of them, or rewrite it from scratch focusing on performance.
> 
> I started looking at virtio-vsock and vhost-vsock, trying to make very
> simple changes [1] to increase the performance. I should send a v4 of that
> series in the very short term; then I'd like to have a deeper look to
> understand whether it is better to optimize it or rewrite it from scratch.
> 
> 
> Thanks,
> Stefano
> 
> [1] https://patchwork.kernel.org/cover/10970145/


Re: [RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock

2019-07-16 Thread Stefano Garzarella
On Mon, Jul 15, 2019 at 01:50:28PM -0400, Michael S. Tsirkin wrote:
> On Mon, Jul 15, 2019 at 09:44:16AM +0200, Stefano Garzarella wrote:
> > On Fri, Jul 12, 2019 at 06:14:39PM +0800, Jason Wang wrote:

[...]

> > > 
> > > 
> > > I think it's just a branch: for ethernet, go to the networking stack,
> > > otherwise go to the vsock core?
> > > 
> > 
> > Yes, that should work.
> > 
> > So, I should refactor the functions that can also be called from the vsock
> > core, in order to remove the "struct net_device *dev" parameter.
> > Maybe by creating some wrappers for the network stack.
> > 
> > Otherwise I should create a fake net_device for vsock_core.
> > 
> > What do you suggest?
> 
> Neither.
> 
> I think what Jason was saying all along is this:
> 
> virtio net doesn't actually lose packets, at least most of the time,
> and most of the time it actually passes all packets to the host. So
> it's possible to use a virtio net device (possibly with a feature flag
> that says "does not lose packets, all packets go to host") and build
> vsock on top.

Yes, I got it after Jason's latest reply.

> 
> and all of this is nice, but don't expect anything easy,
> or any quick results.

I expected this... :-(

> 
> Also, in a sense it's a missed opportunity: we could cut out a lot
> of fat and see just how fast a protocol that is completely new and
> separate from the networking stack can go.

In this case, if we try to do a PoC, which do you think is better?
1. new AF_VSOCK + network-stack + virtio-net modified
Maybe it would allow us to reuse a lot of stuff already written,
but we would go through the network stack

2. new AF_VSOCK + glue + virtio-net modified
An intermediate approach, similar to Jason's proposal

3. new AF_VSOCK + new virtio-vsock
It can be the thinnest, but we have to rewrite many things, with the risk
of making the same mistakes as the current implementation.


> Instead, the vsock implementation carries so much baggage, both from the
> networking stack - such as softirq processing - and of its own - such as
> workqueues, global state and crude locking - to the point where
> it's actually slower than TCP.

I agree, and I'm finding new issues while I'm trying to support nested
VMs, allowing multiple vsock transports (virtio-vsock and vhost-vsock in
the KVM case) at runtime.

> 

[...]

> > > 
> > I suggest doing this step by step:
> > > 
> > > 1) use virtio-net but keep some protocol logic
> > > 
> > 2) separate the protocol logic and merge it into the existing Linux networking stack
> > 
> Makes sense, thanks for the suggestions, I'll try to do these steps!
> > 
> > Thanks,
> > Stefano
> 
> 
> An alternative is to look at the sources of overhead in vsock and get rid
> of them, or rewrite it from scratch focusing on performance.

I started looking at virtio-vsock and vhost-vsock, trying to make very
simple changes [1] to increase the performance. I should send a v4 of that
series in the very short term; then I'd like to have a deeper look to
understand whether it is better to optimize it or rewrite it from scratch.


Thanks,
Stefano

[1] https://patchwork.kernel.org/cover/10970145/



Re: [Xen-devel] [PATCH 0/2] Remove 32-bit Xen PV guest support

2019-07-16 Thread Jan Beulich
On 15.07.2019 19:28, Andy Lutomirski wrote:
> On Mon, Jul 15, 2019 at 9:34 AM Andi Kleen  wrote:
>>
>> Juergen Gross  writes:
>>
>>> The long-term plan has been to replace Xen PV guests with PVH. The first
>>> victims of that plan are now 32-bit PV guests, as those are only rarely
>>> used these days. Xen on x86 requires 64-bit support, and with Grub2 now
>>> officially supporting PVH since version 2.04 there is no need to keep
>>> 32-bit PV guest support alive in the Linux kernel. Additionally, Meltdown
>>> mitigation is not available in a kernel running as a 32-bit PV guest, so
>>> dropping this mode makes sense from a security point of view, too.
>>
>> Normally we have a deprecation period for feature removals like this.
>> You would make the kernel print a warning for some releases, and when
>> no user complains you can then remove it. If a user complains, you can't.
>>
> 
> As I understand it, the kernel rules do allow changes like this even
> if there's a complaint: this is a patch that removes what is
> effectively hardware support.  If the maintenance cost exceeds the
> value, then removal is fair game.  (Obviously we weight the value of
> preserving compatibility quite highly, but in this case, Xen dropped
> 32-bit hardware support a long time ago.  If the Xen hypervisor says
> that 32-bit PV guest support is deprecated, it's deprecated.)

Since it was implied but not made explicit in Andrew's reply, just to
make it explicit: so far, 32-bit PV guest support has not been
deprecated in Xen itself.

Jan