Re: [PATCH v6 0/4] Add a vhost RPMsg API

2020-09-17 Thread Guennadi Liakhovetski
Hi Arnaud,

On Thu, Sep 17, 2020 at 05:21:02PM +0200, Arnaud POULIQUEN wrote:
> Hi Guennadi,
> 
> > -Original Message-
> > From: Guennadi Liakhovetski 
> > Sent: jeudi 17 septembre 2020 07:47
> > To: Arnaud POULIQUEN 
> > Cc: k...@vger.kernel.org; linux-remotep...@vger.kernel.org;
> > virtualization@lists.linux-foundation.org; sound-open-firmware@alsa-
> > project.org; Pierre-Louis Bossart ; 
> > Liam
> > Girdwood ; Michael S. Tsirkin
> > ; Jason Wang ; Ohad Ben-Cohen
> > ; Bjorn Andersson ; Mathieu
> > Poirier ; Vincent Whitchurch
> > 
> > Subject: Re: [PATCH v6 0/4] Add a vhost RPMsg API
> > 
> > Hi Arnaud,
> > 
> > On Tue, Sep 15, 2020 at 02:13:23PM +0200, Arnaud POULIQUEN wrote:
> > > Hi  Guennadi,
> > >
> > > On 9/1/20 5:11 PM, Guennadi Liakhovetski wrote:
> > > > Hi,
> > > >
> > > > Next update:
> > > >
> > > > v6:
> > > > - rename include/linux/virtio_rpmsg.h ->
> > > > include/linux/rpmsg/virtio.h
> > > >
> > > > v5:
> > > > - don't hard-code message layout
> > > >
> > > > v4:
> > > > - add endianness conversions to comply with the VirtIO standard
> > > >
> > > > v3:
> > > > - address several checkpatch warnings
> > > > - address comments from Mathieu Poirier
> > > >
> > > > v2:
> > > > - update patch #5 with a correct vhost_dev_init() prototype
> > > > - drop patch #6 - it depends on a different patch, that is currently
> > > >   an RFC
> > > > - address comments from Pierre-Louis Bossart:
> > > >   * remove "default n" from Kconfig
> > > >
> > > > Linux supports RPMsg over VirtIO for "remote processor" / AMP use
> > > > cases. It can however also be used for virtualisation scenarios,
> > > > e.g. when using KVM to run Linux on both the host and the guests.
> > > > This patch set adds a wrapper API to facilitate writing vhost
> > > > drivers for such RPMsg-based solutions. The first use case is an
> > > > audio DSP virtualisation project, currently under development, ready
> > > > for review and submission, available at
> > > > https://github.com/thesofproject/linux/pull/1501/commits
> > >
> > > Mathieu pointed me to your series. On my side I proposed the rpmsg_ns_msg
> > > service[1], which does not match your implementation.
> > > As I come in late, I hope that I did not miss something in the history...
> > > Don't hesitate to point me to the discussions if that is the case.
> > 
> > Well, as you see, this is only v6 of this patch set, and apart from it
> > there have been several side discussions and patch sets.
> > 
> > > Regarding your patch set, it is quite confusing for me. It seems that
> > > you implement your own protocol on top of vhost, forked from the RPMsg one.
> > > But it looks to me like it is not the RPMsg protocol.
> > 
> > I'm implementing a counterpart to the rpmsg protocol over VirtIO as
> > initially implemented by drivers/rpmsg/virtio_rpmsg_bus.c for the "main CPU"
> > (in case of remoteproc over VirtIO) or the guest side in case of Linux
> > virtualisation. Since my implementation can talk to that driver, I don't
> > think that I'm inventing a new protocol. I'm adding support for the same
> > protocol for the opposite side of the VirtIO divide.
> 
> The main point I would like to highlight here relates to the use of the name
> "RPMsg" more than to how you implement your IPC protocol.
> If it is a counterpart, it probably does not respect the interface expected
> by RPMsg clients.
> A good way to answer this might be to answer the following question:
> can the rpmsg sample client[4] be used on top of your vhost RPMsg
> implementation?
> If the answer is no, describing it as an RPMsg implementation could lead to
> confusion...

Sorry, I don't quite understand your logic. RPMsg is a communication protocol,
not an API. An RPMsg implementation has to be able to communicate with other
compliant RPMsg implementations; it doesn't have to provide any specific API.
Am I missing anything?
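
For reference, "talking the same protocol" here concretely means that both
sides agree on the on-wire message header used by
drivers/rpmsg/virtio_rpmsg_bus.c, which this series moves into a shared
include. A sketch of the mainline layout (the series additionally switches the
fields to endianness-aware virtio types, per the v4 changelog above):

/* RPMsg on-wire header: every message on the vrings starts with this. */
struct rpmsg_hdr {
	u32 src;	/* source endpoint address */
	u32 dst;	/* destination endpoint address */
	u32 reserved;
	u16 len;	/* payload length in bytes */
	u16 flags;
	u8 data[];	/* payload follows immediately */
} __packed;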

Thanks
Guennadi

> [4] https://elixir.bootlin.com/linux/v5.9-rc5/source/samples/rpmsg/rpmsg_client_sample.c
> 
> Regards,
> Arnaud
> 
> > 
> > > So I would agree with Vincent[2], who proposed to switch to an
> > > RPMsg API and create a vhost rpmsg device. This is also proposed in
> > > the "Enhance VHOST to enable SoC-to-SoC communication" RFC[3].
> > > Do you think this alternative could match your need?
> > 
> > As I replied to Vincent, I understand his proposal and the approach taken
> > in the series [3], but I'm not sure I agree that adding yet another virtual
> > device / driver layer on the vhost side is a good idea. As far as I
> > understand, adding new completely virtual devices isn't considered to be a
> > good practice in the kernel. Currently vhost is just a passive "library"
> > and my vhost-rpmsg support keeps it that way. Not sure I'm in favour of
> > converting vhost to a virtual device infrastructure.
> > 
> > Thanks for pointing me at [3], I should have a better look at it.
> > 
> > Thanks
> > Guennadi
> > 
> > > 

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-09-17 Thread Jason Wang


On 2020/9/16 7:47 PM, Kishon Vijay Abraham I wrote:

Hi Jason,

On 16/09/20 8:40 am, Jason Wang wrote:

On 2020/9/15 11:47 PM, Kishon Vijay Abraham I wrote:

Hi Jason,

On 15/09/20 1:48 pm, Jason Wang wrote:

Hi Kishon:

On 2020/9/14 3:23 PM, Kishon Vijay Abraham I wrote:

Then you need something that is functionally equivalent to virtio PCI,
which is actually the concept of vDPA (e.g. vDPA provides alternatives if
the queue_sel is hard in the EP implementation).

Okay, I just tried to compare the 'struct vdpa_config_ops' and 'struct
vhost_config_ops' ( introduced in [RFC PATCH 03/22] vhost: Add ops for
the VHOST driver to configure VHOST device).

struct vdpa_config_ops {
  /* Virtqueue ops */
  int (*set_vq_address)(struct vdpa_device *vdev,
    u16 idx, u64 desc_area, u64 driver_area,
    u64 device_area);
  void (*set_vq_num)(struct vdpa_device *vdev, u16 idx, u32 num);
  void (*kick_vq)(struct vdpa_device *vdev, u16 idx);
  void (*set_vq_cb)(struct vdpa_device *vdev, u16 idx,
    struct vdpa_callback *cb);
  void (*set_vq_ready)(struct vdpa_device *vdev, u16 idx, bool ready);
  bool (*get_vq_ready)(struct vdpa_device *vdev, u16 idx);
  int (*set_vq_state)(struct vdpa_device *vdev, u16 idx,
  const struct vdpa_vq_state *state);
  int (*get_vq_state)(struct vdpa_device *vdev, u16 idx,
  struct vdpa_vq_state *state);
  struct vdpa_notification_area
  (*get_vq_notification)(struct vdpa_device *vdev, u16 idx);
  /* vq irq is not expected to be changed once DRIVER_OK is set */
  int (*get_vq_irq)(struct vdpa_device *vdv, u16 idx);

  /* Device ops */
  u32 (*get_vq_align)(struct vdpa_device *vdev);
  u64 (*get_features)(struct vdpa_device *vdev);
  int (*set_features)(struct vdpa_device *vdev, u64 features);
  void (*set_config_cb)(struct vdpa_device *vdev,
    struct vdpa_callback *cb);
  u16 (*get_vq_num_max)(struct vdpa_device *vdev);
  u32 (*get_device_id)(struct vdpa_device *vdev);
  u32 (*get_vendor_id)(struct vdpa_device *vdev);
  u8 (*get_status)(struct vdpa_device *vdev);
  void (*set_status)(struct vdpa_device *vdev, u8 status);
  void (*get_config)(struct vdpa_device *vdev, unsigned int offset,
     void *buf, unsigned int len);
  void (*set_config)(struct vdpa_device *vdev, unsigned int offset,
     const void *buf, unsigned int len);
  u32 (*get_generation)(struct vdpa_device *vdev);

  /* DMA ops */
  int (*set_map)(struct vdpa_device *vdev, struct vhost_iotlb *iotlb);
  int (*dma_map)(struct vdpa_device *vdev, u64 iova, u64 size,
     u64 pa, u32 perm);
  int (*dma_unmap)(struct vdpa_device *vdev, u64 iova, u64 size);

  /* Free device resources */
  void (*free)(struct vdpa_device *vdev);
};

+struct vhost_config_ops {
+    int (*create_vqs)(struct vhost_dev *vdev, unsigned int nvqs,
+  unsigned int num_bufs, struct vhost_virtqueue *vqs[],
+  vhost_vq_callback_t *callbacks[],
+  const char * const names[]);
+    void (*del_vqs)(struct vhost_dev *vdev);
+    int (*write)(struct vhost_dev *vdev, u64 vhost_dst, void *src, int len);
+    int (*read)(struct vhost_dev *vdev, void *dst, u64 vhost_src, int len);
+    int (*set_features)(struct vhost_dev *vdev, u64 device_features);
+    int (*set_status)(struct vhost_dev *vdev, u8 status);
+    u8 (*get_status)(struct vhost_dev *vdev);
+};
+
struct virtio_config_ops
I think there's some overlap here and some of the ops try to do the
same thing.

I think it differs in (*set_vq_address)() and (*create_vqs)().
[create_vqs() introduced in struct vhost_config_ops provides
complementary functionality to (*find_vqs)() in struct
virtio_config_ops. It seemingly encapsulates the functionality of
(*set_vq_address)(), (*set_vq_num)(), (*set_vq_cb)(), ...].

Back to the difference between (*set_vq_address)() and (*create_vqs)():
set_vq_address() directly provides the virtqueue address to the vdpa
device, but create_vqs() only provides the parameters of the virtqueue
(like the number of virtqueues and the number of buffers) and does not
directly provide the address. IMO the backend client drivers (like net or
vhost) shouldn't/cannot by themselves know how to access the vring created
on the virtio front-end. The vdpa device/vhost device should have logic
for that. That will help the client drivers to work with different types
of vdpa device/vhost device and access the vring created by virtio
irrespective of whether the vring can be accessed via mmio or kernel
space or user space.
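
To make the contrast concrete, a backend client driver written against the
proposed vhost_config_ops might set up its queues roughly as below. This is a
sketch against the RFC's proposed API only; the callback signature and the
->ops pointer on struct vhost_dev are assumptions, and all names are made up.

/* Hypothetical backend client: it only describes the queues it needs; how
 * the vrings created by the virtio front-end are reached (mmio, kernel or
 * user space) stays inside the vhost device implementation. */
static void my_rx_kick(struct vhost_virtqueue *vq) { /* consume buffers */ }
static void my_tx_kick(struct vhost_virtqueue *vq) { /* produce buffers */ }

static int my_client_setup(struct vhost_dev *vdev)
{
	struct vhost_virtqueue *vqs[2];
	vhost_vq_callback_t *cbs[] = { my_rx_kick, my_tx_kick };
	static const char * const names[] = { "rx", "tx" };

	/* 2 queues, 256 buffers each; the device resolves vring addresses. */
	return vdev->ops->create_vqs(vdev, 2, 256, vqs, cbs, names);
}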

I think vdpa always works with client drivers in userspace and provides a
userspace address for the vring.

Sorry for being unclear. What I meant is not replacing vDPA with the
vhost(bus) you proposed but the possibility of replacing virtio-pci-epf
with vDPA in:

Okay, so the virtio back-end still use vhost and front end 

[PATCH v3 -next] vdpa: mlx5: change Kconfig depends to fix build errors

2020-09-17 Thread Randy Dunlap
From: Randy Dunlap 

drivers/vdpa/mlx5/ uses vhost_iotlb*() interfaces, so add a dependency
on VHOST to eliminate build errors.

ld: drivers/vdpa/mlx5/core/mr.o: in function `add_direct_chain':
mr.c:(.text+0x106): undefined reference to `vhost_iotlb_itree_first'
ld: mr.c:(.text+0x1cf): undefined reference to `vhost_iotlb_itree_next'
ld: mr.c:(.text+0x30d): undefined reference to `vhost_iotlb_itree_first'
ld: mr.c:(.text+0x3e8): undefined reference to `vhost_iotlb_itree_next'
ld: drivers/vdpa/mlx5/core/mr.o: in function `_mlx5_vdpa_create_mr':
mr.c:(.text+0x908): undefined reference to `vhost_iotlb_itree_first'
ld: mr.c:(.text+0x9e6): undefined reference to `vhost_iotlb_itree_next'
ld: drivers/vdpa/mlx5/core/mr.o: in function `mlx5_vdpa_handle_set_map':
mr.c:(.text+0xf1d): undefined reference to `vhost_iotlb_itree_first'
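
For context, the unresolved symbols above are the interval-tree walkers
provided by the VHOST_IOTLB library; the kind of loop in
drivers/vdpa/mlx5/core/mr.c that pulls them in looks roughly like this (an
illustrative sketch, not an excerpt from mr.c):

#include <linux/vhost_iotlb.h>

/* Walk every mapping in [start, last] of a vhost IOTLB; this is the kind
 * of loop that references vhost_iotlb_itree_first()/_next() and therefore
 * needs the VHOST_IOTLB Kconfig symbol. */
static u64 count_mapped_bytes(struct vhost_iotlb *iotlb, u64 start, u64 last)
{
	struct vhost_iotlb_map *map;
	u64 bytes = 0;

	for (map = vhost_iotlb_itree_first(iotlb, start, last); map;
	     map = vhost_iotlb_itree_next(map, start, last))
		bytes += map->size;

	return bytes;
}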

Signed-off-by: Randy Dunlap 
Cc: "Michael S. Tsirkin" 
Cc: Jason Wang 
Cc: virtualization@lists.linux-foundation.org
Cc: Saeed Mahameed 
Cc: Leon Romanovsky 
Cc: net...@vger.kernel.org
---
v2: change from select to depends on VHOST (Saeed)
v3: change to depends on VHOST_IOTLB (Jason)

 drivers/vdpa/Kconfig |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- linux-next-20200917.orig/drivers/vdpa/Kconfig
+++ linux-next-20200917/drivers/vdpa/Kconfig
@@ -31,7 +31,7 @@ config IFCVF
 
 config MLX5_VDPA
bool "MLX5 VDPA support library for ConnectX devices"
-   depends on MLX5_CORE
+   depends on VHOST_IOTLB && MLX5_CORE
default n
help
  Support library for Mellanox VDPA drivers. Provides code that is



Re: [vhost next 0/2] mlx5 vdpa fix netdev status

2020-09-17 Thread Jason Wang


On 2020/9/17 8:13 PM, Eli Cohen wrote:

Hi Michael,

the following two patches aim to fix a failure to set the vdpa driver
status bit VIRTIO_NET_S_LINK_UP, which causes a failure to bring the link
up. I have split it into two patches:

1. Introduce a proper mlx5 API to set 16-bit status fields per virtio
requirements.
2. Fix the failure to set the bit.
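
As an illustration, such a 16-bit helper built on the generic virtio
byte-order helpers could look like the sketch below; the helper name and the
"le" parameter are placeholders, not necessarily what the patch introduces.

#include <linux/virtio_byteorder.h>
#include <linux/virtio_net.h>

/* Convert a CPU-endian 16-bit value to the byte order the virtio driver
 * expects; "le" says whether the device uses little-endian config fields
 * (VIRTIO_F_VERSION_1 negotiated, or a legacy little-endian guest). */
static __virtio16 cpu_to_vdpa16(bool le, u16 val)
{
	return __cpu_to_virtio16(le, val);
}

/* Usage sketch: mark the link as up in the virtio-net config space. */
static void set_link_up(struct virtio_net_config *config, bool le)
{
	config->status |= cpu_to_vdpa16(le, VIRTIO_NET_S_LINK_UP);
}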

Eli Cohen (2):
   vdpa/mlx5: Make use of a specific 16 bit endianness API
   vdpa/mlx5: Fix failure to bring link up

  drivers/vdpa/mlx5/net/mlx5_vnet.c | 9 +++--
  1 file changed, 7 insertions(+), 2 deletions(-)



Acked-by: Jason Wang 



Re: [PATCH v2 -next] vdpa: mlx5: change Kconfig depends to fix build errors

2020-09-17 Thread Jason Wang


On 2020/9/18 3:45 AM, Randy Dunlap wrote:

From: Randy Dunlap 

drivers/vdpa/mlx5/ uses vhost_iotlb*() interfaces, so add a dependency
on VHOST to eliminate build errors.

ld: drivers/vdpa/mlx5/core/mr.o: in function `add_direct_chain':
mr.c:(.text+0x106): undefined reference to `vhost_iotlb_itree_first'
ld: mr.c:(.text+0x1cf): undefined reference to `vhost_iotlb_itree_next'
ld: mr.c:(.text+0x30d): undefined reference to `vhost_iotlb_itree_first'
ld: mr.c:(.text+0x3e8): undefined reference to `vhost_iotlb_itree_next'
ld: drivers/vdpa/mlx5/core/mr.o: in function `_mlx5_vdpa_create_mr':
mr.c:(.text+0x908): undefined reference to `vhost_iotlb_itree_first'
ld: mr.c:(.text+0x9e6): undefined reference to `vhost_iotlb_itree_next'
ld: drivers/vdpa/mlx5/core/mr.o: in function `mlx5_vdpa_handle_set_map':
mr.c:(.text+0xf1d): undefined reference to `vhost_iotlb_itree_first'

Signed-off-by: Randy Dunlap 
Cc: "Michael S. Tsirkin" 
Cc: Jason Wang 
Cc: virtualization@lists.linux-foundation.org
Cc: Saeed Mahameed 
Cc: Leon Romanovsky 
Cc: net...@vger.kernel.org
---
v2: change from select to depends (Saeed)

  drivers/vdpa/Kconfig |2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

--- linux-next-20200917.orig/drivers/vdpa/Kconfig
+++ linux-next-20200917/drivers/vdpa/Kconfig
@@ -31,7 +31,7 @@ config IFCVF
  
  config MLX5_VDPA

bool "MLX5 VDPA support library for ConnectX devices"
-   depends on MLX5_CORE
+   depends on VHOST && MLX5_CORE



It looks to me like depending on VHOST is too heavyweight.

I guess what it really needs is VHOST_IOTLB, so we can use select
VHOST_IOTLB here.


Thanks



default n
help
  Support library for Mellanox VDPA drivers. Provides code that is




[PATCH v2 -next] vdpa: mlx5: change Kconfig depends to fix build errors

2020-09-17 Thread Randy Dunlap
From: Randy Dunlap 

drivers/vdpa/mlx5/ uses vhost_iotlb*() interfaces, so add a dependency
on VHOST to eliminate build errors.

ld: drivers/vdpa/mlx5/core/mr.o: in function `add_direct_chain':
mr.c:(.text+0x106): undefined reference to `vhost_iotlb_itree_first'
ld: mr.c:(.text+0x1cf): undefined reference to `vhost_iotlb_itree_next'
ld: mr.c:(.text+0x30d): undefined reference to `vhost_iotlb_itree_first'
ld: mr.c:(.text+0x3e8): undefined reference to `vhost_iotlb_itree_next'
ld: drivers/vdpa/mlx5/core/mr.o: in function `_mlx5_vdpa_create_mr':
mr.c:(.text+0x908): undefined reference to `vhost_iotlb_itree_first'
ld: mr.c:(.text+0x9e6): undefined reference to `vhost_iotlb_itree_next'
ld: drivers/vdpa/mlx5/core/mr.o: in function `mlx5_vdpa_handle_set_map':
mr.c:(.text+0xf1d): undefined reference to `vhost_iotlb_itree_first'

Signed-off-by: Randy Dunlap 
Cc: "Michael S. Tsirkin" 
Cc: Jason Wang 
Cc: virtualization@lists.linux-foundation.org
Cc: Saeed Mahameed 
Cc: Leon Romanovsky 
Cc: net...@vger.kernel.org
---
v2: change from select to depends (Saeed)

 drivers/vdpa/Kconfig |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- linux-next-20200917.orig/drivers/vdpa/Kconfig
+++ linux-next-20200917/drivers/vdpa/Kconfig
@@ -31,7 +31,7 @@ config IFCVF
 
 config MLX5_VDPA
bool "MLX5 VDPA support library for ConnectX devices"
-   depends on MLX5_CORE
+   depends on VHOST && MLX5_CORE
default n
help
  Support library for Mellanox VDPA drivers. Provides code that is



[PATCH -next] vdpa: mlx5: select VHOST to fix build errors

2020-09-17 Thread Randy Dunlap
From: Randy Dunlap 

drivers/vdpa/mlx5/ uses vhost_iotlb*() interfaces, so select
VHOST to eliminate build errors.

ld: drivers/vdpa/mlx5/core/mr.o: in function `add_direct_chain':
mr.c:(.text+0x106): undefined reference to `vhost_iotlb_itree_first'
ld: mr.c:(.text+0x1cf): undefined reference to `vhost_iotlb_itree_next'
ld: mr.c:(.text+0x30d): undefined reference to `vhost_iotlb_itree_first'
ld: mr.c:(.text+0x3e8): undefined reference to `vhost_iotlb_itree_next'
ld: drivers/vdpa/mlx5/core/mr.o: in function `_mlx5_vdpa_create_mr':
mr.c:(.text+0x908): undefined reference to `vhost_iotlb_itree_first'
ld: mr.c:(.text+0x9e6): undefined reference to `vhost_iotlb_itree_next'
ld: drivers/vdpa/mlx5/core/mr.o: in function `mlx5_vdpa_handle_set_map':
mr.c:(.text+0xf1d): undefined reference to `vhost_iotlb_itree_first'

Signed-off-by: Randy Dunlap 
Cc: "Michael S. Tsirkin" 
Cc: Jason Wang 
Cc: virtualization@lists.linux-foundation.org
Cc: Saeed Mahameed 
Cc: Leon Romanovsky 
Cc: net...@vger.kernel.org
---
Note: This patch may not be the right thing, but it fixes the build errors.

 drivers/vdpa/Kconfig |1 +
 1 file changed, 1 insertion(+)

--- linux-next-20200917.orig/drivers/vdpa/Kconfig
+++ linux-next-20200917/drivers/vdpa/Kconfig
@@ -32,6 +32,7 @@ config IFCVF
 config MLX5_VDPA
bool "MLX5 VDPA support library for ConnectX devices"
depends on MLX5_CORE
+   select VHOST
default n
help
  Support library for Mellanox VDPA drivers. Provides code that is



RE: [PATCH v6 0/4] Add a vhost RPMsg API

2020-09-17 Thread Arnaud POULIQUEN
Hi Guennadi,

> -Original Message-
> From: Guennadi Liakhovetski 
> Sent: jeudi 17 septembre 2020 07:47
> To: Arnaud POULIQUEN 
> Cc: k...@vger.kernel.org; linux-remotep...@vger.kernel.org;
> virtualization@lists.linux-foundation.org; sound-open-firmware@alsa-
> project.org; Pierre-Louis Bossart ; Liam
> Girdwood ; Michael S. Tsirkin
> ; Jason Wang ; Ohad Ben-Cohen
> ; Bjorn Andersson ; Mathieu
> Poirier ; Vincent Whitchurch
> 
> Subject: Re: [PATCH v6 0/4] Add a vhost RPMsg API
> 
> Hi Arnaud,
> 
> On Tue, Sep 15, 2020 at 02:13:23PM +0200, Arnaud POULIQUEN wrote:
> > Hi  Guennadi,
> >
> > On 9/1/20 5:11 PM, Guennadi Liakhovetski wrote:
> > > Hi,
> > >
> > > Next update:
> > >
> > > v6:
> > > - rename include/linux/virtio_rpmsg.h ->
> > > include/linux/rpmsg/virtio.h
> > >
> > > v5:
> > > - don't hard-code message layout
> > >
> > > v4:
> > > - add endianness conversions to comply with the VirtIO standard
> > >
> > > v3:
> > > - address several checkpatch warnings
> > > - address comments from Mathieu Poirier
> > >
> > > v2:
> > > - update patch #5 with a correct vhost_dev_init() prototype
> > > - drop patch #6 - it depends on a different patch, that is currently
> > >   an RFC
> > > - address comments from Pierre-Louis Bossart:
> > >   * remove "default n" from Kconfig
> > >
> > > Linux supports RPMsg over VirtIO for "remote processor" / AMP use
> > > cases. It can however also be used for virtualisation scenarios,
> > > e.g. when using KVM to run Linux on both the host and the guests.
> > > This patch set adds a wrapper API to facilitate writing vhost
> > > drivers for such RPMsg-based solutions. The first use case is an
> > > audio DSP virtualisation project, currently under development, ready
> > > for review and submission, available at
> > > https://github.com/thesofproject/linux/pull/1501/commits
> >
> > Mathieu pointed me to your series. On my side I proposed the rpmsg_ns_msg
> > service[1], which does not match your implementation.
> > As I come in late, I hope that I did not miss something in the history...
> > Don't hesitate to point me to the discussions if that is the case.
> 
> Well, as you see, this is only v6 of this patch set, and apart from it there
> have been several side discussions and patch sets.
> 
> > Regarding your patch set, it is quite confusing for me. It seems that
> > you implement your own protocol on top of vhost, forked from the RPMsg one.
> > But it looks to me like it is not the RPMsg protocol.
> 
> I'm implementing a counterpart to the rpmsg protocol over VirtIO as initially
> implemented by drivers/rpmsg/virtio_rpmsg_bus.c for the "main CPU" (in case
> of remoteproc over VirtIO) or the guest side in case of Linux virtualisation.
> Since my implementation can talk to that driver, I don't think that I'm
> inventing a new protocol. I'm adding support for the same protocol for the
> opposite side of the VirtIO divide.

The main point I would like to highlight here relates to the use of the name
"RPMsg" more than to how you implement your IPC protocol.
If it is a counterpart, it probably does not respect the interface expected by
RPMsg clients.
A good way to answer this might be to answer the following question:
can the rpmsg sample client[4] be used on top of your vhost RPMsg
implementation?
If the answer is no, describing it as an RPMsg implementation could lead to
confusion...

[4] https://elixir.bootlin.com/linux/v5.9-rc5/source/samples/rpmsg/rpmsg_client_sample.c
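
For reference, an RPMsg client such as the sample driver binds to the rpmsg
bus and relies only on the generic rpmsg API, roughly as in the following
condensed sketch (adapted from [4], not verbatim):

#include <linux/module.h>
#include <linux/rpmsg.h>

static int rpmsg_sample_cb(struct rpmsg_device *rpdev, void *data, int len,
			   void *priv, u32 src)
{
	dev_info(&rpdev->dev, "received %d bytes from 0x%x\n", len, src);
	return 0;
}

static int rpmsg_sample_probe(struct rpmsg_device *rpdev)
{
	/* Send a message back to the announced service endpoint. */
	return rpmsg_send(rpdev->ept, "hello!", 6);
}

static struct rpmsg_device_id rpmsg_sample_id_table[] = {
	{ .name = "rpmsg-client-sample" },
	{ },
};
MODULE_DEVICE_TABLE(rpmsg, rpmsg_sample_id_table);

static struct rpmsg_driver rpmsg_sample_driver = {
	.drv.name	= KBUILD_MODNAME,
	.id_table	= rpmsg_sample_id_table,
	.probe		= rpmsg_sample_probe,
	.callback	= rpmsg_sample_cb,
};
module_rpmsg_driver(rpmsg_sample_driver);
MODULE_LICENSE("GPL v2");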

Regards,
Arnaud

> 
> > So I would agree with Vincent[2], who proposed to switch to an
> > RPMsg API and create a vhost rpmsg device. This is also proposed in
> > the "Enhance VHOST to enable SoC-to-SoC communication" RFC[3].
> > Do you think this alternative could match your need?
> 
> As I replied to Vincent, I understand his proposal and the approach taken in
> the series [3], but I'm not sure I agree that adding yet another virtual
> device / driver layer on the vhost side is a good idea. As far as I
> understand, adding new completely virtual devices isn't considered to be a
> good practice in the kernel. Currently vhost is just a passive "library"
> and my vhost-rpmsg support keeps it that way. Not sure I'm in favour of
> converting vhost to a virtual device infrastructure.
> 
> Thanks for pointing me at [3], I should have a better look at it.
> 
> Thanks
> Guennadi
> 
> > [1]. https://patchwork.kernel.org/project/linux-remoteproc/list/?series=338335
> > [2]. https://www.spinics.net/lists/linux-virtualization/msg44195.html
> > [3]. https://www.spinics.net/lists/linux-remoteproc/msg06634.html
> >
> > Thanks,
> > Arnaud
> >
> > >
> > > Thanks
> > > Guennadi
> > >
> > > Guennadi Liakhovetski (4):
> > >   vhost: convert VHOST_VSOCK_SET_RUNNING to a generic ioctl
> > >   rpmsg: move common structures and defines to headers
> > >   rpmsg: update documentation
> > >   vhost: add an RPMsg API
> > >
> > >  Documentation/rpmsg.txt  |   6 +-
> > >  

RE: [EXTERNAL] Re: [PATCH RFC v1 08/18] x86/hyperv: handling hypercall page setup for root

2020-09-17 Thread Vitaly Kuznetsov
Sunil Muthuswamy  writes:

>> 
>> On Tue, Sep 15, 2020 at 12:32:29PM +0200, Vitaly Kuznetsov wrote:
>> > Wei Liu  writes:
>> >
>> > > When Linux is running as the root partition, the hypercall page will
>> > > have already been setup by Hyper-V. Copy the content over to the
>> > > allocated page.
>> >
>> > And we can't set up a new hypercall page by writing something different
>> > to HV_X64_MSR_HYPERCALL, right?
>> >
>> 
>> My understanding is that we can't, but Sunil can maybe correct me.
>
> That is correct. For the root partition, the hypervisor has already allocated
> the hypercall page. The root is required to query the page, map it in its
> address space and wrmsr to enable it. It cannot change the location of the
> page. A guest, on the other hand, can allocate and assign the hypercall page.
> This is covered a bit in the hypervisor TLFS (section 3.13 in TLFS v6) for
> the guest side. The root side is not covered there yet.

Ok, so it is guaranteed that the root partition doesn't have this page in
its address space yet, otherwise it could've been used for something
else (in case it's just normal memory from its PoV).

Please add a comment about this as it is not really obvious.
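
For reference, the sequence being described (read the MSR to find the
hypervisor-allocated page, map it, then set the enable bit) would look roughly
like the sketch below; this is an illustration of the TLFS flow, not the code
from the patch.

#include <linux/io.h>
#include <asm/hyperv-tlfs.h>
#include <asm/msr.h>

/* Root partition: the hypercall page already exists; discover and enable it
 * rather than allocating a fresh one as a guest would. */
static void *hv_root_map_hypercall_page(void)
{
	union hv_x64_msr_hypercall_contents hypercall_msr;
	void *pg;

	rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);

	/* The GPA of the pre-allocated page is reported in the MSR. */
	pg = memremap(hypercall_msr.guest_physical_address << HV_HYP_PAGE_SHIFT,
		      HV_HYP_PAGE_SIZE, MEMREMAP_WB);
	if (!pg)
		return NULL;

	/* Enable, keeping the location the hypervisor chose. */
	hypercall_msr.enable = 1;
	wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);

	return pg;
}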

Thanks,

-- 
Vitaly



Re: [PATCH v6 0/4] Add a vhost RPMsg API

2020-09-17 Thread Guennadi Liakhovetski
Hi Vincent,

On Thu, Sep 17, 2020 at 10:36:44AM +0200, Vincent Whitchurch wrote:
> On Thu, Sep 17, 2020 at 07:47:06AM +0200, Guennadi Liakhovetski wrote:
> > On Tue, Sep 15, 2020 at 02:13:23PM +0200, Arnaud POULIQUEN wrote:
> > > So I would agree with Vincent[2], who proposed to switch to an RPMsg API
> > > and create a vhost rpmsg device. This is also proposed in the
> > > "Enhance VHOST to enable SoC-to-SoC communication" RFC[3].
> > > Do you think this alternative could match your need?
> > 
> > As I replied to Vincent, I understand his proposal and the approach taken
> > in the series [3], but I'm not sure I agree that adding yet another
> > virtual device / driver layer on the vhost side is a good idea. As far as
> > I understand, adding new completely virtual devices isn't considered to be
> > a good practice in the kernel. Currently vhost is just a passive "library"
> > and my vhost-rpmsg support keeps it that way. Not sure I'm in favour of
> > converting vhost to a virtual device infrastructure.
> 
> I know it wasn't what you meant, but I noticed that the above paragraph
> could be read as if my suggestion was to convert vhost to a virtual
> device infrastructure, so I just want to clarify that those are not
> related.  The only similarity between what I suggested in the thread in
> [2] and Kishon's RFC in [3] is that both involve creating a generic
> vhost-rpmsg driver which would allow the RPMsg API to be used for both
> sides of the link, instead of introducing a new API just for the server
> side.  That can be done without rewriting drivers/vhost/.

Thanks for the clarification. Another flexibility that I'm trying to preserve
with my approach is keeping direct access to iovec-style data buffers for
cases where that's the structure already used by the respective driver on the
host side. Since we already do packing and unpacking on the guest / client
side, we don't need the same on the host / server side again.

Thanks
Guennadi

> > > [1]. 
> > > https://patchwork.kernel.org/project/linux-remoteproc/list/?series=338335 
> > > [2]. https://www.spinics.net/lists/linux-virtualization/msg44195.html
> > > [3]. https://www.spinics.net/lists/linux-remoteproc/msg06634.html  


Re: [PATCH v7 3/3] vhost: add an RPMsg API

2020-09-17 Thread Vincent Whitchurch
On Thu, Sep 10, 2020 at 01:13:51PM +0200, Guennadi Liakhovetski wrote:
> +int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> +			   unsigned int qid, ssize_t len)
> + __acquires(vq->mutex)
> +{
> + struct vhost_virtqueue *vq = vr->vq + qid;
> + unsigned int cnt;
> + ssize_t ret;
> + size_t tmp;
> +
> + if (qid >= VIRTIO_RPMSG_NUM_OF_VQS)
> + return -EINVAL;
> +
> + iter->vq = vq;
> +
> + mutex_lock(&vq->mutex);
> + vhost_disable_notify(&vr->dev, vq);
> +
> + iter->head = vhost_rpmsg_get_msg(vq, &cnt);
> + if (iter->head == vq->num)
> + iter->head = -EAGAIN;
> +
> + if (iter->head < 0) {
> + ret = iter->head;
> + goto unlock;
> + }
> +
[...]
> +
> +return_buf:
> + vhost_add_used(vq, iter->head, 0);
> +unlock:
> + vhost_enable_notify(&vr->dev, vq);
> + mutex_unlock(&vq->mutex);
> +
> + return ret;
> +}

There is a race condition here.  New buffers could have been added while
notifications were disabled (between vhost_disable_notify() and
vhost_enable_notify()), so the other vhost drivers check the return
value of vhost_enable_notify() and rerun their work loops if it returns
true.  This driver doesn't do that so it stops processing requests if
that condition hits.

Something like the below seems to fix it but the correct fix could maybe
involve changing this API to account for this case so that it looks more
like the code in other vhost drivers.

diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
index 7c753258d42..673dd4ec865 100644
--- a/drivers/vhost/rpmsg.c
+++ b/drivers/vhost/rpmsg.c
@@ -302,8 +302,14 @@ static void handle_rpmsg_req_kick(struct vhost_work *work)
struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue,
  poll.work);
struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
+   struct vhost_virtqueue *reqvq = vr->vq + VIRTIO_RPMSG_REQUEST;
 
-   while (handle_rpmsg_req_single(vr, vq))
+   /*
+    * The !vhost_vq_avail_empty() check is needed since the vhost_rpmsg*
+    * APIs don't check the return value of vhost_enable_notify() and retry
+    * if there were buffers added while notifications were disabled.
+    */
+   while (handle_rpmsg_req_single(vr, vq) ||
+          !vhost_vq_avail_empty(reqvq->dev, reqvq))
;
 }
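
For comparison, the pattern Vincent refers to in existing vhost drivers (e.g.
the handle_tx()/handle_rx() loops in drivers/vhost/net.c) re-arms
notifications and retries; a condensed sketch of that canonical loop, not a
proposed patch:

/* Canonical vhost worker loop ("vdev" is the driver's struct vhost_dev):
 * when no descriptor is available, re-enable guest notifications, but keep
 * polling if vhost_enable_notify() reports that buffers arrived meanwhile. */
static void handle_kick_sketch(struct vhost_dev *vdev, struct vhost_virtqueue *vq)
{
	unsigned int out, in;
	int head;

	for (;;) {
		head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
					 &out, &in, NULL, NULL);
		if (head < 0)
			break;
		if (head == vq->num) {
			if (unlikely(vhost_enable_notify(vdev, vq))) {
				/* Raced with the guest: disable and retry. */
				vhost_disable_notify(vdev, vq);
				continue;
			}
			break;	/* really empty, wait for the next kick */
		}
		/* ... process the descriptor chain starting at "head" ... */
		vhost_add_used_and_signal(vdev, vq, head, 0);
	}
}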
 


Re: [PATCH v6 0/4] Add a vhost RPMsg API

2020-09-17 Thread Vincent Whitchurch
On Thu, Sep 17, 2020 at 07:47:06AM +0200, Guennadi Liakhovetski wrote:
> On Tue, Sep 15, 2020 at 02:13:23PM +0200, Arnaud POULIQUEN wrote:
> > So I would agree with Vincent[2], who proposed to switch to an RPMsg API
> > and create a vhost rpmsg device. This is also proposed in the
> > "Enhance VHOST to enable SoC-to-SoC communication" RFC[3].
> > Do you think this alternative could match your need?
> 
> As I replied to Vincent, I understand his proposal and the approach taken
> in the series [3], but I'm not sure I agree that adding yet another
> virtual device / driver layer on the vhost side is a good idea. As far as
> I understand, adding new completely virtual devices isn't considered to be
> a good practice in the kernel. Currently vhost is just a passive "library"
> and my vhost-rpmsg support keeps it that way. Not sure I'm in favour of
> converting vhost to a virtual device infrastructure.

I know it wasn't what you meant, but I noticed that the above paragraph
could be read as if my suggestion was to convert vhost to a virtual
device infrastructure, so I just want to clarify that those are not
related.  The only similarity between what I suggested in the thread in
[2] and Kishon's RFC in [3] is that both involve creating a generic
vhost-rpmsg driver which would allow the RPMsg API to be used for both
sides of the link, instead of introducing a new API just for the server
side.  That can be done without rewriting drivers/vhost/.

> > [1]. 
> > https://patchwork.kernel.org/project/linux-remoteproc/list/?series=338335 
> > [2]. https://www.spinics.net/lists/linux-virtualization/msg44195.html
> > [3]. https://www.spinics.net/lists/linux-remoteproc/msg06634.html  