Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-09-17 Thread Jason Wang



On 2020/9/16 7:47 PM, Kishon Vijay Abraham I wrote:

Hi Jason,

On 16/09/20 8:40 am, Jason Wang wrote:

On 2020/9/15 11:47 PM, Kishon Vijay Abraham I wrote:

Hi Jason,

On 15/09/20 1:48 pm, Jason Wang wrote:

Hi Kishon:

On 2020/9/14 3:23 PM, Kishon Vijay Abraham I wrote:

Then you need something that is functional equivalent to virtio PCI
which is actually the concept of vDPA (e.g vDPA provides
alternatives if
the queue_sel is hard in the EP implementation).

Okay, I just tried to compare 'struct vdpa_config_ops' and 'struct
vhost_config_ops' (introduced in [RFC PATCH 03/22] vhost: Add ops for
the VHOST driver to configure VHOST device).

struct vdpa_config_ops {
    /* Virtqueue ops */
    int (*set_vq_address)(struct vdpa_device *vdev,
                          u16 idx, u64 desc_area, u64 driver_area,
                          u64 device_area);
    void (*set_vq_num)(struct vdpa_device *vdev, u16 idx, u32 num);
    void (*kick_vq)(struct vdpa_device *vdev, u16 idx);
    void (*set_vq_cb)(struct vdpa_device *vdev, u16 idx,
                      struct vdpa_callback *cb);
    void (*set_vq_ready)(struct vdpa_device *vdev, u16 idx, bool ready);
    bool (*get_vq_ready)(struct vdpa_device *vdev, u16 idx);
    int (*set_vq_state)(struct vdpa_device *vdev, u16 idx,
                        const struct vdpa_vq_state *state);
    int (*get_vq_state)(struct vdpa_device *vdev, u16 idx,
                        struct vdpa_vq_state *state);
    struct vdpa_notification_area
    (*get_vq_notification)(struct vdpa_device *vdev, u16 idx);
    /* vq irq is not expected to be changed once DRIVER_OK is set */
    int (*get_vq_irq)(struct vdpa_device *vdev, u16 idx);

    /* Device ops */
    u32 (*get_vq_align)(struct vdpa_device *vdev);
    u64 (*get_features)(struct vdpa_device *vdev);
    int (*set_features)(struct vdpa_device *vdev, u64 features);
    void (*set_config_cb)(struct vdpa_device *vdev,
                          struct vdpa_callback *cb);
    u16 (*get_vq_num_max)(struct vdpa_device *vdev);
    u32 (*get_device_id)(struct vdpa_device *vdev);
    u32 (*get_vendor_id)(struct vdpa_device *vdev);
    u8 (*get_status)(struct vdpa_device *vdev);
    void (*set_status)(struct vdpa_device *vdev, u8 status);
    void (*get_config)(struct vdpa_device *vdev, unsigned int offset,
                       void *buf, unsigned int len);
    void (*set_config)(struct vdpa_device *vdev, unsigned int offset,
                       const void *buf, unsigned int len);
    u32 (*get_generation)(struct vdpa_device *vdev);

    /* DMA ops */
    int (*set_map)(struct vdpa_device *vdev, struct vhost_iotlb *iotlb);
    int (*dma_map)(struct vdpa_device *vdev, u64 iova, u64 size,
                   u64 pa, u32 perm);
    int (*dma_unmap)(struct vdpa_device *vdev, u64 iova, u64 size);

    /* Free device resources */
    void (*free)(struct vdpa_device *vdev);
};

+struct vhost_config_ops {
+    int (*create_vqs)(struct vhost_dev *vdev, unsigned int nvqs,
+                      unsigned int num_bufs, struct vhost_virtqueue *vqs[],
+                      vhost_vq_callback_t *callbacks[],
+                      const char * const names[]);
+    void (*del_vqs)(struct vhost_dev *vdev);
+    int (*write)(struct vhost_dev *vdev, u64 vhost_dst, void *src, int len);
+    int (*read)(struct vhost_dev *vdev, void *dst, u64 vhost_src, int len);
+    int (*set_features)(struct vhost_dev *vdev, u64 device_features);
+    int (*set_status)(struct vhost_dev *vdev, u8 status);
+    u8 (*get_status)(struct vhost_dev *vdev);
+};
+
struct virtio_config_ops

I think there's some overlap here, and some of the ops try to do the
same thing.

I think it differs in (*set_vq_address)() and (*create_vqs)().
[create_vqs() introduced in struct vhost_config_ops provides
complementary functionality to (*find_vqs)() in struct
virtio_config_ops. It seemingly encapsulates the functionality of
(*set_vq_address)(), (*set_vq_num)(), (*set_vq_cb)(), etc.]

Back to the difference between (*set_vq_address)() and (*create_vqs)():
set_vq_address() directly provides the virtqueue address to the vdpa
device, while create_vqs() only provides the parameters of the
virtqueue (like the number of virtqueues and number of buffers) but
does not directly provide the address. IMO the backend client drivers
(like net or vhost) shouldn't, and cannot, know by themselves how to
access the vring created on the virtio front end; the vdpa
device/vhost device should have the logic for that. That will let the
client drivers work with different types of vdpa/vhost devices and
access the vring created by virtio irrespective of whether the vring
is reached via MMIO, kernel space, or user space.
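The indirection described here can be sketched in a few lines of plain userspace C. All toy_* names below are invented stand-ins, not the kernel definitions from the patch set; the point is only that the client driver touches the vring store exclusively through the transport's read()/write() callbacks, so the same client works whether the backing is MMIO, kernel memory, or a userspace mapping.

```c
/* Toy model of the proposed ops-mediated vring access (simplified types). */
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct toy_vhost_dev;

struct toy_vhost_config_ops {
	int (*write)(struct toy_vhost_dev *vdev, uint64_t dst,
		     const void *src, int len);
	int (*read)(struct toy_vhost_dev *vdev, void *dst,
		    uint64_t src, int len);
};

struct toy_vhost_dev {
	const struct toy_vhost_config_ops *ops;
	uint8_t window[256];	/* stands in for a PCIe/NTB memory window */
};

/* Transport side: here the window is plain memory, but it could equally
 * be ioremap()ed MMIO -- the client driver cannot tell the difference. */
static int toy_write(struct toy_vhost_dev *vdev, uint64_t dst,
		     const void *src, int len)
{
	if (dst + len > sizeof(vdev->window))
		return -1;
	memcpy(vdev->window + dst, src, len);
	return 0;
}

static int toy_read(struct toy_vhost_dev *vdev, void *dst,
		    uint64_t src, int len)
{
	if (src + len > sizeof(vdev->window))
		return -1;
	memcpy(dst, vdev->window + src, len);
	return 0;
}

static const struct toy_vhost_config_ops toy_ops = { toy_write, toy_read };

/* Client-driver side: only ever sees the ops table, never the window. */
static int client_roundtrip(struct toy_vhost_dev *vdev)
{
	uint32_t out = 0xabcd1234, in = 0;

	if (vdev->ops->write(vdev, 16, &out, sizeof(out)))
		return -1;
	if (vdev->ops->read(vdev, &in, 16, sizeof(in)))
		return -1;
	return in == out ? 0 : -1;
}
```

Swapping in a different ops table (say, one that issues DMA over NTB) requires no change to the client code, which is the portability argument being made above.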

I think vDPA always works with client drivers in userspace, providing
a userspace address for the vring.

Sorry for being unclear. What I meant is not replacing vDPA with the
vhost(bus) you proposed but the possibility of replacing virtio-pci-epf
with vDPA in:

Okay, so the virtio back end still uses vhost and the front end should
use vDPA. I see. So the host-side PCI driver for EPF should populate
vdpa_config_ops and invoke vdpa_register_device().

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-09-15 Thread Jason Wang



On 2020/9/15 11:47 PM, Kishon Vijay Abraham I wrote:

Hi Jason,

On 15/09/20 1:48 pm, Jason Wang wrote:

Hi Kishon:

On 2020/9/14 3:23 PM, Kishon Vijay Abraham I wrote:

Then you need something that is functional equivalent to virtio PCI
which is actually the concept of vDPA (e.g vDPA provides alternatives if
the queue_sel is hard in the EP implementation).

Okay, I just tried to compare 'struct vdpa_config_ops' and 'struct
vhost_config_ops' (introduced in [RFC PATCH 03/22] vhost: Add ops for
the VHOST driver to configure VHOST device).

struct vdpa_config_ops {
 /* Virtqueue ops */
 int (*set_vq_address)(struct vdpa_device *vdev,
   u16 idx, u64 desc_area, u64 driver_area,
   u64 device_area);
 void (*set_vq_num)(struct vdpa_device *vdev, u16 idx, u32 num);
 void (*kick_vq)(struct vdpa_device *vdev, u16 idx);
 void (*set_vq_cb)(struct vdpa_device *vdev, u16 idx,
   struct vdpa_callback *cb);
 void (*set_vq_ready)(struct vdpa_device *vdev, u16 idx, bool ready);
 bool (*get_vq_ready)(struct vdpa_device *vdev, u16 idx);
 int (*set_vq_state)(struct vdpa_device *vdev, u16 idx,
     const struct vdpa_vq_state *state);
 int (*get_vq_state)(struct vdpa_device *vdev, u16 idx,
     struct vdpa_vq_state *state);
 struct vdpa_notification_area
 (*get_vq_notification)(struct vdpa_device *vdev, u16 idx);
 /* vq irq is not expected to be changed once DRIVER_OK is set */
 int (*get_vq_irq)(struct vdpa_device *vdev, u16 idx);

 /* Device ops */
 u32 (*get_vq_align)(struct vdpa_device *vdev);
 u64 (*get_features)(struct vdpa_device *vdev);
 int (*set_features)(struct vdpa_device *vdev, u64 features);
 void (*set_config_cb)(struct vdpa_device *vdev,
   struct vdpa_callback *cb);
 u16 (*get_vq_num_max)(struct vdpa_device *vdev);
 u32 (*get_device_id)(struct vdpa_device *vdev);
 u32 (*get_vendor_id)(struct vdpa_device *vdev);
 u8 (*get_status)(struct vdpa_device *vdev);
 void (*set_status)(struct vdpa_device *vdev, u8 status);
 void (*get_config)(struct vdpa_device *vdev, unsigned int offset,
    void *buf, unsigned int len);
 void (*set_config)(struct vdpa_device *vdev, unsigned int offset,
    const void *buf, unsigned int len);
 u32 (*get_generation)(struct vdpa_device *vdev);

 /* DMA ops */
 int (*set_map)(struct vdpa_device *vdev, struct vhost_iotlb *iotlb);
 int (*dma_map)(struct vdpa_device *vdev, u64 iova, u64 size,
    u64 pa, u32 perm);
 int (*dma_unmap)(struct vdpa_device *vdev, u64 iova, u64 size);

 /* Free device resources */
 void (*free)(struct vdpa_device *vdev);
};

+struct vhost_config_ops {
+    int (*create_vqs)(struct vhost_dev *vdev, unsigned int nvqs,
+  unsigned int num_bufs, struct vhost_virtqueue *vqs[],
+  vhost_vq_callback_t *callbacks[],
+  const char * const names[]);
+    void (*del_vqs)(struct vhost_dev *vdev);
+    int (*write)(struct vhost_dev *vdev, u64 vhost_dst, void *src, int len);
+    int (*read)(struct vhost_dev *vdev, void *dst, u64 vhost_src, int len);
+    int (*set_features)(struct vhost_dev *vdev, u64 device_features);
+    int (*set_status)(struct vhost_dev *vdev, u8 status);
+    u8 (*get_status)(struct vhost_dev *vdev);
+};
+
struct virtio_config_ops

I think there's some overlap here, and some of the ops try to do the
same thing.

I think it differs in (*set_vq_address)() and (*create_vqs)().
[create_vqs() introduced in struct vhost_config_ops provides
complementary functionality to (*find_vqs)() in struct
virtio_config_ops. It seemingly encapsulates the functionality of
(*set_vq_address)(), (*set_vq_num)(), (*set_vq_cb)(), etc.]

Back to the difference between (*set_vq_address)() and (*create_vqs)():
set_vq_address() directly provides the virtqueue address to the vdpa
device, while create_vqs() only provides the parameters of the
virtqueue (like the number of virtqueues and number of buffers) but
does not directly provide the address. IMO the backend client drivers
(like net or vhost) shouldn't, and cannot, know by themselves how to
access the vring created on the virtio front end; the vdpa
device/vhost device should have the logic for that. That will let the
client drivers work with different types of vdpa/vhost devices and
access the vring created by virtio irrespective of whether the vring
is reached via MMIO, kernel space, or user space.

I think vDPA always works with client drivers in userspace, providing
a userspace address for the vring.


Sorry for being unclear. What I meant is not replacing vDPA with the
vhost(bus) you proposed but the possibility of replacing virtio-pci-epf
with vDPA in:

Okay, so the virtio back end still uses vhost and the front end should
use vDPA. I see. So the host-side PCI driver for EPF should populate
vdpa_config_ops and invoke vdpa_register_device().



Yes.
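The flow agreed on here can be illustrated with a stubbed stand-in for the vDPA core. The real API (e.g. vdpa_alloc_device()/vdpa_register_device() in the kernel) takes more arguments and does far more; all toy_*/epf_* names below are invented for illustration of the shape of the flow only: the host-side EPF driver fills in an ops table and hands the device to the core.

```c
/* Toy sketch: host-side "EPF" driver registering with a stubbed vDPA core. */
#include <assert.h>
#include <stdint.h>

struct toy_vdpa_device;

struct toy_vdpa_config_ops {
	int (*set_vq_address)(struct toy_vdpa_device *vdev, uint16_t idx,
			      uint64_t desc, uint64_t driver, uint64_t device);
	uint32_t (*get_device_id)(struct toy_vdpa_device *vdev);
};

struct toy_vdpa_device {
	const struct toy_vdpa_config_ops *ops;
};

/* Stub "core": just remembers the last registered device. */
static struct toy_vdpa_device *registered;

static int toy_vdpa_register_device(struct toy_vdpa_device *vdev)
{
	/* The core needs some way to program the virtqueue address. */
	if (!vdev->ops || !vdev->ops->set_vq_address)
		return -1;
	registered = vdev;
	return 0;
}

/* EPF host-driver side.  A real driver would write the addresses into
 * BAR registers that the endpoint polls; we just record one of them. */
static uint64_t programmed_desc;

static int epf_set_vq_address(struct toy_vdpa_device *vdev, uint16_t idx,
			      uint64_t desc, uint64_t driver, uint64_t device)
{
	(void)vdev; (void)idx; (void)driver; (void)device;
	programmed_desc = desc;
	return 0;
}

static uint32_t epf_get_device_id(struct toy_vdpa_device *vdev)
{
	(void)vdev;
	return 7;	/* 7 is VIRTIO_ID_RPMSG in the virtio ID space */
}

static const struct toy_vdpa_config_ops epf_ops = {
	.set_vq_address	= epf_set_vq_address,
	.get_device_id	= epf_get_device_id,
};
```

Once registered, the generic virtio/vDPA stack on the host drives the endpoint purely through these callbacks, which is what makes virtio-pci-epf replaceable by a vDPA driver.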



My qu

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-09-15 Thread Kishon Vijay Abraham I
Hi Jason,

On 15/09/20 1:48 pm, Jason Wang wrote:
> Hi Kishon:
> 
> On 2020/9/14 3:23 PM, Kishon Vijay Abraham I wrote:
>>> Then you need something that is functional equivalent to virtio PCI
>>> which is actually the concept of vDPA (e.g vDPA provides alternatives if
>>> the queue_sel is hard in the EP implementation).
>> Okay, I just tried to compare 'struct vdpa_config_ops' and 'struct
>> vhost_config_ops' (introduced in [RFC PATCH 03/22] vhost: Add ops for
>> the VHOST driver to configure VHOST device).
>>
>> struct vdpa_config_ops {
>> /* Virtqueue ops */
>> int (*set_vq_address)(struct vdpa_device *vdev,
>>   u16 idx, u64 desc_area, u64 driver_area,
>>   u64 device_area);
>> void (*set_vq_num)(struct vdpa_device *vdev, u16 idx, u32 num);
>> void (*kick_vq)(struct vdpa_device *vdev, u16 idx);
>> void (*set_vq_cb)(struct vdpa_device *vdev, u16 idx,
>>   struct vdpa_callback *cb);
>> void (*set_vq_ready)(struct vdpa_device *vdev, u16 idx, bool ready);
>> bool (*get_vq_ready)(struct vdpa_device *vdev, u16 idx);
>> int (*set_vq_state)(struct vdpa_device *vdev, u16 idx,
>>     const struct vdpa_vq_state *state);
>> int (*get_vq_state)(struct vdpa_device *vdev, u16 idx,
>>     struct vdpa_vq_state *state);
>> struct vdpa_notification_area
>> (*get_vq_notification)(struct vdpa_device *vdev, u16 idx);
>> /* vq irq is not expected to be changed once DRIVER_OK is set */
>> int (*get_vq_irq)(struct vdpa_device *vdev, u16 idx);
>>
>> /* Device ops */
>> u32 (*get_vq_align)(struct vdpa_device *vdev);
>> u64 (*get_features)(struct vdpa_device *vdev);
>> int (*set_features)(struct vdpa_device *vdev, u64 features);
>> void (*set_config_cb)(struct vdpa_device *vdev,
>>   struct vdpa_callback *cb);
>> u16 (*get_vq_num_max)(struct vdpa_device *vdev);
>> u32 (*get_device_id)(struct vdpa_device *vdev);
>> u32 (*get_vendor_id)(struct vdpa_device *vdev);
>> u8 (*get_status)(struct vdpa_device *vdev);
>> void (*set_status)(struct vdpa_device *vdev, u8 status);
>> void (*get_config)(struct vdpa_device *vdev, unsigned int offset,
>>    void *buf, unsigned int len);
>> void (*set_config)(struct vdpa_device *vdev, unsigned int offset,
>>    const void *buf, unsigned int len);
>> u32 (*get_generation)(struct vdpa_device *vdev);
>>
>> /* DMA ops */
>> int (*set_map)(struct vdpa_device *vdev, struct vhost_iotlb *iotlb);
>> int (*dma_map)(struct vdpa_device *vdev, u64 iova, u64 size,
>>    u64 pa, u32 perm);
>> int (*dma_unmap)(struct vdpa_device *vdev, u64 iova, u64 size);
>>
>> /* Free device resources */
>> void (*free)(struct vdpa_device *vdev);
>> };
>>
>> +struct vhost_config_ops {
>> +    int (*create_vqs)(struct vhost_dev *vdev, unsigned int nvqs,
>> +  unsigned int num_bufs, struct vhost_virtqueue *vqs[],
>> +  vhost_vq_callback_t *callbacks[],
>> +  const char * const names[]);
>> +    void (*del_vqs)(struct vhost_dev *vdev);
>> +    int (*write)(struct vhost_dev *vdev, u64 vhost_dst, void *src, int len);
>> +    int (*read)(struct vhost_dev *vdev, void *dst, u64 vhost_src, int len);
>> +    int (*set_features)(struct vhost_dev *vdev, u64 device_features);
>> +    int (*set_status)(struct vhost_dev *vdev, u8 status);
>> +    u8 (*get_status)(struct vhost_dev *vdev);
>> +};
>> +
>> struct virtio_config_ops
>> I think there's some overlap here, and some of the ops try to do the
>> same thing.
>>
>> I think it differs in (*set_vq_address)() and (*create_vqs)().
>> [create_vqs() introduced in struct vhost_config_ops provides
>> complementary functionality to (*find_vqs)() in struct
>> virtio_config_ops. It seemingly encapsulates the functionality of
>> (*set_vq_address)(), (*set_vq_num)(), (*set_vq_cb)(),..].
>>
>> Back to the difference between (*set_vq_address)() and (*create_vqs)():
>> set_vq_address() directly provides the virtqueue address to the vdpa
>> device, while create_vqs() only provides the parameters of the
>> virtqueue (like the number of virtqueues and number of buffers) but
>> does not directly provide the address. IMO the backend client drivers
>> (like net or vhost) shouldn't, and cannot, know by themselves how to
>> access the vring created on the virtio front end; the vdpa
>> device/vhost device should have the logic for that. That will let the
>> client drivers work with different types of vdpa/vhost devices and
>> access the vring created by virtio irrespective of whether the vring
>> is reached via MMIO, kernel space, or user space.
>>
>> I think vDPA always works with client drivers in userspace, providing
>> a userspace address for the vring.
> 
> 
> Sorry for being unclear. What I meant is not replacing vDPA with the
> vhost(bus) you proposed but the possibility of replacing virtio-pci-epf
> with vDPA in:

Okay, so the virtio back end still uses vhost and the front end should
use vDPA. I see. So the host-side PCI driver for EPF should populate
vdpa_config_ops and invoke vdpa_register_device().

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-09-15 Thread Jason Wang

Hi Kishon:

On 2020/9/14 3:23 PM, Kishon Vijay Abraham I wrote:

Then you need something that is functional equivalent to virtio PCI
which is actually the concept of vDPA (e.g vDPA provides alternatives if
the queue_sel is hard in the EP implementation).

Okay, I just tried to compare 'struct vdpa_config_ops' and 'struct
vhost_config_ops' (introduced in [RFC PATCH 03/22] vhost: Add ops for
the VHOST driver to configure VHOST device).

struct vdpa_config_ops {
/* Virtqueue ops */
int (*set_vq_address)(struct vdpa_device *vdev,
  u16 idx, u64 desc_area, u64 driver_area,
  u64 device_area);
void (*set_vq_num)(struct vdpa_device *vdev, u16 idx, u32 num);
void (*kick_vq)(struct vdpa_device *vdev, u16 idx);
void (*set_vq_cb)(struct vdpa_device *vdev, u16 idx,
  struct vdpa_callback *cb);
void (*set_vq_ready)(struct vdpa_device *vdev, u16 idx, bool ready);
bool (*get_vq_ready)(struct vdpa_device *vdev, u16 idx);
int (*set_vq_state)(struct vdpa_device *vdev, u16 idx,
const struct vdpa_vq_state *state);
int (*get_vq_state)(struct vdpa_device *vdev, u16 idx,
struct vdpa_vq_state *state);
struct vdpa_notification_area
(*get_vq_notification)(struct vdpa_device *vdev, u16 idx);
/* vq irq is not expected to be changed once DRIVER_OK is set */
int (*get_vq_irq)(struct vdpa_device *vdev, u16 idx);

/* Device ops */
u32 (*get_vq_align)(struct vdpa_device *vdev);
u64 (*get_features)(struct vdpa_device *vdev);
int (*set_features)(struct vdpa_device *vdev, u64 features);
void (*set_config_cb)(struct vdpa_device *vdev,
  struct vdpa_callback *cb);
u16 (*get_vq_num_max)(struct vdpa_device *vdev);
u32 (*get_device_id)(struct vdpa_device *vdev);
u32 (*get_vendor_id)(struct vdpa_device *vdev);
u8 (*get_status)(struct vdpa_device *vdev);
void (*set_status)(struct vdpa_device *vdev, u8 status);
void (*get_config)(struct vdpa_device *vdev, unsigned int offset,
   void *buf, unsigned int len);
void (*set_config)(struct vdpa_device *vdev, unsigned int offset,
   const void *buf, unsigned int len);
u32 (*get_generation)(struct vdpa_device *vdev);

/* DMA ops */
int (*set_map)(struct vdpa_device *vdev, struct vhost_iotlb *iotlb);
int (*dma_map)(struct vdpa_device *vdev, u64 iova, u64 size,
   u64 pa, u32 perm);
int (*dma_unmap)(struct vdpa_device *vdev, u64 iova, u64 size);

/* Free device resources */
void (*free)(struct vdpa_device *vdev);
};

+struct vhost_config_ops {
+   int (*create_vqs)(struct vhost_dev *vdev, unsigned int nvqs,
+ unsigned int num_bufs, struct vhost_virtqueue *vqs[],
+ vhost_vq_callback_t *callbacks[],
+ const char * const names[]);
+   void (*del_vqs)(struct vhost_dev *vdev);
+   int (*write)(struct vhost_dev *vdev, u64 vhost_dst, void *src, int len);
+   int (*read)(struct vhost_dev *vdev, void *dst, u64 vhost_src, int len);
+   int (*set_features)(struct vhost_dev *vdev, u64 device_features);
+   int (*set_status)(struct vhost_dev *vdev, u8 status);
+   u8 (*get_status)(struct vhost_dev *vdev);
+};
+
struct virtio_config_ops
I think there's some overlap here, and some of the ops try to do the
same thing.

I think it differs in (*set_vq_address)() and (*create_vqs)().
[create_vqs() introduced in struct vhost_config_ops provides
complementary functionality to (*find_vqs)() in struct
virtio_config_ops. It seemingly encapsulates the functionality of
(*set_vq_address)(), (*set_vq_num)(), (*set_vq_cb)(), etc.]

Back to the difference between (*set_vq_address)() and (*create_vqs)():
set_vq_address() directly provides the virtqueue address to the vdpa
device, while create_vqs() only provides the parameters of the
virtqueue (like the number of virtqueues and number of buffers) but
does not directly provide the address. IMO the backend client drivers
(like net or vhost) shouldn't, and cannot, know by themselves how to
access the vring created on the virtio front end; the vdpa
device/vhost device should have the logic for that. That will let the
client drivers work with different types of vdpa/vhost devices and
access the vring created by virtio irrespective of whether the vring
is reached via MMIO, kernel space, or user space.

I think vDPA always works with client drivers in userspace, providing
a userspace address for the vring.



Sorry for being unclear. What I meant is not replacing vDPA with the 
vhost(bus) you proposed but the possibility of replacing virtio-pci-epf 
with vDPA in:


My question is basically for the part of virtio_pci_epf_sen

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-09-14 Thread Kishon Vijay Abraham I
Hi Jason,

On 01/09/20 2:20 pm, Jason Wang wrote:
> 
> On 2020/9/1 1:24 PM, Kishon Vijay Abraham I wrote:
>> Hi,
>>
>> On 28/08/20 4:04 pm, Cornelia Huck wrote:
>>> On Thu, 9 Jul 2020 14:26:53 +0800
>>> Jason Wang  wrote:
>>>
>>> [Let me note right at the beginning that I first noted this while
>>> listening to Kishon's talk at LPC on Wednesday. I might be very
>>> confused about the background here, so let me apologize beforehand for
>>> any confusion I might spread.]
>>>
 On 2020/7/8 9:13 PM, Kishon Vijay Abraham I wrote:
> Hi Jason,
>
> On 7/8/2020 4:52 PM, Jason Wang wrote:
>> On 2020/7/7 10:45 PM, Kishon Vijay Abraham I wrote:
>>> Hi Jason,
>>>
>>> On 7/7/2020 3:17 PM, Jason Wang wrote:
 On 2020/7/6 5:32 PM, Kishon Vijay Abraham I wrote:
> Hi Jason,
>
> On 7/3/2020 12:46 PM, Jason Wang wrote:
>> On 2020/7/2 9:35 PM, Kishon Vijay Abraham I wrote:
>>> Hi Jason,
>>>
>>> On 7/2/2020 3:40 PM, Jason Wang wrote:
 On 2020/7/2 5:51 PM, Michael S. Tsirkin wrote:
> On Thu, Jul 02, 2020 at 01:51:21PM +0530, Kishon Vijay
> Abraham I wrote:
>> This series enhances Linux Vhost support to enable SoC-to-SoC
>> communication over MMIO. This series enables rpmsg
>> communication between
>> two SoCs using both PCIe RC<->EP and HOST1-NTB-HOST2
>>
>> 1) Modify vhost to use standard Linux driver model
>> 2) Add support in vring to access virtqueue over MMIO
>> 3) Add vhost client driver for rpmsg
>> 4) Add PCIe RC driver (uses virtio) and PCIe EP driver
>> (uses vhost) for
>>      rpmsg communication between two SoCs connected to
>> each other
>> 5) Add NTB Virtio driver and NTB Vhost driver for rpmsg
>> communication
>>      between two SoCs connected via NTB
>> 6) Add configfs to configure the components
>>
>> UseCase 1:
>>
>>      VHOST RPMSG                     VIRTIO RPMSG
>>           +                               +
>>           |                               |
>>           |                               |
>>           |                               |
>>           |                               |
>> +---------v----------+         +----------v-----------+
>> |       Linux        |         |        Linux         |
>> |      Endpoint      |         |     Root Complex     |
>> |                    <--------->                      |
>> |                    |         |                      |
>> |        SOC1        |         |         SOC2         |
>> +--------------------+         +----------------------+
>>
>> UseCase 2:
>>
>>      VHOST RPMSG                      VIRTIO RPMSG
>>           +                                +
>>           |                                |
>>           |                                |
>>           |                                |
>>           |                                |
>>    +------v------+                  +------v------+
>>    |             |                  |             |
>>    |    HOST1    |                  |    HOST2    |
>>    |             |                  |             |
>>    +------^------+                  +------^------+
>>           |                                |
>>           |                                |
>> +---------+--------------------------------+----------+
>> |  +------v------+                  +------v------+    |
>> |  |             |                  |             |    |
>> |  |     EP      |                  |     EP      |    |
>> |  | CONTROLLER1 |                  | CONTROLLER2 |    |
>> |  |             <------------------>             |    |
>> |  |             |                  |             |    |
>> |  |             |                  |             |    |
>> |  |             |  SoC With Multiple EP Instances     |
>> |  |             |  (Configured using NTB Function)    |
>> |  +-------------+                  +-------------+    |
>> +------------------------------------------------------+
>>
>>>
>>> First of all, to clarify the terminology:
>>> Is "vhost rpmsg" acting as what the virtio standard calls the 'device',
>>> and "virtio rpmsg" as the 'driver'? Or is the "vhost" part mostly just
>>
>> Right, vhost_rpmsg is 'device' and virtio_rpmsg is 'driver'.
>>> virtqueues + the existing vhost interfaces?
>>
>> It's implemented to provide the full 'device' functionality.
>>>
>>
>> Software Layering:
>>
>> The high-level SW layering should look something like
>> below. This series
>> adds support only for RPMSG VHOST, however something
>> similar should be
>> done for net and scsi. With that any vhost device (PCI,
>> NTB, Platform
>> device, user) can use any of the vhost client drivers.

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-09-09 Thread Jason Wang



On 2020/9/9 12:37 AM, Cornelia Huck wrote:

Then you need something that is functional equivalent to virtio PCI
which is actually the concept of vDPA (e.g vDPA provides alternatives if
the queue_sel is hard in the EP implementation).

It seems I really need to read up on vDPA more... do you have a pointer
for diving into this alternatives aspect?



See vdpa_config_ops in include/linux/vdpa.h

Especially this part:

    int (*set_vq_address)(struct vdpa_device *vdev,
              u16 idx, u64 desc_area, u64 driver_area,
              u64 device_area);

This means that for devices (e.g. an endpoint device) where the
virtio-pci layout is hard to implement, the driver can use any other
register layout or vendor-specific way to configure the virtqueue.
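For instance, a device that cannot implement the virtio-pci queue_sel handshake can expose one register slot per queue and let set_vq_address() write that slot directly. A simplified userspace sketch follows; the register layout and all names here are invented for illustration, not taken from any real endpoint:

```c
/* Sketch: flat, vendor-specific per-queue "registers" instead of the
 * virtio-pci queue_sel + common-config dance. */
#include <assert.h>
#include <stdint.h>

#define TOY_MAX_VQS 4

/* One slot per virtqueue, e.g. laid out in a BAR. */
struct toy_vq_regs {
	uint64_t desc_area;
	uint64_t driver_area;
	uint64_t device_area;
};

static struct toy_vq_regs toy_bar[TOY_MAX_VQS];

/* No queue_sel register: the queue index selects the slot directly,
 * so programming one queue never races with programming another. */
static int toy_set_vq_address(uint16_t idx, uint64_t desc_area,
			      uint64_t driver_area, uint64_t device_area)
{
	if (idx >= TOY_MAX_VQS)
		return -1;
	toy_bar[idx].desc_area = desc_area;
	toy_bar[idx].driver_area = driver_area;
	toy_bar[idx].device_area = device_area;
	return 0;
}
```

This is exactly the freedom set_vq_address() gives a vDPA driver: the core hands it the three ring addresses and the driver maps them onto whatever register layout the hardware actually has.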






"Virtio Over NTB" should anyways be a new transport.

Does that make any sense?

Yeah, in the approach I used, the initial features are hard-coded in
vhost-rpmsg (inherent to rpmsg). But when we have to use an adapter
layer (vhost only for accessing the virtio ring, with virtio drivers
on both front end and back end), then based on the functionality
(e.g. rpmsg) the vhost should be configured with features (to be
presented to the virtio side), and that's why an additional layer or
APIs will be required.

A question here, if we go with vhost bus approach, does it mean the
virtio device can only be implemented in EP's userspace?

Can we maybe implement an alternative bus as well that would allow us
to support different virtio device implementations (in addition to the
vhost bus + userspace combination)?



That should be fine, but I'm not quite sure that implementing the
device in the kernel (kthread) is a good approach.


Thanks








Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-09-08 Thread Cornelia Huck
On Tue, 1 Sep 2020 16:50:03 +0800
Jason Wang  wrote:

> On 2020/9/1 1:24 PM, Kishon Vijay Abraham I wrote:
> > Hi,
> >
> > On 28/08/20 4:04 pm, Cornelia Huck wrote:  
> >> On Thu, 9 Jul 2020 14:26:53 +0800
> >> Jason Wang  wrote:
> >>
> >> [Let me note right at the beginning that I first noted this while
> >> listening to Kishon's talk at LPC on Wednesday. I might be very
> >> confused about the background here, so let me apologize beforehand for
> >> any confusion I might spread.]
> >>  
> >>> On 2020/7/8 9:13 PM, Kishon Vijay Abraham I wrote:  
>  Hi Jason,
> 
>  On 7/8/2020 4:52 PM, Jason Wang wrote:  
> > On 2020/7/7 10:45 PM, Kishon Vijay Abraham I wrote:  
> >> Hi Jason,
> >>
> >> On 7/7/2020 3:17 PM, Jason Wang wrote:  
> >>> On 2020/7/6 5:32 PM, Kishon Vijay Abraham I wrote:  
>  Hi Jason,
> 
>  On 7/3/2020 12:46 PM, Jason Wang wrote:  
> > On 2020/7/2 9:35 PM, Kishon Vijay Abraham I wrote:  
> >> Hi Jason,
> >>
> >> On 7/2/2020 3:40 PM, Jason Wang wrote:  
> >>> On 2020/7/2 5:51 PM, Michael S. Tsirkin wrote:  
>  On Thu, Jul 02, 2020 at 01:51:21PM +0530, Kishon Vijay 
>  Abraham I wrote:  
> > This series enhances Linux Vhost support to enable SoC-to-SoC
> > communication over MMIO. This series enables rpmsg 
> > communication between
> > two SoCs using both PCIe RC<->EP and HOST1-NTB-HOST2
> >
> > 1) Modify vhost to use standard Linux driver model
> > 2) Add support in vring to access virtqueue over MMIO
> > 3) Add vhost client driver for rpmsg
> > 4) Add PCIe RC driver (uses virtio) and PCIe EP driver 
> > (uses vhost) for
> >      rpmsg communication between two SoCs connected to 
> > each other
> > 5) Add NTB Virtio driver and NTB Vhost driver for rpmsg 
> > communication
> >      between two SoCs connected via NTB
> > 6) Add configfs to configure the components
> >
> > UseCase 1:
> >
> >      VHOST RPMSG                     VIRTIO RPMSG
> >           +                               +
> >           |                               |
> >           |                               |
> >           |                               |
> >           |                               |
> > +---------v----------+         +----------v-----------+
> > |       Linux        |         |        Linux         |
> > |      Endpoint      |         |     Root Complex     |
> > |                    <--------->                      |
> > |                    |         |                      |
> > |        SOC1        |         |         SOC2         |
> > +--------------------+         +----------------------+
> >
> > UseCase 2:
> >
> >      VHOST RPMSG                      VIRTIO RPMSG
> >           +                                +
> >           |                                |
> >           |                                |
> >           |                                |
> >           |                                |
> >    +------v------+                  +------v------+
> >    |             |                  |             |
> >    |    HOST1    |                  |    HOST2    |
> >    |             |                  |             |
> >    +------^------+                  +------^------+
> >           |                                |
> >           |                                |
> > +---------+--------------------------------+----------+
> > |  +------v------+                  +------v------+    |
> > |  |             |                  |             |    |
> > |  |     EP      |                  |     EP      |    |
> > |  | CONTROLLER1 |                  | CONTROLLER2 |    |
> > |  |             <------------------>             |    |
> > |  |             |                  |             |    |
> > |  |             |                  |             |    |
> > |  |             |  SoC With Multiple EP Instances     |
> > |  |             |  (Configured using NTB Function)    |
> > |  +-------------+                  +-------------+    |
> > +------------------------------------------------------+
> >  
> >  
> >>
> >> First of all, to clarify the terminology:
> >> Is "vhost rpmsg" acting as what the virtio standard calls the 'device',
> >> and "virtio rpmsg" as the 'driver'? Or is the "vhost" part mostly just  
> >
> > Right, vhost_rpmsg is 'device' and virtio_rpmsg is 'driver'.  
> >> virtqueues + the existing vhost interfaces?  
> >
> > It's implemented to provide the full 'device' functionality.  

Ok.

> >>  
> >
> > Software Layering:
> >
> > The high-level SW layering should look something like 
> > below. This series
> > adds support only for RPMSG VHOST, however something similar should
> > be done for net and scsi.

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-09-01 Thread Jason Wang



On 2020/9/1 1:24 PM, Kishon Vijay Abraham I wrote:

Hi,

On 28/08/20 4:04 pm, Cornelia Huck wrote:

On Thu, 9 Jul 2020 14:26:53 +0800
Jason Wang  wrote:

[Let me note right at the beginning that I first noted this while
listening to Kishon's talk at LPC on Wednesday. I might be very
confused about the background here, so let me apologize beforehand for
any confusion I might spread.]


On 2020/7/8 9:13 PM, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/8/2020 4:52 PM, Jason Wang wrote:

On 2020/7/7 10:45 PM, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/7/2020 3:17 PM, Jason Wang wrote:

On 2020/7/6 5:32 PM, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/3/2020 12:46 PM, Jason Wang wrote:

On 2020/7/2 9:35 PM, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/2/2020 3:40 PM, Jason Wang wrote:

On 2020/7/2 5:51 PM, Michael S. Tsirkin wrote:
On Thu, Jul 02, 2020 at 01:51:21PM +0530, Kishon Vijay 
Abraham I wrote:

This series enhances Linux Vhost support to enable SoC-to-SoC
communication over MMIO. This series enables rpmsg 
communication between

two SoCs using both PCIe RC<->EP and HOST1-NTB-HOST2

1) Modify vhost to use standard Linux driver model
2) Add support in vring to access virtqueue over MMIO
3) Add vhost client driver for rpmsg
4) Add PCIe RC driver (uses virtio) and PCIe EP driver 
(uses vhost) for
     rpmsg communication between two SoCs connected to 
each other
5) Add NTB Virtio driver and NTB Vhost driver for rpmsg 
communication

     between two SoCs connected via NTB
6) Add configfs to configure the components

UseCase 1:

     VHOST RPMSG                     VIRTIO RPMSG
          +                               +
          |                               |
          |                               |
          |                               |
          |                               |
+---------v----------+         +----------v-----------+
|       Linux        |         |        Linux         |
|      Endpoint      |         |     Root Complex     |
|                    <--------->                      |
|                    |         |                      |
|        SOC1        |         |         SOC2         |
+--------------------+         +----------------------+

UseCase 2:

     VHOST RPMSG                      VIRTIO RPMSG
          +                                +
          |                                |
          |                                |
          |                                |
          |                                |
   +------v------+                  +------v------+
   |             |                  |             |
   |    HOST1    |                  |    HOST2    |
   |             |                  |             |
   +------^------+                  +------^------+
          |                                |
          |                                |
+---------+--------------------------------+----------+
|  +------v------+                  +------v------+    |
|  |             |                  |             |    |
|  |     EP      |                  |     EP      |    |
|  | CONTROLLER1 |                  | CONTROLLER2 |    |
|  |             <------------------>             |    |
|  |             |                  |             |    |
|  |             |                  |             |    |
|  |             |  SoC With Multiple EP Instances     |
|  |             |  (Configured using NTB Function)    |
|  +-------------+                  +-------------+    |
+------------------------------------------------------+



First of all, to clarify the terminology:
Is "vhost rpmsg" acting as what the virtio standard calls the 'device',
and "virtio rpmsg" as the 'driver'? Or is the "vhost" part mostly just


Right, vhost_rpmsg is 'device' and virtio_rpmsg is 'driver'.

virtqueues + the existing vhost interfaces?


It's implemented to provide the full 'device' functionality.




Software Layering:

The high-level SW layering should look something like 
below. This series
adds support only for RPMSG VHOST, however something 
similar should be
done for net and scsi. With that any vhost device (PCI, 
NTB, Platform

device, user) can use any of the vhost client driver.


        +----------------+  +-----------+  +------------+  +----------+
        |  RPMSG VHOST   |  | NET VHOST |  | SCSI VHOST |  |    X     |
        +-------^--------+  +-----^-----+  +-----^------+  +----^-----+
                |                 |              |              |
                |                 |              |              |
                |                 |              |              |
        +-------v-----------------v--------------v--------------v----+
        |                         VHOST CORE                         |
        +-------^------------^------------------^-------------^------+
                |            |                  |             |
                |            |                  |             |
                |            |                  |             |
        +-------v--------+  +v----------+  +----v------------+  +-v--------+
        | PCI EPF VHOST  |  | NTB VHOST |  | PLATFORM DEVICE |  |    X     |
        |                |  |           |  |      VHOST      |  |          |
        +----------------+  +-----------+  +-----------------+  +----------+


So, the upper half is basically various functionality types, e.g. a net
device. What is the lower half, a hardware interface? Would it be
equivalent to e.g. a normal PCI device?


Right, the upper half should provide the functionality.
The bottom layer could be a HW interface (like PCIe device or NTB 
device) or it could be a SW interface (for accessing virtio ring in 
userspace) that could be u

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-08-31 Thread Kishon Vijay Abraham I

Hi,

On 28/08/20 4:04 pm, Cornelia Huck wrote:

On Thu, 9 Jul 2020 14:26:53 +0800
Jason Wang  wrote:

[Let me note right at the beginning that I first noted this while
listening to Kishon's talk at LPC on Wednesday. I might be very
confused about the background here, so let me apologize beforehand for
any confusion I might spread.]


On 2020/7/8 9:13 PM, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/8/2020 4:52 PM, Jason Wang wrote:

On 2020/7/7 10:45 PM, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/7/2020 3:17 PM, Jason Wang wrote:

On 2020/7/6 5:32 PM, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/3/2020 12:46 PM, Jason Wang wrote:

On 2020/7/2 9:35 PM, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/2/2020 3:40 PM, Jason Wang wrote:

On 2020/7/2 5:51 PM, Michael S. Tsirkin wrote:

On Thu, Jul 02, 2020 at 01:51:21PM +0530, Kishon Vijay Abraham I wrote:

This series enhances Linux Vhost support to enable SoC-to-SoC
communication over MMIO. This series enables rpmsg communication between
two SoCs using both PCIe RC<->EP and HOST1-NTB-HOST2

1) Modify vhost to use standard Linux driver model
2) Add support in vring to access virtqueue over MMIO
3) Add vhost client driver for rpmsg
4) Add PCIe RC driver (uses virtio) and PCIe EP driver (uses vhost) for
     rpmsg communication between two SoCs connected to each other
5) Add NTB Virtio driver and NTB Vhost driver for rpmsg communication
     between two SoCs connected via NTB
6) Add configfs to configure the components

UseCase1 :

   VHOST RPMSG                  VIRTIO RPMSG
       +                              +
       |                              |
       |                              |
       |                              |
       |                              |
+------v-----+                +------v-------+
|   Linux    |                |    Linux     |
|  Endpoint  |                | Root Complex |
|            |<-------------->|              |
|    SOC1    |                |     SOC2     |
+------------+                +--------------+

UseCase 2:

     VHOST RPMSG                                      VIRTIO RPMSG
          |                                                 |
          |                                                 |
          |                                                 |
   +------v------+                                   +------v------+
   |             |                                   |             |
   |    HOST1    |                                   |    HOST2    |
   |             |                                   |             |
   +------^------+                                   +------^------+
          |                                                 |
          |                                                 |
+---------------------------------------------------------------------+
|  +------v------+                                   +------v------+  |
|  |             |                                   |             |  |
|  |     EP      |                                   |     EP      |  |
|  | CONTROLLER1 |                                   | CONTROLLER2 |  |
|  |             |<--------------------------------->|             |  |
|  |             |                                   |             |  |
|  |             |                                   |             |  |
|  |             |  SoC With Multiple EP Instances   |             |  |
|  |             |  (Configured using NTB Function)  |             |  |
|  +-------------+                                   +-------------+  |
+---------------------------------------------------------------------+


First of all, to clarify the terminology:
Is "vhost rpmsg" acting as what the virtio standard calls the 'device',
and "virtio rpmsg" as the 'driver'? Or is the "vhost" part mostly just


Right, vhost_rpmsg is 'device' and virtio_rpmsg is 'driver'.

virtqueues + the existing vhost interfaces?


It's implemented to provide the full 'device' functionality.




Software Layering:

The high-level SW layering should look something like below. This series
adds support only for RPMSG VHOST, however something similar should be
done for net and scsi. With that any vhost device (PCI, NTB, Platform
device, user) can use any of the vhost client driver.


+----------------+  +-----------+  +---------------------+  +----------+
|  RPMSG VHOST   |  | NET VHOST |  |     SCSI VHOST      |  |    X     |
+-------^--------+  +-----^-----+  +----------^----------+  +----^-----+
        |                 |                   |                  |
        |                 |                   |                  |
+-------v-----------------v-------------------v------------------v-----+
|                              VHOST CORE                              |
+-------^-----------------^-------------------^------------------^-----+
        |                 |                   |                  |
        |                 |                   |                  |
+-------v--------+  +-----v-----+  +----------v----------+  +----v-----+
|  PCI EPF VHOST |  | NTB VHOST |  |PLATFORM DEVICE VHOST|  |    X     |
+----------------+  +-----------+  +---------------------+  +----------+

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-08-31 Thread Kishon Vijay Abraham I

Hi Mathieu,

On 15/07/20 10:45 pm, Mathieu Poirier wrote:

Hey Kishon,

On Wed, Jul 08, 2020 at 06:43:45PM +0530, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/8/2020 4:52 PM, Jason Wang wrote:


On 2020/7/7 下午10:45, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/7/2020 3:17 PM, Jason Wang wrote:

On 2020/7/6 下午5:32, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/3/2020 12:46 PM, Jason Wang wrote:

On 2020/7/2 下午9:35, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/2/2020 3:40 PM, Jason Wang wrote:

On 2020/7/2 下午5:51, Michael S. Tsirkin wrote:

On Thu, Jul 02, 2020 at 01:51:21PM +0530, Kishon Vijay Abraham I wrote:

This series enhances Linux Vhost support to enable SoC-to-SoC
communication over MMIO. This series enables rpmsg communication between
two SoCs using both PCIe RC<->EP and HOST1-NTB-HOST2

1) Modify vhost to use standard Linux driver model
2) Add support in vring to access virtqueue over MMIO
3) Add vhost client driver for rpmsg
4) Add PCIe RC driver (uses virtio) and PCIe EP driver (uses vhost) for
    rpmsg communication between two SoCs connected to each other
5) Add NTB Virtio driver and NTB Vhost driver for rpmsg communication
    between two SoCs connected via NTB
6) Add configfs to configure the components

UseCase1 :

  VHOST RPMSG VIRTIO RPMSG
   +   +
   |   |
   |   |
   |   |
   |   |
+-v--+ +--v---+
|   Linux    | | Linux    |
|  Endpoint  | | Root Complex |
|    <->  |
|    | |  |
|    SOC1    | | SOC2 |
++ +--+

UseCase 2:

  VHOST RPMSG  VIRTIO RPMSG
   + +
   | |
   | |
   | |
   | |
    +--v--+   +--v--+
    | |   | |
    |    HOST1    |   |    HOST2    |
    | |   | |
    +--^--+   +--^--+
   | |
   | |
+-+
|  +--v--+   +--v--+  |
|  | |   | |  |
|  | EP  |   | EP  |  |
|  | CONTROLLER1 |   | CONTROLLER2 |  |
|  | <---> |  |
|  | |   | |  |
|  | |   | |  |
|  | |  SoC With Multiple EP Instances   | |  |
|  | |  (Configured using NTB Function)  | |  |
|  +-+   +-+  |
+-+

Software Layering:

The high-level SW layering should look something like below. This series
adds support only for RPMSG VHOST, however something similar should be
done for net and scsi. With that any vhost device (PCI, NTB, Platform
device, user) can use any of the vhost client driver.


+----------------+  +-----------+  +---------------------+  +----------+
|  RPMSG VHOST   |  | NET VHOST |  |     SCSI VHOST      |  |    X     |
+-------^--------+  +-----^-----+  +----------^----------+  +----^-----+
        |                 |                   |                  |
        |                 |                   |                  |
+-------v-----------------v-------------------v------------------v-----+
|                              VHOST CORE                              |
+-------^-----------------^-------------------^------------------^-----+
        |                 |                   |                  |
        |                 |                   |                  |
+-------v--------+  +-----v-----+  +----------v----------+  +----v-----+
|  PCI EPF VHOST |  | NTB VHOST |  |PLATFORM DEVICE VHOST|  |    X     |
+----------------+  +-----------+  +---------------------+  +----------+

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-08-28 Thread Cornelia Huck
On Thu, 9 Jul 2020 14:26:53 +0800
Jason Wang  wrote:

[Let me note right at the beginning that I first noted this while
listening to Kishon's talk at LPC on Wednesday. I might be very
confused about the background here, so let me apologize beforehand for
any confusion I might spread.]

> On 2020/7/8 下午9:13, Kishon Vijay Abraham I wrote:
> > Hi Jason,
> >
> > On 7/8/2020 4:52 PM, Jason Wang wrote:  
> >> On 2020/7/7 下午10:45, Kishon Vijay Abraham I wrote:  
> >>> Hi Jason,
> >>>
> >>> On 7/7/2020 3:17 PM, Jason Wang wrote:  
>  On 2020/7/6 下午5:32, Kishon Vijay Abraham I wrote:  
> > Hi Jason,
> >
> > On 7/3/2020 12:46 PM, Jason Wang wrote:  
> >> On 2020/7/2 下午9:35, Kishon Vijay Abraham I wrote:  
> >>> Hi Jason,
> >>>
> >>> On 7/2/2020 3:40 PM, Jason Wang wrote:  
>  On 2020/7/2 下午5:51, Michael S. Tsirkin wrote:  
> > On Thu, Jul 02, 2020 at 01:51:21PM +0530, Kishon Vijay Abraham I 
> > wrote:  
> >> This series enhances Linux Vhost support to enable SoC-to-SoC
> >> communication over MMIO. This series enables rpmsg communication 
> >> between
> >> two SoCs using both PCIe RC<->EP and HOST1-NTB-HOST2
> >>
> >> 1) Modify vhost to use standard Linux driver model
> >> 2) Add support in vring to access virtqueue over MMIO
> >> 3) Add vhost client driver for rpmsg
> >> 4) Add PCIe RC driver (uses virtio) and PCIe EP driver (uses 
> >> vhost) for
> >>     rpmsg communication between two SoCs connected to each 
> >> other
> >> 5) Add NTB Virtio driver and NTB Vhost driver for rpmsg 
> >> communication
> >>     between two SoCs connected via NTB
> >> 6) Add configfs to configure the components
> >>
> >> UseCase1 :
> >>
> >>   VHOST RPMSG VIRTIO RPMSG
> >>    +   +
> >>    |   |
> >>    |   |
> >>    |   |
> >>    |   |
> >> +-v--+ +--v---+
> >> |   Linux    | | Linux    |
> >> |  Endpoint  | | Root Complex |
> >> |    <->  |
> >> |    | |  |
> >> |    SOC1    | | SOC2 |
> >> ++ +--+
> >>
> >> UseCase 2:
> >>
> >>   VHOST RPMSG  VIRTIO 
> >> RPMSG
> >>    + +
> >>    | |
> >>    | |
> >>    | |
> >>    | |
> >>     +--v--+   
> >> +--v--+
> >>     | |   |
> >>  |
> >>     |    HOST1    |   |    
> >> HOST2    |
> >>     | |   |
> >>  |
> >>     +--^--+   
> >> +--^--+
> >>    | |
> >>    | |
> >> +-+
> >> |  +--v--+   
> >> +--v--+  |
> >> |  | |   | 
> >> |  |
> >> |  | EP  |   | EP  
> >> |  |
> >> |  | CONTROLLER1 |   | CONTROLLER2 
> >> |  |
> >> |  | <---> 
> >> |  |
> >> |  | |   | 
> >> |  |
> >> |  | |   | 
> >> |  |
> >> |  | |  SoC With Multiple EP Instances   | 
> >> |  |
> >> |  | |  (Configured using NTB Function)  | 
> >> |  |
> >> |  +-+   
> >> +-+  |
> >> +-

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-07-15 Thread Mathieu Poirier
Hey Kishon,

On Wed, Jul 08, 2020 at 06:43:45PM +0530, Kishon Vijay Abraham I wrote:
> Hi Jason,
> 
> On 7/8/2020 4:52 PM, Jason Wang wrote:
> > 
> > On 2020/7/7 下午10:45, Kishon Vijay Abraham I wrote:
> >> Hi Jason,
> >>
> >> On 7/7/2020 3:17 PM, Jason Wang wrote:
> >>> On 2020/7/6 下午5:32, Kishon Vijay Abraham I wrote:
>  Hi Jason,
> 
>  On 7/3/2020 12:46 PM, Jason Wang wrote:
> > On 2020/7/2 下午9:35, Kishon Vijay Abraham I wrote:
> >> Hi Jason,
> >>
> >> On 7/2/2020 3:40 PM, Jason Wang wrote:
> >>> On 2020/7/2 下午5:51, Michael S. Tsirkin wrote:
>  On Thu, Jul 02, 2020 at 01:51:21PM +0530, Kishon Vijay Abraham I 
>  wrote:
> > This series enhances Linux Vhost support to enable SoC-to-SoC
> > communication over MMIO. This series enables rpmsg communication 
> > between
> > two SoCs using both PCIe RC<->EP and HOST1-NTB-HOST2
> >
> > 1) Modify vhost to use standard Linux driver model
> > 2) Add support in vring to access virtqueue over MMIO
> > 3) Add vhost client driver for rpmsg
> > 4) Add PCIe RC driver (uses virtio) and PCIe EP driver (uses vhost) 
> > for
> >    rpmsg communication between two SoCs connected to each other
> > 5) Add NTB Virtio driver and NTB Vhost driver for rpmsg 
> > communication
> >    between two SoCs connected via NTB
> > 6) Add configfs to configure the components
> >
> > UseCase1 :
> >
> >  VHOST RPMSG VIRTIO RPMSG
> >   +   +
> >   |   |
> >   |   |
> >   |   |
> >   |   |
> > +-v--+ +--v---+
> > |   Linux    | | Linux    |
> > |  Endpoint  | | Root Complex |
> > |    <->  |
> > |    | |  |
> > |    SOC1    | | SOC2 |
> > ++ +--+
> >
> > UseCase 2:
> >
> >  VHOST RPMSG  VIRTIO 
> > RPMSG
> >   + +
> >   | |
> >   | |
> >   | |
> >   | |
> >    +--v--+   
> > +--v--+
> >    | |   |  
> >    |
> >    |    HOST1    |   |    HOST2 
> >    |
> >    | |   |  
> >    |
> >    +--^--+   
> > +--^--+
> >   | |
> >   | |
> > +-+
> > |  +--v--+   
> > +--v--+  |
> > |  | |   | 
> > |  |
> > |  | EP  |   | EP  
> > |  |
> > |  | CONTROLLER1 |   | CONTROLLER2 
> > |  |
> > |  | <---> 
> > |  |
> > |  | |   | 
> > |  |
> > |  | |   | 
> > |  |
> > |  | |  SoC With Multiple EP Instances   | 
> > |  |
> > |  | |  (Configured using NTB Function)  | 
> > |  |
> > |  +-+   
> > +-+  |
> > +-+
> >
> > Software Layering:
> >
> > The high-level SW layering should look something like below. This 
> > series
> > adds support only for RPMSG VHOST, however something similar should 
> > be
> > done for net and scsi. With that any vhost device (PCI, NTB, 
> > Platform
> > device, user) can use any of th

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-07-08 Thread Jason Wang



On 2020/7/8 下午9:13, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/8/2020 4:52 PM, Jason Wang wrote:

On 2020/7/7 下午10:45, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/7/2020 3:17 PM, Jason Wang wrote:

On 2020/7/6 下午5:32, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/3/2020 12:46 PM, Jason Wang wrote:

On 2020/7/2 下午9:35, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/2/2020 3:40 PM, Jason Wang wrote:

On 2020/7/2 下午5:51, Michael S. Tsirkin wrote:

On Thu, Jul 02, 2020 at 01:51:21PM +0530, Kishon Vijay Abraham I wrote:

This series enhances Linux Vhost support to enable SoC-to-SoC
communication over MMIO. This series enables rpmsg communication between
two SoCs using both PCIe RC<->EP and HOST1-NTB-HOST2

1) Modify vhost to use standard Linux driver model
2) Add support in vring to access virtqueue over MMIO
3) Add vhost client driver for rpmsg
4) Add PCIe RC driver (uses virtio) and PCIe EP driver (uses vhost) for
    rpmsg communication between two SoCs connected to each other
5) Add NTB Virtio driver and NTB Vhost driver for rpmsg communication
    between two SoCs connected via NTB
6) Add configfs to configure the components

UseCase1 :

  VHOST RPMSG VIRTIO RPMSG
   +   +
   |   |
   |   |
   |   |
   |   |
+-v--+ +--v---+
|   Linux    | | Linux    |
|  Endpoint  | | Root Complex |
|    <->  |
|    | |  |
|    SOC1    | | SOC2 |
++ +--+

UseCase 2:

  VHOST RPMSG  VIRTIO RPMSG
   + +
   | |
   | |
   | |
   | |
    +--v--+   +--v--+
    | |   | |
    |    HOST1    |   |    HOST2    |
    | |   | |
    +--^--+   +--^--+
   | |
   | |
+-+
|  +--v--+   +--v--+  |
|  | |   | |  |
|  | EP  |   | EP  |  |
|  | CONTROLLER1 |   | CONTROLLER2 |  |
|  | <---> |  |
|  | |   | |  |
|  | |   | |  |
|  | |  SoC With Multiple EP Instances   | |  |
|  | |  (Configured using NTB Function)  | |  |
|  +-+   +-+  |
+-+

Software Layering:

The high-level SW layering should look something like below. This series
adds support only for RPMSG VHOST, however something similar should be
done for net and scsi. With that any vhost device (PCI, NTB, Platform
device, user) can use any of the vhost client driver.


+----------------+  +-----------+  +---------------------+  +----------+
|  RPMSG VHOST   |  | NET VHOST |  |     SCSI VHOST      |  |    X     |
+-------^--------+  +-----^-----+  +----------^----------+  +----^-----+
        |                 |                   |                  |
        |                 |                   |                  |
+-------v-----------------v-------------------v------------------v-----+
|                              VHOST CORE                              |
+-------^-----------------^-------------------^------------------^-----+
        |                 |                   |                  |
        |                 |                   |                  |
+-------v--------+  +-----v-----+  +----------v----------+  +----v-----+
|  PCI EPF VHOST |  | NTB VHOST |  |PLATFORM DEVICE VHOST|  |    X     |
+----------------+  +-----------+  +---------------------+  +----------+

This was initially proposed here [1]

[1] ->
https://lore.

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-07-08 Thread Kishon Vijay Abraham I
Hi Jason,

On 7/8/2020 4:52 PM, Jason Wang wrote:
> 
> On 2020/7/7 下午10:45, Kishon Vijay Abraham I wrote:
>> Hi Jason,
>>
>> On 7/7/2020 3:17 PM, Jason Wang wrote:
>>> On 2020/7/6 下午5:32, Kishon Vijay Abraham I wrote:
 Hi Jason,

 On 7/3/2020 12:46 PM, Jason Wang wrote:
> On 2020/7/2 下午9:35, Kishon Vijay Abraham I wrote:
>> Hi Jason,
>>
>> On 7/2/2020 3:40 PM, Jason Wang wrote:
>>> On 2020/7/2 下午5:51, Michael S. Tsirkin wrote:
 On Thu, Jul 02, 2020 at 01:51:21PM +0530, Kishon Vijay Abraham I wrote:
> This series enhances Linux Vhost support to enable SoC-to-SoC
> communication over MMIO. This series enables rpmsg communication 
> between
> two SoCs using both PCIe RC<->EP and HOST1-NTB-HOST2
>
> 1) Modify vhost to use standard Linux driver model
> 2) Add support in vring to access virtqueue over MMIO
> 3) Add vhost client driver for rpmsg
> 4) Add PCIe RC driver (uses virtio) and PCIe EP driver (uses vhost) 
> for
>    rpmsg communication between two SoCs connected to each other
> 5) Add NTB Virtio driver and NTB Vhost driver for rpmsg communication
>    between two SoCs connected via NTB
> 6) Add configfs to configure the components
>
> UseCase1 :
>
>  VHOST RPMSG VIRTIO RPMSG
>   +   +
>   |   |
>   |   |
>   |   |
>   |   |
> +-v--+ +--v---+
> |   Linux    | | Linux    |
> |  Endpoint  | | Root Complex |
> |    <->  |
> |    | |  |
> |    SOC1    | | SOC2 |
> ++ +--+
>
> UseCase 2:
>
>  VHOST RPMSG  VIRTIO RPMSG
>   + +
>   | |
>   | |
>   | |
>   | |
>    +--v--+   
> +--v--+
>    | |   |
>  |
>    |    HOST1    |   |    HOST2   
>  |
>    | |   |
>  |
>    +--^--+   
> +--^--+
>   | |
>   | |
> +-+
> |  +--v--+   +--v--+  
> |
> |  | |   | |  
> |
> |  | EP  |   | EP  |  
> |
> |  | CONTROLLER1 |   | CONTROLLER2 |  
> |
> |  | <---> |  
> |
> |  | |   | |  
> |
> |  | |   | |  
> |
> |  | |  SoC With Multiple EP Instances   | |  
> |
> |  | |  (Configured using NTB Function)  | |  
> |
> |  +-+   +-+  
> |
> +-+
>
> Software Layering:
>
> The high-level SW layering should look something like below. This 
> series
> adds support only for RPMSG VHOST, however something similar should be
> done for net and scsi. With that any vhost device (PCI, NTB, Platform
> device, user) can use any of the vhost client driver.
>
>
>     ++  +---+  ++  
> +--+
>     |  RPMSG VHOST   |  | NET VHOST |  | SCSI VHOST |  |    X 
> |
>     +---^+  +-^-+  +-^--+  
> +^-+
>    

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-07-08 Thread Jason Wang



On 2020/7/7 下午10:45, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/7/2020 3:17 PM, Jason Wang wrote:

On 2020/7/6 下午5:32, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/3/2020 12:46 PM, Jason Wang wrote:

On 2020/7/2 下午9:35, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/2/2020 3:40 PM, Jason Wang wrote:

On 2020/7/2 下午5:51, Michael S. Tsirkin wrote:

On Thu, Jul 02, 2020 at 01:51:21PM +0530, Kishon Vijay Abraham I wrote:

This series enhances Linux Vhost support to enable SoC-to-SoC
communication over MMIO. This series enables rpmsg communication between
two SoCs using both PCIe RC<->EP and HOST1-NTB-HOST2

1) Modify vhost to use standard Linux driver model
2) Add support in vring to access virtqueue over MMIO
3) Add vhost client driver for rpmsg
4) Add PCIe RC driver (uses virtio) and PCIe EP driver (uses vhost) for
   rpmsg communication between two SoCs connected to each other
5) Add NTB Virtio driver and NTB Vhost driver for rpmsg communication
   between two SoCs connected via NTB
6) Add configfs to configure the components

UseCase1 :

     VHOST RPMSG VIRTIO RPMSG
  +   +
  |   |
  |   |
  |   |
  |   |
+-v--+ +--v---+
|   Linux    | | Linux    |
|  Endpoint  | | Root Complex |
|    <->  |
|    | |  |
|    SOC1    | | SOC2 |
++ +--+

UseCase 2:

     VHOST RPMSG  VIRTIO RPMSG
  + +
  | |
  | |
  | |
  | |
   +--v--+   +--v--+
   | |   | |
   |    HOST1    |   |    HOST2    |
   | |   | |
   +--^--+   +--^--+
  | |
  | |
+-+
|  +--v--+   +--v--+  |
|  | |   | |  |
|  | EP  |   | EP  |  |
|  | CONTROLLER1 |   | CONTROLLER2 |  |
|  | <---> |  |
|  | |   | |  |
|  | |   | |  |
|  | |  SoC With Multiple EP Instances   | |  |
|  | |  (Configured using NTB Function)  | |  |
|  +-+   +-+  |
+-+

Software Layering:

The high-level SW layering should look something like below. This series
adds support only for RPMSG VHOST, however something similar should be
done for net and scsi. With that any vhost device (PCI, NTB, Platform
device, user) can use any of the vhost client driver.


+----------------+  +-----------+  +---------------------+  +----------+
|  RPMSG VHOST   |  | NET VHOST |  |     SCSI VHOST      |  |    X     |
+-------^--------+  +-----^-----+  +----------^----------+  +----^-----+
        |                 |                   |                  |
        |                 |                   |                  |
+-------v-----------------v-------------------v------------------v-----+
|                              VHOST CORE                              |
+-------^-----------------^-------------------^------------------^-----+
        |                 |                   |                  |
        |                 |                   |                  |
+-------v--------+  +-----v-----+  +----------v----------+  +----v-----+
|  PCI EPF VHOST |  | NTB VHOST |  |PLATFORM DEVICE VHOST|  |    X     |
+----------------+  +-----------+  +---------------------+  +----------+

This was initially proposed here [1]

[1] ->
https://lore.kernel.org/r/2cf00ec4-1ed6-f66e-6897-006d1a5b6...@ti.com

I find this very interesting. A huge patchset so will take a bit
to review

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-07-07 Thread Kishon Vijay Abraham I
Hi Jason,

On 7/7/2020 3:17 PM, Jason Wang wrote:
> 
> On 2020/7/6 下午5:32, Kishon Vijay Abraham I wrote:
>> Hi Jason,
>>
>> On 7/3/2020 12:46 PM, Jason Wang wrote:
>>> On 2020/7/2 下午9:35, Kishon Vijay Abraham I wrote:
 Hi Jason,

 On 7/2/2020 3:40 PM, Jason Wang wrote:
> On 2020/7/2 下午5:51, Michael S. Tsirkin wrote:
>> On Thu, Jul 02, 2020 at 01:51:21PM +0530, Kishon Vijay Abraham I wrote:
>>> This series enhances Linux Vhost support to enable SoC-to-SoC
>>> communication over MMIO. This series enables rpmsg communication between
>>> two SoCs using both PCIe RC<->EP and HOST1-NTB-HOST2
>>>
>>> 1) Modify vhost to use standard Linux driver model
>>> 2) Add support in vring to access virtqueue over MMIO
>>> 3) Add vhost client driver for rpmsg
>>> 4) Add PCIe RC driver (uses virtio) and PCIe EP driver (uses vhost) for
>>>   rpmsg communication between two SoCs connected to each other
>>> 5) Add NTB Virtio driver and NTB Vhost driver for rpmsg communication
>>>   between two SoCs connected via NTB
>>> 6) Add configfs to configure the components
>>>
>>> UseCase1 :
>>>
>>>     VHOST RPMSG VIRTIO RPMSG
>>>  +   +
>>>  |   |
>>>  |   |
>>>  |   |
>>>  |   |
>>> +-v--+ +--v---+
>>> |   Linux    | | Linux    |
>>> |  Endpoint  | | Root Complex |
>>> |    <->  |
>>> |    | |  |
>>> |    SOC1    | | SOC2 |
>>> ++ +--+
>>>
>>> UseCase 2:
>>>
>>>     VHOST RPMSG  VIRTIO RPMSG
>>>  + +
>>>  | |
>>>  | |
>>>  | |
>>>  | |
>>>   +--v--+   +--v--+
>>>   | |   | |
>>>   |    HOST1    |   |    HOST2    |
>>>   | |   | |
>>>   +--^--+   +--^--+
>>>  | |
>>>  | |
>>> +-+
>>> |  +--v--+   +--v--+  |
>>> |  | |   | |  |
>>> |  | EP  |   | EP  |  |
>>> |  | CONTROLLER1 |   | CONTROLLER2 |  |
>>> |  | <---> |  |
>>> |  | |   | |  |
>>> |  | |   | |  |
>>> |  | |  SoC With Multiple EP Instances   | |  |
>>> |  | |  (Configured using NTB Function)  | |  |
>>> |  +-+   +-+  |
>>> +-+
>>>
>>> Software Layering:
>>>
>>> The high-level SW layering should look something like below. This series
>>> adds support only for RPMSG VHOST, however something similar should be
>>> done for net and scsi. With that any vhost device (PCI, NTB, Platform
>>> device, user) can use any of the vhost client driver.
>>>
>>>
>>>    ++  +---+  ++  +--+
>>>    |  RPMSG VHOST   |  | NET VHOST |  | SCSI VHOST |  |    X |
>>>    +---^+  +-^-+  +-^--+  +^-+
>>>    | |  |  |
>>>    | |  |  |
>>>    | |  |  |
>>> +---v-v--v--v--+
>>> |    VHOST CORE    |
>>> +^---^^--^-+
>>>     |   |    |  |
>>>

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-07-07 Thread Jason Wang



On 2020/7/6 下午5:32, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/3/2020 12:46 PM, Jason Wang wrote:

On 2020/7/2 下午9:35, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/2/2020 3:40 PM, Jason Wang wrote:

On 2020/7/2 下午5:51, Michael S. Tsirkin wrote:

On Thu, Jul 02, 2020 at 01:51:21PM +0530, Kishon Vijay Abraham I wrote:

This series enhances Linux Vhost support to enable SoC-to-SoC
communication over MMIO. This series enables rpmsg communication between
two SoCs using both PCIe RC<->EP and HOST1-NTB-HOST2

1) Modify vhost to use standard Linux driver model
2) Add support in vring to access virtqueue over MMIO
3) Add vhost client driver for rpmsg
4) Add PCIe RC driver (uses virtio) and PCIe EP driver (uses vhost) for
  rpmsg communication between two SoCs connected to each other
5) Add NTB Virtio driver and NTB Vhost driver for rpmsg communication
  between two SoCs connected via NTB
6) Add configfs to configure the components

UseCase1 :

    VHOST RPMSG VIRTIO RPMSG
     +   +
     |   |
     |   |
     |   |
     |   |
+-v--+ +--v---+
|   Linux    | | Linux    |
|  Endpoint  | | Root Complex |
|    <->  |
|    | |  |
|    SOC1    | | SOC2 |
++ +--+

UseCase 2:

    VHOST RPMSG  VIRTIO RPMSG
     + +
     | |
     | |
     | |
     | |
  +--v--+   +--v--+
  | |   | |
  |    HOST1    |   |    HOST2    |
  | |   | |
  +--^--+   +--^--+
     | |
     | |
+-+
|  +--v--+   +--v--+  |
|  | |   | |  |
|  | EP  |   | EP  |  |
|  | CONTROLLER1 |   | CONTROLLER2 |  |
|  | <---> |  |
|  | |   | |  |
|  | |   | |  |
|  | |  SoC With Multiple EP Instances   | |  |
|  | |  (Configured using NTB Function)  | |  |
|  +-+   +-+  |
+-+

Software Layering:

The high-level SW layering should look something like below. This series
adds support only for RPMSG VHOST, however something similar should be
done for net and scsi. With that any vhost device (PCI, NTB, Platform
device, user) can use any of the vhost client driver.


+----------------+  +-----------+  +---------------------+  +----------+
|  RPMSG VHOST   |  | NET VHOST |  |     SCSI VHOST      |  |    X     |
+-------^--------+  +-----^-----+  +----------^----------+  +----^-----+
        |                 |                   |                  |
        |                 |                   |                  |
+-------v-----------------v-------------------v------------------v-----+
|                              VHOST CORE                              |
+-------^-----------------^-------------------^------------------^-----+
        |                 |                   |                  |
        |                 |                   |                  |
+-------v--------+  +-----v-----+  +----------v----------+  +----v-----+
|  PCI EPF VHOST |  | NTB VHOST |  |PLATFORM DEVICE VHOST|  |    X     |
+----------------+  +-----------+  +---------------------+  +----------+

This was initially proposed here [1]

[1] -> https://lore.kernel.org/r/2cf00ec4-1ed6-f66e-6897-006d1a5b6...@ti.com

I find this very interesting. A huge patchset so will take a bit
to review, but I certainly plan to do that. Thanks!

Yes, it would be better if there's a git branch for us to have a look.

I've pushed the b

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-07-06 Thread Kishon Vijay Abraham I
Hi Jason,

On 7/3/2020 12:46 PM, Jason Wang wrote:
> 
> On 2020/7/2 下午9:35, Kishon Vijay Abraham I wrote:
>> Hi Jason,
>>
>> On 7/2/2020 3:40 PM, Jason Wang wrote:
>>> On 2020/7/2 下午5:51, Michael S. Tsirkin wrote:
 On Thu, Jul 02, 2020 at 01:51:21PM +0530, Kishon Vijay Abraham I wrote:
> This series enhances Linux Vhost support to enable SoC-to-SoC
> communication over MMIO. This series enables rpmsg communication between
> two SoCs using both PCIe RC<->EP and HOST1-NTB-HOST2
>
> 1) Modify vhost to use standard Linux driver model
> 2) Add support in vring to access virtqueue over MMIO
> 3) Add vhost client driver for rpmsg
> 4) Add PCIe RC driver (uses virtio) and PCIe EP driver (uses vhost) for
>  rpmsg communication between two SoCs connected to each other
> 5) Add NTB Virtio driver and NTB Vhost driver for rpmsg communication
>  between two SoCs connected via NTB
> 6) Add configfs to configure the components
>
> UseCase1 :
>
>    VHOST RPMSG VIRTIO RPMSG
>     +   +
>     |   |
>     |   |
>     |   |
>     |   |
> +-v--+ +--v---+
> |   Linux    | | Linux    |
> |  Endpoint  | | Root Complex |
> |    <->  |
> |    | |  |
> |    SOC1    | | SOC2 |
> ++ +--+
>
> UseCase 2:
>
>    VHOST RPMSG  VIRTIO RPMSG
>     + +
>     | |
>     | |
>     | |
>     | |
>  +--v--+   +--v--+
>  | |   | |
>  |    HOST1    |   |    HOST2    |
>  | |   | |
>  +--^--+   +--^--+
>     | |
>     | |
> +-+
> |  +--v--+   +--v--+  |
> |  | |   | |  |
> |  | EP  |   | EP  |  |
> |  | CONTROLLER1 |   | CONTROLLER2 |  |
> |  | <---> |  |
> |  | |   | |  |
> |  | |   | |  |
> |  | |  SoC With Multiple EP Instances   | |  |
> |  | |  (Configured using NTB Function)  | |  |
> |  +-+   +-+  |
> +-+
>
> Software Layering:
>
> The high-level SW layering should look something like below. This series
> adds support only for RPMSG VHOST, however something similar should be
> done for net and scsi. With that any vhost device (PCI, NTB, Platform
> device, user) can use any of the vhost client driver.
>
>
>   ++  +---+  ++  +--+
>   |  RPMSG VHOST   |  | NET VHOST |  | SCSI VHOST |  |    X |
>   +---^+  +-^-+  +-^--+  +^-+
>   | |  |  |
>   | |  |  |
>   | |  |  |
> +---v-v--v--v--+
> |    VHOST CORE    |
> +^---^^--^-+
>    |   |    |  |
>    |   |    |  |
>    |   |    |  |
> +v---+  +v--+  +--v--+  +v-+
> |  PCI EPF VHOST |  | NTB VHOST |  |PLATFORM DEVICE VHOST|  |    X |

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-07-03 Thread Jason Wang



On 2020/7/2 下午9:35, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/2/2020 3:40 PM, Jason Wang wrote:

On 2020/7/2 下午5:51, Michael S. Tsirkin wrote:

On Thu, Jul 02, 2020 at 01:51:21PM +0530, Kishon Vijay Abraham I wrote:

This series enhances Linux Vhost support to enable SoC-to-SoC
communication over MMIO. This series enables rpmsg communication between
two SoCs using both PCIe RC<->EP and HOST1-NTB-HOST2

1) Modify vhost to use standard Linux driver model
2) Add support in vring to access virtqueue over MMIO
3) Add vhost client driver for rpmsg
4) Add PCIe RC driver (uses virtio) and PCIe EP driver (uses vhost) for
     rpmsg communication between two SoCs connected to each other
5) Add NTB Virtio driver and NTB Vhost driver for rpmsg communication
     between two SoCs connected via NTB
6) Add configfs to configure the components

UseCase1 :

   VHOST RPMSG VIRTIO RPMSG
    +   +
    |   |
    |   |
    |   |
    |   |
+-v--+ +--v---+
|   Linux    | | Linux    |
|  Endpoint  | | Root Complex |
|    <->  |
|    | |  |
|    SOC1    | | SOC2 |
++ +--+

UseCase 2:

   VHOST RPMSG  VIRTIO RPMSG
    + +
    | |
    | |
    | |
    | |
     +--v--+   +--v--+
     | |   | |
     |    HOST1    |   |    HOST2    |
     | |   | |
     +--^--+   +--^--+
    | |
    | |
+-+
|  +--v--+   +--v--+  |
|  | |   | |  |
|  | EP  |   | EP  |  |
|  | CONTROLLER1 |   | CONTROLLER2 |  |
|  | <---> |  |
|  | |   | |  |
|  | |   | |  |
|  | |  SoC With Multiple EP Instances   | |  |
|  | |  (Configured using NTB Function)  | |  |
|  +-+   +-+  |
+-+

Software Layering:

The high-level SW layering should look something like below. This series
adds support only for RPMSG VHOST, however something similar should be
done for net and scsi. With that any vhost device (PCI, NTB, Platform
device, user) can use any of the vhost client driver.


+----------------+  +-----------+  +---------------------+  +----------+
|  RPMSG VHOST   |  | NET VHOST |  |     SCSI VHOST      |  |    X     |
+-------^--------+  +-----^-----+  +----------^----------+  +----^-----+
        |                 |                   |                  |
        |                 |                   |                  |
+-------v-----------------v-------------------v------------------v-----+
|                              VHOST CORE                              |
+-------^-----------------^-------------------^------------------^-----+
        |                 |                   |                  |
        |                 |                   |                  |
+-------v--------+  +-----v-----+  +----------v----------+  +----v-----+
|  PCI EPF VHOST |  | NTB VHOST |  |PLATFORM DEVICE VHOST|  |    X     |
+----------------+  +-----------+  +---------------------+  +----------+

This was initially proposed here [1]

[1] -> https://lore.kernel.org/r/2cf00ec4-1ed6-f66e-6897-006d1a5b6...@ti.com

I find this very interesting. A huge patchset so will take a bit
to review, but I certainly plan to do that. Thanks!


Yes, it would be better if there's a git branch for us to have a look.

I've pushed the branch
https://github.com/kishon/linux-wip.git vhost_rpmsg_pci_ntb_rfc



Thanks



Btw, I'm not sure I get the big picture, but I va

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-07-02 Thread Kishon Vijay Abraham I
+Alan, Haotian

On 7/2/2020 11:01 PM, Mathieu Poirier wrote:
> On Thu, 2 Jul 2020 at 03:51, Michael S. Tsirkin  wrote:
>>
>> On Thu, Jul 02, 2020 at 01:51:21PM +0530, Kishon Vijay Abraham I wrote:
>>> This series enhances Linux Vhost support to enable SoC-to-SoC
>>> communication over MMIO. This series enables rpmsg communication between
>>> two SoCs using both PCIe RC<->EP and HOST1-NTB-HOST2
>>>
>>> 1) Modify vhost to use standard Linux driver model
>>> 2) Add support in vring to access virtqueue over MMIO
>>> 3) Add vhost client driver for rpmsg
>>> 4) Add PCIe RC driver (uses virtio) and PCIe EP driver (uses vhost) for
>>>rpmsg communication between two SoCs connected to each other
>>> 5) Add NTB Virtio driver and NTB Vhost driver for rpmsg communication
>>>between two SoCs connected via NTB
>>> 6) Add configfs to configure the components
>>>
>>> UseCase1 :
>>>
>>>  VHOST RPMSG VIRTIO RPMSG
>>>   +   +
>>>   |   |
>>>   |   |
>>>   |   |
>>>   |   |
>>> +-v--+ +--v---+
>>> |   Linux| | Linux|
>>> |  Endpoint  | | Root Complex |
>>> |<->  |
>>> || |  |
>>> |SOC1| | SOC2 |
>>> ++ +--+
>>>
>>> UseCase 2:
>>>
>>>  VHOST RPMSG  VIRTIO RPMSG
>>>   + +
>>>   | |
>>>   | |
>>>   | |
>>>   | |
>>>+--v--+   +--v--+
>>>| |   | |
>>>|HOST1|   |HOST2|
>>>| |   | |
>>>+--^--+   +--^--+
>>>   | |
>>>   | |
>>> +-+
>>> |  +--v--+   +--v--+  |
>>> |  | |   | |  |
>>> |  | EP  |   | EP  |  |
>>> |  | CONTROLLER1 |   | CONTROLLER2 |  |
>>> |  | <---> |  |
>>> |  | |   | |  |
>>> |  | |   | |  |
>>> |  | |  SoC With Multiple EP Instances   | |  |
>>> |  | |  (Configured using NTB Function)  | |  |
>>> |  +-+   +-+  |
>>> +-+
>>>
>>> Software Layering:
>>>
>>> The high-level SW layering should look something like below. This series
>>> adds support only for RPMSG VHOST, however something similar should be
>>> done for net and scsi. With that any vhost device (PCI, NTB, Platform
>>> device, user) can use any of the vhost client driver.
>>>
>>>
>>> ++  +---+  ++  +--+
>>> |  RPMSG VHOST   |  | NET VHOST |  | SCSI VHOST |  |X |
>>> +---^+  +-^-+  +-^--+  +^-+
>>> | |  |  |
>>> | |  |  |
>>> | |  |  |
>>> +---v-v--v--v--+
>>> |VHOST CORE|
>>> +^---^^--^-+
>>>  |   ||  |
>>>  |   ||  |
>>>  |   ||  |
>>> +v---+  +v--+  +--v--+  +v-+
>>> |  PCI EPF VHOST |  | NTB VHOST |  |PLATFORM DEVICE VHOST|  |X |
>>> ++  +---+  +-+  +--+
>>>
>>> This was initially proposed here [1]
>>>
>>> [1] -> https://lore.kernel.org/r/2cf00ec4-1ed6-f66e-6897-006d1a5b6...@ti.com
>>
>>
>> I find this very interesting. A huge patchset so will take a bit
>> to review, but I certainly 

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-07-02 Thread Mathieu Poirier
On Thu, 2 Jul 2020 at 03:51, Michael S. Tsirkin  wrote:
> I find this very interesting. A huge patchset so will take a bit
> to review, but I certainly plan to do that. Thanks!

Same here - it will take time.  This patchs

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-07-02 Thread Kishon Vijay Abraham I
Hi Jason,

On 7/2/2020 3:40 PM, Jason Wang wrote:
> On 2020/7/2 5:51 PM, Michael S. Tsirkin wrote:
>> I find this very interesting. A huge patchset so will take a bit
>> to review, but I certainly plan to do that. Thanks!

Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-07-02 Thread Kishon Vijay Abraham I
Hi Michael,

On 7/2/2020 3:21 PM, Michael S. Tsirkin wrote:
> I find this very interesting. A huge patchset so will take a bit
> to review, but I certainly plan to do that. Thanks!

Great to hear! Thanks in advance for reviewing!

Regards
Kishon


Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-07-02 Thread Jason Wang



On 2020/7/2 5:51 PM, Michael S. Tsirkin wrote:
> I find this very interesting. A huge patchset so will take a bit
> to review, but I certainly plan to do that. Thanks!



Yes, it would be better if there's a git branch for us to have a look.

Btw, I'm not sure I get the big picture, but I vaguely feel some of the
work is duplicated with vDPA (e.g. the epf transport or vhost bus).


Have you considered to implement these through vDPA?

Thanks






Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-07-02 Thread Michael S. Tsirkin
On Thu, Jul 02, 2020 at 01:51:21PM +0530, Kishon Vijay Abraham I wrote:
> This series enhances Linux Vhost support to enable SoC-to-SoC
> communication over MMIO. This series enables rpmsg communication between
> two SoCs using both PCIe RC<->EP and HOST1-NTB-HOST2


I find this very interesting. A huge patchset so will take a bit
to review, but I certainly plan to do that. Thanks!


[RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-07-02 Thread Kishon Vijay Abraham I
This series enhances Linux Vhost support to enable SoC-to-SoC
communication over MMIO. It enables rpmsg communication between two SoCs
over both PCIe RC<->EP and HOST1-NTB-HOST2 links:

1) Modify vhost to use standard Linux driver model
2) Add support in vring to access virtqueue over MMIO
3) Add vhost client driver for rpmsg
4) Add PCIe RC driver (uses virtio) and PCIe EP driver (uses vhost) for
   rpmsg communication between two SoCs connected to each other
5) Add NTB Virtio driver and NTB Vhost driver for rpmsg communication
   between two SoCs connected via NTB
6) Add configfs to configure the components

UseCase 1:

 VHOST RPMSG                      VIRTIO RPMSG
      +                                +
      |                                |
      |                                |
      |                                |
      |                                |
+-----v------+                 +-------v------+
|   Linux    |                 |    Linux     |
|  Endpoint  |                 | Root Complex |
|            <----------------->              |
|            |                 |              |
|    SOC1    |                 |     SOC2     |
+------------+                 +--------------+

UseCase 2:

     VHOST RPMSG                          VIRTIO RPMSG
          +                                     +
          |                                     |
          |                                     |
          |                                     |
   +------v------+                       +------v------+
   |             |                       |             |
   |    HOST1    |                       |    HOST2    |
   |             |                       |             |
   +------^------+                       +------^------+
          |                                     |
          |                                     |
+-----------------------------------------------------------+
|  +------v------+                       +------v------+    |
|  |             |                       |             |    |
|  |     EP      |                       |     EP      |    |
|  | CONTROLLER1 |                       | CONTROLLER2 |    |
|  |             <----------------------->             |    |
|  |             |                       |             |    |
|  |             |   SoC With Multiple   |             |    |
|  |             |     EP Instances      |             |    |
|  |             | (Configured using NTB |             |    |
|  |             |       Function)       |             |    |
|  +-------------+                       +-------------+    |
+-----------------------------------------------------------+

Software Layering:

The high-level SW layering should look something like below. This series
adds support only for RPMSG VHOST; however, something similar should be
done for net and scsi. With that, any vhost device (PCI, NTB, platform
device, user) can use any of the vhost client drivers.


+----------------+  +-----------+  +------------+  +-----------+
|  RPMSG VHOST   |  | NET VHOST |  | SCSI VHOST |  |     X     |
+-------^--------+  +-----^-----+  +------^-----+  +-----^-----+
        |                 |               |              |
        |                 |               |              |
        |                 |               |              |
+-------v-----------------v---------------v--------------v-----+
|                          VHOST CORE                          |
+-------^------------^-----------------^------------------^----+
        |            |                 |                  |
        |            |                 |                  |
        |            |                 |                  |
+-------v--------+  +v----------+  +---v------------------+  +-v---------+
|  PCI EPF VHOST |  | NTB VHOST |  | PLATFORM DEVICE VHOST |  |     X    |
+----------------+  +-----------+  +-----------------------+  +----------+

This was initially proposed here [1]

[1] -> https://lore.kernel.org/r/2cf00ec4-1ed6-f66e-6897-006d1a5b6...@ti.com


Kishon Vijay Abraham I (22):
  vhost: Make _feature_ bits a property of vhost device
  vhost: Introduce standard Linux driver model in VHOST
  vhost: Add ops for the VHOST driver to configure VHOST device
  vringh: Add helpers to access vring in MMIO
  vhost: Add MMIO helpers for operations on vhost virtqueue
  vhost: Introduce configfs entry for configuring VHOST
  virtio_pci: Use request_threaded_irq() instead of request_irq()
  rpmsg: virtio_rpmsg_bus: Disable receive virtqueue callback when
reading messages
  rpmsg: Introduce configfs entry for configuring rpmsg
  rpmsg: virtio_rpmsg_bus: Add Address Service