On Tue, Jul 18, 2023 at 12:55 PM Michael S. Tsirkin <m...@redhat.com> wrote:
>
> On Thu, Apr 20, 2023 at 10:59:56AM +0200, Eugenio Perez Martin wrote:
> > On Thu, Apr 20, 2023 at 7:25 AM Pei Li <pe...@andrew.cmu.edu> wrote:
> > >
> > > Hi all,
> > >
> > > My bad, I just submitted the kernel patch. If we are passing some generic 
> > > command, we still have to add an additional field in the structure to 
> > > indicate which unbatched command it corresponds to, and struct 
> > > vhost_ioctls would be some command-specific structure. In summary, the 
> > > structure would be something like
> > > struct vhost_cmd_batch {
> > >     int ncmds;
> > >     int cmd;
> >
> > The unbatched version should go in each vhost_ioctls. That allows us
> > to send many different commands in one ioctl instead of having to
> > resort to many ioctls, each one for a different task.
> >
> > The problem with that is choosing a size for struct vhost_ioctl so we
> > can build an array of them. I think sizing it to the biggest of them
> > (vhost_vring_addr ?) should be enough for a long time, but I would like
> > to know if anybody sees a drawback here. We could always resort to
> > pointers if we find we need more space, or start with them from the beginning.
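
For illustration only, a rough sketch of the fixed-size entry idea discussed
above (the names vhost_batched_ioctl / payload are hypothetical, not from the
patches; the union members are the existing per-vring argument structs, with
vhost_vring_addr being the largest one today):

struct vhost_batched_ioctl {
    unsigned int cmd;                   /* e.g. VHOST_VDPA_SET_VRING_ENABLE */
    union {
        struct vhost_vring_state state; /* index/num style commands */
        struct vhost_vring_file file;   /* kick/call fd commands */
        struct vhost_vring_addr addr;   /* largest member, sizes the union */
    } payload;
};

struct vhost_cmd_batch {
    int ncmds;
    struct vhost_batched_ioctl cmds[]; /* flexible array of fixed-size entries */
};

With fixed-size entries the array can be indexed directly; switching to
per-entry pointers would only be needed if a future command outgrows the union.
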
> >
> > CCing Si-Wei here too, as he is also interested in reducing the startup 
> > time.
> >
> > Thanks!
>
> And copying my response too:
> This is all very exciting, but what exactly is the benefit?
> No optimization patches are going to be merged without
> numbers showing performance gains.
> In this case, can you show gains in process startup time?
> Are they significant enough to warrant adding new UAPI?
>

This should have been marked as RFC in that regard.

When this was sent it was one of the planned actions to reduce
startup overhead. After Si-Wei's benchmarks, all the effort will focus
on reducing the pinning / mapping cost for the moment. It is unlikely
that this will move forward soon.

Thanks!


>
>
> > >     struct vhost_ioctls[];
> > > };
> > >
> > > This is doable. Also, this is my first time submitting patches to an open 
> > > source project, so sorry in advance about the mess. That said, feel free to 
> > > throw questions / comments!
> > >
> > > Thanks and best regards,
> > > Pei
> > >
> > > On Wed, Apr 19, 2023 at 9:19 PM Jason Wang <jasow...@redhat.com> wrote:
> > >>
> > >> On Wed, Apr 19, 2023 at 11:33 PM Eugenio Perez Martin
> > >> <epere...@redhat.com> wrote:
> > >> >
> > >> > On Wed, Apr 19, 2023 at 12:56 AM <peili....@gmail.com> wrote:
> > >> > >
> > >> > > From: Pei Li <peili....@gmail.com>
> > >> > >
> > >> > > Currently, part of the vdpa initialization / startup process
> > >> > > needs to trigger many ioctls per vq, which is very inefficient
> > >> > > and causes unnecessary context switches between user mode and
> > >> > > kernel mode.
> > >> > >
> > >> > > This patch creates an additional ioctl() command, namely
> > >> > > VHOST_VDPA_GET_VRING_GROUP_BATCH, that batches
> > >> > > VHOST_VDPA_GET_VRING_GROUP commands into a single
> > >> > > ioctl() call.
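
(For context, a hedged sketch of what the userspace side of such a batch could
look like. It assumes the same layout as the SET_VRING_ENABLE_BATCH hunk
further down, where element 0 carries the count; the helper name is made up
and this is not the code of the series.)

/* Sketch: one ioctl instead of one VHOST_VDPA_GET_VRING_GROUP call per vq.
 * Assumes states[0].num holds the count and states[1..n] each carry a vq
 * index on input and return the group id in .num.
 */
static int get_vring_groups_batched(int device_fd, int vq_index, int nvqs,
                                    uint32_t *groups)
{
    struct vhost_vring_state states[nvqs + 1];
    int i;

    states[0].num = nvqs;
    for (i = 1; i <= nvqs; ++i) {
        states[i].index = vq_index + i - 1;
    }
    if (ioctl(device_fd, VHOST_VDPA_GET_VRING_GROUP_BATCH, &states[0]) < 0) {
        return -errno;
    }
    for (i = 1; i <= nvqs; ++i) {
        groups[i - 1] = states[i].num;
    }
    return 0;
}
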
> > >>
> > >> I'd expect there to be a kernel patch, but I didn't see one?
> > >>
> > >> If we want to go this way, why not simply have a more generic way, that is,
> > >> introduce something like:
> > >>
> > >> VHOST_CMD_BATCH, which does something like
> > >>
> > >> struct vhost_cmd_batch {
> > >>     int ncmds;
> > >>     struct vhost_ioctls[];
> > >> };
> > >>
> > >> Then you can batch ioctls other than GET_VRING_GROUP?
> > >>
> > >> Thanks
> > >>
> > >> > >
> > >> >
> > >> > It seems to me you forgot to send the 0/2 cover letter :).
> > >> >
> > >> > Please include in each patch the time we save thanks to avoiding the
> > >> > repetitive ioctls.
> > >> >
> > >> > CCing Jason and Michael.
> > >> >
> > >> > > Signed-off-by: Pei Li <peili....@gmail.com>
> > >> > > ---
> > >> > >  hw/virtio/vhost-vdpa.c                       | 31 +++++++++++++++-----
> > >> > >  include/standard-headers/linux/vhost_types.h |  3 ++
> > >> > >  linux-headers/linux/vhost.h                  |  7 +++++
> > >> > >  3 files changed, 33 insertions(+), 8 deletions(-)
> > >> > >
> > >> > > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > >> > > index bc6bad23d5..6d45ff8539 100644
> > >> > > --- a/hw/virtio/vhost-vdpa.c
> > >> > > +++ b/hw/virtio/vhost-vdpa.c
> > >> > > @@ -679,7 +679,8 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
> > >> > >      uint64_t f = 0x1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 |
> > >> > >          0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH |
> > >> > >          0x1ULL << VHOST_BACKEND_F_IOTLB_ASID |
> > >> > > -        0x1ULL << VHOST_BACKEND_F_SUSPEND;
> > >> > > +        0x1ULL << VHOST_BACKEND_F_SUSPEND |
> > >> > > +        0x1ULL << VHOST_BACKEND_F_IOCTL_BATCH;
> > >> > >      int r;
> > >> > >
> > >> > >      if (vhost_vdpa_call(dev, VHOST_GET_BACKEND_FEATURES, &features)) {
> > >> > > @@ -731,14 +732,28 @@ static int vhost_vdpa_get_vq_index(struct vhost_dev *dev, int idx)
> > >> > >
> > >> > >  static int vhost_vdpa_set_vring_ready(struct vhost_dev *dev)
> > >> > >  {
> > >> > > -    int i;
> > >> > > +    int i, nvqs = dev->nvqs;
> > >> > > +    uint64_t backend_features = dev->backend_cap;
> > >> > > +
> > >> > >      trace_vhost_vdpa_set_vring_ready(dev);
> > >> > > -    for (i = 0; i < dev->nvqs; ++i) {
> > >> > > -        struct vhost_vring_state state = {
> > >> > > -            .index = dev->vq_index + i,
> > >> > > -            .num = 1,
> > >> > > -        };
> > >> > > -        vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE, &state);
> > >> > > +
> > >> > > +    if (!(backend_features & BIT_ULL(VHOST_BACKEND_F_IOCTL_BATCH))) {
> > >> > > +        for (i = 0; i < nvqs; ++i) {
> > >> > > +            struct vhost_vring_state state = {
> > >> > > +                .index = dev->vq_index + i,
> > >> > > +                .num = 1,
> > >> > > +            };
> > >> > > +            vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE, &state);
> > >> > > +        }
> > >> > > +    } else {
> > >> > > +        struct vhost_vring_state states[nvqs + 1];
> > >> > > +        states[0].num = nvqs;
> > >> > > +        for (i = 1; i <= nvqs; ++i) {
> > >> > > +            states[i].index = dev->vq_index + i - 1;
> > >> > > +            states[i].num = 1;
> > >> > > +        }
> > >> > > +
> > >> >
> > >> > I think it's clearer to share the array of vhost_vring_state
> > >> > states[nvqs + 1], and then submit it either in one shot with
> > >> > VHOST_VDPA_SET_VRING_ENABLE_BATCH or individually with
> > >> > VHOST_VDPA_SET_VRING_ENABLE, as sketched below.
> > >> >
> > >> > The same feedback goes for VHOST_VDPA_GET_VRING_GROUP_BATCH in the 
> > >> > next patch.
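
To make that concrete, a rough sketch (not the actual patch code) of sharing
the array between the two paths could be:

/* Sketch: fill states[] once, then submit it either entry by entry or with
 * the proposed batched ioctl, depending on the backend feature bit.
 */
struct vhost_vring_state states[nvqs + 1];

states[0].num = nvqs;                     /* only used by the batched path */
for (i = 1; i <= nvqs; ++i) {
    states[i].index = dev->vq_index + i - 1;
    states[i].num = 1;
}

if (backend_features & BIT_ULL(VHOST_BACKEND_F_IOCTL_BATCH)) {
    vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE_BATCH, &states[0]);
} else {
    for (i = 1; i <= nvqs; ++i) {
        vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE, &states[i]);
    }
}
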
> > >> >
> > >> > > +        vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE_BATCH, &states[0]);
> > >> > >      }
> > >> > >      return 0;
> > >> >
> > >> > This comment predates your patch, but I just realized we don't
> > >> > check the return value of vhost_vdpa_call. Maybe it is worth
> > >> > addressing that in this series too.
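
As an illustration (not part of this series), the non-batched loop could
propagate errors like this, assuming vhost_vdpa_call() returns a negative
errno on failure:

for (i = 0; i < dev->nvqs; ++i) {
    struct vhost_vring_state state = {
        .index = dev->vq_index + i,
        .num = 1,
    };
    int r = vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE, &state);
    if (r < 0) {
        return r;
    }
}
return 0;
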
> > >> >
> > >> > >  }
> > >> > > diff --git a/include/standard-headers/linux/vhost_types.h b/include/standard-headers/linux/vhost_types.h
> > >> > > index c41a73fe36..068d0e1ceb 100644
> > >> > > --- a/include/standard-headers/linux/vhost_types.h
> > >> > > +++ b/include/standard-headers/linux/vhost_types.h
> > >> > > @@ -164,4 +164,7 @@ struct vhost_vdpa_iova_range {
> > >> > >  /* Device can be suspended */
> > >> > >  #define VHOST_BACKEND_F_SUSPEND  0x4
> > >> > >
> > >> > > +/* IOCTL requests can be batched */
> > >> > > +#define VHOST_BACKEND_F_IOCTL_BATCH 0x6
> > >> > > +
> > >> > >  #endif
> > >> > > diff --git a/linux-headers/linux/vhost.h b/linux-headers/linux/vhost.h
> > >> > > index f9f115a7c7..4c9ddd0a0e 100644
> > >> > > --- a/linux-headers/linux/vhost.h
> > >> > > +++ b/linux-headers/linux/vhost.h
> > >> > > @@ -180,4 +180,11 @@
> > >> > >   */
> > >> > >  #define VHOST_VDPA_SUSPEND             _IO(VHOST_VIRTIO, 0x7D)
> > >> > >
> > >> > > +/* Batch version of VHOST_VDPA_SET_VRING_ENABLE
> > >> > > + *
> > >> > > + * Enable/disable the ring while batching the commands.
> > >> > > + */
> > >> > > +#define VHOST_VDPA_SET_VRING_ENABLE_BATCH      _IOW(VHOST_VIRTIO, 0x7F, \
> > >> > > +                                            struct vhost_vring_state)
> > >> > > +
> > >> >
> > >> > These headers should be updated with update-linux-headers.sh. To do it
> > >> > that way we need the changes merged in the kernel first.
> > >> >
> > >> > We can signal that the series is not ready for inclusion with the
> > >> > subject [RFC PATCH], like [1], and note it in the commit message.
> > >> > That way, you can keep updating the headers manually for demonstration
> > >> > purposes.
> > >> >
> > >> > Thanks!
> > >> >
> > >> > [1] 
> > >> > https://lore.kernel.org/qemu-devel/20220413163206.1958254-23-epere...@redhat.com/
> > >> >
> > >>
> > >>
> >
> >
>

