> From: Michael S. Tsirkin <m...@redhat.com>
> Sent: Tuesday, October 31, 2023 1:29 PM
> 
> On Tue, Oct 31, 2023 at 03:11:57AM +0000, Parav Pandit wrote:
> >
> >
> > > From: Michael S. Tsirkin <m...@redhat.com>
> > > Sent: Tuesday, October 31, 2023 5:02 AM
> > >
> > > On Mon, Oct 30, 2023 at 06:10:06PM +0000, Parav Pandit wrote:
> > > >
> > > >
> > > > > From: Michael S. Tsirkin <m...@redhat.com>
> > > > > Sent: Monday, October 30, 2023 9:29 PM
> > > > >
> > > > > On Mon, Oct 30, 2023 at 03:51:40PM +0000, Parav Pandit wrote:
> > > > > >
> > > > > >
> > > > > > > From: Michael S. Tsirkin <m...@redhat.com>
> > > > > > > Sent: Monday, October 30, 2023 1:53 AM
> > > > > > >
> > > > > > > On Sun, Oct 29, 2023 at 05:59:45PM +0200, Yishai Hadas wrote:
> > > > > > > > From: Feng Liu <fe...@nvidia.com>
> > > > > > > >
> > > > > > > > Introduce support for the admin virtqueue. By negotiating
> > > > > > > > VIRTIO_F_ADMIN_VQ feature, driver detects capability and
> > > > > > > > creates one administration virtqueue. Administration
> > > > > > > > virtqueue implementation in virtio pci generic layer,
> > > > > > > > enables multiple types of upper layer drivers such as vfio,
> > > > > > > > net, blk to utilize it.
> > > > > > > >
> > > > > > > > Signed-off-by: Feng Liu <fe...@nvidia.com>
> > > > > > > > Reviewed-by: Parav Pandit <pa...@nvidia.com>
> > > > > > > > Reviewed-by: Jiri Pirko <j...@nvidia.com>
> > > > > > > > Signed-off-by: Yishai Hadas <yish...@nvidia.com>
> > > > > > > > ---
> > > > > > > >  drivers/virtio/virtio.c                | 37 ++++++++++++++--
> > > > > > > >  drivers/virtio/virtio_pci_common.c     |  3 ++
> > > > > > > >  drivers/virtio/virtio_pci_common.h     | 15 ++++++-
> > > > > > > >  drivers/virtio/virtio_pci_modern.c     | 61 +++++++++++++++++++++++++-
> > > > > > > >  drivers/virtio/virtio_pci_modern_dev.c | 18 ++++++++
> > > > > > > >  include/linux/virtio_config.h          |  4 ++
> > > > > > > >  include/linux/virtio_pci_modern.h      |  5 +++
> > > > > > > >  7 files changed, 137 insertions(+), 6 deletions(-)
> > > > > > > >
> > > > > > > > diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
> > > > > > > > index 3893dc29eb26..f4080692b351 100644
> > > > > > > > --- a/drivers/virtio/virtio.c
> > > > > > > > +++ b/drivers/virtio/virtio.c
> > > > > > > > @@ -302,9 +302,15 @@ static int virtio_dev_probe(struct device *_d)
> > > > > > > >         if (err)
> > > > > > > >                 goto err;
> > > > > > > >
> > > > > > > > +       if (dev->config->create_avq) {
> > > > > > > > +               err = dev->config->create_avq(dev);
> > > > > > > > +               if (err)
> > > > > > > > +                       goto err;
> > > > > > > > +       }
> > > > > > > > +
> > > > > > > >         err = drv->probe(dev);
> > > > > > > >         if (err)
> > > > > > > > -               goto err;
> > > > > > > > +               goto err_probe;
> > > > > > > >
> > > > > > > >         /* If probe didn't do it, mark device DRIVER_OK ourselves. */
> > > > > > > >         if (!(dev->config->get_status(dev) & VIRTIO_CONFIG_S_DRIVER_OK))
> > > > > > >
> > > > > > > Hmm I am not all that happy that we are just creating avq unconditionally.
> > > > > > > Can't we do it on demand to avoid wasting resources if no one uses it?
> > > > > > >
> > > > > > Virtio queues must be enabled before driver_ok as we discussed in
> > > > > > F_DYNAMIC bit exercise.
> > > > > > So creating AQ when first legacy command is invoked, would be too late.
> > > > >
> > > > > Well we didn't release the spec with AQ so I am pretty sure
> > > > > there are no devices using the feature. Do we want to already
> > > > > make an exception for AQ and allow creating AQs after DRIVER_OK
> > > > > even without F_DYNAMIC?
> > > > >
> > > > No. it would abuse the init time config registers for the dynamic
> > > > things like this.
> > > > For flow filters and others there is need for dynamic q creation
> > > > with multiple physical address anyway.
> > >
> > > That seems like a completely unrelated issue.
> > >
> > It isn't.
> > Driver requirements are:
> > 1. Driver needs to dynamically create vqs
> > 2. Sometimes this VQ needs to have multiple physical addresses
> > 3. Driver needs to create them after driver is fully running, past the
> >    bootstrap stage using tiny config registers
> >
> > Device requirements are:
> > 1. Not to keep growing 64K VQs *(8+8+8) bytes of address registers +
> >    enable bit
> > 2. Ability to return appropriate error code when fail to create queue
> > 3. Above #2
> >
> > Users of this new infrastructure are eth tx,rx queues, flow filter queues,
> > aq, blk rq per cpu.
> > AQs are just one of those.
> > When a generic infrastructure for this will be built in the spec as we
> > started that, all above use cases will be handled.
> >
> > > > So creating virtqueues dynamically using a generic scheme is desired
> > > > with new feature bit.
> 
> Reducing config registers and returning errors should be handled by a new
> transport.
> VQ with multiple addresses - I can see how you would maybe only support that
> with that new transport?
> 
PCI is the transport that offers a unified way to create and destroy dynamic
VQs, and to modify/query VQ attributes, uniformly across PFs and VFs.

So there is no need for some grand new transport. The virtio spec already has
underlying infrastructure that can be extended for PCI.
For example, VQ attributes are already modified and queried via the CVQ for net.
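
As a concrete reference point, here is a rough sketch of the per-VQ
notification coalescing command that net already carries over its cvq
(modelled on the uapi <linux/virtio_net.h> definitions for
VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET/GET; treat the exact field layout here as
illustrative rather than authoritative):

#include <linux/types.h>

/* Per-VQ notification coalescing parameters, set/queried via the net
 * device's cvq. Shown only to illustrate that per-VQ attributes already
 * travel over a queue-based control interface.
 */
struct virtio_net_ctrl_coal {
        __le32 max_packets;     /* notify after at most this many packets */
        __le32 max_usecs;       /* ...or after this many microseconds */
};

struct virtio_net_ctrl_coal_vq {
        __le16 vqn;             /* which virtqueue the settings apply to */
        __le16 reserved;
        struct virtio_net_ctrl_coal coal;
};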

Such create/destroy commands can easily be supported on the cvq
(a cvq already exists on 6 out of the 18 virtio device types).
The remaining 12 device types are small enough that they are unlikely to need
dynamic vqs.

All of this will be neatly tied to a single interface between driver and device
for VQ create/destroy/modify/query.
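
To make that concrete, below is a purely hypothetical sketch of what a
generic create-VQ command carried over such a queue could look like; the
struct names, field layout and the multi-region extension are assumptions
for discussion only, not something from the spec, the OASIS proposals or
this patch:

#include <linux/types.h>

/* Hypothetical payload for a generic "create VQ" command sent over an
 * existing cvq/aq. Illustrative only.
 */
struct virtio_cmd_vq_region {
        __le64 addr;            /* physical address of this region */
        __le32 len;             /* length of this region in bytes */
        __le32 reserved;
};

struct virtio_cmd_vq_create {
        __le16 vq_index;        /* index of the queue being created */
        __le16 queue_size;      /* number of descriptors */
        __le16 msix_vector;     /* notification vector, if any */
        __le16 num_regions;     /* how many regions back the queue */
        /* Splitting the queue across regions avoids one huge contiguous
         * allocation; the device returns a status code in the command
         * result, covering device requirement #2 above.
         */
        struct virtio_cmd_vq_region region[];
};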

Anyway, this is the work OASIS is currently doing for the 1.4 timeline.
This patch is based on the 1.3 standard update.

> 
> I think I can guess why you are tying multiple addresses to dynamic VQs - you
> suspect that allocating huge half-megabyte VQs dynamically will fail if
> triggered on a busy system. Is that the point?
Yes, it is likely. I don't have the link handy right now, but Eric S and/or
Saeed shared some links about it.

> 
> 
> In that case I feel it's a good argument to special-case admin VQs because
> there's no real need to make them huge at the moment - for example this
> driver just adds one at a time.
> No?
In its current form, creating a VQ sized for a single outstanding command with
4 descriptors = 64 bytes is not that huge a resource wastage either.
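
A quick back-of-the-envelope check of that figure, assuming the split-ring
definitions from the uapi <linux/virtio_ring.h> header are available (a
standalone user-space snippet, not part of the patch):

#include <stdio.h>
#include <linux/virtio_ring.h>

int main(void)
{
        unsigned int num = 4;   /* queue sized for one outstanding command */

        /* Descriptor table: 4 * sizeof(struct vring_desc) = 4 * 16 = 64 bytes.
         * The avail and used rings for 4 entries only add a few dozen bytes.
         */
        printf("descriptor table: %zu bytes\n", num * sizeof(struct vring_desc));
        return 0;
}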

So the Linux driver can adapt to a dynamic AQ, and support for multiple PAs can
be added in the future once OASIS adopts it.