On Thu, Mar 09, 2023 at 01:04:30PM +0000, Parav Pandit wrote:
> > From: Jiri Pirko <j...@nvidia.com>
> > Sent: Thursday, March 9, 2023 2:31 AM
> >
> > Wed, Mar 08, 2023 at 10:25:32PM CET, pa...@nvidia.com wrote:
> > >
> > >> From: virtio-comm...@lists.oasis-open.org
> > >> <virtio-comment@lists.oasis-open.org> On Behalf Of David Edmondson
> > >
> > >> In support of live migration, might we end up moving large amounts of
> > >> device state through the admin queue?
> > >>
> > > Correct.
> > >
> > >> If so, that would seem to have some performance requirements, though
> > >> I don't know if it would justify multiple admin queues.
> > > DMA of the data through the proposed AQ is supported.
> > >
> > > If I understood Max correctly when he said "This AQ is not aimed for
> > > performance", he means that the AQ doesn't have the performance
> > > requirements of io/network queues that complete millions of ops/sec.
> > >
> > > It is several hundred to maybe (on the higher side) a thousand ops/sec
> > > during the LM and provisioning use cases.
> >
> > But isn't it good to design it for performance from the start? I mean,
> > state transfer of thousands of VFs at a time is definitely performance
> > related, isn't it?
> >
> It is. Which part of the proposed AQ doesn't cover this aspect?
> The only issue that I see today is that a given GET family of commands
> contains the read-only and write-only descriptors, which require multiple
> DMA allocations on the driver side.
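
To illustrate the allocation cost described above, here is a rough sketch;
the struct names, fields and command layout are assumptions for illustration,
not the proposed spec text. A GET-style admin command today needs two
driver-side DMA allocations: a device-readable buffer carrying the request
and a device-writable buffer receiving the result, posted as a two-descriptor
chain on the admin virtqueue.

    #include <stdint.h>

    struct aq_get_cmd {             /* device-readable: what the driver asks for */
            uint16_t opcode;        /* hypothetical "get device state" opcode */
            uint16_t vf_id;         /* group member (VF) being queried */
            uint32_t offset;        /* offset into the state blob */
            uint32_t length;        /* how many bytes of state to return */
    };

    struct aq_get_result {          /* device-writable: what the device answers */
            uint8_t status;
            uint8_t data[];         /* state bytes written by the device */
    };

    /*
     * Driver side, Linux-kernel flavoured pseudocode (kept in a comment since
     * it only makes sense in kernel context): each buffer is mapped
     * separately and the two are chained as one OUT + one IN descriptor.
     *
     *     struct scatterlist out, in, *sgs[2] = { &out, &in };
     *
     *     sg_init_one(&out, cmd, sizeof(*cmd));   // device read-only part
     *     sg_init_one(&in,  res, res_size);       // device write-only part
     *     virtqueue_add_sgs(admin_vq, sgs, 1, 1, cmd, GFP_KERNEL);
     */
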
BTW, for a while now I have wanted to add a descriptor flag that would stick
data inline in the descriptor. Perfect for passing small bits of data such as
a vqn or the virtio header.

> From a DMA perspective, as you mentioned, the AQ still has the same perf
> requirements as regular nw/blk io queues.
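
To make the inline-descriptor idea above concrete, a hypothetical sketch
follows; VIRTQ_DESC_F_INLINE and the helper are assumptions of mine, while
struct virtq_desc and the first three flags are the existing split-ring
definitions (fields are little-endian on the ring, plain integers used here
for brevity).

    #include <stdint.h>
    #include <string.h>

    #define VIRTQ_DESC_F_NEXT     1  /* existing: buffer continues via 'next' */
    #define VIRTQ_DESC_F_WRITE    2  /* existing: buffer is device-writable */
    #define VIRTQ_DESC_F_INDIRECT 4  /* existing: buffer holds an indirect table */
    #define VIRTQ_DESC_F_INLINE   8  /* hypothetical: payload lives in the descriptor */

    struct virtq_desc {
            uint64_t addr;          /* with F_INLINE set: reused as inline payload */
            uint32_t len;           /* with F_INLINE set: number of inline bytes */
            uint16_t flags;
            uint16_t next;
    };

    /* Place a small payload (e.g. a 2-byte vqn) directly into the descriptor,
     * avoiding a separate DMA allocation/mapping for it.  Caller guarantees
     * len <= sizeof(d->addr). */
    static void desc_put_inline(struct virtq_desc *d, const void *buf,
                                uint32_t len, uint16_t flags)
    {
            memcpy(&d->addr, buf, len);
            d->len = len;
            d->flags = flags | VIRTQ_DESC_F_INLINE;
    }

With only the addr field reused, 8 bytes fit; carrying something like a
12-byte virtio-net header inline would also need to reuse len (or the
packed-ring layout), which is the kind of detail an actual proposal would
have to pin down.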