[virtio-dev] RE: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-17 Thread Parav Pandit


> From: Jason Wang 
> Sent: Monday, April 17, 2023 9:02 PM
> 
> > Isn't the size of BAR and its cap_len exposed by the device?
> 
> Somehow, it's more about how the hypervisor is going to use this, memory
> mapped or trapping. For either case, the hypervisor needs to have virtio
> knowledge in order to finish this.
>
Ok.

> > The PCI BAR size of the VF can account for the system page size being
> > different on x86 (4K) and arm (64K).
> > PCI transport seems to support it.
> 
> Yes this is for SR-IOV but not for other cases. We could invent new 
> facilities for
> sure but the hypervisor can not have this assumption.
>
Yeah, it's not an assumption.
 

> > > assuming you have two generations of device
> > >
> > > gen1: features x,y
> > > gen2: features x,y,z
> > >
> > > You won't be able to do migration between gen1 and gen2 without
> mediation.
> > Gen1 can easily migrate to gen2, because gen1 has smaller subset than gen2.
> > When gen2 device is composed, feature z is disabled.
> 
> Sure, but this requires a lot of features that do not exist in the spec. E.g.
> it assumes the device could be composed on demand, which seems to fit the idea
> of transport virtqueue.
I don't see how transport vq is related.
A device could be composed as a PCI VF, PCI SIOV, or something else.
The underlying transport will tell how it is composed.
Maybe the underlying transport is a transport VQ, but that is not the only
transport.

> So it adds dependencies for migration where a simple
> mediation could be used to solve this without bothering the spec.
>
Mediation by the PF and hypervisor is not encouraged anymore as we move towards
confidential computing (CC).
So maybe some systems will do it, but as we have PCI VFs, there is a clear need
for non-mediated 1.x devices for such guest VMs.
For the legacy interface, mediation is acceptable as there is no CC
infrastructure in place on older systems.
 
> >
> > Gen2 to gen1 migration can do software-based migration anyway or through
> mediation.
> > But because gen2 may need to migrate to gen1, hence gen2 to gen2 migration
> also should be done through mediation, doesn’t make sense to me.
> 
> It really depends on the design:
> 
> 1) if you want to expose any features that is done by admin virtqueue to a
> guest, mediation is a must (e.g if you want do live migration for
> L1)
> 2) mediation is a must for the idea of transport virtqueue
>
Yes. So both transport options are there.
A PCI VF that doesn't carry legacy baggage will be just fine without mediation.
If, for some reason, one wants mediation, maybe there is an option of such a
new transport.
But such a transport cannot be the only transport.
 
> > > So as mentioned in another thread, this is a PCI specific solution:
> > >
> > > 1) feature and config are basic virtio facility
> > > 2) capability is not but specific to PCI transport
> > >
> > So any LM solution will have transport-specific checks and virtio-level
> > checks.
> 
> So here's the model that is used by Qemu currently:
> 
> 1) Device is emulated, it's the charge of the libvirt to launch Qemu and 
> present
> a stable ABI for guests.
> 2) Datapath doesn't need to care about the hardware details since the
> hardware layout is invisible from guest
> 
> You can see, it's more than sufficient for libvirt to check features/config 
> space,
> it doesn't need to care about the hardware BAR layout. Migration is much
> easier in this way. And we can use transport other than PCI in the guest in 
> this
> case for live migration.
>
Sure, it works in some use cases.
But it is not the only way to operate, as I explained above; there is a
requirement to avoid mediation for the non-legacy interface.

> > The solution needs to cover the transport as well, since the transport is an
> > integral part of the virtio spec.
> > Each transport layer will implement feature/config/cap in its own way.
> 
> If we can avoid those hardware details to be checked, we should not go for
> that. It's a great ease of the management layer.
Those are mainly read-only (RO) checks and cheap too. They are largely not
involved in the LM or data path flow either.


[virtio-dev] RE: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-17 Thread Parav Pandit


> From: Jason Wang 
> Sent: Monday, April 17, 2023 9:09 PM

> Note that current transport virtqueue only allows the notification via MMIO. 
> It
> introduces a command to get the address of the notification area.
>
Notifications via MMIO are the obvious choice.
A command is also fine to convey that.
I haven't seen the transport VQ proposal.
Do you have a pointer to it?



[virtio-dev] RE: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-17 Thread Parav Pandit


> From: Jason Wang 
> Sent: Monday, April 17, 2023 8:37 PM

> > > > Do you mean say we have three AQs, AQ_1, AQ_2, and AQ_3;
> > > > AQ_1 of the PF used by admin work as SIOV device create, SRIOV
> > > > MSIX
> > > configuration.
> > > > AQ_2 of the PF used for transporting legacy config access of the
> > > > PCI VF
> > > > AQ_3 of the PF for some transport work.
> > > >
> > > > If yes, sounds fine to me.
> > >
> > > Latest proposal simply leaves the split between AQs up to the driver.
> > > Seems the most flexible.
> > Yes. It is. Different opcode range and multiple AQs enable to do so.
> 
> Right, so it would be some facility that makes the transport commands of
> modern and legacy mutually exclusive.

Ok. I didn't follow the mutual exclusion part.
If a device has exposed a legacy interface, it will have to transport legacy
access via its PF.
The same device can be transitional, and its 1.x interface doesn't need to go
through this transport channel of the PF, right?


[virtio-dev] Re: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-17 Thread Jason Wang
On Tue, Apr 18, 2023 at 12:59 AM Parav Pandit  wrote:
>
>
> > From: Michael S. Tsirkin 
> > Sent: Sunday, April 16, 2023 4:44 PM
> >
> > On Sun, Apr 16, 2023 at 01:41:55PM +, Parav Pandit wrote:
> > > > From: virtio-comm...@lists.oasis-open.org
> > > >  On Behalf Of Michael S.
> > > > Tsirkin
> > > > Sent: Friday, April 14, 2023 2:57 AM
> > >
> > > > Do you refer to the trick Jason proposed where BAR0 is memory but
> > > > otherwise matches legacy BAR0 exactly? Is this your preferred solution 
> > > > at
> > this point then?
> > >
> > > We look at it again.
> > > Above solution can work reliably only for a very small number of PF and 
> > > that
> > too with very special hardware circuitry due to the reset flow.
> > >
> > > Therefore, for virtualization below interface is preferred.
> > > a. For transitional device legacy configuration register transport
> > > over AQ,
> >
> > I don't get what this has to do with transitional ...
> >
> Typically, in the current wording, a transitional device is one that supports
> the legacy interface.
> So, it doesn't have to be only for transitional devices.
>
> I just wanted to highlight that a PCI VF device with its parent PCI PF device 
> can transport the legacy interface commands.
>
> > > Notification to utilize transitional device notification area of the BAR.
> >
> > The vq transport does something like this, no?
> >
> Notifications over a queuing interface are unlikely to be performant,
> because one is a configuration task and the other is a data-path task.

Note that current transport virtqueue only allows the notification via
MMIO. It introduces a command to get the address of the notification
area.

Thanks

>
> > > b. Non legacy interface of transitional and non-transitional PCI device to
> > access direct PCI device without mediation.
> >
> > So VF can either be accessed through AQ of PF, or through direct mapping?
> Right. The VF's legacy registers are accessed using the AQ of the PF, while
> non-legacy registers continue to use direct mapping as done today.
>


-
To unsubscribe, e-mail: virtio-dev-unsubscr...@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-h...@lists.oasis-open.org



[virtio-dev] Re: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-17 Thread Jason Wang
On Tue, Apr 18, 2023 at 1:23 AM Parav Pandit  wrote:
>
>
>
> > From: Jason Wang 
> > Sent: Sunday, April 16, 2023 11:23 PM
> >
> > On Fri, Apr 14, 2023 at 11:51 AM Parav Pandit  wrote:
> > >
> > >
> > >
> > > > From: Jason Wang 
> > > > Sent: Thursday, April 13, 2023 11:38 PM
> > >
> > > > > > 1) spec doesn't enforce the size of a specific structure
> > > > > The spec will be extended in due course.
> > > >
> > > > It's too late to do any new restriction without introducing a flag.
> > > We are really diverging from the topic.
> > > I don’t think it is late. The work in this area of PCI VF has not even 
> > > begun fully.
> >
> > I meant it needs a new feature flag.
> >
> Ok.
> > >
> > > > Mandating the size may easily end up with an architecture-specific solution.
> > > >
> > > Unlikely. Other standard device types are also expanding this way.
> >
> > I think we are talking about software technologies instead of device design
> > here.
> >
> Isn't the size of BAR and its cap_len exposed by the device?

Somehow, it's more about how the hypervisor is going to use this,
memory mapped or trapping. For either case, the hypervisor needs to
have virtio knowledge in order to finish this.

>
> > For devices it works. But the hypervisor needs to deal with a size that
> > doesn't match the arch's page size.
> >
> The PCI BAR size of the VF can account for the system page size being
> different on x86 (4K) and arm (64K).
> The PCI transport seems to support it.

Yes this is for SR-IOV but not for other cases. We could invent new
facilities for sure but the hypervisor can not have this assumption.

>
> A PCI PF on bare metal has to understand the highest page size anyway if for
> some reason the bare-metal host wants to map this PF to the VM.
>
> A hypervisor mediating and emulating needs to learn the system page size
> anyway.
> If the underlying device page size is smaller, the hypervisor may end up
> mediating it.

Exactly. So what I want to say is, for whatever case, a hypervisor
needs to have virtio knowledge in order to achieve these.

>
> > I meant you can't have recommendations in features and config.
> Sure. There is none.
>
> > What's more,
> > assuming you have two generations of device
> >
> > gen1: features x,y
> > gen2: features x,y,z
> >
> > You won't be able to do migration between gen1 and gen2 without mediation.
> Gen1 can easily migrate to gen2, because gen1's features are a subset of gen2's.
> When the gen2 device is composed, feature z is disabled.

Sure, but this requires a lot of features that do not exist in the
spec. E.g it assumes the device could be composed on demand which
seems to fit the idea of transport virtqueue. So it adds dependencies
for migration where a simple mediation could be used to solve this
without bothering the spec.

>
> Gen2-to-gen1 migration can be done via software-based migration anyway, or
> through mediation.
> But the argument that because gen2 may need to migrate to gen1, gen2-to-gen2
> migration should also be done through mediation doesn't make sense to me.

It really depends on the design:

1) if you want to expose any feature that is done by the admin virtqueue
to a guest, mediation is a must (e.g. if you want to do live migration for
L1)
2) mediation is a must for the idea of transport virtqueue

>
> > Such technologies have been used by cpu features for years.
> > I am not sure why it became a problem for you.
> >
> > > Apart from it some of the PCI device layout compat checks will be covered
> > too.
> > >
> > > > And what you proposed is to allow the management to know the exact
> > > > hardware layout in order to check the compatibility? And the
> > > > management needs to evolve as new structures are added.
> > > Mostly not; the mgmt stack may not need to evolve a lot.
> > > Because most layouts should be growing within the device context and not 
> > > at
> > the PCI capabilities etc area.
> > >
> > > And even if it does, it's fine, as a large part of it is standard PCI spec
> > definitions.
> >
> > So as mentioned in another thread, this is a PCI specific solution:
> >
> > 1) feature and config are basic virtio facility
> > 2) capability is not but specific to PCI transport
> >
> So any LM solution will have transport specific checks and virtio level 
> checks.

So here's the model that is used by Qemu currently:

1) The device is emulated; it is libvirt's job to launch Qemu
and present a stable ABI for guests.
2) The datapath doesn't need to care about the hardware details since the
hardware layout is invisible to the guest.

You can see, it's more than sufficient for libvirt to check
features/config space, it doesn't need to care about the hardware BAR
layout. Migration is much easier in this way. And we can use transport
other than PCI in the guest in this case for live migration.

>
> > Checking PCI capability layout in the virtio management is a layer violation
> > which can't work for future transport like SIOV or adminq.
> Virtio management that has transport-level checks is not a violation.
> SIOV will define its own transport anyway. Not to mix with ccw/mmio or pci.

[virtio-dev] Re: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-17 Thread Jason Wang
On Tue, Apr 18, 2023 at 4:29 AM Parav Pandit  wrote:
>
>
> > From: Michael S. Tsirkin 
> > Sent: Monday, April 17, 2023 4:27 PM
> > On Mon, Apr 17, 2023 at 05:23:30PM +, Parav Pandit wrote:
> > > > Things might be simplified if we use separate queues for admin,
> > > > transport and legacy.
> > > >
> > > Do you mean say we have three AQs, AQ_1, AQ_2, and AQ_3;
> > > AQ_1 of the PF used by admin work as SIOV device create, SRIOV MSIX
> > configuration.
> > > AQ_2 of the PF used for transporting legacy config access of the PCI
> > > VF
> > > AQ_3 of the PF for some transport work.
> > >
> > > If yes, sounds fine to me.
> >
> > Latest proposal simply leaves the split between AQs up to the driver.
> > Seems the most flexible.
> Yes, it is. Different opcode ranges and multiple AQs enable doing so.

Right, so it would be some facility that makes the transport commands
of modern and legacy mutually exclusive.

Thanks

>
> This publicly archived list offers a means to provide input to the
> OASIS Virtual I/O Device (VIRTIO) TC.
>
> In order to verify user consent to the Feedback License terms and
> to minimize spam in the list archive, subscription is required
> before posting.
>
> Subscribe: virtio-comment-subscr...@lists.oasis-open.org
> Unsubscribe: virtio-comment-unsubscr...@lists.oasis-open.org
> List help: virtio-comment-h...@lists.oasis-open.org
> List archive: https://lists.oasis-open.org/archives/virtio-comment/
> Feedback License: https://www.oasis-open.org/who/ipr/feedback_license.pdf
> List Guidelines: https://www.oasis-open.org/policies-guidelines/mailing-lists
> Committee: https://www.oasis-open.org/committees/virtio/
> Join OASIS: https://www.oasis-open.org/join/
>





RE: [virtio-dev] [PATCH v13 03/11] content: Rename confusing queue_notify_data and vqn names

2023-04-17 Thread Parav Pandit



> From: Halil Pasic 
> Sent: Sunday, April 16, 2023 11:42 PM

> > diff --git a/notifications-le.c b/notifications-le.c index
> > fe51267..f73c6a5 100644
> > --- a/notifications-le.c
> > +++ b/notifications-le.c
> > @@ -1,5 +1,5 @@
> >  le32 {
> > -   vqn : 16;
> > +   vq_config_data: 16; /* previously known as vqn */
> 
> Is this where the union was supposed to go? I.e.
> something like:
> 
> union {
>   le16 vq_config_data;
>   le16 vq_index;
>   } vq_index_config_data;
> 
> (You said in the v12 discussion in MID
>  12.prod.outlook.com>
> that you are going to change vqn to the union of vq_config_data and vq_index,
> although vq_notif_config_data might be preferable).
> 
Ah, my bad. I again got confused when writing the patch, thinking that this
structure is only for notify_config_data.
It is not.

I will fix the union.
I will leave it unnamed.

I will rename vqn to vq_notif_config_data.

> Or did you mean to do
> 
> + vq_index_config_data: 16; /* previously known as vqn */
> 
> Of course, this has an impact on the rest of the text...
> 
Union is more readable. Will keep union.
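For concreteness, the unnamed union being discussed might look roughly like the
following in C. The field names follow this thread, but the exact spelling was
still being settled, and the packing helper is only a sketch of the le32 layout
quoted above (16 bits of vq identification, 15 bits next_off, 1 bit next_wrap):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the packed-ring available buffer notification layout with
 * the proposed unnamed union replacing the old 16-bit vqn field.
 * Field names are provisional, taken from the discussion above. */
struct notification_le32 {
    union {
        uint16_t vq_notif_config_data; /* used when VIRTIO_F_NOTIF_CONFIG_DATA is negotiated */
        uint16_t vq_index;             /* plain virtqueue index otherwise */
    };
    unsigned next_off  : 15;
    unsigned next_wrap : 1;
};

/* Pack the structure into the on-the-wire 32-bit little-endian value. */
static inline uint32_t notification_to_le32(const struct notification_le32 *n)
{
    return (uint32_t)n->vq_index
         | ((uint32_t)n->next_off << 16)
         | ((uint32_t)n->next_wrap << 31);
}
```

The union makes it explicit that the low 16 bits carry either a virtqueue index
or device-chosen config data, depending on feature negotiation.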

> > next_off : 15;
> > next_wrap : 1;
> >  };
> > diff --git a/transport-pci.tex b/transport-pci.tex index
> > 5d98467..53c8ee6 100644
> > --- a/transport-pci.tex
> > +++ b/transport-pci.tex
> > @@ -319,7 +319,7 @@ \subsubsection{Common configuration structure
> layout}\label{sec:Virtio Transport
> >  le64 queue_desc;/* read-write */
> >  le64 queue_driver;  /* read-write */
> >  le64 queue_device;  /* read-write */
> > -le16 queue_notify_data; /* read-only for driver */
> > +le16 queue_notify_config_data;  /* read-only for driver */
> >  le16 queue_reset;   /* read-write */
> >  };
> >  \end{lstlisting}
> > @@ -388,17 +388,21 @@ \subsubsection{Common configuration structure
> > layout}\label{sec:Virtio Transport  \item[\field{queue_device}]
> >  The driver writes the physical address of Device Area here.  See 
> > section
> \ref{sec:Basic Facilities of a Virtio Device / Virtqueues}.
> >
> > -\item[\field{queue_notify_data}]
> > +\item[\field{queue_notify_config_data}]
> >  This field exists only if VIRTIO_F_NOTIF_CONFIG_DATA has been
> negotiated.
> > -The driver will use this value to put it in the 'virtqueue number' 
> > field
> > -in the available buffer notification structure.
> > +The driver will use this value when driver sends available buffer
> > +notification to the device.
> >  See section \ref{sec:Virtio Transport Options / Virtio Over PCI 
> > Bus / PCI-
> specific Initialization And Device Operation / Available Buffer 
> Notifications}.
> >  \begin{note}
> >  This field provides the device with flexibility to determine how
> virtqueues
> >  will be referred to in available buffer notifications.
> > -In a trivial case the device can set 
> > \field{queue_notify_data}=vqn. Some
> devices
> > -may benefit from providing another value, for example an internal
> virtqueue
> > -identifier, or an internal offset related to the virtqueue number.
> > +In a trivial case the device can set 
> > \field{queue_notify_config_data} to
> > +virtqueue index. Some devices may benefit from providing another
> value,
> > +for example an internal virtqueue identifier, or an internal offset
> > +related to the virtqueue index.
> > +\end{note}
> > +\begin{note}
> > +This field is previously known as queue_notify_data.
> >  \end{note}
> >
> >  \item[\field{queue_reset}]
> > @@ -468,7 +472,9 @@ \subsubsection{Common configuration structure
> > layout}\label{sec:Virtio Transport
> >
> >  \drivernormative{\paragraph}{Common configuration structure
> > layout}{Virtio Transport Options / Virtio Over PCI Bus / PCI Device
> > Layout / Common configuration structure layout}
> >
> > -The driver MUST NOT write to \field{device_feature}, \field{num_queues},
> \field{config_generation}, \field{queue_notify_off} or
> \field{queue_notify_data}.
> > +The driver MUST NOT write to \field{device_feature},
> > +\field{num_queues}, \field{config_generation},
> > +\field{queue_notify_off} or \field{queue_notify_config_data}.
> >
> >  If VIRTIO_F_RING_PACKED has been negotiated,  the driver MUST NOT
> > write the value 0 to \field{queue_size}.
> 
> > @@ -1053,9 +1059,9 @@ \subsubsection{Available Buffer
> > Notifications}\label{sec:Virtio Transport Option  If
> VIRTIO_F_NOTIF_CONFIG_DATA has been negotiated:
> >  \begin{itemize}
> >  \item If VIRTIO_F_NOTIFICATION_DATA has not been negotiated, the
> > driver MUST use the -\field{queue_notify_data} value instead of the 
> > virtqueue
> index.
> > +\field{queue_notify_id} value instead of the virtqueue index.
> >  \item If VIRTIO_F_NOTIFICATION_DATA has been negotiated, the driver
> > MUST set the 

[virtio-dev] RE: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-17 Thread Parav Pandit


> From: Michael S. Tsirkin 
> Sent: Monday, April 17, 2023 4:27 PM
> On Mon, Apr 17, 2023 at 05:23:30PM +, Parav Pandit wrote:
> > > Things might be simplified if we use separate queues for admin,
> > > transport and legacy.
> > >
> > Do you mean say we have three AQs, AQ_1, AQ_2, and AQ_3;
> > AQ_1 of the PF used by admin work as SIOV device create, SRIOV MSIX
> configuration.
> > AQ_2 of the PF used for transporting legacy config access of the PCI
> > VF
> > AQ_3 of the PF for some transport work.
> >
> > If yes, sounds fine to me.
> 
> Latest proposal simply leaves the split between AQs up to the driver.
> Seems the most flexible.
Yes, it is. Different opcode ranges and multiple AQs enable doing so.
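One way to read "different opcode ranges and multiple AQs" is a simple dispatch
rule in the driver. The sketch below uses made-up opcode ranges and queue
classes; the actual numbers would come from the admin command spec, not from
this sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical admin-queue classes matching the AQ_1/AQ_2/AQ_3 split
 * discussed above. Opcode range boundaries are illustrative only. */
enum aq_class {
    AQ_ADMIN,     /* e.g. SIOV device create, SR-IOV MSI-X configuration */
    AQ_LEGACY,    /* legacy config access transport for a PCI VF */
    AQ_TRANSPORT, /* other transport work */
};

/* Route an admin command to a queue class by its opcode range. */
static enum aq_class classify_admin_opcode(uint16_t opcode)
{
    if (opcode < 0x100)
        return AQ_ADMIN;
    if (opcode < 0x200)
        return AQ_LEGACY;
    return AQ_TRANSPORT;
}
```

Since the split is left to the driver, a driver could equally send every range
to one AQ; disjoint opcode ranges are what make the per-class split possible.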




[virtio-dev] Re: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-17 Thread Michael S. Tsirkin
On Mon, Apr 17, 2023 at 05:23:30PM +, Parav Pandit wrote:
> > Things might be simplified if we use separate queues for admin, transport 
> > and
> > legacy.
> > 
> Do you mean say we have three AQs, AQ_1, AQ_2, and AQ_3;
> AQ_1 of the PF used by admin work as SIOV device create, SRIOV MSIX 
> configuration.
> AQ_2 of the PF used for transporting legacy config access of the PCI VF
> AQ_3 of the PF for some transport work.
> 
> If yes, sounds fine to me.

Latest proposal simply leaves the split between AQs up to the driver.
Seems the most flexible.

-- 
MST





[virtio-dev] RE: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-17 Thread Parav Pandit


> From: Jason Wang 
> Sent: Sunday, April 16, 2023 11:23 PM
> 
> On Fri, Apr 14, 2023 at 11:51 AM Parav Pandit  wrote:
> >
> >
> >
> > > From: Jason Wang 
> > > Sent: Thursday, April 13, 2023 11:38 PM
> >
> > > > > 1) spec doesn't enforce the size of a specific structure
> > > > The spec will be extended in due course.
> > >
> > > It's too late to do any new restriction without introducing a flag.
> > We are really diverging from the topic.
> > I don’t think it is late. The work in this area of PCI VF has not even 
> > begun fully.
> 
> I meant it needs a new feature flag.
> 
Ok.
> >
> > > Mandating the size may easily end up with an architecture-specific solution.
> > >
> > Unlikely. Other standard device types are also expanding this way.
> 
> I think we are talking about software technologies instead of device design
> here.
> 
Isn't the size of BAR and its cap_len exposed by the device?

> For devices it works. But the hypervisor needs to deal with a size that
> doesn't match the arch's page size.
> 
The PCI BAR size of the VF can account for the system page size being different
on x86 (4K) and arm (64K).
The PCI transport seems to support it.

A PCI PF on bare metal has to understand the highest page size anyway if for
some reason the bare-metal host wants to map this PF to the VM.

A hypervisor mediating and emulating needs to learn the system page size anyway.
If the underlying device page size is smaller, the hypervisor may end up
mediating it.
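The choice being debated, direct BAR mapping versus trapping/mediation, comes
down to a page-size and alignment check in the hypervisor. A minimal sketch
under that assumption (function and parameter names are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A VF BAR region can be mapped directly into the guest only if it is
 * aligned to, and a multiple of, the host page size (4K on x86,
 * commonly 64K on arm64). Otherwise accesses have to be mediated. */
static bool can_direct_map(uint64_t bar_offset, uint64_t bar_size,
                           uint64_t host_page_size)
{
    return bar_size != 0 &&
           (bar_offset % host_page_size) == 0 &&
           (bar_size % host_page_size) == 0;
}
```

For example, a 4K structure that is fine on an x86 host would fail this check
on a 64K-page arm64 host, which is exactly the case where the hypervisor "may
end up mediating it".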

> I meant you can't have recommendations in features and config. 
Sure. There is none.

> What's more,
> assuming you have two generations of device
> 
> gen1: features x,y
> gen2: features x,y,z
> 
> You won't be able to do migration between gen1 and gen2 without mediation.
Gen1 can easily migrate to gen2, because gen1's features are a subset of gen2's.
When the gen2 device is composed, feature z is disabled.
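The gen1/gen2 rule above is essentially a feature-subset check. A minimal
sketch (the feature bits here are abstract placeholders, not real virtio
feature values):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A source device can migrate to a destination device if every feature
 * bit negotiated on the source is offered by the destination. In the
 * gen1 (x,y) -> gen2 (x,y,z) case this holds, with feature z simply
 * left disabled when the gen2 device is composed. */
static bool can_migrate(uint64_t src_negotiated, uint64_t dst_offered)
{
    return (src_negotiated & ~dst_offered) == 0;
}
```

The reverse direction, gen2 with z enabled to gen1, fails this check and needs
software-based migration or mediation, as discussed above.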

Gen2-to-gen1 migration can be done via software-based migration anyway, or
through mediation.
But the argument that because gen2 may need to migrate to gen1, gen2-to-gen2
migration should also be done through mediation doesn't make sense to me.

> Such technologies have been used by cpu features for years.
> I am not sure why it became a problem for you.
> 
> > Apart from that, some of the PCI device layout compat checks will be covered
> too.
> >
> > > And what you proposed is to allow the management to know the exact
> > > hardware layout in order to check the compatibility? And the
> > > management needs to evolve as new structures are added.
> > Mostly not; the mgmt stack may not need to evolve a lot.
> > Because most layouts should grow within the device context and not in
> the PCI capabilities etc. area.
> >
> > And even if it does, it's fine, as a large part of it is standard PCI spec
> > definitions.
> 
> So as mentioned in another thread, this is a PCI specific solution:
> 
> 1) feature and config are basic virtio facility
> 2) capability is not but specific to PCI transport
> 
So any LM solution will have transport-specific checks and virtio-level checks.

> Checking PCI capability layout in the virtio management is a layer violation
> which can't work for future transport like SIOV or adminq.
Virtio management that will have transport level checks is not a violation.
SIOV will define its own transport anyway. Not to mix with ccw/mmio or pci.

> Management should only see virtio device otherwise the solution becomes
> transport specific.
> 
The solution needs to cover the transport as well, since the transport is an
integral part of the virtio spec.
Each transport layer will implement feature/config/cap in its own way.
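As a sketch of what combining virtio-level and transport-level checks could
mean for an LM compatibility test: all structure and field names below are
illustrative, not from the spec or from any proposal in this thread:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative LM compatibility record: a virtio-level part (features,
 * config space size) plus a PCI-transport-level part (BAR size, a
 * capability-layout revision). All names are made up for this sketch. */
struct lm_compat_info {
    uint64_t device_features; /* virtio level */
    uint32_t config_size;     /* virtio level */
    uint32_t bar0_size;       /* PCI transport level */
    uint16_t cap_layout_rev;  /* PCI transport level */
};

/* The destination is compatible if it offers at least the source's
 * features and its read-only transport layout matches. */
static bool lm_compatible(const struct lm_compat_info *src,
                          const struct lm_compat_info *dst)
{
    return (src->device_features & ~dst->device_features) == 0 &&
           dst->config_size >= src->config_size &&
           dst->bar0_size == src->bar0_size &&
           dst->cap_layout_rev == src->cap_layout_rev;
}
```

These transport fields are the "mainly read-only and cheap" checks mentioned
earlier in the thread; they sit outside the LM and data path flows.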

> >
> > > This further complicates the management's work, and I'm not sure it can
> > > work.
> > >
> > Well, once we work towards it, it can work. :)
> >
> > > >
> > > > > Hypervisor needs to start from a mediation method and do BAR
> > > > > assignment only when possible.
> > > > >
> > > > Not necessarily.
> > > >
> > > > > > Cons:
> > > > > > a. More AQ commands work in sw
> > > > >
> > > > > Note that this needs to be done on top of the transport virtqueue.
> > > > > And we need to carefully design the command sets since they
> > > > > could be mutually exclusive.
> > > > >
> > > > Not sure what more to expect out of transport virtqueue compared to AQ.
> > > > I didn’t follow, which part could be mutually exclusive?
> > >
> > > Transport VQ allows a modern device to be transported via adminq.
> > >
> > Maybe for devices it can work. Hypervisor mediation is being reduced, with CC
> on the horizon for new capabilities.
> > So we don't see transport vq as the path forward.
> >
> > > And you want to add commands to transport for legacy devices.
> > >
> > Yes, only legacy emulation, which does not care about hypervisor mediation.
> >
> > > Can a driver use both the modern transport commands as well as the
> > > legacy transport commands?
> > Hard to answer; I likely do not understand, as the driver namespace is unclear.
> 
> Things might be simplified if we use separate queues for admin, transport and
> legacy.
> 

[virtio-dev] RE: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-17 Thread Parav Pandit


> From: Michael S. Tsirkin 
> Sent: Sunday, April 16, 2023 4:44 PM
> 
> On Sun, Apr 16, 2023 at 01:41:55PM +, Parav Pandit wrote:
> > > From: virtio-comm...@lists.oasis-open.org
> > >  On Behalf Of Michael S.
> > > Tsirkin
> > > Sent: Friday, April 14, 2023 2:57 AM
> >
> > > Do you refer to the trick Jason proposed where BAR0 is memory but
> > > otherwise matches legacy BAR0 exactly? Is this your preferred solution at
> this point then?
> >
> > We looked at it again.
> > The above solution can work reliably only for a very small number of PFs and
> that too with very special hardware circuitry due to the reset flow.
> >
> > Therefore, for virtualization the below interface is preferred:
> > a. For a transitional device, legacy configuration register transport
> > over the AQ,
> 
> I don't get what this has to do with transitional ...
> 
Typically, in current wordings, transitional is the device that supports legacy 
interface.
So, it doesn't have to be for the transitional.

I just wanted to highlight that a PCI VF device with its parent PCI PF device 
can transport the legacy interface commands.

> > Notification to utilize transitional device notification area of the BAR.
> 
> The vq transport does something like this, no?
> 
Notifications over a queuing interface are unlikely to be performant, because
one is a configuration task and the other is a data-path task.

> > b. The non-legacy interface of transitional and non-transitional PCI devices
> accesses the PCI device directly without mediation.
> 
> So VF can either be accessed through AQ of PF, or through direct mapping?
Right. The VF's legacy registers are accessed using the AQ of the PF, while
non-legacy registers continue to use direct mapping as done today.




Re: [virtio-dev] Re: [virtio-comment] RE: [virtio-dev] RE: [PATCH v12 03/10] content: Rename confusing queue_notify_data and vqn names

2023-04-17 Thread Halil Pasic
On Mon, 17 Apr 2023 03:04:34 -0400
"Michael S. Tsirkin"  wrote:

> On Mon, Apr 17, 2023 at 05:18:44AM +0200, Halil Pasic wrote:
> > On Tue, 11 Apr 2023 13:35:09 +
> > Parav Pandit  wrote:
> >   
> > > > From: Cornelia Huck 
> > > > Sent: Tuesday, April 11, 2023 4:56 AM
> > >   
> > > > 
> > > > Yes, please leave it as F_CONFIG_DATA, as we're just putting some "data"
> > > > there in the end (and F_CONFIG_COOKIE might indeed be confusing for the
> > > > ccw case.)
> > > 
> > > Since Halil didn't respond for 5+ days, and Michael and you propose to
> > > continue using CONFIG_DATA, and this is a rarely used field, I will rename
> > >
> > 
> > Sorry, this one has fallen through the cracks.  
> 
> Well this whole patchset is just a cleanup so it's not holding up other
> work at least. But I have to say it's difficult to make progress when
> someone comes back from outer space after more than a week of silence
> while others finished a discussion and reopens it with some new
> feedback.

Sorry, this was after 6 days. I didn't know that qualifies
as 'outer space'. As pointed out below, I was monitoring the preceding
discussion, and since the way things went was and is acceptable for
me, I didn't want to muddy the waters any further.

The issue I ended up addressing got introduced in very last email, which
pre-announced the next version.

My first intention was to explain myself, and apologize, after being
called out.

But then, also looking at v13, I realized that there might have been a slip-up,
because F_NOTIF_CONFIG_DATA got shortened to F_CONFIG_DATA in the discussion,
which is no big deal for the discussion itself but may have leaked into the v13
proposal. Parav sent out the announced next version after about 8 hours. And if
it weren't for my hypothesis about why we ended up with the proposed name
vq_config_data, the right place to discuss further would have been v13.

In hindsight, I see, replying to the v12 thread wasn't a good move.

[..]

> 
> I also feel high latency is one of the reasons people are beginning to
> ask to split into subcommitees where they won't have to deal with this
> kind of thing. 
> 

I tend to agree. 

> Let's try to keep the latency low, please.

Believe me, it is not like I'm actively trying to introduce extra
latency.

Regards,
Halil

> > For
> > the preceding ones: I do not have a strong opinion. I do
> > share Michael's and Connie's assessment regarding a possible
> > clash with CCW.




Re: [virtio-dev] Re: [RFC PATCH v6] virtio-video: Add virtio video device specification

2023-04-17 Thread Cornelia Huck
On Mon, Apr 17 2023, Alexander Gordeev  
wrote:

> Hi Alexandre,
>
> Thank you for your letter! Sorry, it took me some time to write an answer.
>
> First of all I'd like to describe my perspective a little bit, because it
> seems that in many cases we (and other people writing their feedback)
> simply have very different priorities and backgrounds.

Thank you for describing the environment you want to use this in, this
helps to understand the different use cases.

>
> OpenSynergy, the company that I work for, develops a proprietary
> hypervisor called COQOS mainly for automotive and aerospace domains. We
> have our proprietary device implementations, but overall our goal is to
> bring open standards into these quite closed domains and we're betting
> big on virtio. The idea is to run safety-critical functions like a cockpit
> controller alongside multimedia stuff in different VMs on the same
> physical board. Right now they have it on separate physical devices. So
> they already have maximum isolation. And we're trying to make this
> equally safe on a single board. The benefit is the reduced costs and
> some additional features. Of course, we also need features here, but at
> the same time security and ease of certification are among the top of
> our priorities. Nobody wants cars or planes to have security problems,
> right? Also nobody really needs DVB and even more exotic devices in cars
> and planes AFAIK.
>
> For the above mentioned reasons our COQOS hypervisor is running on bare
> metal. Also memory management for the guests is mostly static. It is
> possible to make a shared memory region between a device and a driver
> managed by device in advance. But definitely no mapping of random host
> pages on the fly is supported.
>
> AFAIU crosvm is about making Chrome OS more secure by putting every app
> in its own virtualized environment, right? Both the host and guest are
> Linux. In this case I totally understand why V4L2 UAPI pass-through
> feels like a right move. I guess, you'd like to make the switch to
> virtualized apps as seamless as possible for your users. If they can't
> use their DVBs anymore, they complain. And adding the virtualization
> makes the whole thing more secure anyway. So I understand the desire to
> have the range of supported devices as broad as possible. It is also
> understandable that priorities are different with desktop
> virtualization. Also I'm not trying to diminish the great work, that you
> have done. It is just that from my perspective this looks like a step in
> the wrong direction because of the mentioned concerns. So I'm going to
> continue being a skeptic here, sorry.
>
> Of course, I don't expect that you continue working on the old approach
> now, as you have put so much effort into the V4L2 UAPI pass-through.
> So I think it is best to do the evolutionary changes in scope of virtio
> video device specification, and create a new device specification
> (virtio-v4l2 ?) for the revolutionary changes. Then I'd be glad to
> continue the virtio-video development. In fact I already started making
> draft v7 of the spec according to the comments. I hope it will be ready
> for review soon.
>
> I hope this approach will also help fix issues with virtio-video spec
> and driver development misalignment as well as V4L2 compliance issues
> with the driver. I believe the problems were caused partly by poor
> communication between us and by misalignment of our development cycles,
> not by the driver complexity.
>
> So in my opinion it is OK to have different specs with overlapping
> functionality for some time. My only concern is if this would be
> accepted by the community and the committee. How the things usually go
> here: preferring features and tolerating possible security issues or the
> other way around? Also how acceptable is having linux-specific protocols
> at all?

My main question is: What would be something that we can merge as a
spec, that would either cover the different use cases already, or that
could be easily extended to cover the use cases it does not handle
initially?

For example, can some of the features that would be useful in crosvm be
tucked behind some feature bit(s), so that the more restricted COQOS
hypervisor would simply not offer them? (Two feature bits covering two
different mechanisms, like the current approach and the v4l2 approach,
would also be good, as long as there's enough common ground between the
two.)

If a staged approach (adding features controlled by feature bits) would
be possible, that would be my preferred way to do it.

Regarding the protocol: I think Linux-originating protocols (that can be
implemented on non-Linux setups) are fine, Linux-only protocols probably
not so much.

>
> Also I still have concerns about memory management with V4L2 UAPI
> pass-through. Please see below.
>
> On 17.03.23 08:24, Alexandre Courbot wrote:
>> Hi Alexander,
>>
>> On Thu, Mar 16, 2023 at 7:13 PM Alexander Gordeev
>>  wrote:
>>> Hi Alexandre,

[virtio-dev] Re: [PATCH] virtio-spi: add the device specification

2023-04-17 Thread Haixu Cui

Hi Cornelia Huck,
Thank you so much for your helpful comments. I have fixed them in
another submission.


Best Regards
Haixu Cui

On 3/27/2023 7:35 PM, Cornelia Huck wrote:

On Fri, Mar 24 2023, Haixu Cui  wrote:


virtio-spi is a virtual SPI master and it allows a guest to operate and
use the physical SPI master controlled by the host.


Please spell out what SPI is the first time you use it.

I have explained that SPI is the abbreviation of Serial Peripheral Interface.


Also, please remember to post the separate patch that reserves the ID
for it.

I have posted another patch reserving the device ID for virtio-spi.




Signed-off-by: Haixu Cui 
---
  conformance.tex |  12 +-
  content.tex |   1 +
  device-types/spi/description.tex| 153 
  device-types/spi/device-conformance.tex |   7 ++
  device-types/spi/driver-conformance.tex |   7 ++
  5 files changed, 176 insertions(+), 4 deletions(-)
  create mode 100644 device-types/spi/description.tex
  create mode 100644 device-types/spi/device-conformance.tex
  create mode 100644 device-types/spi/driver-conformance.tex


(...)


diff --git a/device-types/spi/description.tex b/device-types/spi/description.tex
new file mode 100644
index 000..0b69700
--- /dev/null
+++ b/device-types/spi/description.tex
@@ -0,0 +1,153 @@
+\section{SPI Master Device}\label{sec:Device Types / SPI Master Device}
+
+virtio-spi is a virtual SPI master and it allows a guest to operate and use
+the physical SPI master devices controlled by the host.


Here as well; it's even more important that the acronym is expanded at
least once in the spec.

Also, does this mean that the device is supposed to be an interface to
physical SPI master devices? It would be good if this could be framed
without guest/host terminology (although this can be used as an
example.) Maybe something like

"The virtio SPI master device is a virtual SPI (Serial Peripheral
Interface) master device, potentially interfacing to another SPI master
device. It allows, for example, for a host to expose access to a
physical SPI master device controlled by the host to a guest."

virtio-spi is similar to virtio-i2c, so I have updated the description,
referring to the virtio-i2c specification.



+
+In a typical host and guest architecture with Virtio SPI, Virtio SPI driver
+is the front-end and exists in the guest kernel, Virtio SPI device acts as
+the back-end and exists in the host. And VirtQueues assist Virtio SPI driver
+and Virtio SPI device in perform VRing operations for communication between
+the front-end and the back-end.


I'm not sure I can parse this properly -- does this mean that a
virtqueue is used for communication between a front-end and a back-end?

Yes, I have also updated the wording to make it clearer.


(Didn't look at the remainder of the patch yet.)






[virtio-dev][PATCH 2/2] virtio-spi: add the device specification

2023-04-17 Thread Haixu Cui
virtio-spi is a virtual SPI master and it allows a guest to operate and
use the physical SPI master controlled by the host.
---
 device-types/spi/description.tex| 155 
 device-types/spi/device-conformance.tex |   7 ++
 device-types/spi/driver-conformance.tex |   7 ++
 3 files changed, 169 insertions(+)
 create mode 100644 device-types/spi/description.tex
 create mode 100644 device-types/spi/device-conformance.tex
 create mode 100644 device-types/spi/driver-conformance.tex

diff --git a/device-types/spi/description.tex b/device-types/spi/description.tex
new file mode 100644
index 000..a68e5f4
--- /dev/null
+++ b/device-types/spi/description.tex
@@ -0,0 +1,155 @@
+\section{SPI Master Device}\label{sec:Device Types / SPI Master Device}
+
+virtio-spi is a virtual SPI (Serial Peripheral Interface) master and it allows a
+guest to operate and use the physical SPI master devices controlled by the host.
+By attaching the host's SPI-controlled nodes to the virtual SPI master device, the
+guest can communicate with them without changing or adding extra drivers for these
+controlled SPI devices.
+
+In a typical host and guest architecture with Virtio SPI, the Virtio SPI driver
+is the front-end and exists in the guest kernel, while the Virtio SPI device acts
+as the back-end and exists in the host. A VirtQueue is used for communication
+between the front-end and the back-end.
+
+\subsection{Device ID}\label{sec:Device Types / SPI Master Device / Device ID}
+45
+
+\subsection{Virtqueues}\label{sec:Device Types / SPI Master Device / Virtqueues}
+
+\begin{description}
+\item[0] requestq
+\end{description}
+
+\subsection{Feature bits}\label{sec:Device Types / SPI Master Device / Feature bits}
+
+None.
+
+\subsection{Device configuration layout}\label{sec:Device Types / SPI Master Device / Device configuration layout}
+
+All fields of this configuration are always available and read-only for the Virtio SPI driver.
+
+\begin{lstlisting}
+struct virtio_spi_config {
+u32 bus_num;
+u32 chip_select_max_number;
+};
+\end{lstlisting}
+
+\begin{description}
+\item[\field{bus_num}] is the physical SPI master assigned to the guest; this is SoC-specific.
+
+\item[\field{chip_select_max_number}] is the number of chip selects supported by the physical SPI master.
+\end{description}
+
+\subsection{Device Initialization}\label{sec:Device Types / SPI Master Device / Device Initialization}
+
+\begin{itemize}
+\item The Virtio SPI driver configures and initializes the virtqueue.
+
+\item The Virtio SPI driver allocates and registers the virtual SPI master.
+\end{itemize}
+
+\subsection{Device Operation}\label{sec:Device Types / SPI Master Device / Device Operation}
+
+\subsubsection{Device Operation: Request Queue}\label{sec:Device Types / SPI Master Device / Device Operation: Request Queue}
+
+The Virtio SPI driver queues requests to the virtqueue, and they are consumed by the
+Virtio SPI device. Each request represents one SPI transfer and is of the form:
+
+\begin{lstlisting}
+struct virtio_spi_transfer_head {
+u32 mode;
+u32 freq;
+u32 word_delay;
+u8 slave_id;
+u8 bits_per_word;
+u8 cs_change;
+u8 reserved;
+};
+\end{lstlisting}
+
+\begin{lstlisting}
+struct virtio_spi_transfer_end {
+u8 status;
+};
+\end{lstlisting}
+
+\begin{lstlisting}
+struct virtio_spi_req {
+struct virtio_spi_transfer_head head;
+u8 *rx_buf;
+u8 *tx_buf;
+struct virtio_spi_transfer_end end;
+};
+\end{lstlisting}
+
+The \field{mode} defines the SPI transfer mode.
+
+The \field{freq} defines the SPI transfer speed in Hz.
+
+The \field{word_delay} defines how long to wait between words within one SPI transfer, in nanoseconds.
+
+The \field{slave_id} defines the chip-select index used by the SPI transfer.
+
+The \field{bits_per_word} defines the number of bits in each SPI transfer word.
+
+The \field{cs_change} defines whether to deselect the device before starting the next SPI transfer.
+
+The \field{rx_buf} is the receive buffer, used to hold the data received from the external device.
+
+The \field{tx_buf} is the transmit buffer, used to hold the data sent to the external device.
+
+The final \field{status} byte of the request is written by the Virtio SPI device:
+either VIRTIO_SPI_MSG_OK for success or VIRTIO_SPI_MSG_ERR for error.
+
+\begin{lstlisting}
+#define VIRTIO_SPI_MSG_OK 0
+#define VIRTIO_SPI_MSG_ERR 1
+\end{lstlisting}
+
+\subsubsection{Device Operation: Operation Status}\label{sec:Device Types / SPI Master Device / Device Operation: Operation Status}
+
+Members of \field{struct virtio_spi_transfer_head} are set by the Virtio SPI driver,
+while \field{status} in \field{struct virtio_spi_transfer_end} is set by the Virtio
+SPI device when processing the request.
+
+Virtio SPI supports three transfer types: 1) half-duplex read, 2) half-duplex write,
+3) full-duplex read and write.
+For half-duplex read transfer, \field{rx_buf} is filled by the Virtio SPI device 

[virtio-dev][PATCH 0/2] virtio-spi: add virtual SPI master

2023-04-17 Thread Haixu Cui
virtio-spi is a virtual SPI (Serial Peripheral Interface) master and
it allows a guest to operate and use the physical SPI master controlled
by the host.

Patch summary:
patch 1 defines the device ID for virtio-spi
patch 2 adds the specification for virtio-spi

Haixu Cui (2):
  virtio-spi: define the DEVICE ID for virtio SPI master
  virtio-spi: add the device specification

 content.tex |   2 +
 device-types/spi/description.tex| 155 
 device-types/spi/device-conformance.tex |   7 ++
 device-types/spi/driver-conformance.tex |   7 ++
 4 files changed, 171 insertions(+)
 create mode 100644 device-types/spi/description.tex
 create mode 100644 device-types/spi/device-conformance.tex
 create mode 100644 device-types/spi/driver-conformance.tex

-- 
2.17.1





Re: [virtio-dev] Re: [RFC PATCH v6] virtio-video: Add virtio video device specification

2023-04-17 Thread Cornelia Huck
On Mon, Apr 17 2023, Alexander Gordeev  
wrote:

> Hello Cornelia,
>
> On 17.04.23 14:56, Cornelia Huck wrote:
>> On Sat, Apr 15 2023, Alexandre Courbot  wrote:
>>
>>> If nobody strongly objects, I think this can be pushed a bit more
>>> officially. Cornelia, would you consider it for inclusion if I
>>> switched the next version of the specification to use V4L2 as the
>>> host/guest protocol? This may take some more time as I want to confirm
>>> the last details with code, but it should definitely be faster to
>>> merge and to test with a real implementation than our previous
>>> virtio-video attempts.
>> Yes, please do post a new version of this spec; I agree that an existing
>> implementation is really helpful here.
>>
>> [I have proposed July 1st as a "freeze" date for new features for 1.3,
>> with August 1st as an "everything must be in" date; I'd really like
>> virtio-video to be a part of 1.3, if possible :)]
>
> I sent an email minutes ago with an alternative plan. I'd like to
> volunteer to continue the evolutionary changes to virtio-video. I'm
> already working on the v7 draft. I think it will be available next week.
> Then I'll focus on making the driver V4L2 compliant. I think all of this
> is achievable by July 1st. At least the spec part. I think the
> revolutionary changes should be in a separate namespace. The rationale
> is in the longer email. WDYT?

Seems our mails crossed mid-air... I'll go ahead and read your mail (and
probably answer there.) My ultimate goal is to have a spec everyone is
happy with, regardless on how we arrive there.





Re: [virtio-dev] Re: [RFC PATCH v6] virtio-video: Add virtio video device specification

2023-04-17 Thread Cornelia Huck
On Sat, Apr 15 2023, Alexandre Courbot  wrote:

> If nobody strongly objects, I think this can be pushed a bit more
> officially. Cornelia, would you consider it for inclusion if I
> switched the next version of the specification to use V4L2 as the
> host/guest protocol? This may take some more time as I want to confirm
> the last details with code, but it should definitely be faster to
> merge and to test with a real implementation than our previous
> virtio-video attempts.

Yes, please do post a new version of this spec; I agree that an existing
implementation is really helpful here.

[I have proposed July 1st as a "freeze" date for new features for 1.3,
with August 1st as an "everything must be in" date; I'd really like
virtio-video to be a part of 1.3, if possible :)]





Re: [virtio-dev] Re: [RFC PATCH v6] virtio-video: Add virtio video device specification

2023-04-17 Thread Alexander Gordeev

Hi Alexandre,

Thanks for your letter! Sorry, it took me some time to write an answer.

First of all I'd like to describe my perspective a little bit, because it
seems that in many cases we (and other people writing their feedback)
simply have very different priorities and backgrounds.

OpenSynergy, the company that I work for, develops a proprietary
hypervisor called COQOS mainly for automotive and aerospace domains. We
have our proprietary device implementations, but overall our goal is to
bring open standards into these quite closed domains and we're betting
big on virtio. The idea is to run safety-critical functions like a cockpit
controller alongside multimedia stuff in different VMs on the same
physical board. Right now they have it on separate physical devices. So
they already have maximum isolation. And we're trying to make this
equally safe on a single board. The benefit is the reduced costs and
some additional features. Of course, we also need features here, but at
the same time security and ease of certification are among the top of
our priorities. Nobody wants cars or planes to have security problems,
right? Also nobody really needs DVB and even more exotic devices in cars
and planes AFAIK.

For the above mentioned reasons our COQOS hypervisor is running on bare
metal. Also memory management for the guests is mostly static. It is
possible to make a shared memory region between a device and a driver
managed by device in advance. But definitely no mapping of random host
pages on the fly is supported.

AFAIU crosvm is about making Chrome OS more secure by putting every app
in its own virtualized environment, right? Both the host and guest are
Linux. In this case I totally understand why V4L2 UAPI pass-through
feels like a right move. I guess, you'd like to make the switch to
virtualized apps as seamless as possible for your users. If they can't
use their DVBs anymore, they complain. And adding the virtualization
makes the whole thing more secure anyway. So I understand the desire to
have the range of supported devices as broad as possible. It is also
understandable that priorities are different with desktop
virtualization. Also I'm not trying to diminish the great work, that you
have done. It is just that from my perspective this looks like a step in
the wrong direction because of the mentioned concerns. So I'm going to
continue being a skeptic here, sorry.

Of course, I don't expect that you continue working on the old approach
now, as you have put so much effort into the V4L2 UAPI pass-through.
So I think it is best to do the evolutionary changes in scope of virtio
video device specification, and create a new device specification
(virtio-v4l2 ?) for the revolutionary changes. Then I'd be glad to
continue the virtio-video development. In fact I already started making
draft v7 of the spec according to the comments. I hope it will be ready
for review soon.

I hope this approach will also help fix issues with virtio-video spec
and driver development misalignment as well as V4L2 compliance issues
with the driver. I believe the problems were caused partly by poor
communication between us and by misalignment of our development cycles,
not by the driver complexity.

So in my opinion it is OK to have different specs with overlapping
functionality for some time. My only concern is if this would be
accepted by the community and the committee. How the things usually go
here: preferring features and tolerating possible security issues or the
other way around? Also how acceptable is having linux-specific protocols
at all?

Also I still have concerns about memory management with V4L2 UAPI
pass-through. Please see below.

On 17.03.23 08:24, Alexandre Courbot wrote:

Hi Alexander,

On Thu, Mar 16, 2023 at 7:13 PM Alexander Gordeev
 wrote:

Hi Alexandre,

On 14.03.23 06:06, Alexandre Courbot wrote:

The spec should indeed be considerably lighter. I'll wait for more
feedback, but if the concept appeals to other people as well, I may
give the spec a try soon.

Did you receive an email I sent on February 7? There was some feedback
there. It has been already established, that V4L2 UAPI pass-through is
technically possible. But I had a couple of points why it is not
desirable. Unfortunately I haven't received a reply. I also don't see
most of these points addressed in any subsequent emails from you.

I have more to say now, but I'd like to make sure that you're interested
in the discussion first.

Sorry about that, I dived head first into the code to see how viable
the idea would be and forgot to come back to you. Let me try to answer
your points now that I have a better idea of how this would work.


If we find out that there is a benefit in going through the V4L2
subsystem (which I cannot see for now), rebuilding the UAPI structures
to communicate with the device is not different from building
virtio-video specific structures like what we are currently doing.

Well, the V4L2 subsystem is there for a reason, 

[virtio-dev] Re: Participation (was: Re: [virtio-comment] RE: [virtio-dev] RE: [PATCH v12 03/10] content: Rename confusing queue_notify_data and vqn names)

2023-04-17 Thread Michael S. Tsirkin
On Mon, Apr 17, 2023 at 10:47:47AM +0200, Cornelia Huck wrote:
> I agree that patch reposting is happening much too fast in some
> cases. Not sure how to formalize that, either. Can we please just be
> more mindful of that? Reviewing time is not free. If I'm trying to do a
> timely review of something and constantly see new versions while I'm not
> finished yet, I do not feel like my feedback is actually valued.

Imagine a group of contributors working on the spec 100% of the time.
What do you want them to do? They need to exchange patches;
slowing them down because part-time reviewers are overwhelmed
would be a waste. A separate mailing list would be one
solution. Some tag in the subject would be another. RFC?
Though RFC is used for other things as well...

-- 
MST





[virtio-dev] Participation (was: Re: [virtio-comment] RE: [virtio-dev] RE: [PATCH v12 03/10] content: Rename confusing queue_notify_data and vqn names)

2023-04-17 Thread Cornelia Huck
On Mon, Apr 17 2023, "Michael S. Tsirkin"  wrote:

> I am not saying don't give feedback but I'm saying please help us
> all be more organized, feedback time really should be within a
> day or two, in rare cases up to a week.

Given that there are things like weekends, public holidays, etc. one day
does not look reasonable; while it certainly makes sense to continue if
no feedback is forthcoming for a few days, not accounting for the fact
that this is not the exclusive job for most/any of us is just a fast
track to either burnout or people dropping out of virtio standardization
altogether.

> And I'd like to remind everyone if you are going away you are supposed
> to report a leave of absence.

Well... "Each request shall be made by sending an email to the TC
mailing list at least 7 days before the Leave is to start." is probably
not going to work for many cases. (Also, in any other community I'm
participating in it is expected that you just might not be there or have
time to work on it every week -- I've always seen that leave of absence
thingy as something for a really long vacation or for something like
parental leave, for which the max 45 days is really too short...) Not to
mention that it applies to voting, not to participation on the lists.

> TC's that have meetings just take away voting rights from someone who
> does not attend two meetings in a row.  We do it by ballot so this does
> not apply, but I think we should set some limits in group's bylaws,
> anyway. Ideas on formalizing this? If not we can just have informal
> guidelines.  There's of course a flip side to this. Some patches
> seemingly go through two versions a day. Keeping up becomes a full time
> job. We'd need a guideline for that, too.

What do we actually expect from TC members? "Reply to emails" is not
part of any formal requirement AFAICS (and not all TC members do
participate on the lists on a regular basis anyway). The requirement is
only to vote on the ballots, and there you're completely free to vote
"abstain", so you can always squeeze in voting even if you're not able
to participate otherwise. I think that's fine.

"If I don't get a reply after $NUMBER_OF_WORKING_DAYS, I'll assume I can
proceed as I think fit" is a reasonable assumption to make, e.g. to
request a vote. Not sure if/how to formalize that. Also, how is this
supposed to work if the original author doesn't reply to comments?
Should the proposal be considered abandoned?

I agree that patch reposting is happening much too fast in some
cases. Not sure how to formalize that, either. Can we please just be
more mindful of that? Reviewing time is not free. If I'm trying to do a
timely review of something and constantly see new versions while I'm not
finished yet, I do not feel like my feedback is actually valued.

> I also feel high latency is one of the reasons people are beginning to
> ask to split into subcommitees where they won't have to deal with this
> kind of thing. Let's try to keep the latency low, please.

I think there's multiple things to unpack here.

- The very common strain of limited reviewer time. This seems to be an
  issue nearly everywhere. Encouraging more review helps; but if review
  and ensuing discussing turns into a time sink, it just cannot be
  handled at a reasonable activity level anymore.
- Latency due to missing feedback. Can be solved by just requesting a
  vote if no feedback is forthcoming in a reasonable time frame.
- Latency due to missing reaction to feedback. This means the proposal
  just doesn't make any progress. The onus is on the submitter here.
- Conflicting approaches favoured by different people. This cannot be
  resolved in a formal way; either people need to be convinced that a
  certain approach will work, a middle ground found, or a way worked out
  that the different approaches can co-exist. In any case, this usually
  means long discussions which can be very frustrating, but unless we
  want to bulldoze over some people this is something we'll have to
  live with. [Personally, I think this is the worst contributor to
  frustration, and not something that can be solved by subcommittees.]
- [I'm also not happy with the tone of some emails I've been seeing. I
  won't point to them in order not to stir up things that have already
  calmed down again.]





[virtio-dev] Re: [virtio-comment] RE: [virtio-dev] RE: [PATCH v12 03/10] content: Rename confusing queue_notify_data and vqn names

2023-04-17 Thread Michael S. Tsirkin
On Mon, Apr 17, 2023 at 05:18:44AM +0200, Halil Pasic wrote:
> On Tue, 11 Apr 2023 13:35:09 +
> Parav Pandit  wrote:
> 
> > > From: Cornelia Huck 
> > > Sent: Tuesday, April 11, 2023 4:56 AM  
> > 
> > > 
> > > Yes, please leave it as F_CONFIG_DATA, as we're just putting some "data"
> > > there in the end (and F_CONFIG_COOKIE might indeed be confusing for the
> > > ccw case.)  
> > 
> > Since Halil didn't respond for 5+ days, and Michael and you propose to continue
> > using CONFIG_DATA, and this is a rarely used field, I will rename
> > 
> 
> Sorry, this one has fallen through the cracks.

Well this whole patchset is just a cleanup so it's not holding up other
work at least. But I have to say it's difficult to make progress when
someone comes back from outer space after more than a week of silence
while others finished a discussion and reopens it with some new
feedback.

I am not saying don't give feedback but I'm saying please help us
all be more organized, feedback time really should be within a
day or two, in rare cases up to a week.

And I'd like to remind everyone if you are going away you are supposed
to report a leave of absence.

TC's that have meetings just take away voting rights from someone who
does not attend two meetings in a row.  We do it by ballot so this does
not apply, but I think we should set some limits in group's bylaws,
anyway. Ideas on formalizing this? If not we can just have informal
guidelines.  There's of course a flip side to this. Some patches
seemingly go through two versions a day. Keeping up becomes a full time
job. We'd need a guideline for that, too.

I also feel high latency is one of the reasons people are beginning to
ask to split into subcommitees where they won't have to deal with this
kind of thing. Let's try to keep the latency low, please.

> For
> the preceding ones: I do not have a strong opinion. I do
> share Michael's and Connie's assessment regarding a possible
> clash with CCW.
> 
> Let me just note that the feature ain't called, "F_CONFIG_DATA" (i.e.
> with full name VIRTIO_F_CONFIG_DATA) but rather "F_NOTIF_CONFIG_DATA"
> (i.e. with full name VIRTIO_F_NOTIF_CONFIG_DATA).
> 
> > vqn to union of
> > 
> > vq_index
> > vq_config_data
> 
> In that sense vq_config_data is not good in my opinion, because
> it misses "notif" which is present in both "F_NOTIF_CONFIG_DATA"
> and "queue_notify_data".
> > 
> > Thanks. Will roll v13.
> >
> 
> I'm about to have a look at how this panned out in v13. I propose
> we continue the discussion there.
> 
> Regards,
> Halil
>  

