[virtio-dev] Re: [PATCH] pci: use msix_config instead of config_msix_vector
On Fri, Oct 11, 2019 at 05:58:59AM -0400, Michael S. Tsirkin wrote:
> On Fri, Oct 11, 2019 at 10:42:03AM +0100, Stefan Hajnoczi wrote:
> > The struct virtio_pci_common_cfg field is called msix_config but the
> > text refers to this field as config_msix_vector. This is confusing.
> >
> > Rename config_msix_vector to msix_config. Although config_msix_vector
> > is arguably a clearer name, existing code using Linux virtio_pci.h uses
> > msix_config so let's stick with that.
> >
> > Fixes: https://github.com/oasis-tcs/virtio-spec/issues/41
> > Signed-off-by: Stefan Hajnoczi
> > ---
> > Note that Paolo already submitted a patch for the other approach and I
> > even reviewed it:
> > https://lists.oasis-open.org/archives/virtio/201903/msg00073.html
> >
> > Either way, let's merge a fix!
>
> OK I'll start voting on Paolo's patch as it was here earlier.
> If that fails we'll vote on yours?

Sounds good :).

Stefan

signature.asc
Description: PGP signature
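For context, the name the patch settles on matches the Linux uapi header. A sketch of the common configuration structure with the Linux virtio_pci.h field names — `uint16_t`/`uint32_t` stand in here for the kernel's little-endian `__le16`/`__le32` types, which is an assumption made to keep the example self-contained:

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of struct virtio_pci_common_cfg with the Linux field names.
 * The Linux header calls the config-change vector "msix_config", which is
 * why the spec text is being renamed to match. */
struct virtio_pci_common_cfg {
    /* About the whole device. */
    uint32_t device_feature_select; /* read-write */
    uint32_t device_feature;        /* read-only for driver */
    uint32_t guest_feature_select;  /* read-write */
    uint32_t guest_feature;         /* read-write */
    uint16_t msix_config;           /* read-write: the field in question */
    uint16_t num_queues;            /* read-only for driver */
    uint8_t  device_status;         /* read-write */
    uint8_t  config_generation;     /* read-only for driver */

    /* About a specific virtqueue. */
    uint16_t queue_select;          /* read-write */
    uint16_t queue_size;            /* read-write, power of 2 */
    uint16_t queue_msix_vector;     /* read-write */
    uint16_t queue_enable;          /* read-write */
    uint16_t queue_notify_off;      /* read-only for driver */
    uint32_t queue_desc_lo;         /* read-write */
    uint32_t queue_desc_hi;         /* read-write */
    uint32_t queue_avail_lo;        /* read-write */
    uint32_t queue_avail_hi;        /* read-write */
    uint32_t queue_used_lo;         /* read-write */
    uint32_t queue_used_hi;         /* read-write */
};
```

With natural alignment this layout has no padding, so `msix_config` sits at byte offset 16 of the structure, which is the layout the spec's table describes.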
[virtio-dev] [PATCH] Lift "Driver Notifications" to section level
From: Jan Kiszka

Currently, it slips under the Packed Virtqueues section while it is not
specific to this format. At this chance, capitalize "Notifications".

Signed-off-by: Jan Kiszka
---
 content.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content.tex b/content.tex
index 2325236..f55e23f 100644
--- a/content.tex
+++ b/content.tex
@@ -332,7 +332,7 @@ \section{Virtqueues}\label{sec:Basic Facilities of a Virtio Device / Virtqueues}

 \input{packed-ring.tex}

-\subsection{Driver notifications} \label{sec:Virtqueues / Driver notifications}
+\section{Driver Notifications} \label{sec:Virtqueues / Driver notifications}
 The driver is sometimes required to send an available buffer notification
 to the device.
-- 
2.16.4

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscr...@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-h...@lists.oasis-open.org
[virtio-dev] [PATCH] Fix ^= in example code
From: Jan Kiszka

Trying to escape ^ here only leaves the backslash in the output.

Signed-off-by: Jan Kiszka
---
 packed-ring.tex | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/packed-ring.tex b/packed-ring.tex
index caf47a5..ea92543 100644
--- a/packed-ring.tex
+++ b/packed-ring.tex
@@ -634,7 +634,7 @@ \subsubsection{Implementation Example}\label{sec:Basic Facilities of a Virtio De
 if (vq->next_avail >= vq->size) {
 vq->next_avail = 0;
-vq->avail_wrap_count \^= 1;
+vq->avail_wrap_count ^= 1;
 }
 }
 vq->sgs[id] = sgs;
@@ -727,7 +727,7 @@ \subsection{Receiving Used Buffers From The Device}\label{sec:Basic Facilities o
 vq->next_used += sgs;
 if (vq->next_used >= vq->size) {
 vq->next_used -= vq->size;
-vq->used_wrap_count \^= 1;
+vq->used_wrap_count ^= 1;
 }
 free_id(vq, id);
-- 
2.16.4
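The restored `^=` is the XOR-toggle of the packed ring's 1-bit wrap counter. A minimal standalone sketch of the same logic as in the spec's example code (the `vq_state` struct here is a stand-in for the example's virtqueue state, not spec-defined):

```c
/* Minimal model of the packed-ring index advance from the spec's
 * implementation example: when the write position wraps past the ring
 * size, the 1-bit wrap counter is toggled with XOR -- the "^=" that the
 * patch un-escapes. */
struct vq_state {
    unsigned short size;             /* ring size */
    unsigned short next_avail;       /* next available-ring position */
    unsigned char  avail_wrap_count; /* 1-bit wrap counter */
};

static void advance_avail(struct vq_state *vq)
{
    vq->next_avail++;
    if (vq->next_avail >= vq->size) {
        vq->next_avail = 0;
        vq->avail_wrap_count ^= 1;  /* toggle 0 <-> 1 on each wrap */
    }
}
```

`x ^= 1` flips the low bit, so the counter alternates 1, 0, 1, ... on successive wraps, which is exactly what devices use to distinguish new descriptors from stale ones.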
[virtio-dev] Re: [RFC][Patch v12 1/2] mm: page_reporting: core infrastructure
On 10/10/19 4:36 PM, Alexander Duyck wrote:
> On Mon, Aug 12, 2019 at 6:13 AM Nitesh Narayan Lal wrote:
>
>> +static int process_free_page(struct page *page,
>> +			     struct page_reporting_config *phconf, int count)
>> +{
>> +	int mt, order, ret = 0;
>> +
>> +	mt = get_pageblock_migratetype(page);
>> +	order = page_private(page);
>> +	ret = __isolate_free_page(page, order);
>> +
>> +	if (ret) {
>> +		/*
>> +		 * Preserving order and migratetype for reuse while
>> +		 * releasing the pages back to the buddy.
>> +		 */
>> +		set_pageblock_migratetype(page, mt);
>> +		set_page_private(page, order);
>> +
>> +		sg_set_page(&phconf->sg[count++], page,
>> +			    PAGE_SIZE << order, 0);
>> +	}
>> +
>> +	return count;
>> +}
>> +
>> +/**
>> + * scan_zone_bitmap - scans the bitmap for the requested zone.
>> + * @phconf: page reporting configuration object initialized by the backend.
>> + * @zone: zone for which page reporting is requested.
>> + *
>> + * For every page marked in the bitmap it checks if it is still free; if so it
>> + * isolates and adds them to a scatterlist. As soon as the number of isolated
>> + * pages reach the threshold set by the backend, they are reported to the
>> + * hypervisor by the backend. Once the hypervisor responds after processing
>> + * they are returned back to the buddy for reuse.
>> + */
>> +static void scan_zone_bitmap(struct page_reporting_config *phconf,
>> +			     struct zone *zone)
>> +{
>> +	unsigned long setbit;
>> +	struct page *page;
>> +	int count = 0;
>> +
>> +	sg_init_table(phconf->sg, phconf->max_pages);
>> +
>> +	for_each_set_bit(setbit, zone->bitmap, zone->nbits) {
>> +		/* Process only if the page is still online */
>> +		page = pfn_to_online_page((setbit << PAGE_REPORTING_MIN_ORDER) +
>> +					  zone->base_pfn);
>> +		if (!page)
>> +			continue;
>> +
>> +		spin_lock(&zone->lock);
>> +
>> +		/* Ensure page is still free and can be processed */
>> +		if (PageBuddy(page) && page_private(page) >=
>> +		    PAGE_REPORTING_MIN_ORDER)
>> +			count = process_free_page(page, phconf, count);
>> +
>> +		spin_unlock(&zone->lock);
>> +		/* Page has been processed, adjust its bit and zone counter */
>> +		clear_bit(setbit, zone->bitmap);
>> +		atomic_dec(&zone->free_pages);
>> +
>> +		if (count == phconf->max_pages) {
>> +			/* Report isolated pages to the hypervisor */
>> +			phconf->report(phconf, count);
>> +
>> +			/* Return processed pages back to the buddy */
>> +			return_isolated_page(zone, phconf);
>> +
>> +			/* Reset for next reporting */
>> +			sg_init_table(phconf->sg, phconf->max_pages);
>> +			count = 0;
>> +		}
>> +	}
>> +	/*
>> +	 * If the number of isolated pages does not meet the max_pages
>> +	 * threshold, we would still prefer to report them as we have already
>> +	 * isolated them.
>> +	 */
>> +	if (count) {
>> +		sg_mark_end(&phconf->sg[count - 1]);
>> +		phconf->report(phconf, count);
>> +
>> +		return_isolated_page(zone, phconf);
>> +	}
>> +}
>> +
>
> So one thing that occurred to me is that this code is missing checks
> so that it doesn't try to hint isolated pages. With the bitmap
> approach you need an additional check so that you aren't pulling
> isolated pages out and reporting them.

I think you mean that we should not report pages of type MIGRATE_ISOLATE.
In the current code I am working on, I have added the
is_migrate_isolate_page() check to ensure that I am not processing these
pages.

-- 
Thanks
Nitesh
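The guard being discussed can be sketched in isolation. A userspace model with mocked helpers — the migratetype enum, `struct mock_page`, and `PAGE_REPORTING_MIN_ORDER` value below are stand-ins for the kernel facilities, not the actual implementation:

```c
#include <stdbool.h>

/* Mocked stand-ins for kernel facilities, for illustration only. */
enum migratetype { MIGRATE_MOVABLE, MIGRATE_ISOLATE };

struct mock_page {
    bool buddy;           /* stands in for PageBuddy() */
    int order;            /* stands in for page_private() */
    enum migratetype mt;  /* stands in for get_pageblock_migratetype() */
};

#define PAGE_REPORTING_MIN_ORDER 6  /* assumed threshold for the sketch */

/* True only for pages that are free, large enough, and not isolated --
 * the last condition models the is_migrate_isolate_page()-style check
 * Nitesh mentions adding in response to Alexander's review. */
static bool should_report(const struct mock_page *p)
{
    if (p->mt == MIGRATE_ISOLATE)  /* skip isolated pageblocks */
        return false;
    return p->buddy && p->order >= PAGE_REPORTING_MIN_ORDER;
}
```

Without the first condition, a page pulled out for reporting could belong to a pageblock another subsystem has deliberately isolated, which is precisely the bug the review comment points at.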
[virtio-dev] Re: [PATCH] pci: use msix_config instead of config_msix_vector
On Fri, Oct 11, 2019 at 10:42:03AM +0100, Stefan Hajnoczi wrote:
> The struct virtio_pci_common_cfg field is called msix_config but the
> text refers to this field as config_msix_vector. This is confusing.
>
> Rename config_msix_vector to msix_config. Although config_msix_vector
> is arguably a clearer name, existing code using Linux virtio_pci.h uses
> msix_config so let's stick with that.
>
> Fixes: https://github.com/oasis-tcs/virtio-spec/issues/41
> Signed-off-by: Stefan Hajnoczi
> ---
> Note that Paolo already submitted a patch for the other approach and I
> even reviewed it:
> https://lists.oasis-open.org/archives/virtio/201903/msg00073.html
>
> Either way, let's merge a fix!

OK I'll start voting on Paolo's patch as it was here earlier.
If that fails we'll vote on yours?

> ---
> content.tex | 20 ++--
> 1 file changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/content.tex b/content.tex
> index 679391e..ae0b790 100644
> --- a/content.tex
> +++ b/content.tex
> @@ -843,7 +843,7 @@ \subsubsection{Common configuration structure
> layout}\label{sec:Virtio Transport
> The driver writes this to accept feature bits offered by the device.
> Driver Feature Bits selected by \field{driver_feature_select}.
>
> -\item[\field{config_msix_vector}]
> +\item[\field{msix_config}]
> The driver sets the Configuration Vector for MSI-X.
>
> \item[\field{num_queues}]
> @@ -1202,7 +1202,7 @@ \subsubsection{Legacy Interfaces: A Note on PCI Device
> Layout}\label{sec:Virtio
> \hline
> Read/Write & R+W& R+W\\
> \hline
> -Purpose (MSI-X) & \field{config_msix_vector} & \field{queue_msix_vector} \\
> +Purpose (MSI-X) & \field{msix_config} & \field{queue_msix_vector} \\
> \hline
> \end{tabular}
>
> @@ -1323,11 +1323,11 @@ \subsubsection{Device
> Initialization}\label{sec:Virtio Transport Options / Virti
> \paragraph{MSI-X Vector Configuration}\label{sec:Virtio Transport Options /
> Virtio Over PCI Bus / PCI-specific Initialization And Device Operation /
> Device Initialization / MSI-X Vector Configuration}
>
> When MSI-X capability is present and enabled in the device
> -(through standard PCI configuration space) \field{config_msix_vector} and
> \field{queue_msix_vector} are used to map configuration change and queue
> +(through standard PCI configuration space) \field{msix_config} and
> \field{queue_msix_vector} are used to map configuration change and queue
> interrupts to MSI-X vectors. In this case, the ISR Status is unused.
>
> Writing a valid MSI-X Table entry number, 0 to 0x7FF, to
> -\field{config_msix_vector}/\field{queue_msix_vector} maps interrupts
> triggered
> +\field{msix_config}/\field{queue_msix_vector} maps interrupts triggered
> by the configuration change/selected queue events respectively to
> the corresponding MSI-X vector. To disable interrupts for an
> event type, the driver unmaps this event by writing a special NO_VECTOR
> @@ -1359,7 +1359,7 @@ \subsubsection{Device Initialization}\label{sec:Virtio
> Transport Options / Virti
> Device MUST support unmapping any event type.
>
> The device MUST return vector mapped to a given event,
> -(NO_VECTOR if unmapped) on read of
> \field{config_msix_vector}/\field{queue_msix_vector}.
> +(NO_VECTOR if unmapped) on read of
> \field{msix_config}/\field{queue_msix_vector}.
> The device MUST have all queue and configuration change
> events are unmapped upon reset.
>
> @@ -1367,7 +1367,7 @@ \subsubsection{Device Initialization}\label{sec:Virtio
> Transport Options / Virti
> unless it is impossible for the device to satisfy the mapping
> request. Devices MUST report mapping
> failures by returning the NO_VECTOR value when the relevant
> -\field{config_msix_vector}/\field{queue_msix_vector} field is read.
> +\field{msix_config}/\field{queue_msix_vector} field is read.
>
> \drivernormative{\subparagraph}{MSI-X Vector Configuration}{Virtio Transport
> Options / Virtio Over PCI Bus / PCI-specific Initialization And Device
> Operation / Device Initialization / MSI-X Vector Configuration}
>
> @@ -1485,9 +1485,9 @@ \subsubsection{Notification of Device Configuration
> Changes}\label{sec:Virtio Tr
>
> \item If MSI-X capability is enabled:
> \begin{enumerate}
> -\item If \field{config_msix_vector} is not NO_VECTOR,
> +\item If \field{msix_config} is not NO_VECTOR,
> request the appropriate MSI-X interrupt message for the
> - device, \field{config_msix_vector} sets the MSI-X Table entry
> + device, \field{msix_config} sets the MSI-X Table entry
> number.
> \end{enumerate}
> \end{itemize}
> @@ -1497,7 +1497,7 @@ \subsubsection{Notification of Device Configuration
> Changes}\label{sec:Virtio Tr
>
> \devicenormative{\paragraph}{Notification of Device Configuration
> Changes}{Virtio Transport Options / Virtio Over PCI Bus / PCI-specific
> Initialization And Device Operation / Notification of Device
[virtio-dev] [PATCH] pci: use msix_config instead of config_msix_vector
The struct virtio_pci_common_cfg field is called msix_config but the
text refers to this field as config_msix_vector. This is confusing.

Rename config_msix_vector to msix_config. Although config_msix_vector
is arguably a clearer name, existing code using Linux virtio_pci.h uses
msix_config so let's stick with that.

Fixes: https://github.com/oasis-tcs/virtio-spec/issues/41
Signed-off-by: Stefan Hajnoczi
---
Note that Paolo already submitted a patch for the other approach and I
even reviewed it:
https://lists.oasis-open.org/archives/virtio/201903/msg00073.html

Either way, let's merge a fix!
---
 content.tex | 20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/content.tex b/content.tex
index 679391e..ae0b790 100644
--- a/content.tex
+++ b/content.tex
@@ -843,7 +843,7 @@ \subsubsection{Common configuration structure layout}\label{sec:Virtio Transport
 The driver writes this to accept feature bits offered by the device.
 Driver Feature Bits selected by \field{driver_feature_select}.

-\item[\field{config_msix_vector}]
+\item[\field{msix_config}]
 The driver sets the Configuration Vector for MSI-X.

 \item[\field{num_queues}]
@@ -1202,7 +1202,7 @@ \subsubsection{Legacy Interfaces: A Note on PCI Device Layout}\label{sec:Virtio
 \hline
 Read/Write & R+W& R+W\\
 \hline
-Purpose (MSI-X) & \field{config_msix_vector} & \field{queue_msix_vector} \\
+Purpose (MSI-X) & \field{msix_config} & \field{queue_msix_vector} \\
 \hline
 \end{tabular}

@@ -1323,11 +1323,11 @@ \subsubsection{Device Initialization}\label{sec:Virtio Transport Options / Virti
 \paragraph{MSI-X Vector Configuration}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Device Initialization / MSI-X Vector Configuration}

 When MSI-X capability is present and enabled in the device
-(through standard PCI configuration space) \field{config_msix_vector} and \field{queue_msix_vector} are used to map configuration change and queue
+(through standard PCI configuration space) \field{msix_config} and \field{queue_msix_vector} are used to map configuration change and queue
 interrupts to MSI-X vectors. In this case, the ISR Status is unused.

 Writing a valid MSI-X Table entry number, 0 to 0x7FF, to
-\field{config_msix_vector}/\field{queue_msix_vector} maps interrupts triggered
+\field{msix_config}/\field{queue_msix_vector} maps interrupts triggered
 by the configuration change/selected queue events respectively to
 the corresponding MSI-X vector. To disable interrupts for an
 event type, the driver unmaps this event by writing a special NO_VECTOR
@@ -1359,7 +1359,7 @@ \subsubsection{Device Initialization}\label{sec:Virtio Transport Options / Virti
 Device MUST support unmapping any event type.

 The device MUST return vector mapped to a given event,
-(NO_VECTOR if unmapped) on read of \field{config_msix_vector}/\field{queue_msix_vector}.
+(NO_VECTOR if unmapped) on read of \field{msix_config}/\field{queue_msix_vector}.
 The device MUST have all queue and configuration change
 events are unmapped upon reset.

@@ -1367,7 +1367,7 @@ \subsubsection{Device Initialization}\label{sec:Virtio Transport Options / Virti
 unless it is impossible for the device to satisfy the mapping
 request. Devices MUST report mapping
 failures by returning the NO_VECTOR value when the relevant
-\field{config_msix_vector}/\field{queue_msix_vector} field is read.
+\field{msix_config}/\field{queue_msix_vector} field is read.

 \drivernormative{\subparagraph}{MSI-X Vector Configuration}{Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Device Initialization / MSI-X Vector Configuration}

@@ -1485,9 +1485,9 @@ \subsubsection{Notification of Device Configuration Changes}\label{sec:Virtio Tr

   \item If MSI-X capability is enabled:
     \begin{enumerate}
-\item If \field{config_msix_vector} is not NO_VECTOR,
+\item If \field{msix_config} is not NO_VECTOR,
   request the appropriate MSI-X interrupt message for the
-  device, \field{config_msix_vector} sets the MSI-X Table entry
+  device, \field{msix_config} sets the MSI-X Table entry
   number.
     \end{enumerate}
 \end{itemize}
@@ -1497,7 +1497,7 @@ \subsubsection{Notification of Device Configuration Changes}\label{sec:Virtio Tr

 \devicenormative{\paragraph}{Notification of Device Configuration Changes}{Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Notification of Device Configuration Changes}

-If MSI-X capability is enabled and \field{config_msix_vector} is
+If MSI-X capability is enabled and \field{msix_config} is
 NO_VECTOR, the device MUST NOT deliver an interrupt for device
 configuration space changes.

@@ -1527,7 +1527,7 @@ \subsubsection{Driver Handling Interrupts}\label{sec:Virtio Transport Options /
 device, to see if
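The mapping rules in the hunks above can be sketched from the driver side. The `cfg_write16`/`cfg_read16` accessors and the in-memory register below are hypothetical mocks standing in for accesses to the common configuration structure; NO_VECTOR is the spec's 0xffff value:

```c
#include <stdint.h>
#include <stdbool.h>

#define VIRTIO_MSI_NO_VECTOR 0xffff  /* NO_VECTOR, per the spec */

/* Hypothetical register backing store standing in for the device's
 * msix_config field in the common configuration structure. */
static uint16_t msix_config_reg = VIRTIO_MSI_NO_VECTOR;

static void cfg_write16(uint16_t val)
{
    /* A real device may refuse a mapping; this mock accepts valid MSI-X
     * Table entries (0..0x7FF) and NO_VECTOR, and reports any other
     * write as a mapping failure by storing NO_VECTOR. */
    msix_config_reg = (val <= 0x7ff || val == VIRTIO_MSI_NO_VECTOR)
                      ? val : VIRTIO_MSI_NO_VECTOR;
}

static uint16_t cfg_read16(void)
{
    return msix_config_reg;
}

/* Map the configuration-change event to an MSI-X vector. Per the spec,
 * the driver detects mapping failure by reading the field back and
 * checking for NO_VECTOR. */
static bool map_config_vector(uint16_t vector)
{
    cfg_write16(vector);
    return cfg_read16() == vector;
}
```

The read-back step is the important part: the device is allowed to fail a mapping request, and NO_VECTOR on read is the only failure signal the transport defines.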
Re: [virtio-dev] [PATCH] [RFC RESEND] vdec: Add virtio video decode device specification
Hi Tomasz,

On Mittwoch, 9. Oktober 2019 05:55:45 CEST Tomasz Figa wrote:
> On Tue, Oct 8, 2019 at 12:09 AM Dmitry Morozov wrote:
> > Hi Tomasz,
> >
> > On Montag, 7. Oktober 2019 16:14:13 CEST Tomasz Figa wrote:
> > > Hi Dmitry,
> > >
> > > On Mon, Oct 7, 2019 at 11:01 PM Dmitry Morozov wrote:
> > > > Hello,
> > > >
> > > > We at OpenSynergy are also working on an abstract paravirtualized
> > > > video streaming device that operates input and/or output data
> > > > buffers and can be used as a generic video
> > > > decoder/encoder/input/output device.
> > > >
> > > > We would be glad to share our thoughts and contribute to the
> > > > discussion. Please see some comments regarding buffer allocation
> > > > inline.
> > > >
> > > > Best regards,
> > > > Dmitry.
> > > >
> > > > On Samstag, 5. Oktober 2019 08:08:12 CEST Tomasz Figa wrote:
> > > > > Hi Gerd,
> > > > >
> > > > > On Mon, Sep 23, 2019 at 5:56 PM Gerd Hoffmann wrote:
> > > > > > Hi,
> > > > > >
> > > > > > > Our prototype implementation uses [4], which allows the
> > > > > > > virtio-vdec device to use buffers allocated by virtio-gpu
> > > > > > > device.
> > > > > > >
> > > > > > > [4] https://lkml.org/lkml/2019/9/12/157
> > > > >
> > > > > First of all, thanks for taking a look at this RFC and for
> > > > > valuable feedback. Sorry for the late reply.
> > > > >
> > > > > For reference, Keiichi is working with me and David Stevens on
> > > > > accelerated video support for virtual machines and integration
> > > > > with other virtual devices, like virtio-gpu for rendering or our
> > > > > currently-downstream virtio-wayland for display (I believe there
> > > > > is ongoing work to solve this problem in upstream too).
> > > > >
> > > > > > Well. I think before even discussing the protocol details we
> > > > > > need a reasonable plan for buffer handling. I think using
> > > > > > virtio-gpu buffers should be an optional optimization and not
> > > > > > a requirement. Also the motivation for that should be clear
> > > > > > (Let the host decoder write directly to virtio-gpu resources,
> > > > > > to display video without copying around the decoded
> > > > > > framebuffers from one device to another).
> > > > >
> > > > > Just to make sure we're on the same page, what would the buffers
> > > > > come from if we don't use this optimization?
> > > > >
> > > > > I can imagine a setup like this:
> > > > > 1) host device allocates host memory appropriate for usage with
> > > > >    host video decoder,
> > > > > 2) guest driver allocates arbitrary guest pages for storage
> > > > >    accessible to the guest software,
> > > > > 3) guest userspace writes input for the decoder to guest pages,
> > > > > 4) guest driver passes the list of pages for the input and output
> > > > >    buffers to the host device,
> > > > > 5) host device copies data from input guest pages to host buffer,
> > > > > 6) host device runs the decoding,
> > > > > 7) host device copies decoded frame to output guest pages,
> > > > > 8) guest userspace can access decoded frame from those pages;
> > > > >    back to 3
> > > > >
> > > > > Is that something you have in mind?
> > > >
> > > > While GPU side allocations can be useful (especially in case of
> > > > decoder), it could be more practical to stick to driver side
> > > > allocations. This is also due to the fact that paravirtualized
> > > > encoders and cameras do not necessarily require a GPU device.
> > > >
> > > > Also, the v4l2 framework already features convenient helpers for
> > > > CMA and SG allocations. The buffers can be used in the same manner
> > > > as in virtio-gpu: buffers are first attached to an already
> > > > allocated buffer/resource descriptor and then are made available
> > > > for processing by the device using a dedicated command from the
> > > > driver.
> > >
> > > First of all, thanks a lot for your input. This is a relatively new
> > > area of virtualization and we definitely need to collect various
> > > possible perspectives in the discussion.
> > >
> > > From Chrome OS point of view, there are several aspects for which
> > > the guest side allocation doesn't really work well:
> > > 1) host-side hardware has a lot of specific low level allocation
> > > requirements, like alignments, paddings, address space limitations
> > > and so on, which is not something that can be (easily) taught to the
> > > guest OS,
> >
> > I couldn't agree more. There are some changes by Greg to add support
> > for querying GPU buffer metadata. Probably those changes could be
> > integrated with 'a framework for cross-device buffer sharing'
> > (something that Greg mentioned earlier in the thread and that would
> > totally make sense).
>
> Did you mean one of Gerd's proposals?
>
> I think we need
Re: [virtio-dev] Re: [PATCH] virtio-blk: document VIRTIO_BLK_F_MQ multiqueue feature
On Mon, Oct 07, 2019 at 07:29:49PM +0200, Matti Moell wrote:
> I wasn't able to find any discussion about not including multiqueue in
> the spec and this patch was never applied, is there any particular
> reason why it wasn't added?
>
> Should it even be added to the spec?

Yes, it should be added to the spec. Eugenio Perez has recently sent a
new patch to fix this:
https://lists.oasis-open.org/archives/virtio-comment/201909/msg4.html

Stefan
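For reference, negotiating the multiqueue feature looks roughly like this on the driver side — a sketch assuming VIRTIO_BLK_F_MQ is feature bit 12 (as in the Linux virtio_blk.h header) and that the config-space `num_queues` field is only meaningful once that bit has been negotiated:

```c
#include <stdint.h>

#define VIRTIO_BLK_F_MQ 12  /* multiqueue feature bit, per Linux virtio_blk.h */

/* Returns the number of request virtqueues the driver should use:
 * num_queues from the device config space if VIRTIO_BLK_F_MQ was
 * negotiated, otherwise the traditional single queue. */
static uint16_t blk_num_queues(uint64_t negotiated_features,
                               uint16_t cfg_num_queues)
{
    if (negotiated_features & (1ULL << VIRTIO_BLK_F_MQ))
        return cfg_num_queues;
    return 1;  /* without the feature, a single request queue is assumed */
}
```

This is the kind of behavior the proposed spec text would pin down: what `num_queues` means, and what a driver that does not negotiate the feature should assume.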