Re: [Qemu-devel] [RfC PATCH] Add udmabuf misc device

2018-04-16 Thread Oleksandr Andrushchenko

On 04/16/2018 12:32 PM, Daniel Vetter wrote:

On Mon, Apr 16, 2018 at 10:22 AM, Oleksandr Andrushchenko
<andr2...@gmail.com> wrote:

On 04/16/2018 10:43 AM, Daniel Vetter wrote:

On Mon, Apr 16, 2018 at 10:16:31AM +0300, Oleksandr Andrushchenko wrote:

On 04/13/2018 06:37 PM, Daniel Vetter wrote:

On Wed, Apr 11, 2018 at 08:59:32AM +0300, Oleksandr Andrushchenko wrote:

On 04/10/2018 08:26 PM, Dongwon Kim wrote:

On Tue, Apr 10, 2018 at 09:37:53AM +0300, Oleksandr Andrushchenko
wrote:

On 04/06/2018 09:57 PM, Dongwon Kim wrote:

On Fri, Apr 06, 2018 at 03:36:03PM +0300, Oleksandr Andrushchenko
wrote:

On 04/06/2018 02:57 PM, Gerd Hoffmann wrote:

  Hi,


I fail to see any common ground for xen-zcopy and udmabuf ...

Does the above mean you can assume that xen-zcopy and udmabuf
can co-exist as two different solutions?

Well, udmabuf route isn't fully clear yet, but yes.

See also gvt (intel vgpu), where the hypervisor interface is abstracted
away into a separate kernel module even though most of the actual vgpu
emulation code is common.

Thank you for your input, I'm just trying to figure out
which of the three z-copy solutions intersect and how much

And what about hyper-dmabuf?

The xen z-copy solution is fundamentally pretty similar to hyper_dmabuf
in terms of these core sharing features:

1. the sharing process - import prime/dmabuf from the producer ->
extract underlying pages and get those shared -> return references for
shared pages
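
As an illustration of the sharing process described in 1. above, a minimal
kernel-side sketch of "import a dma-buf, extract the underlying pages, grant
them to another domain" with the in-kernel dma-buf and Xen grant APIs could
look roughly like this (not the actual hyper_dmabuf or xen-zcopy code; dev,
domid and the refs array are assumed to come from the caller, and all error
handling/unwinding is omitted):

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/scatterlist.h>
#include <xen/grant_table.h>
#include <xen/page.h>

static int share_dmabuf_pages(int fd, struct device *dev, domid_t domid,
                              grant_ref_t *refs, unsigned int max_refs)
{
        struct dma_buf *dmabuf = dma_buf_get(fd);
        struct dma_buf_attachment *att = dma_buf_attach(dmabuf, dev);
        struct sg_table *sgt = dma_buf_map_attachment(att, DMA_BIDIRECTIONAL);
        struct sg_page_iter iter;
        unsigned int i = 0;

        /* Walk the pages backing the imported buffer and grant each one
         * to the other domain; the resulting refs are what gets shared. */
        for_each_sg_page(sgt->sgl, &iter, sgt->nents, 0) {
                if (i == max_refs)
                        return -ENOSPC;
                refs[i++] = gnttab_grant_foreign_access(domid,
                                xen_page_to_gfn(sg_page_iter_page(&iter)),
                                0 /* read-write */);
        }
        return i;
}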

Another thing is that danvet was kind of against the idea of importing an
existing dmabuf/prime buffer and forwarding it to the other domain due to
synchronization issues. He proposed to make hyper_dmabuf work only as an
exporter so that it can have full control over the buffer. I think we need
to talk about this further as well.

Yes, I saw this. But this limits the use-cases so much.
For instance, running Android as a Guest (which uses ION to allocate
buffers) means that eventually the HW composer will import the dma-buf into
the DRM driver. Then, in the case of xen-front for example, it needs to be
shared with the backend (Host side). Of course, we can change user-space
to make xen-front allocate the buffers (make it the exporter), but what we
try to avoid is changing user-space which would otherwise have remained
unchanged.
So, I do think we have to support this use-case and just have to understand
the complexity.

Erm, why do you need importer capability for this use-case?

guest1 -> ION -> xen-front -> hypervisor -> guest 2 -> xen-zcopy exposes
that dma-buf -> import to the real display hw

Nowhere in this chain do you need xen-zcopy to be able to import a
dma-buf (within linux, it needs to import a bunch of pages from the
hypervisor).

Now if your plan is to use xen-zcopy in guest1 instead of xen-front,
then you indeed need to import.

This is the exact use-case I was referring to while saying
we need to import on Guest1 side. If hyper-dmabuf is so
generic that there is no xen-front in the picture, then
it needs to import a dma-buf, so it can be exported at Guest2 side.

But that imo doesn't make sense:
- xen-front gives you clearly defined flip events you can forward to the
 hypervisor. xen-zcopy would need to add that again.

xen-zcopy is a helper driver which doesn't handle page flips
and is not a KMS driver as one might think: the DRM UAPI it uses is
just to export a dma-buf as a PRIME buffer, but that's it.
Flipping etc. is done by the backend [1], not xen-zcopy.

Same for
 hyperdmabuf (and really we're not going to shuffle struct dma_fence over
 the wire in a generic fashion between hypervisor guests).

- xen-front already has the idea of pixel format for the buffer (and any
 other metadata). Again, xen-zcopy and hyperdmabuf lack that, would need
 to add it shoehorned in somehow.

Again, here you are talking about something which is implemented in the
Xen display backend, not xen-zcopy, e.g. the display backend can
implement a para-virtual display w/o xen-zcopy at all, but in this case
there is a memory copy for each frame. With the help of xen-zcopy
the backend feeds xen-front's buffers directly into Guest2 DRM/KMS or
Weston or whatever, as xen-zcopy exports remote buffers as PRIME buffers,
thus no buffer copying is required.

Why do you need to copy on every frame for xen-front? In the above
pipeline, using xen-front I see 0 architectural reasons to have a copy
anywhere.

This seems to be the core of the confusion we're having here.

Ok, so I'll try to explain:
1. xen-front produces a display buffer to be shown in Guest2
by the backend and shares its grant references with the backend
2. xen-front sends a page flip event to the backend specifying the
buffer in question
3. The backend takes the shared buffer (which is only a buffer mapped into
the backend's memory, it is not a dma-buf/PRIME one) and does a memcpy from
it to a local dumb buffer/surface

Why do you even do that? The copying here I mean - why don't you just
di

Re: [Qemu-devel] [RfC PATCH] Add udmabuf misc device

2018-04-16 Thread Oleksandr Andrushchenko

On 04/16/2018 10:43 AM, Daniel Vetter wrote:

On Mon, Apr 16, 2018 at 10:16:31AM +0300, Oleksandr Andrushchenko wrote:

On 04/13/2018 06:37 PM, Daniel Vetter wrote:

On Wed, Apr 11, 2018 at 08:59:32AM +0300, Oleksandr Andrushchenko wrote:

On 04/10/2018 08:26 PM, Dongwon Kim wrote:

On Tue, Apr 10, 2018 at 09:37:53AM +0300, Oleksandr Andrushchenko wrote:

On 04/06/2018 09:57 PM, Dongwon Kim wrote:

On Fri, Apr 06, 2018 at 03:36:03PM +0300, Oleksandr Andrushchenko wrote:

On 04/06/2018 02:57 PM, Gerd Hoffmann wrote:

 Hi,


I fail to see any common ground for xen-zcopy and udmabuf ...

Does the above mean you can assume that xen-zcopy and udmabuf
can co-exist as two different solutions?

Well, udmabuf route isn't fully clear yet, but yes.

See also gvt (intel vgpu), where the hypervisor interface is abstracted
away into a separate kernel module even though most of the actual vgpu
emulation code is common.

Thank you for your input, I'm just trying to figure out
which of the three z-copy solutions intersect and how much

And what about hyper-dmabuf?

The xen z-copy solution is fundamentally pretty similar to hyper_dmabuf
in terms of these core sharing features:

1. the sharing process - import prime/dmabuf from the producer -> extract
underlying pages and get those shared -> return references for shared pages

Another thing is that danvet was kind of against the idea of importing an existing
dmabuf/prime buffer and forwarding it to the other domain due to synchronization
issues. He proposed to make hyper_dmabuf work only as an exporter so that it
can have full control over the buffer. I think we need to talk about this
further as well.

Yes, I saw this. But this limits the use-cases so much.
For instance, running Android as a Guest (which uses ION to allocate
buffers) means that eventually the HW composer will import the dma-buf into
the DRM driver. Then, in the case of xen-front for example, it needs to be
shared with the backend (Host side). Of course, we can change user-space
to make xen-front allocate the buffers (make it the exporter), but what we try
to avoid is changing user-space which would otherwise have remained unchanged.
So, I do think we have to support this use-case and just have to understand
the complexity.

Erm, why do you need importer capability for this use-case?

guest1 -> ION -> xen-front -> hypervisor -> guest 2 -> xen-zcopy exposes
that dma-buf -> import to the real display hw

Nowhere in this chain do you need xen-zcopy to be able to import a
dma-buf (within linux, it needs to import a bunch of pages from the
hypervisor).

Now if your plan is to use xen-zcopy in guest1 instead of xen-front,
then you indeed need to import.

This is the exact use-case I was referring to while saying
we need to import on Guest1 side. If hyper-dmabuf is so
generic that there is no xen-front in the picture, then
it needs to import a dma-buf, so it can be exported at Guest2 side.

   But that imo doesn't make sense:
- xen-front gives you clearly defined flip events you can forward to the
hypervisor. xen-zcopy would need to add that again.

xen-zcopy is a helper driver which doesn't handle page flips
and is not a KMS driver as one might think: the DRM UAPI it uses is
just to export a dma-buf as a PRIME buffer, but that's it.
Flipping etc. is done by the backend [1], not xen-zcopy.

   Same for
hyperdmabuf (and really we're not going to shuffle struct dma_fence over
the wire in a generic fashion between hypervisor guests).

- xen-front already has the idea of pixel format for the buffer (and any
other metadata). Again, xen-zcopy and hyperdmabuf lack that, would need
to add it shoehorned in somehow.

Again, here you are talking about something which is implemented in the
Xen display backend, not xen-zcopy, e.g. the display backend can
implement a para-virtual display w/o xen-zcopy at all, but in this case
there is a memory copy for each frame. With the help of xen-zcopy
the backend feeds xen-front's buffers directly into Guest2 DRM/KMS or
Weston or whatever, as xen-zcopy exports remote buffers as PRIME buffers,
thus no buffer copying is required.

Why do you need to copy on every frame for xen-front? In the above
pipeline, using xen-front I see 0 architectural reasons to have a copy
anywhere.

This seems to be the core of the confusion we're having here.

Ok, so I'll try to explain:
1. xen-front produces a display buffer to be shown in Guest2
by the backend and shares its grant references with the backend
2. xen-front sends a page flip event to the backend specifying the
buffer in question
3. The backend takes the shared buffer (which is only a buffer mapped into
the backend's memory, it is not a dma-buf/PRIME one) and does a memcpy from
it to a local dumb buffer/surface
4. The backend flips that local dumb buffer/surface

If I have a xen-zcopy helper driver then I can avoid doing step 3):
1) 2) remain the same as above
3) Initially for a new display buffer, backend calls xen-zcopy to create
a

Re: [Qemu-devel] [RfC PATCH] Add udmabuf misc device

2018-04-16 Thread Oleksandr Andrushchenko

On 04/13/2018 06:37 PM, Daniel Vetter wrote:

On Wed, Apr 11, 2018 at 08:59:32AM +0300, Oleksandr Andrushchenko wrote:

On 04/10/2018 08:26 PM, Dongwon Kim wrote:

On Tue, Apr 10, 2018 at 09:37:53AM +0300, Oleksandr Andrushchenko wrote:

On 04/06/2018 09:57 PM, Dongwon Kim wrote:

On Fri, Apr 06, 2018 at 03:36:03PM +0300, Oleksandr Andrushchenko wrote:

On 04/06/2018 02:57 PM, Gerd Hoffmann wrote:

Hi,


I fail to see any common ground for xen-zcopy and udmabuf ...

Does the above mean you can assume that xen-zcopy and udmabuf
can co-exist as two different solutions?

Well, udmabuf route isn't fully clear yet, but yes.

See also gvt (intel vgpu), where the hypervisor interface is abstracted
away into a separate kernel module even though most of the actual vgpu
emulation code is common.

Thank you for your input, I'm just trying to figure out
which of the three z-copy solutions intersect and how much

And what about hyper-dmabuf?

The xen z-copy solution is fundamentally pretty similar to hyper_dmabuf
in terms of these core sharing features:

1. the sharing process - import prime/dmabuf from the producer -> extract
underlying pages and get those shared -> return references for shared pages

Another thing is that danvet was kind of against the idea of importing an existing
dmabuf/prime buffer and forwarding it to the other domain due to synchronization
issues. He proposed to make hyper_dmabuf work only as an exporter so that it
can have full control over the buffer. I think we need to talk about this
further as well.

Yes, I saw this. But this limits the use-cases so much.
For instance, running Android as a Guest (which uses ION to allocate
buffers) means that eventually the HW composer will import the dma-buf into
the DRM driver. Then, in the case of xen-front for example, it needs to be
shared with the backend (Host side). Of course, we can change user-space
to make xen-front allocate the buffers (make it the exporter), but what we try
to avoid is changing user-space which would otherwise have remained unchanged.
So, I do think we have to support this use-case and just have to understand
the complexity.

Erm, why do you need importer capability for this use-case?

guest1 -> ION -> xen-front -> hypervisor -> guest 2 -> xen-zcopy exposes
that dma-buf -> import to the real display hw

Nowhere in this chain do you need xen-zcopy to be able to import a
dma-buf (within linux, it needs to import a bunch of pages from the
hypervisor).

Now if your plan is to use xen-zcopy in guest1 instead of xen-front,
then you indeed need to import.

This is the exact use-case I was referring to while saying
we need to import on Guest1 side. If hyper-dmabuf is so
generic that there is no xen-front in the picture, then
it needs to import a dma-buf, so it can be exported at Guest2 side.

  But that imo doesn't make sense:
- xen-front gives you clearly defined flip events you can forward to the
   hypervisor. xen-zcopy would need to add that again.

xen-zcopy is a helper driver which doesn't handle page flips
and is not a KMS driver as one might think: the DRM UAPI it uses is
just to export a dma-buf as a PRIME buffer, but that's it.
Flipping etc. is done by the backend [1], not xen-zcopy.

  Same for
   hyperdmabuf (and really we're not going to shuffle struct dma_fence over
   the wire in a generic fashion between hypervisor guests).

- xen-front already has the idea of pixel format for the buffer (and any
   other metadata). Again, xen-zcopy and hyperdmabuf lack that, would need
   to add it shoehorned in somehow.

Again, here you are talking about something which is implemented in the
Xen display backend, not xen-zcopy, e.g. the display backend can
implement a para-virtual display w/o xen-zcopy at all, but in this case
there is a memory copy for each frame. With the help of xen-zcopy
the backend feeds xen-front's buffers directly into Guest2 DRM/KMS or
Weston or whatever, as xen-zcopy exports remote buffers as PRIME buffers,
thus no buffer copying is required.
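
(As an aside, to make that zero-copy path concrete: once the backend has a
PRIME fd for the remote buffer - below called zcopy_fd, assumed to have been
obtained from the helper driver - feeding it into the local DRM/KMS device is
plain libdrm usage; width/height/pitch/format would come from the displif
configuration. A sketch only, with error handling trimmed:)

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <drm_fourcc.h>

/* Import the remote buffer into the real display device as a framebuffer;
 * the pixel data itself is never copied. */
static int import_remote_fb(int drm_fd, int zcopy_fd,
                            uint32_t width, uint32_t height, uint32_t pitch)
{
    uint32_t handle, fb_id;
    uint32_t handles[4] = {0}, pitches[4] = {0}, offsets[4] = {0};

    if (drmPrimeFDToHandle(drm_fd, zcopy_fd, &handle))
        return -1;
    handles[0] = handle;
    pitches[0] = pitch;
    if (drmModeAddFB2(drm_fd, width, height, DRM_FORMAT_XRGB8888,
                      handles, pitches, offsets, &fb_id, 0))
        return -1;
    /* fb_id can now be used with drmModeSetCrtc()/drmModePageFlip(). */
    return (int)fb_id;
}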


Ofc you won't be able to shovel sound or media stream data over to another
guest like this, but that's what you have xen-v4l and xen-sound or
whatever else for. Trying to make a new uapi, which means userspace must
be changed for all the different use-cases, instead of reusing standard
linux driver uapi (which just happens to send the data to another
hypervisor guest instead of real hw) imo just doesn't make much sense.

Also, at least for the gpu subsystem: Any new uapi must have full
userspace available for it, see:

https://dri.freedesktop.org/docs/drm/gpu/drm-uapi.html#open-source-userspace-requirements

Adding more uapi is definitely the most painful way to fix a use-case.
Personally I'd go as far and also change the xen-zcopy side on the
receiving guest to use some standard linux uapi. E.g. you could write an
output v4l driver to receive the frames from guest1.

So, we now know that xen-zcopy was not meant to handle page flips,
but to implement ne

Re: [Qemu-devel] [RfC PATCH] Add udmabuf misc device

2018-04-11 Thread Oleksandr Andrushchenko

On 04/10/2018 08:26 PM, Dongwon Kim wrote:

On Tue, Apr 10, 2018 at 09:37:53AM +0300, Oleksandr Andrushchenko wrote:

On 04/06/2018 09:57 PM, Dongwon Kim wrote:

On Fri, Apr 06, 2018 at 03:36:03PM +0300, Oleksandr Andrushchenko wrote:

On 04/06/2018 02:57 PM, Gerd Hoffmann wrote:

   Hi,


I fail to see any common ground for xen-zcopy and udmabuf ...

Does the above mean you can assume that xen-zcopy and udmabuf
can co-exist as two different solutions?

Well, udmabuf route isn't fully clear yet, but yes.

See also gvt (intel vgpu), where the hypervisor interface is abstracted
away into a separate kernel module even though most of the actual vgpu
emulation code is common.

Thank you for your input, I'm just trying to figure out
which of the three z-copy solutions intersect and how much

And what about hyper-dmabuf?

The xen z-copy solution is fundamentally pretty similar to hyper_dmabuf
in terms of these core sharing features:

1. the sharing process - import prime/dmabuf from the producer -> extract
underlying pages and get those shared -> return references for shared pages

Another thing is that danvet was kind of against the idea of importing an existing
dmabuf/prime buffer and forwarding it to the other domain due to synchronization
issues. He proposed to make hyper_dmabuf work only as an exporter so that it
can have full control over the buffer. I think we need to talk about this
further as well.

Yes, I saw this. But this limits the use-cases so much.
For instance, running Android as a Guest (which uses ION to allocate
buffers) means that eventually the HW composer will import the dma-buf into
the DRM driver. Then, in the case of xen-front for example, it needs to be
shared with the backend (Host side). Of course, we can change user-space
to make xen-front allocate the buffers (make it the exporter), but what we try
to avoid is changing user-space which would otherwise have remained unchanged.
So, I do think we have to support this use-case and just have to understand
the complexity.



danvet, can you comment on this topic?


2. the page sharing mechanism - it uses Xen-grant-table.

And to give you a quick summary of differences as far as I understand
between the two implementations (please correct me if I am wrong, Oleksandr):

1. xen-zcopy is DRM specific - it can import only a DRM prime buffer,
while hyper_dmabuf can export any dmabuf regardless of the originator

Well, this is true. And at the same time this is just a matter
of extending the API: xen-zcopy is a helper driver designed for
the xen-front/back use-case, so this is why it only has a DRM PRIME API

2. xen-zcopy doesn't seem to have dma-buf synchronization between two VMs,
while (as danvet called it, remote dmabuf api sharing) hyper_dmabuf sends
out a synchronization message to the exporting VM for synchronization.

This is true. Again, this is because of the use-cases it covers.
But having synchronization for a generic solution seems to be a good idea.

Yeah, understood that xen-zcopy works ok with your use case. But I am just
curious if it is ok not to have any inter-domain synchronization in this
sharing model.

The synchronization is done with displif protocol [1]

The buffer being shared is technically a dma-buf and the originator needs to
be able to keep track of it.

As I am working in DRM terms the tracking is done by the DRM core
for me for free. (This might be one of the reasons Daniel sees a DRM-based
implementation as a very good fit from a code-reuse POV).



3. 1-level references - when using grant-table for sharing pages, there will
be the same # of refs (each 8 bytes)

To be precise, grant ref is 4 bytes

You are right. Thanks for correction.;)


as # of shared pages, which is passed to
the userspace to be shared with importing VM in case of xen-zcopy.

The reason for that is that xen-zcopy is a helper driver, e.g.
the grant references come from the display backend [1], which implements
Xen display protocol [2]. So, effectively the backend extracts references
from frontend's requests and passes those to xen-zcopy as an array
of refs.

  Compared
to this, hyper_dmabuf does multiple level addressing to generate only one
reference id that represents all shared pages.

In the protocol [2] only one reference to the gref directory is passed
between VMs (and the gref directory is a singly-linked list of shared
pages containing all of the grefs of the buffer).

ok, good to know. I will look into its implementation in more detail, but is
this gref directory (chained grefs) something that can be used for any general
memory sharing use case or is it just for xen-display (in the current code base)?

Not to mislead you: one grant ref is passed via the displif protocol,
but the page it's referencing contains the rest of the grant refs.

As to whether this can be used for any memory: yes. It is the same for
the sndif and displif Xen protocols, but defined twice as, strictly speaking,
sndif and displif are two separate protocols.
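
For reference, the page directory used by displif [2] (and, in the same form,
by sndif) is roughly the following structure, paraphrased from the protocol
header: the request carries the gref of the first directory page, and each
directory page links to the next one.

struct xendispl_page_directory {
        /* grant ref of the next page of the directory, 0 if this is the last one */
        grant_ref_t gref_dir_next_page;
        /* grant refs of the data buffer pages held in this directory page */
        grant_ref_t gref[1];
};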

While reviewing your RFC v2 one of the comments I had [2] was that if we
can

Re: [Qemu-devel] [RfC PATCH] Add udmabuf misc device

2018-04-10 Thread Oleksandr Andrushchenko

On 04/06/2018 09:57 PM, Dongwon Kim wrote:

On Fri, Apr 06, 2018 at 03:36:03PM +0300, Oleksandr Andrushchenko wrote:

On 04/06/2018 02:57 PM, Gerd Hoffmann wrote:

   Hi,


I fail to see any common ground for xen-zcopy and udmabuf ...

Does the above mean you can assume that xen-zcopy and udmabuf
can co-exist as two different solutions?

Well, udmabuf route isn't fully clear yet, but yes.

See also gvt (intel vgpu), where the hypervisor interface is abstracted
away into a separate kernel module even though most of the actual vgpu
emulation code is common.

Thank you for your input, I'm just trying to figure out
which of the three z-copy solutions intersect and how much

And what about hyper-dmabuf?

The xen z-copy solution is fundamentally pretty similar to hyper_dmabuf
in terms of these core sharing features:

1. the sharing process - import prime/dmabuf from the producer -> extract
underlying pages and get those shared -> return references for shared pages

2. the page sharing mechanism - it uses Xen-grant-table.

And to give you a quick summary of differences as far as I understand
between the two implementations (please correct me if I am wrong, Oleksandr):

1. xen-zcopy is DRM specific - it can import only a DRM prime buffer,
while hyper_dmabuf can export any dmabuf regardless of the originator

Well, this is true. And at the same time this is just a matter
of extending the API: xen-zcopy is a helper driver designed for
the xen-front/back use-case, so this is why it only has a DRM PRIME API


2. xen-zcopy doesn't seem to have dma-buf synchronization between two VMs,
while (as danvet called it, remote dmabuf api sharing) hyper_dmabuf sends
out a synchronization message to the exporting VM for synchronization.

This is true. Again, this is because of the use-cases it covers.
But having synchronization for a generic solution seems to be a good idea.


3. 1-level references - when using grant-table for sharing pages, there will
be the same # of refs (each 8 bytes)

To be precise, grant ref is 4 bytes

as # of shared pages, which is passed to
the userspace to be shared with importing VM in case of xen-zcopy.

The reason for that is that xen-zcopy is a helper driver, e.g.
the grant references come from the display backend [1], which implements
Xen display protocol [2]. So, effectively the backend extracts references
from frontend's requests and passes those to xen-zcopy as an array
of refs.

  Compared
to this, hyper_dmabuf does multiple level addressing to generate only one
reference id that represents all shared pages.
In the protocol [2] only one reference to the gref directory is passed
between VMs (and the gref directory is a singly-linked list of shared
pages containing all of the grefs of the buffer).



4. inter VM messaging (hyper_dmabuf only) - hyper_dmabuf has inter-vm msg
communication defined for dmabuf synchronization and private data (meta
info that Matt Roper mentioned) exchange.

This is true, xen-zcopy has no means for inter VM sync and meta-data,
simply because it doesn't have any code for inter VM exchange in it,
e.g. the inter VM protocol is handled by the backend [1].


5. driver-to-driver notification (hyper_dmabuf only) - the importing VM gets
notified when a new dmabuf is exported from the other VM - a uevent can be
optionally generated when this happens.

6. structure - hyper_dmabuf is targeting a generic solution for
inter-domain dmabuf sharing for most hypervisors, which is why it has two
layers as mattrope mentioned: a front-end that contains the standard API and
a back-end that is specific to the hypervisor.

Again, xen-zcopy is decoupled from inter VM communication

No idea, didn't look at it in detail.

Looks pretty complex from a distant view.  Maybe because it tries to
build a communication framework using dma-bufs instead of a simple
dma-buf passing mechanism.

We started with simple dma-buf sharing but realized there are many
things we need to consider in real use-cases, so we added communication,
notification and dma-buf synchronization, then re-structured it into
front-end and back-end (this made things more complicated..) since Xen
was not our only target. Also, we thought passing the reference for the
buffer (hyper_dmabuf_id) is not secure, so we added a uevent mechanism later.


Yes, I am looking at it now, trying to figure out the full story
and its implementation. BTW, Intel guys were about to share some
test application for hyper-dmabuf, maybe I have missed one.
It could probably better explain the use-cases and the complexity
they have in hyper-dmabuf.

One example is actually in github. If you want take a look at it, please
visit:

https://github.com/downor/linux_hyper_dmabuf_test/tree/xen/simple_export

Thank you, I'll have a look

Like xen-zcopy it seems to depend on the idea that because the hypervisor
manages all memory it is easy for guests to share pages with the help of
the hypervisor.

So, for xen-zcopy we were not trying to make it generic,
it just solves display (dumb) zero-copying use-cases for X

Re: [Qemu-devel] [RfC PATCH] Add udmabuf misc device

2018-04-06 Thread Oleksandr Andrushchenko via Qemu-devel
ackend hypervisor-specific stuff.

I'm not super familiar with xen-zcopy


Let me describe the rationale and some implementation details of the Xen
zero-copy driver I posted recently [1].

The main requirement for us to implement such a helper driver was the ability
to avoid memory copying for large buffers in display use-cases. This is why
we only focused on DRM use-cases, not trying to implement something
generic. This is why the driver is somewhat coupled with the Xen
para-virtualized DRM driver [2] by the Xen para-virtual display protocol [3]
grant references sharing mechanism, e.g. the backend receives an array of Xen
grant references to the frontend's buffer pages. These grant references are
then used to construct a PRIME buffer. The same mechanism is used when the
backend shares a buffer with the frontend, but in the other direction. More
details on the UAPI of the driver are available at [1].

So, when discussing a possibility to share dma-bufs in a generic way I would
also like to have the following considered:

1. We are targeting ARM and one of the major requirements for the buffer
sharing is the ability to allocate physically contiguous buffers, which gets
even more complicated for systems not backed with an IOMMU. So, for some
use-cases it is enough to make the buffers contiguous in terms of IPA and
sometimes those need to be contiguous in terms of PA.
(The use-case is that you use Wayland-DRM/KMS or share the buffer with
the driver implemented with DRM CMA helpers).

2. For Xen we would love to see a UAPI to create a dma-buf from the grant
references provided, so we can use this generic solution to implement
zero-copying without breaking the existing Xen protocols. This can probably
be extended to other hypervisors as well.
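
To make requirement 2 a bit more concrete, such a UAPI could look roughly like
the sketch below. The structure, field and ioctl names here are purely
illustrative assumptions for discussion, not the xen-zcopy UAPI from [1]:

/* Hypothetical sketch only - names, numbers and layout are assumptions. */
#include <linux/types.h>
#include <linux/ioctl.h>

struct xen_dmabuf_from_refs {
        __u32 otherend_id;  /* domain that granted the pages */
        __u32 num_grefs;    /* number of grant references */
        __u64 grefs;        /* userspace pointer to an array of grant_ref_t */
        __s32 fd;           /* out: dma-buf fd backed by the mapped pages */
        __u32 reserved;     /* padding, must be zero */
};

#define XEN_IOCTL_DMABUF_FROM_REFS \
        _IOWR('x', 0x01, struct xen_dmabuf_from_refs)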

Thank you,
Oleksandr Andrushchenko



  and udmabuf, but it sounds like
they're approaching similar problems from slightly different directions,
so we should make sure we can come up with something that satisfies
everyone's requirements.


Matt


On Wed, Mar 14, 2018 at 9:03 AM, Gerd Hoffmann <kra...@redhat.com> wrote:

   Hi,


Either mlock accounting (because it's mlocked de facto), and get_user_pages
won't do that for you.

Or you write the full-blown userptr implementation, including mmu_notifier
support (see i915 or amdgpu), but that also requires Christian König's
latest ->invalidate_mapping RFC for dma-buf (since atm exporting userptr
buffers is a no-go).

I guess I'll look at mlock accounting for starters then.  Easier for
now, and leaves the door open to switch to userptr later as this should
be transparent to userspace.


Known issue:  Driver API isn't complete yet.  Need to add some flags, for
example to support read-only buffers.

dma-buf has no concept of read-only. I don't think we can even enforce
that (not many iommus can enforce this iirc), so pretty much need to
require r/w memory.

Ah, ok.  Just saw the 'write' arg for get_user_pages_fast and figured we
might support that, but if iommus can't handle that anyway it's
pointless indeed.


Cc: David Airlie <airl...@linux.ie>
Cc: Tomeu Vizoso <tomeu.viz...@collabora.com>
Signed-off-by: Gerd Hoffmann <kra...@redhat.com>

btw there's also the hyperdmabuf stuff from the xen folks, but imo their
solution of forwarding the entire dma-buf api is over the top. This here
looks _much_ better, pls cc all the hyperdmabuf people on your next
version.

Fun fact: googling for "hyperdmabuf" found me your mail and nothing else :-o
(Trying "hyper dmabuf" instead worked better then).

Yes, will cc them on the next version.  Not sure it'll help much on xen
though due to the memory management being very different.  Basically xen
owns the memory, not the kernel of the control domain (dom0), so
creating dmabufs for guest memory chunks isn't that simple ...

Also it's not clear whether they really need guest -> guest exports or
guest -> dom0 exports.


Overall I like the idea, but too lazy to review.

Cool.  General comments on the idea were all I was looking for for the
moment.  Spare your review cycles for the next version ;)


Oh, some kselftests for this stuff would be lovely.

I'll look into it.

thanks,
   Gerd

___
dri-devel mailing list
dri-de...@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel



--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

[1] https://patchwork.freedesktop.org/series/40880/
[2] 
https://cgit.freedesktop.org/drm/drm-misc/commit/?id=c575b7eeb89f94356997abd62d6d5a0590e259b7
[3] 
https://elixir.bootlin.com/linux/v4.16-rc7/source/include/xen/interface/io/displif.h




Re: [Qemu-devel] [RfC PATCH] Add udmabuf misc device

2018-04-06 Thread Oleksandr Andrushchenko

On 04/06/2018 02:57 PM, Gerd Hoffmann wrote:

   Hi,


I fail to see any common ground for xen-zcopy and udmabuf ...

Does the above mean you can assume that xen-zcopy and udmabuf
can co-exist as two different solutions?

Well, udmabuf route isn't fully clear yet, but yes.

See also gvt (intel vgpu), where the hypervisor interface is abstracted
away into a separate kernel module even though most of the actual vgpu
emulation code is common.

Thank you for your input, I'm just trying to figure out
which of the three z-copy solutions intersect and how much

And what about hyper-dmabuf?

No idea, didn't look at it in detail.

Looks pretty complex from a distant view.  Maybe because it tries to
build a communication framework using dma-bufs instead of a simple
dma-buf passing mechanism.

Yes, I am looking at it now, trying to figure out the full story
and its implementation. BTW, Intel guys were about to share some
test application for hyper-dmabuf, maybe I have missed one.
It could probably better explain the use-cases and the complexity
they have in hyper-dmabuf.


Like xen-zcopy it seems to depend on the idea that because the hypervisor
manages all memory it is easy for guests to share pages with the help of
the hypervisor.

So, for xen-zcopy we were not trying to make it generic,
it just solves display (dumb) zero-copying use-cases for Xen.
We implemented it as a DRM helper driver because we can't see any
other use-cases as of now.
For example, we also have Xen para-virtualized sound driver, but
its buffer memory usage is not comparable to what display wants
and it works somewhat differently (e.g. there is no "frame done"
event, so one can't tell when the sound buffer can be "flipped").
At the same time, we do not use virtio-gpu, so this could probably
be one more candidate for shared dma-bufs some day.

   Which simply isn't the case on kvm.

hyper-dmabuf and xen-zcopy could maybe share code, or hyper-dmabuf build
on top of xen-zcopy.

Hm, I can imagine that: xen-zcopy could be library code for hyper-dmabuf
in terms of implementing all that page sharing fun in multiple directions,
e.g. Host->Guest, Guest->Host, Guest<->Guest.
But I'll let Matt and Dongwon comment on that.



cheers,
   Gerd


Thank you,
Oleksandr

P.S. Sorry for making your original mail thread discuss things much
broader than your RFC...




Re: [Qemu-devel] [RfC PATCH] Add udmabuf misc device

2018-04-06 Thread Oleksandr Andrushchenko

On 04/06/2018 12:07 PM, Gerd Hoffmann wrote:

I'm not sure we can create something which works on both kvm and xen.
The memory management model is quite different ...


On xen the hypervisor manages all memory.  Guests can allow other guests
to access specific pages (using grant tables).  In theory any guest <=>
guest communication is possible.  In practice it is mostly guest <=> dom0
because guests access their virtual hardware that way.  dom0 is the
privileged guest which owns any hardware not managed by xen itself.

Xen guests can ask the hypervisor to update the mapping of guest
physical pages.  They can balloon down (unmap and free pages).  They can
balloon up (ask the hypervisor to map fresh pages).  They can map pages
exported by other guests using grant tables.  xen-zcopy makes heavy use
of this.  It balloons down to make room in the guest physical address
space, then maps the exported pages there, and finally composes a
dma-buf.


On kvm qemu manages all guest memory.  qemu also has all guest memory
mapped, so a grant-table like mechanism isn't needed to implement
virtual devices.  qemu can decide how it backs memory for the guest.
qemu propagates the guest memory map to the kvm driver in the linux
kernel.  kvm guests have some control over the guest memory map, for
example they can map pci bars wherever they want in their guest physical
address space by programming the base registers accordingly, but unlike
xen guests they can't ask the host to remap individual pages.

Due to qemu having all guest memory mapped, virtual devices are typically
designed to have the guest allocate resources, then notify the host
where they are located.  This is where the udmabuf idea comes from:
Guest tells the host (qemu) where the gem object is, and qemu then can
create a dmabuf backed by those pages to pass it on to other processes
such as the wayland display server.  Possibly even without the guest
explicitly asking for it, i.e. export the framebuffer placed by the
guest in the (virtual) vga pci memory bar as a dma-buf.  And I can imagine
that this is useful outside virtualization too.
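
A minimal sketch of that flow from the host side, following the interface
udmabuf later settled on (a /dev/udmabuf node and a UDMABUF_CREATE ioctl
taking a memfd region; the RFC discussed in this thread may differ in
details):

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/udmabuf.h>

/* Wrap a region of a memfd (here: the memfd qemu uses to back guest RAM,
 * with the gem object living at 'offset') into a dma-buf fd that can be
 * handed to e.g. the wayland compositor. */
static int dmabuf_from_guest_pages(int memfd, uint64_t offset, uint64_t size)
{
    struct udmabuf_create create = {
        .memfd  = memfd,
        .flags  = UDMABUF_FLAGS_CLOEXEC,
        .offset = offset,
        .size   = size,
    };
    int devfd = open("/dev/udmabuf", O_RDWR);
    int buf;

    if (devfd < 0)
        return -1;
    buf = ioctl(devfd, UDMABUF_CREATE, &create);   /* returns the dma-buf fd */
    close(devfd);
    return buf;
}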


I fail to see any common ground for xen-zcopy and udmabuf ...

Does the above mean you can assume that xen-zcopy and udmabuf
can co-exist as two different solutions?
And what about hyper-dmabuf?

Thank you,
Oleksandr



Re: [Qemu-devel] [RfC PATCH] Add udmabuf misc device

2018-04-06 Thread Oleksandr Andrushchenko

On 04/06/2018 12:07 PM, Gerd Hoffmann wrote:

   Hi,


   * The general interface should be able to express sharing from any
 guest:guest, not just guest:host.  Arbitrary G:G sharing might be
 something some hypervisors simply aren't able to support, but the
 userspace API itself shouldn't make assumptions or restrict that.  I
 think ideally the sharing API would include some kind of
 query_targets interface that would return a list of VM's that your
current OS is allowed to share with; that list would depend on the
 policy established by the system integrator, but obviously wouldn't
 include targets that the hypervisor itself wouldn't be capable of
 handling.

Can you give a use-case for this? I mean that the system integrator
is the one who defines which guests/hosts talk to each other,
but querying means that it is possible that VMs have some sort
of discovery mechanism, so they can decide on their own whom
to connect to.

Note that vsock (created by vmware, these days also has a virtio
transport for kvm) started with support for both guest <=> host and
guest <=> guest.  But later on guest <=> guest was dropped.
As far as I know the reasons were (a) lack of use cases and (b) security.

So, I likewise would like to know more details on the use cases you have in
mind here.  Unless we have a compelling use case here I'd suggest to drop the
guest <=> guest requirement as it makes the whole thing a lot more
complex.

This is exactly the use-case we have: in our setup Dom0 doesn't
own any HW at all and all the HW is passed into a dedicated
driver domain (DomD) which is still a guest domain.
Then, buffers are shared between two guests, for example,
DomD and DomA (Android guest)

   * The sharing API could be used to share multiple kinds of content in a
 single system.  The sharing sink driver running in the content
 producer's VM should accept some additional metadata that will be
 passed over to the target VM as well.

Not sure this should be part of hyper-dmabuf.  A dma-buf is nothing but
a block of data, period.  Therefore protocols with dma-buf support
(wayland for example) typically already send over metadata describing
the content, so duplicating that in hyper-dmabuf looks pointless.


1. We are targeting ARM and one of the major requirements for the buffer
sharing is the ability to allocate physically contiguous buffers, which gets
even more complicated for systems not backed with an IOMMU. So, for some
use-cases it is enough to make the buffers contiguous in terms of IPA and
sometimes those need to be contiguous in terms of PA.

Which pretty much implies the host must do the allocation.


2. For Xen we would love to see UAPI to create a dma-buf from grant
references provided, so we can use this generic solution to implement
zero-copying without breaking the existing Xen protocols. This can
probably be extended to other hypervisors as well.

I'm not sure we can create something which works on both kvm and xen.
The memory management model is quite different ...


On xen the hypervisor manages all memory.  Guests can allow other guests
to access specific pages (using grant tables).  In theory any guest <=>
guest communication is possible.  In practice it is mostly guest <=> dom0
because guests access their virtual hardware that way.  dom0 is the
privileged guest which owns any hardware not managed by xen itself.

Please see above for our setup with DomD and Dom0 being
a generic ARMv8 domain, no HW

Xen guests can ask the hypervisor to update the mapping of guest
physical pages.  They can balloon down (unmap and free pages).  They can
balloon up (ask the hypervisor to map fresh pages).  They can map pages
exported by other guests using grant tables.  xen-zcopy makes heavy use
of this.  It balloons down to make room in the guest physical address
space, then maps the exported pages there, and finally composes a
dma-buf.

This is what it does
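
A condensed sketch of that sequence with the in-kernel Xen APIs (not the
actual xen-zcopy code; error handling and the unmap/free path are omitted, and
refs/pages/map_ops are assumed to be provided by the caller):

#include <linux/mm.h>
#include <xen/balloon.h>
#include <xen/grant_table.h>

static int map_remote_pages(domid_t otherend, grant_ref_t *refs, int n,
                            struct page **pages,
                            struct gnttab_map_grant_ref *map_ops)
{
        int i, ret;

        /* 1. Balloon down: free up guest physical address space. */
        ret = alloc_xenballooned_pages(n, pages);
        if (ret)
                return ret;

        /* 2. Map the pages granted by the other domain into those slots. */
        for (i = 0; i < n; i++)
                gnttab_set_map_op(&map_ops[i],
                                  (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i])),
                                  GNTMAP_host_map, refs[i], otherend);
        ret = gnttab_map_refs(map_ops, NULL, pages, n);

        /* 3. On success pages[] can back an sg_table and a dma-buf export. */
        return ret;
}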


On kvm qemu manages all guest memory.  qemu also has all guest memory
mapped, so a grant-table like mechanism isn't needed to implement
virtual devices.  qemu can decide how it backs memory for the guest.
qemu propagates the guest memory map to the kvm driver in the linux
kernel.  kvm guests have some control over the guest memory map, for
example they can map pci bars wherever they want in their guest physical
address space by programming the base registers accordingly, but unlike
xen guests they can't ask the host to remap individual pages.

Due to qemu having all guest memory mapped, virtual devices are typically
designed to have the guest allocate resources, then notify the host
where they are located.  This is where the udmabuf idea comes from:
Guest tells the host (qemu) where the gem object is, and qemu then can
create a dmabuf backed by those pages to pass it on to other processes
such as the wayland display server.  Possibly even without the guest
explicitly asking for it, i.e. export the framebuffer placed by the

Re: [Qemu-devel] [Xen-devel] [PATCH] configure: introduce --enable-xen-fb-backend

2017-04-20 Thread Oleksandr Andrushchenko

On 04/14/2017 08:52 PM, Stefano Stabellini wrote:

On Fri, 14 Apr 2017, Juergen Gross wrote:

On 14/04/17 08:06, Oleksandr Andrushchenko wrote:

On 04/14/2017 03:12 AM, Stefano Stabellini wrote:

On Tue, 11 Apr 2017, Oleksandr Andrushchenko wrote:

From: Oleksandr Andrushchenko <oleksandr_andrushche...@epam.com>

For some use cases when Xen framebuffer/input backend
is not a part of Qemu it is required to disable it,
because of conflicting access to input/display devices.
Introduce additional configuration option for explicit
input/display control.

In these cases when you don't want xenfb, why don't you just remove
"vfb" from the xl config file? QEMU only starts the xenfb backend when
requested by the toolstack.

Is it because you have an alternative xenfb backend? If so, is it really
fully xenfb compatible, or is it a different protocol? If it is a
different protocol, I suggest you rename your frontend/backend PV device
name to something different from "vfb".


Well, the offending part is vkbd actually (for multi-touch
we run our own user-space backend which supports
kbd/ptr/mtouch), but vfb and vkbd are the same backend
in QEMU. So, I am ok with vfb, but just want vkbd off.
So, there are 2 options:
1. At compile time remove vkbd and still allow vfb
2. Remove xenfb completely, if acceptable (this is my case)

What about adding a Xenstore entry for the backend type and letting qemu test
for it being not present or containing "qemu"?

sounds reasonable

That is what we do for the console, using the xenstore node "type". QEMU
is "ioemu" while xenconsoled is "xenconsoled". Weirdly, instead of a
backend node, it is a read-only frontend node, see
tools/libxl/libxl_console.c:libxl__device_console_add.

Oleksandr, I am sorry to feature-creep this simple patch, but I think
Juergen is right. But we cannot do it just for one protocol. We need to
introduce a generic way to enable/disable backends in QEMU. Using a
xenstore node is OK.

agree


We could do exactly the same as the PV console, thus "type" = "ioemu",
read-only, under the frontend xenstore directory. Or we could introduce
new nodes. I would probably go for "backend-type" = "qemu" under the
backend xenstore directory. I don't have a strong opinion about this. In
the example below I'll use the PV console convention.

For starters:

* libxl needs to write the "type" node to xenstore for *all* protocols.
   The "type" is not yet configurable.
* qemu reads them for all backends, proceeds if "type" = "ioemu"

These should be two simple patches. Stage 2:

* we add options in the xl config file to configure any backend, libxl
   sets "type" accordingly (Maybe not *any*, but vif, vkbd, vfb could all
   have a "type". It is OK if you only add an option for vkbd.)
* non-QEMU backends, in particular Linux backends, also read the "type"
   node and proceed if it's "linux"

Does this sound OK to you?
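
(For illustration, a non-QEMU backend checking such a node before claiming a
device could look like the sketch below; the xenstore path layout and the
"linux" value follow the convention proposed above and are assumptions, not an
existing interface:)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <xenstore.h>

/* Returns non-zero if this backend should handle the device, i.e. the
 * frontend's "type" node is missing (legacy) or matches our own name. */
static int backend_is_mine(int domid, const char *dev, const char *me)
{
    struct xs_handle *xs = xs_open(0);
    char path[128], *val;
    unsigned int len;
    int mine;

    if (!xs)
        return 0;
    snprintf(path, sizeof(path),
             "/local/domain/%d/device/%s/0/type", domid, dev);
    val = xs_read(xs, XBT_NULL, path, &len);
    mine = !val || !strcmp(val, me);
    free(val);
    xs_close(xs);
    return mine;
}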

For the first take it does, but I'll get back to it a bit later.
Actually the purpose of the change was to find a way
we can live with backends implemented in QEMU and user-space
and how they can co-exist.

Thank you,
Oleksandr



Re: [Qemu-devel] [Xen-devel][PATCH] configure: introduce --enable-xen-fb-backend

2017-04-14 Thread Oleksandr Andrushchenko

On 04/14/2017 03:12 AM, Stefano Stabellini wrote:

On Tue, 11 Apr 2017, Oleksandr Andrushchenko wrote:

From: Oleksandr Andrushchenko <oleksandr_andrushche...@epam.com>

For some use cases when Xen framebuffer/input backend
is not a part of Qemu it is required to disable it,
because of conflicting access to input/display devices.
Introduce additional configuration option for explicit
input/display control.

In these cases when you don't want xenfb, why don't you just remove
"vfb" from the xl config file? QEMU only starts the xenfb backend when
requested by the toolstack.

Is it because you have an alternative xenfb backend? If so, is it really
fully xenfb compatible, or is it a different protocol? If it is a
different protocol, I suggest you rename your frontend/backend PV device
name to something different from "vfb".


Well, the offending part is vkbd actually (for multi-touch
we run our own user-space backend which supports
kbd/ptr/mtouch), but vfb and vkbd are the same backend
in QEMU. So, I am ok with vfb, but just want vkbd off.
So, there are 2 options:
1. At compile time remove vkbd and still allow vfb
2. Remove xenfb completely, if acceptable (this is my case)

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushche...@epam.com>
---
  configure | 18 ++
  hw/display/Makefile.objs  |  2 +-
  hw/xen/xen_backend.c  |  2 ++
  hw/xenpv/xen_machine_pv.c |  4 
  4 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/configure b/configure
index 476210b1b93f..b805cb908f03 100755
--- a/configure
+++ b/configure
@@ -220,6 +220,7 @@ xen=""
  xen_ctrl_version=""
  xen_pv_domain_build="no"
  xen_pci_passthrough=""
+xen_fb_backend=""
  linux_aio=""
  cap_ng=""
  attr=""
@@ -909,6 +910,10 @@ for opt do
;;
--enable-xen-pv-domain-build) xen_pv_domain_build="yes"
;;
+  --disable-xen-fb-backend) xen_fb_backend="no"
+  ;;
+  --enable-xen-fb-backend) xen_fb_backend="yes"
+  ;;
--disable-brlapi) brlapi="no"
;;
--enable-brlapi) brlapi="yes"
@@ -1368,6 +1373,7 @@ disabled with --disable-FEATURE, default is enabled if 
available:
virtfs  VirtFS
xen xen backend driver support
xen-pci-passthrough
+  xen-fb-backend  framebuffer/input backend support
brlapi  BrlAPI (Braile)
curlcurl connectivity
fdt fdt device tree
@@ -2213,6 +2219,15 @@ if test "$xen_pv_domain_build" = "yes" &&
   "which requires Xen support."
  fi
  
+if test "$xen_fb_backend" != "no"; then

+   if test "$xen" = "yes"; then
+ xen_fb_backend=yes
+   else
+ error_exit "User requested feature Xen framebufer backend support" \
+" but this feature requires Xen support."
+   fi
+fi
+
  ##
  # Sparse probe
  if test "$sparse" != "no" ; then
@@ -5444,6 +5459,9 @@ if test "$xen" = "yes" ; then
if test "$xen_pv_domain_build" = "yes" ; then
  echo "CONFIG_XEN_PV_DOMAIN_BUILD=y" >> $config_host_mak
fi
+  if test "$xen_fb_backend" = "yes" ; then
+echo "CONFIG_XEN_FB_BACKEND=y" >> $config_host_mak
+  fi
  fi
  if test "$linux_aio" = "yes" ; then
echo "CONFIG_LINUX_AIO=y" >> $config_host_mak
diff --git a/hw/display/Makefile.objs b/hw/display/Makefile.objs
index 063889beaf4a..f5ec97ed4f48 100644
--- a/hw/display/Makefile.objs
+++ b/hw/display/Makefile.objs
@@ -5,7 +5,7 @@ common-obj-$(CONFIG_JAZZ_LED) += jazz_led.o
  common-obj-$(CONFIG_PL110) += pl110.o
  common-obj-$(CONFIG_SSD0303) += ssd0303.o
  common-obj-$(CONFIG_SSD0323) += ssd0323.o
-common-obj-$(CONFIG_XEN_BACKEND) += xenfb.o
+common-obj-$(CONFIG_XEN_FB_BACKEND) += xenfb.o
  
  common-obj-$(CONFIG_VGA_PCI) += vga-pci.o

  common-obj-$(CONFIG_VGA_ISA) += vga-isa.o
diff --git a/hw/xen/xen_backend.c b/hw/xen/xen_backend.c
index d1190041ae12..5146cbba6ca5 100644
--- a/hw/xen/xen_backend.c
+++ b/hw/xen/xen_backend.c
@@ -582,7 +582,9 @@ void xen_be_register_common(void)
  xen_set_dynamic_sysbus();
  
  xen_be_register("console", _console_ops);

+#ifdef CONFIG_XEN_FB_BACKEND
  xen_be_register("vkbd", _kbdmouse_ops);
+#endif
  xen_be_register("qdisk", _blkdev_ops);
  #ifdef CONFIG_USB_LIBUSB
  xen_be_register("qusb", _usb_ops);
diff --git a/hw/xenpv/xen_machine_pv.c b/hw/xenpv/xen_machine_pv.c
index 79aef4ecc37b..b731344c3f0a 100644
--- a/hw/xenpv/xen_machine_pv.c
+++ b/hw/xenpv/xen_machine_pv.c
@@ -68,7 +68,9 @@ static void xen_init_pv(MachineState *machine)
  }
  
  xen_be_register_common();

+#ifdef CONFIG_

[Qemu-devel] [Xen-devel][PATCH] configure: introduce --enable-xen-fb-backend

2017-04-11 Thread Oleksandr Andrushchenko
From: Oleksandr Andrushchenko <oleksandr_andrushche...@epam.com>

For some use cases when Xen framebuffer/input backend
is not a part of Qemu it is required to disable it,
because of conflicting access to input/display devices.
Introduce additional configuration option for explicit
input/display control.

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushche...@epam.com>
---
 configure | 18 ++
 hw/display/Makefile.objs  |  2 +-
 hw/xen/xen_backend.c  |  2 ++
 hw/xenpv/xen_machine_pv.c |  4 
 4 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/configure b/configure
index 476210b1b93f..b805cb908f03 100755
--- a/configure
+++ b/configure
@@ -220,6 +220,7 @@ xen=""
 xen_ctrl_version=""
 xen_pv_domain_build="no"
 xen_pci_passthrough=""
+xen_fb_backend=""
 linux_aio=""
 cap_ng=""
 attr=""
@@ -909,6 +910,10 @@ for opt do
   ;;
   --enable-xen-pv-domain-build) xen_pv_domain_build="yes"
   ;;
+  --disable-xen-fb-backend) xen_fb_backend="no"
+  ;;
+  --enable-xen-fb-backend) xen_fb_backend="yes"
+  ;;
   --disable-brlapi) brlapi="no"
   ;;
   --enable-brlapi) brlapi="yes"
@@ -1368,6 +1373,7 @@ disabled with --disable-FEATURE, default is enabled if 
available:
   virtfs  VirtFS
   xen xen backend driver support
   xen-pci-passthrough
+  xen-fb-backend  framebuffer/input backend support
   brlapi  BrlAPI (Braile)
   curlcurl connectivity
   fdt fdt device tree
@@ -2213,6 +2219,15 @@ if test "$xen_pv_domain_build" = "yes" &&
   "which requires Xen support."
 fi
 
+if test "$xen_fb_backend" != "no"; then
+   if test "$xen" = "yes"; then
+ xen_fb_backend=yes
+   else
+ error_exit "User requested feature Xen framebufer backend support" \
+" but this feature requires Xen support."
+   fi
+fi
+
 ##
 # Sparse probe
 if test "$sparse" != "no" ; then
@@ -5444,6 +5459,9 @@ if test "$xen" = "yes" ; then
   if test "$xen_pv_domain_build" = "yes" ; then
 echo "CONFIG_XEN_PV_DOMAIN_BUILD=y" >> $config_host_mak
   fi
+  if test "$xen_fb_backend" = "yes" ; then
+echo "CONFIG_XEN_FB_BACKEND=y" >> $config_host_mak
+  fi
 fi
 if test "$linux_aio" = "yes" ; then
   echo "CONFIG_LINUX_AIO=y" >> $config_host_mak
diff --git a/hw/display/Makefile.objs b/hw/display/Makefile.objs
index 063889beaf4a..f5ec97ed4f48 100644
--- a/hw/display/Makefile.objs
+++ b/hw/display/Makefile.objs
@@ -5,7 +5,7 @@ common-obj-$(CONFIG_JAZZ_LED) += jazz_led.o
 common-obj-$(CONFIG_PL110) += pl110.o
 common-obj-$(CONFIG_SSD0303) += ssd0303.o
 common-obj-$(CONFIG_SSD0323) += ssd0323.o
-common-obj-$(CONFIG_XEN_BACKEND) += xenfb.o
+common-obj-$(CONFIG_XEN_FB_BACKEND) += xenfb.o
 
 common-obj-$(CONFIG_VGA_PCI) += vga-pci.o
 common-obj-$(CONFIG_VGA_ISA) += vga-isa.o
diff --git a/hw/xen/xen_backend.c b/hw/xen/xen_backend.c
index d1190041ae12..5146cbba6ca5 100644
--- a/hw/xen/xen_backend.c
+++ b/hw/xen/xen_backend.c
@@ -582,7 +582,9 @@ void xen_be_register_common(void)
 xen_set_dynamic_sysbus();
 
 xen_be_register("console", _console_ops);
+#ifdef CONFIG_XEN_FB_BACKEND
 xen_be_register("vkbd", _kbdmouse_ops);
+#endif
 xen_be_register("qdisk", _blkdev_ops);
 #ifdef CONFIG_USB_LIBUSB
 xen_be_register("qusb", _usb_ops);
diff --git a/hw/xenpv/xen_machine_pv.c b/hw/xenpv/xen_machine_pv.c
index 79aef4ecc37b..b731344c3f0a 100644
--- a/hw/xenpv/xen_machine_pv.c
+++ b/hw/xenpv/xen_machine_pv.c
@@ -68,7 +68,9 @@ static void xen_init_pv(MachineState *machine)
 }
 
 xen_be_register_common();
+#ifdef CONFIG_XEN_FB_BACKEND
 xen_be_register("vfb", _framebuffer_ops);
+#endif
 xen_be_register("qnic", _netdev_ops);
 
 /* configure framebuffer */
@@ -95,8 +97,10 @@ static void xen_init_pv(MachineState *machine)
 /* config cleanup hook */
 atexit(xen_config_cleanup);
 
+#ifdef CONFIG_XEN_FB_BACKEND
 /* setup framebuffer */
 xen_init_display(xen_domid);
+#endif
 }
 
 static void xenpv_machine_init(MachineClass *mc)
-- 
2.7.4