Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-11-12 Thread Ian Molton
On 10/11/10 17:47, Anthony Liguori wrote:
 On 11/10/2010 11:22 AM, Ian Molton wrote:
 Ping ?

 I think the best way forward is to post patches.

I posted links to the git trees. I can post patches, but they are 
*large*. Do you really want me to post them?

 To summarize what I was trying to express in the thread, I think this is
 not the right long term architecture but am not opposed to it as a short
 term solution. I think having a new virtio device is a bad design choice
 but am not totally opposed to it.

Ok! (I agree (that this should be a short term solution) :) )

 If you want to go for the path of integration, you're going to have to fix
 all of the coding style issues and make the code fit into QEMU. Dropping
 a bunch of junk into target-i386/ is not making the code fit into QEMU.

I agree. How about hw/gl for the renderer and hw/ for the virtio module?

 If you post just what you have now in patch form, I can try to provide
 more concrete advice ignoring the coding style problems.

I can post patches, although I don't think LKML would appreciate the 
volume! I can post them to the qemu list if you'd like.

-Ian


Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-11-12 Thread Anthony Liguori
On 11/12/2010 06:14 AM, Ian Molton wrote:
 On 10/11/10 17:47, Anthony Liguori wrote:
 On 11/10/2010 11:22 AM, Ian Molton wrote:
 Ping ?

 I think the best way forward is to post patches.

 I posted links to the git trees. I can post patches, but they are 
 *large*. Do you really want me to post them?

Yes, and they have to be split up into something reviewable.


 To summarize what I was trying to express in the thread, I think this is
 not the right long term architecture but am not opposed to it as a short
 term solution. I think having a new virtio device is a bad design choice
 but am not totally opposed to it.

 Ok! (I agree (that this should be a short term solution) :) )

 If you want to go for the path of integration, you're going to have to fix
 all of the coding style issues and make the code fit into QEMU. Dropping
 a bunch of junk into target-i386/ is not making the code fit into QEMU.

 I agree. How about hw/gl for the renderer and hw/ for the virtio module?

That would be fine.

 If you post just what you have now in patch form, I can try to provide
 more concrete advice ignoring the coding style problems.

 I can post patches, although I don't think LKML would appreciate the 
 volume! I can post them to the qemu list if you'd like.

Yes, qemu is where I was suggesting you post them.

Regards,

Anthony Liguori

 -Ian



Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-11-10 Thread Ian Molton
Ping ?

On 05/11/10 18:05, Ian Molton wrote:
 On 03/11/10 18:17, Anthony Liguori wrote:
 On 11/03/2010 01:03 PM, Ian Molton wrote:

 Why is it better than using virtio-serial?

 For one thing, it enforces the PID in the kernel, so the guest's processes
 can't screw each other over by forging the PID passed to qemu.

 My current patch touches a tiny part of the qemu sources. It works
 today.

 But it's not at all mergeable in the current form. If you want to do the
 work of getting it into a mergeable state (cleaning up the coding style,
 moving it to hw/, etc.) then I'm willing to consider it. But I don't
 think a custom virtio transport is the right thing to do here.

 Hm, I thought I'd indented everything in qemu's odd way... Is there a
 coding-style document or a checkpatch-like tool for qemu?

 I'm happy to make the code meet qemu's coding style.

 However, if you want something that Just Works with the least amount of
 code possible, just split it into a separate process and we can stick it
 in a contrib/ directory or something.

 I don't see what splitting it into a separate process buys us, given we
 still need the virtio-gl driver in order to enforce the PID. The virtio
 driver is probably quite a bit more efficient at marshalling the data
 too, given that it was designed alongside the userspace code.

 I
 think we can consider integrating it into QEMU (or at least simplifying
 the execution of the backend) but integrating into QEMU is going to
 require an awful lot of the existing code to be rewritten.

 Why? Aside from coding style, what's massively wrong with it that's going
 to demand a total rewrite?

 Keeping it
 separate has the advantage of allowing something to Just Work as an
 interim solution as we wait for proper support in Spice.

 Why does keeping it separate make life easier? qemu is in a git repo;
 when the time comes, if it really is a total replacement, git-rm will
 nuke it all.

 I don't know why you think integrating it into qemu is hard? I've
 already done it.

 Adding a file that happens to compile as part of qemu even though it
 doesn't actually integrate with qemu in any meaningful way is not
 integrating. That's just build system manipulation.

 Uh? Either it compiles and works as part of qemu (which it does) or it
 doesn't. How is that not integrated? I've added it to the configure
 script too.

 -Ian




Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-11-10 Thread Anthony Liguori
On 11/10/2010 11:22 AM, Ian Molton wrote:
 Ping ?

I think the best way forward is to post patches.

To summarize what I was trying to express in the thread, I think this is 
not the right long term architecture but am not opposed to it as a short 
term solution.  I think having a new virtio device is a bad design 
choice but am not totally opposed to it.

My advice is that using virtio-serial + an external tool is probably the 
least amount of work to get something working and usable with QEMU.  If 
you want to go for the path of integration, you're going to have to fix 
all of the coding style issues and make the code fit into QEMU.  
Dropping a bunch of junk into target-i386/ is not making the code fit 
into QEMU.

If you post just what you have now in patch form, I can try to provide 
more concrete advice ignoring the coding style problems.

Regards,

Anthony Liguori



Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-11-05 Thread Ian Molton
On 04/11/10 09:13, Alon Levy wrote:
 On Wed, Nov 03, 2010 at 06:03:50PM +, Ian Molton wrote:
 On 01/11/10 13:28, Anthony Liguori wrote:
 On 11/01/2010 06:53 AM, Alon Levy wrote:

 While we (speaking as part of the SPICE developers) want to have the same
 support in our virtual GPU for 3d as we have for 2d, we just don't at
 this point in time.

 Would it be helpful to you to have /something/ that works in the
 interim? I'm happy to work with you guys so that we don't need to
 reinvent the wheel ;-)

 In case it wasn't clear, I think putting virtio-gl in is a good idea, exactly 
 because it works right now.

Thanks :)

-Ian


Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-11-05 Thread Ian Molton
On 03/11/10 18:17, Anthony Liguori wrote:
 On 11/03/2010 01:03 PM, Ian Molton wrote:

 Why is it better than using virtio-serial?

For one thing, it enforces the PID in the kernel, so the guest's processes 
can't screw each other over by forging the PID passed to qemu.

 My current patch touches a tiny part of the qemu sources. It works today.

 But it's not at all mergeable in the current form. If you want to do the
 work of getting it into a mergeable state (cleaning up the coding style,
 moving it to hw/, etc.) then I'm willing to consider it. But I don't
 think a custom virtio transport is the right thing to do here.

Hm, I thought I'd indented everything in qemu's odd way... Is there a 
coding-style document or a checkpatch-like tool for qemu?

I'm happy to make the code meet qemu's coding style.

 However, if you want something that Just Works with the least amount of
 code possible, just split it into a separate process and we can stick it
 in a contrib/ directory or something.

I don't see what splitting it into a separate process buys us, given we 
still need the virtio-gl driver in order to enforce the PID. The virtio 
driver is probably quite a bit more efficient at marshalling the data 
too, given that it was designed alongside the userspace code.

 I
 think we can consider integrating it into QEMU (or at least simplifying
 the execution of the backend) but integrating into QEMU is going to
 require an awful lot of the existing code to be rewritten.

Why? Aside from coding style, what's massively wrong with it that's going 
to demand a total rewrite?

 Keeping it
 separate has the advantage of allowing something to Just Work as an
 interim solution as we wait for proper support in Spice.

Why does keeping it separate make life easier? qemu is in a git repo; 
when the time comes, if it really is a total replacement, git-rm will 
nuke it all.

 I don't know why you think integrating it into qemu is hard? I've
 already done it.

 Adding a file that happens to compile as part of qemu even though it
 doesn't actually integrate with qemu in any meaningful way is not
 integrating. That's just build system manipulation.

Uh? Either it compiles and works as part of qemu (which it does) or it 
doesn't. How is that not integrated? I've added it to the configure 
script too.

-Ian


Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-11-04 Thread Alon Levy
On Wed, Nov 03, 2010 at 06:03:50PM +, Ian Molton wrote:
 On 01/11/10 13:28, Anthony Liguori wrote:
 On 11/01/2010 06:53 AM, Alon Levy wrote:
 
 While we (speaking as part of the SPICE developers) want to have the same
 support in our virtual GPU for 3d as we have for 2d, we just don't at
 this point in time.
 
 Would it be helpful to you to have /something/ that works in the
 interim? I'm happy to work with you guys so that we don't need to
 reinvent the wheel ;-)
 

In case it wasn't clear, I think putting virtio-gl in is a good idea, exactly 
because it works right now.

[Snip]


Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-11-03 Thread Ian Molton
On 01/11/10 15:57, Anthony Liguori wrote:

 It very much is. It supports fully visually integrated rendering (no
 overlay windows) and even compositing GL window managers work fine,
 even if running 3D apps under them.

 Does the kernel track userspace pid and pass that information to qemu?

Yes. And the qemu code tracks the PIDs and keeps multiple queues (one 
per pid).
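
A minimal sketch of what that per-PID queueing could look like on the QEMU side; the GLQueue structure and gl_queue_for_pid() are illustrative names, not taken from the actual patch (error handling omitted):

#include <stdint.h>
#include <stdlib.h>

/* Illustrative only: one command queue per guest PID, looked up (or created)
 * when a buffer arrives over the virtio ring. */
typedef struct GLQueue {
    int pid;              /* guest process id, stamped by the kernel driver */
    uint8_t *buf;         /* marshalled GL commands awaiting rendering */
    size_t len;
    struct GLQueue *next;
} GLQueue;

static GLQueue *queues;

static GLQueue *gl_queue_for_pid(int pid)
{
    GLQueue *q;

    for (q = queues; q; q = q->next) {
        if (q->pid == pid) {
            return q;     /* existing queue for this guest process */
        }
    }
    q = calloc(1, sizeof(*q));   /* first batch seen from this process */
    q->pid = pid;
    q->next = queues;
    queues = q;
    return q;
}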

-Ian


Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-11-03 Thread Ian Molton
On 29/10/10 12:18, Rusty Russell wrote:
 On Wed, 27 Oct 2010 11:30:31 pm Ian Molton wrote:
 On 19/10/10 11:39, Avi Kivity wrote:
 On 10/19/2010 12:31 PM, Ian Molton wrote:

 2. should start with a patch to the virtio-pci spec to document what
 you're doing

 Where can I find that spec?

 http://ozlabs.org/~rusty/virtio-spec/

 Ok, but I'm not patching that until there's been some review.

 Fair enough; it's a bit of a PITA to patch, so it makes sense to get the
 details nailed down first.

I thought so :)

 Fixed - updated patch tested and attached.

 OK. FWIW, I think this is an awesome idea.  I understand others are skeptical,
 but this seems simple and if it works and you're happy to maintain it I'm
 happy to let you do it :)

Awesome, thanks Rusty :)


Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-11-03 Thread Ian Molton
On 01/11/10 13:28, Anthony Liguori wrote:
 On 11/01/2010 06:53 AM, Alon Levy wrote:

 While we (speaking as part of the SPICE developers) want to have the same
 support in our virtual GPU for 3d as we have for 2d, we just don't at
 this point of time.

Would it be helpful to you to have /something/ that works in the 
interim? I'm happy to work with you guys so that we don't need to 
reinvent the wheel ;-)

 Yes, but I think the point is that there are two general approaches to
 supporting 3d that are being proposed. One approach is an RPC layer
 at the OpenGL level which essentially passes through the host OpenGL
 stack. That's what virtio-gl is. This has existed for quite a while and
 there are multiple transports for it.

Well, sort of. This version is heavily modified and only the virtio 
transport is supported in it. It's quite a large code cleanup.

 It supports serial ports, TCP sockets, a custom ISA extension for x86,

The custom ISA idea is cut too since it fails to play nice with KVM. 
Virtio-gl offers better security too.

 Another approach would be to have a virtual GPU and to implement
 GPU-level commands for 3d. I have been repeatedly told that much of the
 complexity of Spice is absolutely needed for 3d and that that's a major
 part of the design.

I've not seen any implementations of this type that are even close to 
usable.

 GPU-level support for 3d operations has a number of advantages mainly
 that it's more reasonably portable to things like Windows since the 3d
 commands can be a superset of both OpenGL and Direct3d.

Agreed.

 Also, Spice has an abstraction layer that doesn't simply passthrough
 graphics commands, but translates/sanitizes them first. That's another
 advantage over OpenGL passthrough.

I'm not sure that that's actually needed if you are careful about what 
commands you implement on the virtual GPU (and how).

 Without a layer to sanitize commands,
 a guest can do funky things with the host or other guests.

It shouldn't be possible for the guest to hurt other guests' actual GL 
rendering (or the host's), as each PID is handed a context (or contexts) 
of its own. It is possible for the guest to cause the host problems, 
though; this is a design flaw in the whole idea of GL RPC. It *isn't* 
reasonable to make it secure, but it certainly is useful right now since 
there's no alternative available.

 I think a Spice-like approach is the best thing long term. In the short
 term, I think doing the GL marshalling over virtio-serial makes a ton of
 sense since the kernel driver is already present upstream. It exists
 exactly for things like this.

The virtio driver enforces the PID field and understands the packet 
format used. It's better than using serial. It's also just one driver, 
which doesn't have any special interdependencies and can be extended or 
got rid of in future if and when better things come along.
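
The packet format referred to here is the virtio_gl_header in the patch at the end of this thread, and the enforcement is simply that put_data() overwrites the header's pid field with the value the kernel recorded for the file descriptor. A sketch of the assumed open() handler follows; that part of the patch is not quoted in this thread, so the details are illustrative:

/* Assumed open() handler, shown for illustration only: record the opener's
 * thread-group id once.  put_data() in the quoted patch then does
 * "header->pid = gldata->pid;" on every transfer, so a guest process cannot
 * forge another process's PID no matter what it writes into the buffer. */
static int glmem_open(struct inode *inode, struct file *file)
{
	struct virtio_gl_data *gldata = kzalloc(sizeof(*gldata), GFP_KERNEL);

	if (!gldata)
		return -ENOMEM;

	gldata->pid = current->tgid;
	init_completion(&gldata->done);
	file->private_data = gldata;
	return 0;
}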

 In the very, very short term, I think an external backend to QEMU also
 makes a lot of sense because that's something that Just Works today.

Who's written that? The 2007 patch I've been working on and updating 
simply fails to work altogether without huge alterations on current qemu.

My current patch touches a tiny part of the qemu sources. It works today.

 I
 think we can consider integrating it into QEMU (or at least simplifying
 the execution of the backend) but integrating into QEMU is going to
 require an awful lot of the existing code to be rewritten. Keeping it
 separate has the advantage of allowing something to Just Work as an
 interim solution as we wait for proper support in Spice.

I don't know why you think integrating it into qemu is hard? I've already 
done it. I added one virtio driver and a separate offscreen renderer. It 
touches the qemu code in *one* place. There should be no need to rewrite 
anything.


Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-11-03 Thread Anthony Liguori
On 11/03/2010 01:03 PM, Ian Molton wrote:

 The virtio driver enforces the PID field and understands the packet 
 format used. It's better than using serial. It's also just one driver, 
 which doesn't have any special interdependencies and can be extended or 
 got rid of in future if and when better things come along.

Why is it better than using virtio-serial?


 In the very, very short term, I think an external backend to QEMU also
 makes a lot of sense because that's something that Just Works today.

 Who's written that? The 2007 patch I've been working on and updating 
 simply fails to work altogether without huge alterations on current qemu.

 My current patch touches a tiny part of the qemu sources. It works today.

But it's not at all mergeable in the current form.  If you want to do the 
work of getting it into a mergeable state (cleaning up the coding style, 
moving it to hw/, etc.) then I'm willing to consider it.  But I don't 
think a custom virtio transport is the right thing to do here.

However, if you want something that Just Works with the least amount of 
code possible, just split it into a separate process and we can stick it 
in a contrib/ directory or something.


 I
 think we can consider integrating it into QEMU (or at least simplifying
 the execution of the backend) but integrating into QEMU is going to
 require an awful lot of the existing code to be rewritten. Keeping it
 separate has the advantage of allowing something to Just Work as an
 interim solution as we wait for proper support in Spice.

 I don't know why you think integrating it into qemu is hard? I've 
 already done it. 

Adding a file that happens to compile as part of qemu even though it 
doesn't actually integrate with qemu in any meaningful way is not 
integrating.  That's just build system manipulation.

Regards,

Anthony Liguori

 I added one virtio driver and a separate offscreen renderer. It 
 touches the qemu code in *one* place. There should be no need to 
 rewrite anything.



Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-11-01 Thread Anthony Liguori
On 11/01/2010 05:42 AM, Avi Kivity wrote:
  On 10/28/2010 03:52 PM, Ian Molton wrote:
 On 28/10/10 15:24, Avi Kivity wrote:
 The caller is intended to block as the host must perform GL rendering
 before allowing the guest's process to continue.

 Why is that?  Can't we pipeline the process?

 No, not really. The guest may call for the scene to be rendered at 
 any time and we have to wait for that to happen before we can return 
 the data to it.

 Waiting for a response is fine, but can't the guest issue a second 
 batch while waiting for the first?

In a threaded application, I think you mean; but all RPCs are dispatched 
holding a global lock, so even within a threaded application, only a 
single GL call will be executed at a time.
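
A sketch of the serialisation being described, assuming a hypothetical guest-side GL library that marshals each call and submits it under one process-wide mutex; the write()+fsync() pair here is a simplification of the driver's mmap-plus-fsync submission path:

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t gl_rpc_lock = PTHREAD_MUTEX_INITIALIZER;

/* Every marshalled GL call goes through one process-wide lock, so even a
 * multi-threaded application issues at most one GL RPC at a time. */
static int gl_rpc_call(int fd, const void *cmd, size_t len)
{
    int ret = -1;

    pthread_mutex_lock(&gl_rpc_lock);
    if (write(fd, cmd, len) == (ssize_t)len)  /* hand the batch to the device */
        ret = fsync(fd);                      /* block until the host has rendered */
    pthread_mutex_unlock(&gl_rpc_lock);
    return ret;
}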

The other scenario would be multiple applications trying to use GL but 
AFAICT, this is not supported in the current model.

Regards,

Anthony Liguori




Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-11-01 Thread Anthony Liguori
On 11/01/2010 06:53 AM, Alon Levy wrote:
 The alternative proposal is Spice which so far no one has mentioned.
 Right now, Spice seems to be taking the right approach to guest 3d
 support.

  
 While we (speaking as part of the SPICE developers) want to have the same
 support in our virtual GPU for 3d as we have for 2d, we just don't at this
 point in time.


Yes, but I think the point is that there are two general approaches to 
supporting 3d that are being proposed.  One approach is an RPC layer 
at the OpenGL level which essentially passes through the host OpenGL 
stack.  That's what virtio-gl is.  This has existed for quite a while 
and there are multiple transports for it.  It supports serial ports, TCP 
sockets, a custom ISA extension for x86, and now a custom virtio transport.

Another approach would be to have a virtual GPU and to implement 
GPU-level commands for 3d.  I have been repeatedly told that much of the 
complexity of Spice is absolutely needed for 3d and that that's a major 
part of the design.

GPU-level support for 3d operations has a number of advantages mainly 
that it's more reasonably portable to things like Windows since the 3d 
commands can be a superset of both OpenGL and Direct3d.

Also, Spice has an abstraction layer that doesn't simply passthrough 
graphics commands, but translates/sanitizes them first.   That's another 
advantage over OpenGL passthrough.  Without a layer to sanitize 
commands, a guest can do funky things with the host or other guests.

I think a Spice-like approach is the best thing long term.  In the short 
term, I think doing the GL marshalling over virtio-serial makes a ton of 
sense since the kernel driver is already present upstream.  It exists 
exactly for things like this.
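
For comparison, the virtio-serial route would need no custom kernel driver on the guest side: the stock virtio_console driver exposes named ports under /dev/virtio-ports/, and the GL marshalling layer would simply open one. A rough sketch; the port name below is made up:

#include <fcntl.h>

/* Hypothetical: open a virtio-serial port provided by the host for GL RPC. */
static int open_gl_port(void)
{
    return open("/dev/virtio-ports/org.example.virtio-gl", O_RDWR);
}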

In the very, very short term, I think an external backend to QEMU also 
makes a lot of sense because that's something that Just Works today.  I 
think we can consider integrating it into QEMU (or at least simplifying 
the execution of the backend) but integrating into QEMU is going to 
require an awful lot of the existing code to be rewritten.  Keeping it 
separate has the advantage of allowing something to Just Work as an 
interim solution as we wait for proper support in Spice.

Regards,

Anthony Liguori


 Regards,

 Anthony Liguori

  
 I understand others are skeptical,
 but this seems simple and if it works and you're happy to maintain it I'm
 happy to let you do it :)

 Cheers,
 Rusty.






Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-11-01 Thread Ian Molton
On 01/11/10 10:42, Avi Kivity wrote:
 No, not really. The guest may call for the scene to be rendered at
 any time and we have to wait for that to happen before we can
 return the data to it.

 Waiting for a response is fine, but can't the guest issue a second
 batch while waiting for the first?

No. Userspace doesn't do or expect that.

Multiple applications work fine, however.

-Ian


Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-11-01 Thread Alon Levy

- Anthony Liguori anth...@codemonkey.ws wrote:

 On 10/29/2010 06:18 AM, Rusty Russell wrote:
  Fixed - updated patch tested and attached.
   
  OK. FWIW, I think this is an awesome idea.
 
 Paravirtual OpenGL or the actual proposed implementation?  Have you 
 looked at the actual code?
 
 If I understand correctly, the implementation is an RPC of opengl
 commands across virtio that are then rendered on the host to an
 offscreen buffer.  The buffer is then sent back to the guest in rendered
 form.
 
 But it's possible to mess around with other guests because the opengl
 commands are not scrubbed. There have been other proposals in the past
 that include a mini JIT that would translate opengl commands into a safe
 form.
 
 Exposing just an opengl RPC transport doesn't seem that useful either.
 It ought to be part of a VGA card in order for it to be integrated with
 the rest of the graphics system without a round trip.
 
 The alternative proposal is Spice which so far no one has mentioned.  
 Right now, Spice seems to be taking the right approach to guest 3d
 support.
 

While we (speaking as part of the SPICE developers) want to have the same
support in our virtual GPU for 3d as we have for 2d, we just don't at this
point in time.

 Regards,
 
 Anthony Liguori
 
  I understand others are skeptical,
  but this seems simple and if it works and you're happy to maintain it I'm
  happy to let you do it :)

  Cheers,
  Rusty.
 
 


Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-10-29 Thread Rusty Russell
On Wed, 27 Oct 2010 11:30:31 pm Ian Molton wrote:
 On 19/10/10 11:39, Avi Kivity wrote:
  On 10/19/2010 12:31 PM, Ian Molton wrote:
 
  2. should start with a patch to the virtio-pci spec to document what
  you're doing
 
  Where can I find that spec?
 
  http://ozlabs.org/~rusty/virtio-spec/
 
 Ok, but I'm not patching that until there's been some review.

Fair enough; it's a bit of a PITA to patch, so it makes sense to get the
details nailed down first.

 There are links to the associated qemu and guest OS changes in my 
 original email.
 
  It doesn't, at present... It could be changed fairly easily without
  breaking anything if that happens though.
 
  The hypervisor and the guest can be changed independently. The driver
  should be coded so that it doesn't depend on hypervisor implementation
  details.
 
 Fixed - updated patch tested and attached.

OK. FWIW, I think this is an awesome idea.  I understand others are skeptical,
but this seems simple and if it works and you're happy to maintain it I'm
happy to let you do it :)

Cheers,
Rusty.


Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-10-29 Thread Ed Tomlinson
On Friday 29 October 2010 07:18:00 Rusty Russell wrote:
 On Wed, 27 Oct 2010 11:30:31 pm Ian Molton wrote:
  On 19/10/10 11:39, Avi Kivity wrote:
   On 10/19/2010 12:31 PM, Ian Molton wrote:
  
   2. should start with a patch to the virtio-pci spec to document what
   you're doing
  
   Where can I find that spec?
  
   http://ozlabs.org/~rusty/virtio-spec/
  
  Ok, but I'm not patching that until there's been some review.
 
 Fair enough; it's a bit of a PITA to patch, so it makes sense to get the
 details nailed down first.
 
  There are links to the associated qemu and guest OS changes in my 
  original email.
  
   It doesn't, at present... It could be changed fairly easily without
   breaking anything if that happens though.
  
   The hypervisor and the guest can be changed independently. The driver
   should be coded so that it doesn't depend on hypervisor implementation
   details.
  
  Fixed - updated patch tested and attached.
 
 OK. FWIW, I think this is an awesome idea.  I understand others are skeptical,
 but this seems simple and if it works and you're happy to maintain it I'm
 happy to let you do it :)

From a kvm user's perspective this is a GREAT idea.  Looks like Ian and 
friends have thought about better solutions down the line and have made 
their code easy to supersede.  If this is really the case I vote to get 
this into qemu/kvm ASAP - software OpenGL can be a real pain.

Thanks
Ed Tomlinson


Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-10-28 Thread Avi Kivity
  On 10/27/2010 03:00 PM, Ian Molton wrote:
 On 19/10/10 11:39, Avi Kivity wrote:
 On 10/19/2010 12:31 PM, Ian Molton wrote:

 2. should start with a patch to the virtio-pci spec to document what
 you're doing

 Where can I find that spec?

 http://ozlabs.org/~rusty/virtio-spec/

 Ok, but I'm not patching that until there's been some review.

Well, I like to review an implementation against a spec.


 There are links to the associated qemu and guest OS changes in my 
 original email.

 It doesn't, at present... It could be changed fairly easily without
 breaking anything if that happens though.

 The hypervisor and the guest can be changed independently. The driver
 should be coded so that it doesn't depend on hypervisor implementation
 details.

 Fixed - updated patch tested and attached.
 +
 + /* Transfer data */
 + if (virtqueue_add_buf(vq, sg_list, o_page, i_page, gldata) >= 0) {
 + virtqueue_kick(vq);
 + /* Chill out until it's done with the buffer. */
 + wait_for_completion(&gldata->done);
 + }
 +

Better, but still unsatisfying.  If the server is busy, the caller would 
block.  I guess it's expected since it's called from ->fsync().  I'm not 
sure whether that's the best interface, perhaps aio_writev is better.
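
For readers following along, put_data() is reached through the driver's fsync hook; a sketch of the assumed wiring (the fops table itself is not quoted in this thread, and the fsync prototype shown is the 2010-era one):

/* Assumed wiring: the guest library mmap()s the transfer buffer and then
 * calls fsync() on the device node; fsync pushes the buffer through the
 * virtqueue and blocks until the host has rendered. */
static int glmem_fsync(struct file *file, int datasync)
{
	struct virtio_gl_data *gldata = to_virtio_gl_data(file);

	return put_data(gldata);
}

static const struct file_operations glmem_fops = {
	.owner = THIS_MODULE,
	.fsync = glmem_fsync,
	/* .open, .mmap and .release omitted */
};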

-- 
error compiling committee.c: too many arguments to function



Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-10-28 Thread Ian Molton
On 28/10/10 10:27, Avi Kivity wrote:
 On 10/27/2010 03:00 PM, Ian Molton wrote:
 On 19/10/10 11:39, Avi Kivity wrote:
 On 10/19/2010 12:31 PM, Ian Molton wrote:

 2. should start with a patch to the virtio-pci spec to document what
 you're doing

 Where can I find that spec?

 http://ozlabs.org/~rusty/virtio-spec/

 Ok, but I'm not patching that until there's been some review.

 Well, I like to review an implementation against a spec.

True, but then all that would prove is that I can write a spec to match 
the code.

The code is proof of concept. The kernel bit is pretty simple, but I'd 
like to get some idea of whether the rest of the code will be accepted 
given that there's not much point in having any one (or two) of these 
components exist without the other.

 Better, but still unsatisfying. If the server is busy, the caller would
 block. I guess it's expected since it's called from ->fsync(). I'm not
 sure whether that's the best interface, perhaps aio_writev is better.

The caller is intended to block as the host must perform GL rendering 
before allowing the guest's process to continue.

The only real bottleneck is that processes will block trying to submit 
data if another process is performing rendering, but that will only be 
solved when the renderer is made multithreaded. The same would happen on 
a real GPU if it had only one queue too.

If you look at the host code, you can see that the data is already 
buffered per-process, in a pretty sensible way. If the renderer itself 
were made a separate thread, then this problem magically disappears (the 
queuing code on the host is pretty fast).

In testing, the overhead of this was pretty small anyway. Running a few 
dozen glxgears and a copy of ioquake3 simultaneously on an intel video 
card managed the same framerate with the same CPU utilisation, both with 
the old code and the version I just posted. Contention during rendering 
just isn't much of an issue.

-Ian


Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-10-28 Thread Avi Kivity
  On 10/28/2010 01:54 PM, Ian Molton wrote:
 Well, I like to review an implementation against a spec.


 True, but then all that would prove is that I can write a spec to 
 match the code.

It would also allow us to check that the spec matches the requirements.  
Those two steps are easier than checking that the code matches the 
requirements.

 The code is proof of concept. The kernel bit is pretty simple, but I'd 
 like to get some idea of whether the rest of the code will be accepted 
 given that there's not much point in having any one (or two) of these 
 components exist without the other.

I guess some graphics people need to be involved.


 Better, but still unsatisfying. If the server is busy, the caller would
 block. I guess it's expected since it's called from ->fsync(). I'm not
 sure whether that's the best interface, perhaps aio_writev is better.

 The caller is intended to block as the host must perform GL rendering 
 before allowing the guest's process to continue.

Why is that?  Can't we pipeline the process?


 The only real bottleneck is that processes will block trying to submit 
 data if another process is performing rendering, but that will only be 
 solved when the renderer is made multithreaded. The same would happen 
 on a real GPU if it had only one queue too.

 If you look at the host code, you can see that the data is already 
 buffered per-process, in a pretty sensible way. If the renderer itself 
 were made a separate thread, then this problem magically disappears 
 (the queuing code on the host is pretty fast).

Well, this is out of my area of expertise.  I don't like it, but if it's 
acceptable to the gpu people, okay.


 In testing, the overhead of this was pretty small anyway. Running a 
 few dozen glxgears and a copy of ioquake3 simultaneously on an intel 
 video card managed the same framerate with the same CPU utilisation, 
 both with the old code and the version I just posted. Contention 
 during rendering just isn't much of an issue.

-- 
error compiling committee.c: too many arguments to function



Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-10-28 Thread Anthony Liguori
On 10/28/2010 09:24 AM, Avi Kivity wrote:
  On 10/28/2010 01:54 PM, Ian Molton wrote:
 Well, I like to review an implementation against a spec.


 True, but then all that would prove is that I can write a spec to 
 match the code.

 It would also allow us to check that the spec matches the 
 requirements.  Those two steps are easier than checking that the code 
 matches the requirements.

I'm extremely sceptical of any GL passthrough proposal.  There have 
literally been half a dozen over the years and they never seem to leave 
proof-of-concept phase.  My (limited) understanding is that it's a 
fundamentally hard problem that no one has adequately solved yet.

A specification matters an awful lot less than an explanation of how the 
problem is being solved in a robust fashion such that it can be reviewed 
by people with a deeper understanding of the problem space.

Regards,

Anthony Liguori

 The code is proof of concept. The kernel bit is pretty simple, but 
 I'd like to get some idea of whether the rest of the code will be 
 accepted given that there's not much point in having any one (or two) 
 of these components exist without the other.

 I guess some graphics people need to be involved.


 Better, but still unsatisfying. If the server is busy, the caller would
 block. I guess it's expected since it's called from ->fsync(). I'm not
 sure whether that's the best interface, perhaps aio_writev is better.

 The caller is intended to block as the host must perform GL rendering 
 before allowing the guest's process to continue.

 Why is that?  Can't we pipeline the process?


 The only real bottleneck is that processes will block trying to 
 submit data if another process is performing rendering, but that will 
 only be solved when the renderer is made multithreaded. The same 
 would happen on a real GPU if it had only one queue too.

 If you look at the host code, you can see that the data is already 
 buffered per-process, in a pretty sensible way. If the renderer 
 itself were made a separate thread, then this problem magically 
 disappears (the queuing code on the host is pretty fast).

 Well, this is out of my area of expertise.  I don't like it, but if 
 it's acceptable to the gpu people, okay.


 In testing, the overhead of this was pretty small anyway. Running a 
 few dozen glxgears and a copy of ioquake3 simultaneously on an intel 
 video card managed the same framerate with the same CPU utilisation, 
 both with the old code and the version I just posted. Contention 
 during rendering just isn't much of an issue.




Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-10-28 Thread Ian Molton
On 28/10/10 15:43, Anthony Liguori wrote:
 On 10/28/2010 09:24 AM, Avi Kivity wrote:
 On 10/28/2010 01:54 PM, Ian Molton wrote:

 True, but then all that would prove is that I can write a spec to
 match the code.

 It would also allow us to check that the spec matches the
 requirements. Those two steps are easier than checking that the code
 matches the requirements.

There was no formal spec for this. The code was written to replace nasty 
undefined-instruction based data transport hacks in the (already 
existing) GL passthrough code.

 I'm extremely sceptical of any GL passthrough proposal. There have
 literally been half a dozen over the years and they never seem to leave
 proof-of-concept phase. My (limited) understanding is that it's a
 fundamentally hard problem that no one has adequately solved yet.

The code in this case was presented as a patch to qemu nearly 3 
years ago. I've taken the patches and refactored them to use virtio 
rather than an undefined instruction (which fails under KVM, unlike my 
approach).

It's in use testing MeeGo images and seems to be fairly reliable. It can 
handle compositing window managers, games, video, etc. We're currently 
supporting OpenGL 1.4, including shaders.

 A specification matters an awful lot less than an explanation of how the
 problem is being solved in a robust fashion such that it can be reviewed
 by people with a deeper understanding of the problem space.

I'm not sure there is a way to prevent nefarious tasks upsetting the 
host's OpenGL with carefully crafted strings of commands, short of 
inspecting every single command, which is insane.

Really this needs to be done at a lower level by presenting a virtual 
GPU to the guest OS but I am not in a position to code that right now.

The code as it is is useful, and can always be superseded by a virtual 
GPU implementation in future.

At least this breaks the chicken-and-egg cycle of people wanting GL 
support on virtual machines, but not writing stuff to take advantage of 
it because the support isn't there. It's also a neatly encapsulated 
solution - if you don't want people to have access to the passthrough, 
simply tell qemu not to present the virtio-gl device to the guest, via 
qemu's existing command-line options.

If this code were invasive to qemu's core, I'd say 'no way', but it's just 
not. And as the GL device is versioned, we can keep using it even if the 
passthrough is replaced by a virtual GPU.


Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-10-28 Thread Ian Molton
On 28/10/10 15:24, Avi Kivity wrote:
 The caller is intended to block as the host must perform GL rendering
 before allowing the guest's process to continue.

 Why is that?  Can't we pipeline the process?

No, not really. The guest may call for the scene to be rendered at any 
time and we have to wait for that to happen before we can return the 
data to it.

-Ian


Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-10-28 Thread Ian Molton
On 28/10/10 21:14, Anthony Liguori wrote:

 If this code were invasive to qemu's core, I'd say 'no way', but it's just
 not. And as the GL device is versioned, we can keep using it even if
 the passthrough is replaced by a virtual GPU.

 The virtio-gl implementation is basically duplicating virtio-serial. It
 looks like it creates a totally separate window for the GL session. In
 the current form, is there really any advantage to having the code in
 QEMU? It could just as easily live outside of QEMU.

You could say much the same about any driver in qemu... you could 
serialise up the registers and ship the data off to be processed on 
another PC if you wanted to...

The code does not, however, create a separate window for the GL session. 
The GL scene is rendered offscreen, and then piped back to the guest for 
display, so that it is fully composited into the guest's graphical 
environment. From a user perspective, it's as if the guest had hardware 
3D. Performance is very reasonable, around 40fps in ioquake3 on modest 
(host) hardware.
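
A rough illustration of the flow being described, assuming the renderer uses an EXT framebuffer object for the offscreen pass; the real renderer may use pbuffers or another mechanism, and entry-point loading is omitted:

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

/* Render the guest's marshalled GL calls offscreen, then read the pixels
 * back so qemu can return the image over the virtio ring for compositing
 * inside the guest's own graphical environment. */
static void render_offscreen(int width, int height, void *pixels)
{
    GLuint fbo, tex;

    glGenFramebuffersEXT(1, &fbo);
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    /* ... replay the guest's GL command stream here ... */

    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}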

In theory, the code *could* use a serial transport and render in a 
separate process, but then that would make it much harder to evolve it 
into a GPU-like system in future.

-Ian


Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-10-27 Thread Ian Molton

On 19/10/10 11:39, Avi Kivity wrote:

On 10/19/2010 12:31 PM, Ian Molton wrote:



2. should start with a patch to the virtio-pci spec to document what
you're doing


Where can I find that spec?


http://ozlabs.org/~rusty/virtio-spec/


Ok, but I'm not patching that until there's been some review.

There are links to the associated qemu and guest OS changes in my 
original email.



It doesn't, at present... It could be changed fairly easily without
breaking anything if that happens though.


The hypervisor and the guest can be changed independently. The driver
should be coded so that it doesn't depend on hypervisor implementation
details.


Fixed - updated patch tested and attached.

-Ian
diff --git a/drivers/gpu/Makefile b/drivers/gpu/Makefile
index 30879df..35699a0 100644
--- a/drivers/gpu/Makefile
+++ b/drivers/gpu/Makefile
@@ -1 +1 @@
-obj-y			+= drm/ vga/
+obj-y			+= drm/ vga/ misc/
diff --git a/drivers/gpu/misc/Kconfig b/drivers/gpu/misc/Kconfig
new file mode 100644
index 000..50043d3
--- /dev/null
+++ b/drivers/gpu/misc/Kconfig
@@ -0,0 +1,8 @@
+config VIRTIOGL
+	tristate "Virtio userspace memory transport"
+	depends on VIRTIO_PCI
+	default n
+	help
+	  A Driver to facilitate transferring data from userspace to a
+	  hypervisor (eg. qemu)
+
diff --git a/drivers/gpu/misc/Makefile b/drivers/gpu/misc/Makefile
new file mode 100644
index 000..d9ab333
--- /dev/null
+++ b/drivers/gpu/misc/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_VIRTIOGL)  += virtio-gl.o
diff --git a/drivers/gpu/misc/virtio-gl.c b/drivers/gpu/misc/virtio-gl.c
new file mode 100644
index 000..ad3ba14
--- /dev/null
+++ b/drivers/gpu/misc/virtio-gl.c
@@ -0,0 +1,325 @@
+/*
+ * Copyright (C) 2010 Intel Corporation
+ *
+ * Author: Ian Molton ian.mol...@collabora.co.uk
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/dma-mapping.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/miscdevice.h>
+#include <linux/virtio.h>
+#include <linux/virtio_ids.h>
+#include <linux/virtio_config.h>
+
+#define DEVICE_NAME "glmem"
+
+/* Define to use debugging checksums on transfers */
+#undef DEBUG_GLIO
+
+struct virtio_gl_data {
+	char *buffer;
+	int pages;
+	unsigned int pid;
+	struct completion done;
+};
+
+struct virtio_gl_header {
+	int pid;
+	int buf_size;
+	int r_buf_size;
+#ifdef DEBUG_GLIO
+	int sum;
+#endif
+	char buffer;
+} __packed;
+
+#define to_virtio_gl_data(a)   ((struct virtio_gl_data *)(a)->private_data)
+
+#ifdef DEBUG_GLIO
+#define SIZE_OUT_HEADER (sizeof(int)*4)
+#define SIZE_IN_HEADER (sizeof(int)*2)
+#else
+#define SIZE_OUT_HEADER (sizeof(int)*3)
+#define SIZE_IN_HEADER sizeof(int)
+#endif
+
+static struct virtqueue *vq;
+
+static void glmem_done(struct virtqueue *vq)
+{
+	struct virtio_gl_data *gldata;
+	int count;
+
+	gldata = virtqueue_get_buf(vq, &count);
+
+	if (!gldata)
+		BUG();
+
+	complete(&gldata->done);
+}
+
+/* This is videobuf_vmalloc_to_sg() from videobuf-dma-sg.c with
+ * some modifications
+ */
+static struct scatterlist *vmalloc_to_sg(struct scatterlist *sg_list,
+unsigned char *virt, unsigned int pages)
+{
+	struct page *pg;
+
+	/* unaligned */
+	BUG_ON((ulong)virt & ~PAGE_MASK);
+
+	/* Fill with elements for the data */
+	while (pages) {
+		pg = vmalloc_to_page(virt);
+		if (!pg)
+			goto err;
+
+		sg_set_page(sg_list, pg, PAGE_SIZE, 0);
+		virt += PAGE_SIZE;
+		sg_list++;
+		pages--;
+	}
+
+	return sg_list;
+
+err:
+	kfree(sg_list);
+	return NULL;
+}
+
+static int put_data(struct virtio_gl_data *gldata)
+{
+	struct scatterlist *sg, *sg_list;
+	unsigned int ret, o_page, i_page, sg_entries;
+	struct virtio_gl_header *header =
+				(struct virtio_gl_header *)gldata->buffer;
+
+	ret = header->buf_size;
+
+	o_page = (header->buf_size + PAGE_SIZE-1) >> PAGE_SHIFT;
+	i_page = (header->r_buf_size + PAGE_SIZE-1) >> PAGE_SHIFT;
+
+	header->pid = gldata->pid;
+
+	if ((o_page && i_page) &&
+		(o_page > gldata->pages || i_page > gldata->pages)) {
+		i_page = 0;
+	}
+
+	if (o_page > gldata->pages)
+		o_page = gldata->pages;
+
+	if (i_page > gldata->pages)
+		i_page = gldata->pages;
+
+	if (!o_page)
+		o_page = 1;
+
+	sg_entries = o_page + i_page;
+
+	sg_list = kcalloc(sg_entries, sizeof(struct scatterlist), GFP_KERNEL);
+
+	if (!sg_list) {
+