/me looks at bf9a3b69c80a6fbd289b6340b8bdc9e994630bdc, console.c
changes.
This isn't what I meant; I guess I wasn't verbose enough.
Yeah, I'm not sure I like that idea, as you seem to treat a
QemuConsole as being a distinct console,
whereas in this case the multiple heads aren't
[Spice-devel] RFC: Integrating Virgil and Spice
On Wed, 2013-10-09 at 08:46 +1000, Dave Airlie wrote:
That leaves the question how to do single-card multihead. I think the
most sensible approach here is to go the spice route, i.e. have one big
framebuffer and define scanout rectangles for the virtual monitors.
This is how real hardware works, and also provides a natural fallback
mode for
Hi,
I think nearly all GPUs, Intel ones included, can do on-board H264
encoding now; the vaapi for Intel exports this ability. Not sure how to
expose it on non-Intel GPUs, or how they expose it under Windows etc.
The problem for us is the usual patent minefield around H264.
Yep.
Hi,
Nice summary.
3) Virgil will render using the host gpu, using EGL to talk to
a drm render node. For non local displays the rendered contents
will be read back from the gpu and then passed as a pixmap to the
ui to transport over the network
Interesting in this context: What is the
AFAIK, people more knowledgeable than me on 3d (ie Keith Packard)
all seem to agree that transferring the commands to render would be
more expensive. IOW adding 3d support to Spice would not really be useful.
afaik, opengl has been designed originally with remote rendering in
On Thu, Oct 10, 2013 at 11:10 PM, Gerd Hoffmann kra...@redhat.com wrote:
IIRC some high-end nvidia gfx cards (which can be partitioned for
virtual machines) can encode the guest's display as an H.264 stream in
hardware.
Given that there are use cases for hardware assisted video encoding in
OpenGL 1.0 maybe; nobody has made any accommodation for remote rendering
in years, and they haven't defined GLX protocol for new extensions in
probably 8-10 years.
The thing is, 3D rendering is high-bandwidth for anything non-trivial;
the amount of data apps move to GPUs is huge for most things.
On Tue, 8 Oct 2013, 23:51:13 BST, Dave Airlie airl...@gmail.com wrote:
That would be the local rendering solution I think we'd prefer,
qemu runs as qemu user, uses EGL to talk to the drm render-nodes,
has some sort of unix socket that the viewer connects to and can hand
fds across, then
Hi,
On 10/09/2013 10:44 AM, Gerd Hoffmann wrote:
snip
What is virtio-vga btw? The virgil virtual vga device
Yes, see:
http://airlied.livejournal.com/78104.html
Regards,
Hans
___
Spice-devel mailing list
Spice-devel@lists.freedesktop.org
Hi,
When the guest's virtual gfx card doesn't let the gpu render into a
dma-buf we have to copy the bits anyway. Ideally just memcpy from guest
framebuffer to a dma-buf (not sure drm allows that), so we can hand out
a dma-buf handle for rendering no matter whether the guest uses virgil
Hi All,
I realize that it may be a bit early to start this discussion,
given the somewhat preliminary state of Virgil, still I would
like to start a discussion about this now for 2 reasons:
1) I believe it would be good to start thinking about this earlier
rather than later.
2) I would like to
Hi,
The basic idea is to use qemu's console layer (include/ui/console.h)
as an abstraction between the new virtio-vga device Dave has in mind
(which will include optional 3D rendering capability through VIRGIL),
and various display options, ie SDL, vnc and Spice.
The console layer would
Hi,
This is mostly Dave's area of expertise, but let me try to explain things
a bit better here. The dma-buf pass-through is for the Virgil case, so
we're passing through 3D rendering commands from the guest to a real,
physical GPU inside the host, which then renders the final image to show
Hi
Any plans for a separate UI process? Something using a unix socket for
control commands and to hand over a dma-buf handle using fd
passing maybe?
It sounds to me like this is something that an egl extension should provide,
but I can't find it yet.
I've already had a quick discussion about this with Dave Airlie, and
our ideas on this aligned perfectly.
Ah, host dma-buf, not guest dma-buf. It makes more sense then.
yes host side for the viewer.
So virgil just opens one of those new render-only drm nodes, asks the
gpu to process the rendering ops from the guest, stores the results in a
dma-buf, then this dma-buf must be displayed somehow,