Hello,
I am currently the device model maintainer for XenClient Enterprise. As
you may or may not know, we maintain a patch queue on top of QEMU
(currently 1.3) that adds functionality needed to support XCE features.
One of the major things we add is robust multi-head support: DDC
emulation for EDID data, variable VRAM size, monitor hot-plug support,
simulated VSYNC, and guest-controlled display orientation. The patches
provide both the necessary interfaces between the hw and ui layers and a
new emulated adapter (with drivers) that exercises those interfaces.
Between QEMU 1.3 and QEMU 1.6, a lot of changes were made to the
QemuConsole and DisplayState structures that will require significant
changes to our patches. I'd like to adapt to these changes in a way that
might make some of our work acceptable upstream. To that end, I'd like
to describe how the patches currently work, propose how they would work
in the new version, and solicit feedback on whether the plans would be
acceptable.
I've stuck our current patch queue on github for convenience. If I refer
to a patch in the below description, you can check it out there:
https://github.com/jbaboval/xce-qemu.pq
This is what we currently do:
In QEMU 1.3, there was a DisplayState list, and we used one DisplayState
per monitor. We added a hw_add_display vector to DisplayChangeListener
so that when the UI requests a second monitor, the new display gets
attached to the emulated hardware. (patch: add_display_ptr)
Each DisplayState was given an ID, so that emulated hardware could keep
track of which EDID and other metadata went with which DisplayState.
(patch: display-id-allocation)
A new function, gui_start_updates, starts refresh of the new display.
This seems to be equivalent to the new gui_setup_refresh() that exists
in newer versions. (patch: gui_start_updates)
A new vector, hw_store_edid, was added to DisplayState so that UIs could
tell emulated hardware what the EDID for a given display should be.
(patch: edid-vector)
A new vector, hw_notify, was added to DisplayState so the UIs could tell
the emulated hardware about interesting events, such as monitor hotplugs
(new windows), orientation changes, availability of hardware cursor
functionality, etc... (patch: display-hw-notify)
VRAM size was made configurable, so that more could be allocated to
handle multiple high-resolution displays. (patch: variable-vram-size)
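Boiled down to code, the additions are a handful of function pointers
plus an ID. The sketch below is illustrative only: the real 1.3-era
structures carry many more members, the hw_* names and the event values
are taken from the patch descriptions above rather than from upstream
headers, and the toy backend stands in for a real emulated adapter.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct DisplayState DisplayState;

/* Events a UI might report via hw_notify (hypothetical values). */
enum {
    DPY_NOTIFY_HOTPLUG,      /* a new monitor window appeared */
    DPY_NOTIFY_ORIENTATION,  /* display orientation changed */
    DPY_NOTIFY_HWCURSOR,     /* hw cursor availability changed */
};

struct DisplayState {
    int id;               /* display-id-allocation: ties EDID etc. to a head */
    DisplayState *next;   /* 1.3 keeps DisplayStates on a list */

    /* edid-vector: UI hands the emulated hw an EDID for this head */
    void (*hw_store_edid)(DisplayState *ds, const uint8_t *edid, size_t len);
    /* display-hw-notify: UI reports hotplugs, rotation, cursor, ... */
    void (*hw_notify)(DisplayState *ds, int event);
};

/* add_display_ptr: the vector the UI calls to ask the emulated hw for
 * another head; in our queue it lives on DisplayChangeListener. */
typedef struct DisplayChangeListener {
    DisplayState *(*hw_add_display)(DisplayState *primary);
} DisplayChangeListener;

/* Toy backend so the shapes above can be exercised. */
static DisplayState toy_heads[4];
static int toy_nheads = 1;
static uint8_t toy_edid[4][128];

static void toy_store_edid(DisplayState *ds, const uint8_t *edid, size_t len)
{
    if (len > sizeof(toy_edid[0])) {
        len = sizeof(toy_edid[0]);
    }
    memcpy(toy_edid[ds->id], edid, len);
}

static void toy_notify(DisplayState *ds, int event)
{
    (void)ds; (void)event;   /* a real adapter would raise a guest interrupt */
}

static DisplayState *toy_add_display(DisplayState *primary)
{
    if (toy_nheads >= 4) {
        return NULL;         /* out of heads: UI stays as it is */
    }
    DisplayState *ds = &toy_heads[toy_nheads];
    ds->id = toy_nheads++;
    ds->hw_store_edid = toy_store_edid;
    ds->hw_notify = toy_notify;
    primary->next = ds;
    return ds;
}
```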
The flow is:
- The user requests a hotplug (added a "monitor" from a menu, the UI
reacted to a udev hotplug or XRandR event, or whatever)
- The UI asks the hw to allocate a new display. If the hw doesn't
support it (vector is NULL), the system continues with one display.
- The hw returns a new DisplayState to the UI with all the hw_ vectors
filled in
- The UI registers its DisplayChangeListener with the new DisplayState
- The UI provides an EDID for the new display with hw_store_edid()
- The UI notifies the guest that an event has occured with hw_notify()
- The UI starts a gui timer for the new DisplayState with
gui_start_updates()
- The guest driver does its normal thing, sets the mode on the new
display, and starts rendering
- The timer handler calls gfx_hw_update for the DisplayState
- The hw calls dpy_update for changes on the new DisplayState
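The UI side of that flow can be sketched as one function, one step per
bullet. Everything here is a reduced stand-in (the names follow the
patch descriptions above, the structures are hypothetical, and the toy
backend just records what the UI asked for), so treat it as control
flow, not working QEMU code.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Reduced stand-in for the 1.3-era DisplayState (hypothetical fields). */
typedef struct DisplayState DisplayState;
struct DisplayState {
    int id;
    DisplayState *(*hw_add_display)(DisplayState *primary);
    void (*hw_store_edid)(DisplayState *ds, const uint8_t *edid, size_t len);
    void (*hw_notify)(DisplayState *ds, int event);
    int refresh_running;   /* stands in for the per-display gui timer */
};

enum { DPY_EVENT_HOTPLUG = 1 };

static void gui_start_updates(DisplayState *ds)
{
    ds->refresh_running = 1;  /* real code arms a timer driving gfx_hw_update */
}

/* UI-side hotplug handling, following the steps above. */
static DisplayState *ui_hotplug_display(DisplayState *primary,
                                        const uint8_t *edid, size_t len)
{
    if (!primary->hw_add_display) {
        return NULL;           /* hw can't multi-head: stay single-headed */
    }
    DisplayState *ds = primary->hw_add_display(primary);
    if (!ds) {
        return NULL;
    }
    /* (the UI would register its DisplayChangeListener with ds here) */
    ds->hw_store_edid(ds, edid, len);     /* tell hw what monitor this is */
    ds->hw_notify(ds, DPY_EVENT_HOTPLUG); /* let the guest driver react */
    gui_start_updates(ds);                /* refresh now drives dpy_update */
    return ds;
}

/* Toy backend recording what the UI asked for, to exercise the flow. */
static DisplayState toy_second;
static size_t toy_edid_len;
static int toy_notifies;

static void toy_store_edid(DisplayState *ds, const uint8_t *e, size_t n)
{ (void)ds; (void)e; toy_edid_len = n; }

static void toy_notify(DisplayState *ds, int ev)
{ (void)ds; (void)ev; toy_notifies++; }

static DisplayState *toy_add_display(DisplayState *primary)
{
    toy_second = *primary;        /* inherit the hw_ vectors */
    toy_second.id = primary->id + 1;
    toy_second.refresh_running = 0;
    return &toy_second;
}
```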
.....
In the latest code, DisplayState isn't a list anymore, and many
operations that apply to a DisplayState apply to a QemuConsole instead.
Also, the DisplaySurface is a property of the QemuConsole now instead of
part of the DisplayState. I'm going to have to make some fairly
fundamental changes to our current architecture to adapt to the latest
upstream changes, and I would appreciate feedback on the options.
I don't think it makes sense to have a QemuConsole per display.
I can use a model similar to what qxl does, and put the framebuffer for
each display inside a single DisplaySurface allocated to be a bounding
rectangle around all framebuffers. This has the advantage of looking
like something that already exists in the tree, but has several
disadvantages. It was convenient - but not necessary - to have a
DisplaySurface per display, since that kept track of the display mode,
allowed different depths per monitor (not very useful, but possible),
etc. If the displays are different resolutions, it leaves a dead space
of wasted memory inside the region. If there's a single DisplaySurface,
some other structure/list/array will need to be added to track the
individual resolutions, offsets, and strides of the sub-regions in order
to be able to display them in separate windows or - more likely - as
full screen on real displays. It also makes hot-plugging more complex,
since adding/removing a display will alter the configuration of the
existing displays.
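For illustration, here is roughly the sub-region bookkeeping a single
bounding-rectangle surface would force. It's a minimal sketch under
assumed names with a naive left-to-right layout, not anything qxl
actually does; the point is the extra geometry tracking and the dead
space between mismatched modes.

```c
#include <assert.h>

/* Per-monitor geometry inside one shared DisplaySurface (hypothetical). */
typedef struct {
    int x, y;           /* offset of this head inside the big surface */
    int width, height;  /* current mode on this head */
    int stride;         /* bytes per scanline, shared across the surface */
} SubRegion;

/* Lay heads out left-to-right and compute the bounding rectangle.
 * Returns the number of dead (wasted) pixels the layout leaves. */
static long layout_bounding_rect(SubRegion *heads, int n, int bytes_pp,
                                 int *bound_w, int *bound_h)
{
    int x = 0, h = 0;
    long used = 0;
    for (int i = 0; i < n; i++) {
        heads[i].x = x;
        heads[i].y = 0;
        x += heads[i].width;
        if (heads[i].height > h) {
            h = heads[i].height;
        }
        used += (long)heads[i].width * heads[i].height;
    }
    for (int i = 0; i < n; i++) {
        heads[i].stride = x * bytes_pp; /* one stride for the whole surface */
    }
    *bound_w = x;
    *bound_h = h;
    return (long)x * h - used;  /* dead space between mismatched modes */
}
```

Note that hot-plugging a third head, or a mode change on either head,
reruns the layout and moves every existing sub-region, which is the
hotplug complexity mentioned above.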
I could turn the DisplayState and DisplaySurface in QEMU console into
lists, and run much like my existing patches.
I could move the DisplaySurface back into DisplayState and have one
list... (major downsides: I'd have to fix every hw and ui, and the
surface was presumably moved out of DisplayState for a reason)
Moving my new vectors into the nicely cleaned up
DisplayChangeListenerOps and GraphicsHardwareOps structures seems like a
no-brainer.
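A rough shape for that move might be the following. Both structs are
reduced stand-ins (upstream's real ops structures have more members and
may be named differently), and the store_edid/notify/display_add fields
are the proposal, not existing API; heads would be keyed by index on the
QemuConsole rather than by a per-monitor DisplayState.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Reduced stand-ins for the 1.6-era ops structures; only the
 * proposed additions are of interest here. */
typedef struct DisplayChangeListenerOps {
    const char *dpy_name;
    /* ...existing dpy_gfx_update/dpy_refresh/etc. hooks elided... */
    /* proposed: hw asks the UI for another output window */
    void (*dpy_display_add)(void *ui_opaque, int head);
} DisplayChangeListenerOps;

typedef struct GraphicsHardwareOps {
    void (*gfx_update)(void *hw_opaque);
    /* proposed: per-head EDID and event delivery */
    void (*store_edid)(void *hw_opaque, int head,
                       const uint8_t *edid, size_t len);
    void (*notify)(void *hw_opaque, int head, int event);
} GraphicsHardwareOps;

/* Toy adapter implementation to show the shape in use. */
static int toy_last_head = -1;
static size_t toy_last_len;

static void toy_store_edid(void *opaque, int head,
                           const uint8_t *edid, size_t len)
{
    (void)opaque; (void)edid;
    toy_last_head = head;     /* a real adapter would stash the EDID */
    toy_last_len = len;       /* for the guest's DDC reads */
}

static const GraphicsHardwareOps toy_hw_ops = {
    .store_edid = toy_store_edid,
};
```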
......
I'm flexible on how to implement this stuff if it means it gets done in
a way that could be pushed. Ideally I'd like to re-use as much of the
code I already have as possible, but really I can re-do all of it as
long as the end result is acceptable upstream and meets my feature
requirements: hot-plugging displays, the ability to pass EDIDs, and
variable VRAM size.
Are these features something that people would want to see in the tree?
If so, I'd appreciate input so I can work towards something acceptable
to as many people as possible.