Hi,

> As far as the EDID is concerned, there can only be one EDID for a
> display+hw pair, or the guest won't know what to do. In my use-case, I
> simply pass real EDIDs through, and create a full-screen window for each
> real monitor.
Ok, makes sense.

> If you wanted to have two UIs displaying the same DisplaySurface, the
> EDID would have to come from one of them, and the other would have to
> clip, or scale.

Yes.

> > Why not? That is exactly my plan. Just have the virtual graphic card
> > call graphic_console_init() multiple times, once for each display
> > connector it has.
> >
> > Do you see fundamental issues with that approach?

> Currently only one QemuConsole is active at a time, so that would have
> to change....

That isn't mandatory any more. It is still the default behavior of a
DisplayChangeListener to follow the active_console, for compatibility
reasons. SDL and VNC still behave that way.

You can explicitly bind a DisplayChangeListener to a QemuConsole though,
by setting DisplayChangeListener->con before calling
register_displaychangelistener(). gtk binds to QemuConsole #0. spice
creates a display channel per (graphical) console. Each display channel
has a DisplayChangeListener instance, and each DisplayChangeListener is
linked to a different QemuConsole.

For your UI you probably want to follow the spice model. Have a
DisplayChangeListener for each physical monitor of the host, and bind a
fixed QemuConsole to each DisplayChangeListener. DisplayChangeListeners
can come and go at runtime just fine, so you should be able to
create/destroy them on monitor plug/unplug events on the host.

cheers,
  Gerd
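For reference, a rough sketch of the binding pattern described above,
written against QEMU's console API (include/ui/console.h). This is not
tested code; my_dcl_ops, my_gfx_update, my_monitor_plug and
my_monitor_unplug are made-up names for illustration, and the exact
callback signatures may differ between QEMU versions:

```c
/* Sketch only: assumes include/ui/console.h from a QEMU tree.
 * All "my_*" identifiers are hypothetical. */

static void my_gfx_update(DisplayChangeListener *dcl,
                          int x, int y, int w, int h)
{
    /* Blit the dirty rectangle to the host window backing this
     * listener's monitor. */
}

static const DisplayChangeListenerOps my_dcl_ops = {
    .dpy_name       = "my-ui",
    .dpy_gfx_update = my_gfx_update,
    /* ... plus dpy_gfx_switch, dpy_refresh, etc. as needed */
};

/* Host monitor plug event: create a listener bound to one fixed
 * QemuConsole (the spice model), instead of following active_console. */
static DisplayChangeListener *my_monitor_plug(int console_index)
{
    DisplayChangeListener *dcl = g_new0(DisplayChangeListener, 1);

    dcl->ops = &my_dcl_ops;
    /* The explicit binding: set ->con *before* registering.  If ->con
     * stays NULL, the listener follows the active console, which is
     * the compatibility behavior SDL and VNC still use. */
    dcl->con = qemu_console_lookup_by_index(console_index);
    register_displaychangelistener(dcl);
    return dcl;
}

/* Host monitor unplug event: listeners can go away at runtime. */
static void my_monitor_unplug(DisplayChangeListener *dcl)
{
    unregister_displaychangelistener(dcl);
    g_free(dcl);
}
```

The corresponding graphic-card side would call graphic_console_init()
once per display connector, giving you one QemuConsole per connector to
look up by index here.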