Hi,

> In QEMU 1.3, there was a DisplayState list.  We used one DisplayState
> per monitor.  The DisplayChangeListener has a new hw_add_display
> vector, so that when the UI requests a second monitor the new display
> gets attached to the emulated hardware.  (patch: add_display_ptr)
I don't think we actually want to add/remove stuff here.  On real
hardware your gfx card has a fixed set of display connectors, and I
think we are best off mimicking that.  Support for propagating
connect/disconnect events and enabling/disabling displays needs to be
added properly.  Currently qxl/spice can handle this, but it uses a
private side channel.

> A new vector, hw_store_edid, was added to DisplayState so that UIs
> could tell emulated hardware what the EDID for a given display should
> be.  (patch: edid-vector)

Note that multiple UIs can be active at the same time.  What happens
with the EDIDs then?

> VRAM size was made configurable, so that more could be allocated to
> handle multiple high-resolution displays.  (patch: variable-vram-size)

Upstream stdvga has this meanwhile.

> I don't think it makes sense to have a QemuConsole per display.

Why not?  That is exactly my plan: just have the virtual graphics card
call graphic_console_init() multiple times, once for each display
connector it has (see the sketch at the end of this mail).  Do you see
fundamental issues with that approach?

> I can use a model similar to what qxl does, and put the framebuffer
> for each display inside a single DisplaySurface allocated to be a
> bounding rectangle around all framebuffers.  This has the advantage
> of looking like something that already exists in the tree, but has
> several disadvantages.

Indeed.  I don't recommend that.  It is that way for several historical
reasons (one being that the code predates the qemu console cleanup in
the 1.5 devel cycle).

> Are these features something that people would want to see in the
> tree?

Sure.  One of the reasons for the console cleanup was to allow proper
multihead support.

cheers,
  Gerd
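
---

To make the one-console-per-connector plan concrete, here is a rough
sketch, assuming the 1.5-era console API where graphic_console_init()
takes (dev, ops, opaque); all the mydev_* names are made up for
illustration, not code from any posted patch:

#include "ui/console.h"

#define MYDEV_NUM_HEADS 2

/* Per-connector state, passed as the opaque pointer so the
 * callbacks know which head they are refreshing. */
typedef struct MyDevHead {
    struct MyDevState *dev;     /* back pointer to the device */
    int index;                  /* connector number */
    QemuConsole *con;
} MyDevHead;

typedef struct MyDevState {
    MyDevHead heads[MYDEV_NUM_HEADS];
    /* ... vram, registers, ... */
} MyDevState;

static void mydev_gfx_update(void *opaque)
{
    MyDevHead *head = opaque;
    /* scan this head's framebuffer for dirty regions, then call
     * dpy_gfx_update(head->con, x, y, w, h) for each one */
}

static void mydev_invalidate(void *opaque)
{
    MyDevHead *head = opaque;
    /* mark this head's framebuffer fully dirty */
}

static const GraphicHwOps mydev_ops = {
    .invalidate = mydev_invalidate,
    .gfx_update = mydev_gfx_update,
};

static void mydev_init_consoles(MyDevState *s, DeviceState *dev)
{
    int i;

    /* One graphic_console_init() call per display connector. */
    for (i = 0; i < MYDEV_NUM_HEADS; i++) {
        s->heads[i].dev = s;
        s->heads[i].index = i;
        s->heads[i].con = graphic_console_init(dev, &mydev_ops,
                                               &s->heads[i]);
    }
}

Passing a per-head struct rather than the device state as the opaque
pointer keeps the gfx_update/invalidate handlers trivially aware of
which connector fired, without any side-channel lookups.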