Hi,

> It might be a good idea to get rid
> of DisplayAllocator altogether.
After some digging in the source code: Yes, I think so.

Look, we have *two* concepts for avoiding memcpy:

The first is the DisplayAllocator. It is only implemented by SDL, which is scheduled to be downgraded by Anthony's gtk patches. It doesn't really fit the concept of DisplayChangeListeners coming and going at runtime, nor that of having multiple DisplayChangeListeners (like sdl+vnc at the same time). It allows the vga emulation to render directly into a SDL buffer.

The second is qemu_create_displaysurface_from(). It allows the vga emulation to hand out a surface with a direct pointer to the guest's video memory for DisplayChangeListeners to read from.

You can't have both (i.e. the guest will never ever write directly into the SDL buffer), so there will always be at least one memcpy.

So what happens in practice? In any graphics mode relevant today the vga emulation will use qemu_create_displaysurface_from(). Whether a DisplayAllocator is present or not makes no difference then. In case the vga emulation has to render something itself, because the guest's video memory can't be represented directly as a DisplaySurface (text mode, gfx modes with <= 256 colors), it will allocate a surface to render the screen into, and will use the SDL DisplayAllocator for that if one is registered.

I somehow doubt text mode acceleration is worth the complexity and confusion DisplayAllocator adds to the picture.

Also, I'd like to have reference counting for display surfaces, because then I can offload the display scaling to a separate thread. I guess this is easier to implement after zapping DisplayAllocator first.

cheers,
  Gerd
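For illustration, the reference counting idea could look roughly like the sketch below. This is a minimal standalone C example, not QEMU's actual API: the DisplaySurface fields and the surface_new/surface_ref/surface_unref names are hypothetical, just to show how the last consumer (e.g. a scaling thread) could safely free the surface:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical refcounted display surface -- illustrative only. */
typedef struct DisplaySurface {
    int refcount;
    int width, height;
    uint8_t *data;      /* owned pixel buffer, 32 bpp */
} DisplaySurface;

static DisplaySurface *surface_new(int width, int height)
{
    DisplaySurface *s = calloc(1, sizeof(*s));
    s->refcount = 1;
    s->width = width;
    s->height = height;
    s->data = calloc((size_t)width * height, 4);
    return s;
}

/* Each consumer (vga, vnc, a scaler thread, ...) takes a reference. */
static void surface_ref(DisplaySurface *s)
{
    s->refcount++;
}

/* The last unref frees the surface, whoever drops it. */
static void surface_unref(DisplaySurface *s)
{
    if (--s->refcount == 0) {
        free(s->data);
        free(s);
    }
}
```

With something like this, the vga code could drop its reference on a mode switch while a scaler thread still holds one, and the memory goes away only when both are done. (A real multi-threaded version would need an atomic counter, omitted here for brevity.)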