>Here's my current take:
>Use VESA's VBE (http://www.vesa.org/vbe3.pdf) to talk to the host from
>within guest space.
>Advantages:
>VBE maps simply onto SDL.
>VBE is a well-defined hardware interface.
>A fair number of apps will support VBE without us writing a driver at all.
>All the special handling needed is INT 10 being redirected to the user-space
[...]

VESA/VBE is a realmode thing... it's of no use to us as far as
compatibility goes!

As for the API side of things, it's probably a bad idea to use a
"finalised" standard API such as VESA.  The problem is that you're 100%
sure that the API will never change.  OTOH, the SDL API is changing
together with the advances in the backends.  If you adopt SDL as a
standard, you can be pretty sure that the latest version will be up to
date.

Some more ideas (unrelated):  I spewed this at the plex workshop in
Erlangen, but I'll repeat it here:  as guest<-->host switches are
expensive, you want to minimise these.  My idea would be to pass graphics
calls in a call structure:

struct plex_sdl_call
{
    int  function;
    void *parms;     /* Or something similar? Fixed parameters? */
    struct plex_sdl_call *next;
};

struct plex_sdl_call *plex_sdl_call_chain;

Basically, you have in the guest a set of SDL call stubs that mirror the
host's "real" SDL implementation.  The only thing the stubs do is insert
the appropriate call into the call chain.  Then, calling plex_sdl_flush()
will pass the head of the call chain to the host:

void
plex_sdl_flush (void)
{
    /* volatile: no outputs, so without it the compiler may
       optimise the whole asm away */
    asm volatile
    (
        "outl %%eax, %%dx"
        : /* no outputs */
        : "a" (plex_sdl_call_chain), "d" (PLEX_SDL_PORT)
    );

    /* free the call chain here */
}

The host has direct access to the guest memory, so it can directly traverse
the call chain and handle all the operations at once.
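To make that concrete, the host-side handler could look roughly like this.  This is only a sketch: the function numbers, the handler's name and the fact that it returns a count are all my own assumptions, and the translation from guest to host addresses is glossed over entirely.

```c
#include <stddef.h>

/* Mirror of the guest's call-chain node (same layout on both sides). */
struct plex_sdl_call
{
    int  function;
    void *parms;
    struct plex_sdl_call *next;
};

/* Hypothetical function numbers; the real numbering is up to us. */
enum { PLEX_SDL_FN_FILLRECT = 1, PLEX_SDL_FN_FLIP = 2 };

/* Host side: walk the whole chain in one go, dispatching each call
   to the real SDL implementation.  Returns the number of calls
   handled, i.e. how many guest<-->host switches we saved. */
int
plex_sdl_handle_chain (struct plex_sdl_call *call)
{
    int handled = 0;

    for (; call != NULL; call = call->next)
    {
        switch (call->function)
        {
        case PLEX_SDL_FN_FILLRECT:
            /* SDL_FillRect (..., call->parms) would go here. */
            handled++;
            break;
        case PLEX_SDL_FN_FLIP:
            /* SDL_Flip (...) would go here. */
            handled++;
            break;
        default:
            break;  /* unknown call: skip it */
        }
    }
    return handled;
}
```

The point is that the loop runs entirely on the host side; however long the chain is, the guest paid for exactly one outl.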

The trick is to call plex_sdl_flush() as rarely as possible.  I think that
most SDL functions will not need to call plex_sdl_flush() at all; only a
limited number will.  This buffering of SDL calls should greatly improve
performance.
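For illustration, here is what a guest-side stub might boil down to.  The names (plex_sdl_enqueue, the SDL_FillRect stub) and the function number are hypothetical; appending at the tail is my choice, to keep the calls in issue order for the host.

```c
#include <stdlib.h>

struct plex_sdl_call
{
    int  function;
    void *parms;     /* Or something similar? Fixed parameters? */
    struct plex_sdl_call *next;
};

/* Head and tail of the pending-call chain; the tail pointer lets
   us append in O(1) and preserve issue order. */
struct plex_sdl_call *plex_sdl_call_chain = NULL;
static struct plex_sdl_call *plex_sdl_call_tail = NULL;

/* Every stub does just this: queue the call and return immediately,
   without crossing into the host.  (Error handling omitted.) */
static void
plex_sdl_enqueue (int function, void *parms)
{
    struct plex_sdl_call *call = malloc (sizeof *call);

    call->function = function;
    call->parms    = parms;
    call->next     = NULL;

    if (plex_sdl_call_tail != NULL)
        plex_sdl_call_tail->next = call;
    else
        plex_sdl_call_chain = call;
    plex_sdl_call_tail = call;
}

/* Hypothetical stub mirroring SDL_FillRect: nothing actually happens
   until plex_sdl_flush() hands the chain to the host. */
#define PLEX_SDL_FN_FILLRECT 1

void
plex_SDL_FillRect_stub (void *rect)
{
    plex_sdl_enqueue (PLEX_SDL_FN_FILLRECT, rect);
}
```

A screen-update call like SDL_Flip would then be one of the few stubs that also calls plex_sdl_flush() after enqueueing itself.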

-- Ramon


