> I don't know how GGI handles event processing internally, but the API
> interface uses a function very similar to select(). I consider that to be
> a Good Thing.
        That depends on whether or not our program gets control back
somewhere else.  If ggi is at all like gtk, I think sleep() would be
better, as it should guarantee that nothing else happens in our program
unless ggi is broken.

> However, you really shouldn't interface to the graphics subsystem
> directly- the calls are encapsulated nicely (in order to make it easy to
> add additional targets).
        What counts as directly?  Using ggi calls or doing memcpy() on the
buffer or what?

> We need a new file for that, maybe "core/tools.c" or something similar.
> This file should also contain the getInt16() implementation for big endian
> machines and the memtest() function (which is used sporadically for
> debugging memory corruption- it's not used anywhere right now, though).
        Ok, I'll get to this within the next week or so, I hope.

> libggi provides rectangle fills, and libggi2d gives more sophisticated
> operations. Still, using GGI directly is a bad idea, as it would break the
> DirectX target.
        Oh no.  Not another project with "we can't do that because windows
can't do it."  Can I just write the functions for unix and stick them
inside of
#ifndef I_AM_IN_HELL

#else
#define sci_malloc malloc
#endif

Unless you've already written a cross-platform method for rendering strings
which I could use.  Have you?  And why are we using DirectX instead of ggi?
Isn't there a ggi target for windows?

> Just draw the image to s->pic->view (which points to the 320x200 bytes
> area containing the graphics) and call
> s->gfx_driver->Redraw(s, GRAPHICS_CALLBACK_REDRAW_ALL, 0, 0, 0, 0);
> for the changes to take effect.

> 
> The graphics subsystem, as you can see, isn't optimized for peak
> performance right now. Since we're getting close to having all of the
> graphics stuff documented sufficiently, I'd suggest a complete re-design
> of the graphics subsystem for 0.3.x; my suggestion would be to make it
> look like this:
> 
> /SCI engine/
>    |
>    V
> /Widget set/ <----> Widget buffer (for save/restore)
>    |
>    V
> /Graphics API/
>    |
>    V
> /GFX driver/
> 
> Yes, the "widget buffer" was your idea originally. I didn't think we
> should to this, but since graphics need to be re-written anyway (it's
> simply too slow), we'd get it as a bonus.
Shouldn't it be something like

/SCI engine/ => Widget set => GGI/Gnome/Gtk/etc.

so that we don't have to re-implement GGI/Gnome/Gtk, etc.?  There are
plenty of options, even cross-platform ones that work on windows.  Is there
really a need to re-invent another one?
        What about the SDL?  Maybe that would be something we should look
into as it was made for things like this and has a windows target too.
        And your idea about openGL is a decent one too.  It would
definitely require a faster computer, but it would make resizing easier and
give us color depth independence.  As more and more 3D cards are supported
under Linux, this might well be fairly doable at this point.  You don't
need a good 3D card to get decent 2D performance out of openGL, and in many
cases you don't need a 3D card at all.  I think there are different openGL
rendering modes, and there may well be a 2D one with textures which is
faster than a 3D one using just a 2D plane.  I'll check it out in my openGL
book.

[...]
> If we have sci_alloc(), we need sci_realloc() as well- handling this in
> the code would require that code to take care of the "low memory" window
> manually.
> 
> realloc() could still be used, though, like this:
> 
>       newbuffer = realloc(buffer, bufsize * 2);
> 
>       if (newbuffer) {
>               bufsize *= 2;
>               buffer = newbuffer;
>       } else
>               buffer = sci_realloc(buffer, ++bufsize);
>       
> Maybe "int sci_realloc(void **buffer, int newsize, int block_if_lowmem)"
> (takes care of *buffer, blocks on low memory if (block_if_lowmem), returns
> 0 on success, returns 1 if (!block_if_lowmem) and out of memory) would be
> a nice way to handle this, too; after all, you wouldn't have to use a
> "newbuffer" as in the example above...
I like this, myself.  Not quite as elegant in the sense that it can't be
just #ifdef'd into place, but more functional, I think.  Once I start work,
I'll take care of this.
        -Chris
P.S. Until we resolve our graphics targets, I'll just print the message to
standard output.  There should be enough indications that the system is low
on memory without printing to the screen, and people can read the HOWTO.
Printing a message to the screen would definitely be optimal, but we need
to figure out our graphics route first.

-- 
[EMAIL PROTECTED]
"If I had had more time I would have written a shorter letter." - Pascal
Linux Programs: http://cs.alfred.edu/~lansdoct/linux/
Linux - Get there. Today.
Evil Overlord Quote of the Day (www.eviloverlord.com):
99. Any data file of crucial importance will be padded to 1.45Mb in size.
