Hi,

On Mon, 6 Dec 1999, Christopher T. Lansdown wrote:

> > I don't know how GGI handles event processing internally, but the API
> > interface uses a function very similar to select(). I consider that to be
> > a Good Thing.
>       Depending on whether or not our program will be given control back
> in another location.  If ggi is at all like gtk, I think that sleep() will
> be better as it should guarantee that nothing else happens in our program
> unless ggi is broken.

ggi breaks sleep() on the X target. Well, it doesn't really break it, it
just interrupts it with signals; the effect is the same. If you want to
avoid any ggi interaction, you'll have to use select() or block the
signals before they hit us.
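For what it's worth, a select()-based delay that survives those signal
interruptions might look like this (a sketch; delay_msec() is my name for
it, not an existing FreeSCI function):

```c
#include <errno.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

/* Sleep for roughly msec milliseconds, restarting the wait whenever
** a signal (such as the ones the ggi X target sends) interrupts it.
*/
void
delay_msec(long msec)
{
	struct timeval timeout;

	timeout.tv_sec = msec / 1000;
	timeout.tv_usec = (msec % 1000) * 1000;

	/* select() with no fd sets is a portable sub-second sleep.
	** On Linux the timeout is updated to the time remaining, so a
	** retry after EINTR continues where it left off; systems that
	** leave the timeout untouched will restart the full wait, so
	** the total delay can come out longer there.
	*/
	while (select(0, NULL, NULL, NULL, &timeout) < 0
	       && errno == EINTR)
		; /* interrupted by a signal -- retry */
}
```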

> > However, you really shouldn't interface to the graphics subsystem
> > directly- the calls are encapsulated nicely (in order to make it easy to
> > add additional targets).
>       What counts as directly?  Using ggi calls or doing memcpy() on the
> buffer or what?

Directly means calling anything that is graphics-library dependent, like
ggiCrossBlit().
The "right" way to do it is to memcpy() into s->pic->view and then call
s->gfx_driver->Redraw() (where s is the state_t pointer you're using).
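In outline, that flow looks like the sketch below. The struct layouts and
the GRAPHICS_CALLBACK_REDRAW_ALL value are guessed from this thread; the
real definitions live in src/include/graphics.h.

```c
#include <string.h>

/* Hypothetical stand-ins for the real FreeSCI types -- names and
** layout are guessed from this thread, not copied from the headers.
*/
typedef struct state state_t;

typedef struct {
	void (*Redraw)(state_t *s, int command, int x, int y,
	               int xl, int yl);
} gfx_driver_t;

typedef struct {
	unsigned char view[320 * 200]; /* 8 bit palettized screen */
} picture_t;

struct state {
	picture_t *pic;
	gfx_driver_t *gfx_driver;
};

#define GRAPHICS_CALLBACK_REDRAW_ALL 0 /* value assumed here */

/* A do-nothing driver for illustration; a real driver would blit
** s->pic->view to the display at this point.
*/
static int demo_redraw_count = 0;

static void
demo_redraw(state_t *s, int command, int x, int y, int xl, int yl)
{
	(void)s; (void)command; (void)x; (void)y; (void)xl; (void)yl;
	demo_redraw_count++;
}

/* Copy a finished 320x200 image into the view buffer, then ask the
** driver to push the whole thing to the screen.
*/
void
show_image(state_t *s, const unsigned char *image)
{
	memcpy(s->pic->view, image, 320 * 200);
	s->gfx_driver->Redraw(s, GRAPHICS_CALLBACK_REDRAW_ALL, 0, 0, 0, 0);
}
```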

> > We need a new file for that, maybe "core/tools.c" or something similar.
> > This file should also contain the getInt16() implementation for big endian
> > machines and the memtest() function (which is used sporadically for
> > debugging memory corruption- it's not used anywhere right now, though).
>       Ok, I'll get to this within the next week or so, I hope.

Ok, thanks.

> > libggi provides rectangle fills, and libggi2d gives more sophisticated
> > operations. Still, using GGI directly is a bad idea, as it would break the
> > DirectX target.
>       Oh no.  Not another project with "we can't do that because windows
> can't do it."

It would break any other target as well. I started with ggi because I
wanted to learn it and because it was able to do fullscreen graphics at
very low resolutions, which SDL could not do on Matrox cards (an X server
limitation) (there's a ggi backend for SDL now, though). The graphics
system is abstracted in order to make it as portable as possible; if
someone wants to run it on IRIX, he'll be able to implement an xlib or
glx target (or use the existing one, if it's usable by then) without too
much work.
(I didn't get ggi to compile on IRIX the last time I tried.)

>  Can I just write the function for unix and stick them inside
> of
> #ifndef I_AM_IN_HELL
> 
> #else
> #define sci_malloc malloc
> #endif

Yes, if it has to be, but I don't think that's necessary.

> Unless you've already written an XP method for rendering strings which I
> could use.  Have you?

There is one (console/console_font.c) that doesn't require SCI resources;
it's the one used for the on-screen console. The function is called
drawString, which means that it ought to be renamed to con_draw_string(),
since all other console functions start with con_, but that's a cosmetic
problem.

>  And why are we using directX instead of ggi?  Isn't
> there a ggi target for windows?

Yes there is. Dmitry wanted to do it that way, though, and since I wanted
abstraction in the graphics subsystem anyway, we agreed on an interface to
abstract the whole thing. Have a look at gfx_driver_t in 
src/include/graphics.h for the details.

> > Just draw the image to s->pic->view (which points to the 320x200 bytes
> > area containing the graphics) and call
> > s->gfx_driver->Redraw(s, GRAPHICS_CALLBACK_REDRAW_ALL, 0, 0, 0, 0);
> > for the changes to take effect.

Yep. BTW, 0 is black, 0xff is white. Bright red should be 0xcc, if you
want to increase the dramatic effect.
The whole buffer is palettized, which means it has to be translated
each time we do a screen update. That's one of the many reasons why we're
so slow right now.
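To illustrate the per-frame cost: every screen update has to run the 8 bit
view buffer through the palette to get target-format pixels. A naive
version of that translation might look like this (translate_frame() is
illustrative, not an actual FreeSCI function; real targets translate into
whatever pixel format they happen to use):

```c
#include <stddef.h>

/* Translate one 320x200 frame of palette indices into packed RGB.
** Doing this on every update is part of why redraws are expensive.
*/
void
translate_frame(const unsigned char *view,   /* 320x200 palette indices */
                const unsigned int *palette, /* 256 packed RGB entries  */
                unsigned int *out)           /* translated framebuffer  */
{
	size_t i;

	for (i = 0; i < 320 * 200; i++)
		out[i] = palette[view[i]];
}
```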

> > /SCI engine/
> >    |
> >    V
> > /Widget set/ <----> Widget buffer (for save/restore)
> >    |
> >    V
> > /Graphics API/
> >    |
> >    V
> > /GFX driver/
> > 
> > Yes, the "widget buffer" was your idea originally. I didn't think we
> > should to this, but since graphics need to be re-written anyway (it's
> > simply too slow), we'd get it as a bonus.
> Shouldn't it be something like
> 
> /SCI engine/ => Widget set => GGI/Gnome/Gtk/etc.
> So that we don't have to re-implement GGI/Gnome/Gtk, etc?  There are plenty
> of options, and even cross platform ones which work on windows.  Is there
> really a need to re-invent another one?

I'm not sure if I understood you correctly there...

Let me reiterate, for the sake of clarity.
The functions mentioned under "Graphics API" are graphics primitives:
- Draw line
- Draw filled box
- Draw text
- Draw image and scale
- Draw image and scale, with transparency key
and stuff like that.
They're the functions that /could/ be accelerated on the graphics target.
We'll leave it to the individual graphics target implementations to
handle them.
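As a sketch (all names here are invented for illustration; the real
interface is gfx_driver_t in src/include/graphics.h), that layer could
boil down to a table of primitive operations, each of which a target may
override with an accelerated version or leave to a generic software
fallback:

```c
/* Hypothetical table of graphics primitives; the names are made up
** here, not taken from graphics.h.
*/
typedef struct {
	void (*draw_line)(int x1, int y1, int x2, int y2, int color);
	void (*fill_box)(int x, int y, int xl, int yl, int color);
	void (*draw_text)(int x, int y, const char *text, int color);
	/* image scaling, transparency-keyed blits, ... */
} gfx_primitives_t;

/* Generic software fallback for filled boxes, operating directly on
** the 320x200 view buffer; a target that can accelerate this would
** install its own function instead.
*/
static void
soft_fill_box(unsigned char *view, int x, int y, int xl, int yl,
              unsigned char color)
{
	int row, col;

	for (row = y; row < y + yl; row++)
		for (col = x; col < x + xl; col++)
			view[row * 320 + col] = color;
}
```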
Widget set functions will be stuff like "draw_control_button()" or
"draw_image0()" that use the abstracted graphics API functions to
construct the picture. We can't use Gtk for that without some additional
work, since Gtk uses callbacks to signal button events; we'd have to
fork() again and provide an event queue, or clone(), or something like
that. I'm not sure if that would be easier than drawing those buttons by
hand.

Also, handling those widgets ourselves would give us a portable way to
handle theming.

>       What about the SDL?  Maybe that would be something we should look
> into as it was made for things like this and has a windows target too.

SDL might be another option for a target.
I don't want to depend on a specific graphics or sound library; generic
libraries (like glib) can be ported to almost any platform, but graphics
and sound libraries usually have restrictions when it comes to
portability.

>       And your idea about openGL is a decent one too as it would work
> pretty well.  It would definitely require a faster computer, but it would
> make resizing easier, as well as color depth independence.  As more and more
> 3D cards are supported for Linux, this might well be fairly doable at this
> point.  You don't need a good 3D card to get decent 2D performance out of
> openGL, and in many places you don't need a 3D card.  I think that there are
> different openGL rendering modes, and there may well be a 2D one with
> textures which is faster than a 3D one using just a 2D plane.  I'll check it
> out in my openGL book.

graphics_glx.c currently uses
  glMatrixMode(GL_PROJECTION);
but that target is too slow at the moment; it copies parts of the
s->pic->view buffer but then has to glXSwapBuffers() everything.


llap,
 Christoph
