On Wed, 28 Feb 2001, Christoph Egger wrote:
> > process synchronously, then the user would themselves clone another thread to
> > do it. Cloning a thread inside a library should only be done if it is
> > absolutely necessary, and libraries should be written to be functional
> > where threads are not available whenever possible.
>
> But you forget the case that the user can allocate additional
> resources (e.g. sprites) during run-time. ggiDoom is a good example
> that could be rewritten to use libBSE. When the player finishes one
> level and comes to the next, there might be a new kind of
> enemy which has to be allocated... You see what I mean?
Yes, but I don't see why the low-level way prevents this, or why
threads really need to be considered. Ah, maybe I see now what you
are getting at: we haven't left a way to tell GASet not to
touch the features you are still using while you deallocate the features
you don't want anymore and get new ones. But that is just a flag -- we
can use the same field that tells it not to touch the video mode (res_state).
So in this case, you would do a GAget and get a copy of the "live"
features; we could have the library automatically flag those
features when they are set(). Delete the old resource (it is only
a copy) for the unwanted feature. Add a resource for the wanted feature.
Run check(). Deallocate the higher-level structures associated with the
feature you don't want by calling the high-level free function on the
object, run set(), and reallocate a new high-level object if the
set succeeds.
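To make that concrete, here is a rough sketch of that sequence. The names
(ggiGAGet, ggiBSEFreeSprite, the indices) are hypothetical placeholders,
not settled API:

ggiGAresource_t live;

live = ggiGAGet(vis);                 /* copy of the "live" features         */
ggiBSEDelSprite(vis, live, 0);        /* delete the unwanted one (copy only) */
ggiBSEAddSimpleSprite(live, 100, 100, GT_AUTO);   /* add the wanted one      */
if (ggiGACheck(vis, live) != NULL) {
        ggiBSEFreeSprite(old_sprite);     /* free the high-level object      */
        ggiGASet(vis, live);              /* live features left untouched    */
        new_sprite = ggiBSETieSprite(vis, live, 0);   /* reallocate          */
}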
For the "simple" API you would assume this behavior -- if you
don't want a feature, you free it with the higher level library's
free function, which deletes the resource from the current mode.
If you want another one later, you just call the normal higher
level allocation function. This calls GAlloc to check/set the
"current mode" against the added feature, and GAlloc won't touch "live"
features but knows the unwanted one has been deleted so it can fit
the new one, and returns sucess to the higher level function.
If you do something stupid like requesting a sprite at GGI_AUTO, GGI_AUTO
resolution when your target happens to do full-screen sprites, and it
eats all your VRAM and you don't check for that and de/reallocate
it at a lower resolution, well, that's your own fault.
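From the application's point of view, that two-call pattern would look about
like this (again, the function names are hypothetical placeholders for the
simple API):

ggiBSEFreeSprite(vis, small_sprite);  /* deletes its resource from current mode */
big_sprite = ggiBSEAllocSprite(vis, 100, 100, GT_AUTO);
if (big_sprite == NULL) {
        /* GAlloc couldn't fit it alongside the live features */
}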
> My example can handle this case. But I want to achieve that
> WITHOUT threads. I suppose you are too low-level, and I am too
> high-level. We have to find the golden middle way... :)
>
> Any suggestions for this?
I'm still not really comprehending where threads come into this at all.
Most calls to GAlloc and the higher-level functions should be
pretty much instantaneous and can be done synchronously. If you
have a situation where you are going to change from FPS play
mode to game menu mode, for instance, then using check() functions
you should be able to preprogram the entire menu-system video mode and
features while the FPS mode is still running under the lower-level
API. Doing this under the higher-level API would require
us to add check() functions and a way to keep track of whether
you are operating on a hypothetical new mode or the current one.
Maybe this would be a good thing, or maybe anyone who really needs
to do something this complicated should be using the lower-level
API. In either case, if they want to make a thread to do this,
they should make it themselves.
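For example, under the low-level API the menu precomputation could be
sketched like this (hypothetical names; menu_mode is just a second resource
list checked ahead of time while the FPS mode stays live):

ggiGAresource_t menu_mode;

ggiGAEmptyResource(&menu_mode);
ggiGAAddSimpleMode(menu_mode, 640, 480, 1 /* frame */, GT_AUTO);
ggiBLTAddSimple2dTexture(menu_mode, 256, 256, GT_AUTO);
if (ggiGACheck(vis, menu_mode) == NULL) {
        /* plan a plainer menu; the running FPS mode is untouched */
}
/* ... later, when the player actually opens the menu ... */
ggiGASet(vis, menu_mode);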
The "Low level API" doesn't look that bad once you massage it
with convenience functions, and my examples assume you
are using it for something that is actually difficult to do.
If you were using it for simpler things it could look like this:
ggiGAresource_t mymode, mynewmode;

ggiGAEmptyResource(&mymode);
ggiGAAddSimpleMode(mymode, 640, 480, 2 /* frames */, GT_AUTO);
ggiBSEAddSimpleSprite(mymode, 30, 60, GT_AUTO);
ggiBLTAddSimple2dTexture(mymode, 100, 100, GT_AUTO);
if (ggiGACheck(vis, mymode) == NULL) {
        fprintf(stderr, "I won't run without my crap. Waaah.\n");
        ggiExit();
}
if ((mynewmode = ggiGASet(vis, mymode)) == NULL) {
        fprintf(stderr, "Someone screwed the pooch.\n");
        ggiExit();
}
sprite = ggiBSETieResource(vis, mynewmode, 0);
texture = ggiBLTTieResource(vis, mynewmode, 0);
/* Wow, that was relatively painless */

/* Get a bigger sprite */
ggiGAEmptyResource(&mymode);
ggiGAAppendTiedResources(mymode, mynewmode);
ggiBSEDelSprite(vis, mymode, 0);
ggiBSEAddSimpleSprite(mymode, 100, 100, GT_AUTO);
if (ggiGACheck(vis, mymode) == NULL) { /* Sprite was too big */
        fprintf(stderr, "I should have checked this earlier. Doh!\n");
        MyCleanUpAndExit();
}
/* wait until we need the new mode */
ggiBSEUntieSprite(vis, sprite);
ggiGASet(vis, mymode);
sprite = ggiBSETieSprite(vis, mymode, 0);
Now as for stuff that isn't in GAlloc because it is handled entirely by the
extension and never conflicts with anything else, there is no need for
special treatment -- say the above sprite was an X pointer handled by libBSE.
GAlloc would simply ignore the resource, and BSE would know to allocate
the object when the Tie() function is called.
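In code the application wouldn't do anything different; only where the work
happens changes (a sketch, with hypothetical names as before):

ggiBSEAddSimpleSprite(mymode, 16, 16, GT_AUTO);  /* software pointer sprite    */
ggiGACheck(vis, mymode);     /* GAlloc passes the entry through untouched       */
ggiGASet(vis, mymode);
pointer = ggiBSETieResource(vis, mymode, 0);     /* libBSE allocates it here    */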
One thing I would add, though, is that a lot of the features will
sometimes be needed in large quantity, so we should have two quantity
fields in the resource -- the amount absolutely needed, and the
maximum amount needed. That or a "SpriteSet" object to request.
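Something along these lines in the resource descriptor, maybe (field names
are only a suggestion):

struct ggiGA_resource {
        /* ... type, size, flags, res_state, etc. ... */
        int min_count;   /* amount absolutely needed            */
        int max_count;   /* maximum amount we could make use of */
};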
--
Brian