Re: MPEG in GGI?

2000-04-19 Thread Rubén

On 2000/Apr/19, Andrew Apted wrote:

 Like SDL's "surfaces" ?  I.e. non-visible places to draw stuff, maybe
 blitting them to the visible screen/window at some stage, right ?
 
 LibGGI has nothing like that yet (I hope it will someday).  The
 closest it comes is either using a large virtual area and drawing in
 the off-screen parts, or using multiple frames and drawing in the
 non-visible frame(s) using ggiSet{Read,Write}Frame.

It isn't so simple. In most cases the offscreen video memory won't
be enough for you, so some bitmaps will fit there and some others won't. There
should be some cache-like system that keeps the most frequently used
bitmaps in video memory, and also some algorithms to optimize the use of the
offscreen memory, because you will get external fragmentation... It's
not as trivial as it seems; this is the reason why GGL doesn't support this
stuff yet (although you can do it manually if you are brave enough, of
course).
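
Something like this is what I have in mind; a minimal sketch, where
vram_alloc()/vram_free()/vram_upload() stand for whatever offscreen
allocator the target provides (made-up names):

#include <stddef.h>

typedef struct bitmap {
        void   *sysmem;         /* always-valid copy in system memory      */
        void   *vram;           /* NULL when evicted from offscreen memory */
        size_t  size;
        unsigned long last_use; /* for LRU eviction */
        struct bitmap *next;
} bitmap_t;

extern void *vram_alloc(size_t size);   /* hypothetical primitives */
extern void  vram_free(void *p);
extern void  vram_upload(void *dst, const void *src, size_t size);

static bitmap_t *cache_head;            /* all known bitmaps */
static unsigned long tick;

/* Ensure bmp lives in video memory, evicting the least recently
 * used bitmaps until the allocation succeeds. */
void *bitmap_in_vram(bitmap_t *bmp)
{
        bmp->last_use = ++tick;
        if (bmp->vram)
                return bmp->vram;

        while (!(bmp->vram = vram_alloc(bmp->size))) {
                bitmap_t *b, *lru = NULL;
                for (b = cache_head; b; b = b->next)
                        if (b->vram && (!lru || b->last_use < lru->last_use))
                                lru = b;
                if (!lru)
                        return NULL;    /* bitmap larger than all of VRAM */
                vram_free(lru->vram);   /* evict and retry */
                lru->vram = NULL;
        }
        vram_upload(bmp->vram, bmp->sysmem, bmp->size);
        return bmp->vram;
}

A real version would also have to compact the offscreen memory when
fragmentation leaves no contiguous hole, which is where it stops being
trivial.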

It would be nice if GGI could do this some day...
-- 
 __
 )_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]




MPEG in GGI?

2000-04-16 Thread Rubén

I've downloaded libsmpeg, and after looking at the API, there are
two things that I dislike:
* it needs SDL
* it is in C++

Is somebody porting the code of this library to a pure C version
based only on GGI, maybe as an extension or something?
-- 
 __
 )_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]




Re: kgi-0.9-20000222

2000-02-22 Thread Rubén

On 2000/Feb/22, Steffen Seeger wrote:

Hi,

 a new KGI-0.9 snapshot is available from

What about acceleration? I've downloaded this version but it has
nothing under the accel directory... Is it still only available in KGIcon?

 - added S3 ViRGE driver framework by Jos Hulzink

Cool!

-- 
 __
 )_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]



Re: SGI OpenGL freely available.

2000-01-29 Thread Rubén

On 2000/Jan/28, Steffen Seeger wrote:

   The GGL code in KGI-0.9 was a code study of how to dynamically 
^^^
  What do you refer to with GGL? :)
 
 This was meant to be an experimental Generalized GL implementation.

In KGI? Is there something working for an S3 Virge card that I can
test on my machine?

 Its development is currently halted due to lack of time on my part. (I know of

Where can I find more documentation on this project?

 the GGL Project, so I will probably have to choose another 3-letter word.)

Well, it would be very difficult to convince the GGL guys to change
our name; just convincing them to change the logo was already very difficult
(before that we had a penguin taken from the GGI page, and everybody confused
it with a crow :))
-- 
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]



Re: Line Clipping

2000-01-29 Thread Rubén

On 2000/Jan/28, Dmitry Semenov wrote:

 Also I found that Matrox G100 acceleration is slower than software drawing.

That's impossible! On my machine the S3 Virge acceleration is _always_
faster than software drawing, even taking into account that it means lots of
ioctls on /dev/fb. I have a PII 333 MHz. And AFAIK the S3 Virge is much slower
than the Matrox G100; maybe I'm wrong?
-- 
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]



Re: SGI OpenGL freely available.

2000-01-27 Thread Rubén

On 2000/Jan/27, Steffen Seeger wrote:

  Dynamic assembly code generation for rasterization is not yet included, 
  making software rendering performance slow.
 
 Interesting, we came upon the same technique as SGI:
 The GGL code in KGI-0.9 was a code study of how to dynamically 
  ^^^

What do you refer to with GGL? :)
-- 
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]
  ^^^



Re: LibGGI3D

2000-01-25 Thread Rubén

On 2000/Jan/25, teunis wrote:

 Running a separate 3D-server is a good way to go.  But not multithreaded.
 
 Advantages:
   Won't kill your hardware if your main program crashes.

This is one of the reasons why 3D hardware support must be in
the kernel (of course, the main one is that "it's hardware, you shouldn't
talk directly to the hw on a unix machine!!").

   Isn't hurt by input or other processing/signals/...

My opinion is that it's more useful, interesting and important to
have the stuff that you want to put in this server in the kernel, at
least partially, and if I'm right that's what is being done with KGI... It
doesn't crash the hardware, it doesn't have security problems, it can
easily prevent multiple tasks from accessing the hardware at once, and assign
the display to the focused task (the one that is on the current console), etc.
How can you control from this server which task should show its
graphics and which one you should ignore commands from?
From the kernel, instead, it's very easy...

My opinion is this: let the kernel do its work (task
arbitration, security, hardware abstraction, etc.) and use servers for
the things they should be used for (which is neither hardware
abstraction nor task control; I hate the X Window System design, most of
it should be in the kernel).

 Graphics libs + modules -must- be bug-free.  X is a beautiful example of
 why.  (one way or another).

X is beautiful? You mean XGGI, don't you? :)

 Agreed here too.  But a good debugger is -hard- to find.

gdb and electric-fence are your friends (at least they are mine ;)

  You are not serious, are you ? Ever tried to debug a multithreaded
  application ? It's a nightmare.
 
 It's worse than that...  Especially if graphical.

Well, if you have only one monitor and video card that's true, but,
for example, if you are lucky enough to have your computer connected to
another one, you can run gdb remotely and keep the graphics on the local
video card.

 Oh, and one last point.  Communications between threads or processes is
 evil.  Really evil.  Case in point : locking of shared data within the

I think that you should abstract the communications as much as you
can, and centralize them as much as you can too; this should reduce the
bugs to a minimum and make them easier to localize. It can
reduce the speed a lot, but I think that the first step is to get it
running with reasonably beautiful code, and once it runs, optimize to the
maximum. The inverse way always makes you lose a lot of time.
-- 
Regards
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]



GGL 0.2.0 is out

2000-01-10 Thread Rubén

GGL is a project built on top of GGI, which will provide a very high
level API to game programmers, not only for graphics, but also for devices,
sound, AI, real-time video, loading of graphic formats, and more.
You can see our web at:

http://ceu.fi.udc.es/gpul/proyectos/ggl

Unfortunately the news on the English version is not as up to date as
on the Spanish one, but the rest is the same.

We have updated from version 0.1.2 to 0.2.0; you can download it from
the web page (if not now, then in a few hours, as it must be put on the
server by the admin).

The most important changes are:

* New 2D engine, supporting layers, and more efficient than the
  previous one. It has been tested exporting the display over ethernet
  from a 486/66, and it seems to be quite fast now (thanks to
  ggiFlushRegion ;)

* Device support is almost done; there is only a little problem
  between joystick and keyboard buttons, but some people on this
  list have told me how to solve it (thanks to them as well).

* We are now 26 developers!

* API documentation is almost synchronized with the actual API.

* We now have a complete tutorial (complete with respect to what we
  have implemented so far, of course)!! (Downloadable from the web)

* The network system is completely specified; now it only needs
  implementation.

* Real-time video is in the same state as the network :)

* Other systems, although fairly well thought out, are still neither
  implemented nor specified.

That's all. Although there is an example shipped with GGL, it's much
too simple. I've made another one; it's at ftp://ceu.fi.udc.es/os/linux/gpul/
(the name may change as I'm making snapshots quite often, but it starts with
"blockout").

Have fun!
-- 
"The artifficial intelligence will never exceed the natural stupidity" 
 Santiago Souto
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]



Re: Keyboard vs Joystick buttons

2000-01-04 Thread Rubén

On 2000/Jan/04, [EMAIL PROTECTED] wrote:

First of all, I must say that I've understood much more from reading
the gic.h include file than from the demos.

  little explanation on what this extension is, please?
 
 It is basically a trainable event mapper. You register the "program events"
 (like turn left, activate menu, shoot) with it, and it will establish a 
 mapping between the program events and GII events.

I've seen the examples and it seems a great idea; of course it
should save a lot of work for GGL and for GGL's users.

But I have a few comments and questions (maybe you could start a FAQ
with them):

* If I understood correctly, control - context - head form a tree in a
very ugly way. They have exactly the same methods, but Context is one level
above Control in the tree. And Head has fewer methods, but has basically
the same relationship with Context that Context has with Control. Aren't
there cleaner ways of making trees? :) I think it would be better if
you removed Control, Context and Head, and created a generic tree; for
example, something like this (see the sketch after these questions):

   gicTreeAllocate
   gicTreeFree
   gicTreeAttachFeature
   gicTreeDetachFeature
   gicTreeAttachTree
   gicTreeDetachTree
   [...]

What do you think?

* What are the S and L that you put in the recognizers (in the
config files)? And how can I figure out which value I should put on the right
side of these strings?

* It seems that LibGIC can wait for a gii event and attach it to an
action, but very smooth movements of my joystick send lots of
events to GGI. Is there any way of telling LibGIC that it should only take
into account the significant movements of an axis?
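
To make the tree idea concrete, here is a purely hypothetical sketch of
what such a generic gicTree API could look like; none of these types or
functions exist in LibGIC:

typedef struct gic_tree    gic_tree_t;
typedef struct gic_feature gic_feature_t;

gic_tree_t *gicTreeAllocate(const char *name);
void        gicTreeFree(gic_tree_t *tree);

/* features hang off any node */
int gicTreeAttachFeature(gic_tree_t *tree, gic_feature_t *feature);
int gicTreeDetachFeature(gic_tree_t *tree, gic_feature_t *feature);

/* trees nest, replacing the fixed head -> context -> control levels */
int gicTreeAttachTree(gic_tree_t *parent, gic_tree_t *child);
int gicTreeDetachTree(gic_tree_t *parent, gic_tree_t *child);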

PS: I could make standard Debian packages of LibGIC and install them on my
system without _any_ problem, and it is still version 0.1. Incredible! :)
-- 
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]



BIG bug in GGI (I think)

2000-01-04 Thread Rubén

Well, I will really report two bugs. The first one is the more
important: you have very good code to handle multi-threading, but you didn't
think of the possibility of a single process reentering, did you?

I have a GTK application with a ggi_visual inside it (-inwin
rules). All seems to work nicely; the GGL sprites walk across the screen as
they should. First problem: ggi doesn't receive events from this window. I
tried to simulate the expose_event with ggiEventSend, but it doesn't seem to
work. Well, this isn't the bug; I'm surely doing something wrong here. To
work around it, I catch the expose event in GTK and, instead of simulating
the gii one, I do the ggiFlush myself. But GGL uses alarms to synchronize
with the required framerate (and GTK uses who knows what, but it comes to the
same): when I call ggiFlushRegion from gtk it can be drawing a BIG area,
which takes a while, and if at that moment the GGL alarm signal is received,
the ggiFlushRegion is interrupted to do another ggiFlushRegion! You then
assume that another process is accessing the visual and lock up the process.
But there is only one process, reentering!

Is this really a bug, or am I doing something else wrong? If it is,
it would help me a lot if you fixed it as soon as possible. I can use tricks
like the one below to avoid it while developing at home, but I can't
distribute the program with these tricks...
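
The trick I mean is roughly this: block SIGALRM for the duration of the
flush, so the GGL alarm handler cannot re-enter LibGGI. Plain POSIX,
nothing GGI-specific:

#include <signal.h>
#include <ggi/ggi.h>

void safe_flush_region(ggi_visual_t vis, int x, int y, int w, int h)
{
        sigset_t block, old;

        sigemptyset(&block);
        sigaddset(&block, SIGALRM);
        sigprocmask(SIG_BLOCK, &block, &old);  /* defer the alarm      */
        ggiFlushRegion(vis, x, y, w, h);       /* cannot be re-entered */
        sigprocmask(SIG_SETMASK, &old, NULL);  /* alarm delivered now  */
}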

And the second bug: when I call ggiClose, if the visual is
an X window, it ALWAYS closes the window, even if the window wasn't created
by ggi (-inwin). So I can't call ggiClose, and I'm leaking memory each time I
close a gtk window that contains the ggl widget. But this bug doesn't worry
me much; the other one is much more dangerous.

Thanks
-- 
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]



Keyboard vs Joystick buttons

2000-01-03 Thread Rubén

I'm adding joystick support to GGL, and it's almost done, but I
found a little problem. The joystick buttons are reported in an evKey event,
so the GGL keyboard device gets its own events and also the joystick
ones...

I open the joystick by opening a gii_input_t of type "linux-joy",
and then calling ggiJoinInputs with the ggi_visual_t and the new
gii_input_t.

I know that I can look at event->any.origin to distinguish between
keyboard and joystick events, but how can I know which value is the keyboard
one and which is the joystick one? In a few tests that I've made, the
keyboard events had the value 0100 and the joystick 0200, but I don't
know if it is always this way.
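
To make the setup concrete, this is roughly what I do; handle_key() and
handle_joy_button() are illustrative GGL hooks, and the two origin values
are just the ones I observed, not guaranteed:

#include <ggi/ggi.h>
#include <ggi/gii.h>

extern void handle_key(int sym);           /* illustrative GGL hooks */
extern void handle_joy_button(int button);

int read_inputs(ggi_visual_t vis)
{
        gii_input_t joy = giiOpen("linux-joy", NULL);
        ggi_event ev;

        if (joy == NULL || ggiJoinInputs(vis, joy) == NULL)
                return -1;

        for (;;) {
                ggiEventRead(vis, &ev, emAll);
                if (ev.any.type != evKeyPress)
                        continue;
                if (ev.any.origin == 0100)        /* observed: keyboard */
                        handle_key(ev.key.sym);
                else if (ev.any.origin == 0200)   /* observed: joystick */
                        handle_joy_button(ev.key.button);
        }
}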

Can anybody help me, please?
-- 
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]



Re: Keyboard vs Joystick buttons

2000-01-03 Thread Rubén

On 2000/Jan/04, Andreas Beck wrote:

   I know that I can look at event->any.origin to distinguish between
   keyboard and joystick events, but how can I know which value is the keyboard
   one and which is the joystick one? In a few tests that I've made, the
   keyboard events had the value 0100 and the joystick 0200, but I don't
   know if it is always this way.
 
 No, this is not guaranteed. You need to query the device capabilities, if
 you want to distinguish them. You could as well distinguish by the keys that
 get sent, but that's not "the nice way".

I know. I can't do such a thing in a library :)

 Wouter already answered, how the nicer way works.

Thanks to both.

 If GGL can stand a bit of redesign in the input subsystem, I'd however
 suggest that you have a look at LibGIC. It enforces descent-style
 configurability for all game controls with little effort on the programmer
 side, which might be a nice thing to have ... :-).

I couldn't find any document about libgic; I have a December cvs
snapshot and I could only see the includes, which didn't give me much
information. Where can I find docs? If there aren't any, could you give me a
little explanation of what this extension is, please?
Anyway, the GGL input system is really simple. We have only four kinds
of events: the axis event, the button event, the collision event and the
finish event; only the first two are really related to GII. And there
are four kinds of devices (there will be many more): GglKeyDev, GglMouseDev,
GglJoyDev and GglArbiter. You can have as many instances of these devices as
you want, and it's possible and usual to have a GglKeyDev and a GglJoyDev
related to the same ggi_visual_t. I only need to filter events so that
the first one receives only keyboard events and the second one only joystick
events. Hmm, I've found another problem: if I have two joysticks, when one
reads an event that is not for it, the second one will never receive that
event. Maybe we should add an intermediate layer that processes ggi events
and allows other ways of filtering (sketched below) :(
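
That intermediate layer could be as simple as this sketch, where every
queued event is offered to all registered devices instead of being consumed
by the first reader; GglDevice and its two callbacks are made-up names:

#include <ggi/ggi.h>

typedef struct GglDevice GglDevice;
struct GglDevice {
        int  (*wants)(GglDevice *dev, const ggi_event *ev);
        void (*handle)(GglDevice *dev, const ggi_event *ev);
        GglDevice *next;
};

static GglDevice *devices;      /* registered GGL devices */

void ggl_dispatch(ggi_visual_t vis)
{
        ggi_event ev;
        GglDevice *dev;

        while (ggiEventsQueued(vis, emAll)) {
                ggiEventRead(vis, &ev, emAll);
                /* offer the event to every device, so two GglJoyDevs
                 * can both see it and filter by origin themselves */
                for (dev = devices; dev != NULL; dev = dev->next)
                        if (dev->wants(dev, &ev))
                                dev->handle(dev, &ev);
        }
}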

There is another concept in GGL, the Observer. An Observer can
receive events from any device, and a device can have a number of observers
attached to it.

Can we take advantage of LibGIC without losing the concepts of
Device and Observer?

I very much like having different device objects, because it allows
adding interesting new capabilities to GGL just by adding more code (more
classes) or extending existing code (sub-classes), without touching the
existing code at all. For example, I can add a joystick emulator that uses
the keyboard, for those games that only need the sense of the axis and not
the integer value. It would be a subclass of GglJoyDev.

-- 
In the beginning, Turing created the machine...
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]



Inserting visuals into other apps.

1999-12-20 Thread Rubén

Hi, 

I'm starting to think about inserting GGL games into X
applications, and if I'm right this should be very easy, because you allow
parameters in the display-x target. I need two different things: on one
side, to call ggi routines from (for example) a GTK application to draw into
a GTK DrawingArea; on the other, to call an independent
GGL/GGI program, changing the environment variable GGI_DISPLAY.

I don't know how I can do the first thing, but for the second one,
as the libggi manpage says, I should use "display-x: -inwin window
id". It doesn't work at all. -inroot does, but -inwin doesn't; ggiOpen
returns an error code.

Can anybody help me, please?

-- 
In the beginning, Turing created the machine...
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]



Re: GGI leak

1999-12-13 Thread Rubén

On 1999/Dec/12, Marcus Sundberg wrote:

 I was removing a lot of leaks in GGL, and I think I've found a
   leak in GGI; here is the stack info:
 
 What tool did you use for that btw?

memprof. Very useful; it seems to read symbol tables, and it lets you
launch xemacs, fte or whatever you like at the line where the leak was
detected, watch the memory usage of your program dynamically, and see the
memory usage of each function of your program. It also generates files with
the info, and of course, it comes with Debian (potato) :) (Yes, surely there
are more tools, but this one [+ gdb + efence] is enough for me for now).
-- 
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]



GGI leak

1999-12-12 Thread Rubén

I was removing a lot of leaks in GGL, and I think I've found a leak
in GGI; here is the stack info:

Leaked 0x806ecf0 (2048 bytes)
_default_zero(): /tmp/degas/lib/ggi2-1.99.2.0b2.1/ggi/visual.c:46
alloc_fb(): /tmp/degas/lib/ggi2-1.99.2.0b2.1/display/memory/mode.c:91
_GGIdomode(): /tmp/degas/lib/ggi2-1.99.2.0b2.1/display/memory/mode.c:145
_GGIdomode(): /tmp/degas/lib/ggi2-1.99.2.0b2.1/display/memory/mode.c:169
_ggiCheck4Defaults(): /tmp/degas/lib/ggi2-1.99.2.0b2.1/ggi/mode.c:68
ggl_visual_set_mode(): /home/ryu/devel/GPUL/ggl/src/graphics/gglvisual.c:105
ggl_timeout_start(): /home/ryu/devel/GPUL/ggl/src/ggltimeout.c:152
ggl_timeout_start(): /home/ryu/devel/GPUL/ggl/src/ggltimeout.c:152
ggl_timeout_start(): /home/ryu/devel/GPUL/ggl/src/ggltimeout.c:152
ggl_timeout_start(): /home/ryu/devel/GPUL/ggl/src/ggltimeout.c:152
(???)
_start(): (null):0

I thought that maybe this leak was produced by GGL code,
but I couldn't find any more leaks in it.

The code at line 105 of ggl_visual_set_mode() is:
error = ggiSetMode(visual->visual, gmode);

And visual->visual is a successfully opened visual, of the type
shown below:

struct _GglVisual{
  GglBase base; /* Base class */
  int id;   /* Class Identifier */
  ggi_visual_t visual;  /* GGI Visual*/
[...]

The code at line 152 of ggl_timeout_start() does nothing but return a
number (no mallocs). The reason the stack looks like this is that I
catch the alarm signal and use Linux interval timers [setitimer()].
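
Reduced to its essentials, the timing scheme looks like this;
ggl_next_frame() is illustrative, not the real GGL entry point:

#include <signal.h>
#include <sys/time.h>

extern void ggl_next_frame(void);       /* illustrative per-frame hook */

static void on_alarm(int sig)
{
        (void)sig;
        ggl_next_frame();
}

void start_frame_timer(long usec_per_frame)
{
        struct itimerval it;

        signal(SIGALRM, on_alarm);
        it.it_interval.tv_sec  = usec_per_frame / 1000000;
        it.it_interval.tv_usec = usec_per_frame % 1000000;
        it.it_value = it.it_interval;           /* first tick = one period */
        setitimer(ITIMER_REAL, &it, NULL);
}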

The example that I've tested creates 24 visuals (1 with target X and
23 with target memory) and never loses pointers to any of them.

I hope this is enough information :)
-- 
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]



Problem with latest versions

1999-12-06 Thread Rubén

I'm having problems with the latest Debian packages of GGI: all
programs die with SIGSEGV at ggiOpen. See an example:

#include <ggi/ggi.h>
int main()
{
 ggi_visual_t v;
 ggiInit();
 v=ggiOpen(NULL);
 ggiSetGraphMode(v,GGI_AUTO,GGI_AUTO,GGI_AUTO,GGI_AUTO,GT_AUTO);
 ggiClose(v);
 return 0;
}

I run this simple program on _any_ target except memory, and
it crashes with this backtrace:

#0  0x804c6c0 in ?? ()
#1  0x40060c55 in vfprintf () from /lib/libc.so.6
#2  0x4006812d in fprintf () from /lib/libc.so.6
#3  0x40109aaf in ggDPrintf () from /usr/lib/libgg.so.0
#4  0x40109b40 in ggLoadModule () from /usr/lib/libgg.so.0
#5  0x4001b7b8 in ggiDBGetBuffer () from /usr/lib/libggi.so.2
#6  0x4001baca in _ggiAddDL () from /usr/lib/libggi.so.2
#7  0x4001bd08 in _ggiOpenDL () from /usr/lib/libggi.so.2
#8  0x4001ca82 in ggiOpen () from /usr/lib/libggi.so.2
#9  0x4001c92e in ggiOpen () from /usr/lib/libggi.so.2
#10 0x80485e5 in main ()

I've also tried to compile the latest snapshot (99/12/05), and I
got errors at every place where EXPORTVAR and IMPORTVAR appeared, for
example:
init.c:40: syntax error before `uint32'
init.c:41: syntax error before `int'
init.c:42: syntax error before `void'

I need GGI to continue developing GGL; I've finished the new 2D
engine and I can't test whether it works :( 
-- 
Regards
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]



Re: Problem with latest versions

1999-12-06 Thread Rubén

On 1999/Dec/06, Marcus Sundberg wrote:

  #include <ggi/ggi.h>
  int main()
 int main(void)

It was only an example to show you that it crashes at ggiOpen; it
wasn't intended to compile with -pedantic :) (and even -pedantic doesn't
warn if you don't use (void)).

Here you have the new ANSI, -pedantic-compatible and pretty-printed
code:

#include <stdio.h>
#include <stdlib.h>
#include <ggi/ggi.h>

int main()
{
        ggi_visual_t v;

        if (ggiInit() < 0) {
                fprintf(stderr, "ggiInit failed\n");
                exit(1);
        }
        v = ggiOpen(NULL);
        if (v == NULL) {
                ggiPanic("ggiOpen failed\n");
        }
        if (ggiSetGraphMode(v, GGI_AUTO, GGI_AUTO, GGI_AUTO, GGI_AUTO, GT_AUTO) < 0) {
                ggiPanic("ggiSetGraphMode failed\n");
        }
        ggiClose(v);
        return 0;
}

And here you have the result:

Segmentation Fault (core dumped)

The backtrace is the same as the one in the other mail.

 If the problem persists then, please give me the urls of the
 GGI-related .debs you are using, and I'll check it out at the
 Debian system at work.

ftp://ceu.fi.udc.es/debian/dists/potato/main/binary-i386/libs/...

  init.c:40: syntax error before `uint32'
  init.c:41: syntax error before `int'
  init.c:42: syntax error before `void'
 
 If you use a snapshot you must take all the libraries from the same
 snapshot - your LibGG is too old.

Oh! Right, I forgot to compile libgg and libgii from this snapshot;
thanks.
-- 
"The artifficial intelligence will never exceed the natural stupidity" 
 Santiago Souto
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]



Re: Vertical retrace

1999-10-22 Thread Rubén

On 1999/Oct/22, Niklas Höglund wrote:

 I think there is an easy solution for the first thing. Why don't you
   share a flag variable between the kernel and the process, which becomes 1 at
   the vertical retrace and 0 when it finishes? For programmers it would be
   even more efficient than reading from the I/O port directly (which is
   impossible under Linux, obviously). It could be managed in a similar way
   to the graphics context, couldn't it?
  
  I think that putting this flag in the context map is the way to
  go.  The flag can be quickly toggled by the interrupt handler, and nothing
  else is necessary.  This doesn't get around the scheduling Hz problem, but
  that is a separate issue anyway, and needs to be fixed for a lot more than
  just GGI.
 
 It'd be even better to increase a value every vblank so applications
 can find out if they have missed any frames, and also to make sure
 applications don't redraw more than once per frame.

I don't agree; applications don't have to run at the same refresh
rate as the monitor. Your monitor can be refreshing at 100 Hz, and 50 frames
per second is more than enough for an animation to look smooth.
And if you do this, some people would synchronize their applications
in infinite loops testing this number, instead of using alarms (the right
method, IMHO).
Another reason is that a flag is tested much faster than a number (at
least on Intel).

I think the number isn't needed at all; your alarm system
should figure out when a frame has been lost (this is how GGL does frame
skipping, sketched below), and it need not worry about real card-monitor
frames.
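
For illustration, this is roughly how the alarm-driven frame skipping
works: the handler only counts ticks, and the loop catches up by updating
without drawing. ggl_update()/ggl_draw() are illustrative names, and the
setitimer() call that arms the timer is elided:

#include <signal.h>
#include <unistd.h>

extern void ggl_update(void);   /* advance game state one frame */
extern void ggl_draw(void);     /* render the current state */

static volatile sig_atomic_t ticks;

static void on_alarm(int sig)
{
        (void)sig;
        ticks++;                /* one tick per desired frame */
}

void game_loop(void)
{
        long done = 0;

        signal(SIGALRM, on_alarm);
        /* ... setitimer(ITIMER_REAL, ...) at the desired frame rate ... */

        for (;;) {
                while (done < ticks) {  /* behind: update without drawing */
                        ggl_update();
                        done++;
                }
                ggl_draw();             /* draw at most once per tick */
                pause();                /* sleep until the next SIGALRM */
        }
}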
-- 
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]



Re: Vertical retrace

1999-10-20 Thread Rubén

On 1999/Oct/20, Brian S. Julin wrote:

 Basically there are two situations to worry about.  One is trying
 to find the ray position from userspace when the process is running.
 The other is trying to get the scheduler to run the process 
 promptly at a certain ray positions.

I think there is an easy solution for the first thing. Why don't you
share a flag variable between the kernel and the process, which becomes 1 at
the vertical retrace and 0 when it finishes? For programmers it would be
even more efficient than reading from the I/O port directly (which is
impossible under Linux, obviously). It could be managed in a similar way
to the graphics context, couldn't it?
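
From the application side it could look like the sketch below. This is
purely hypothetical, since no driver exports such a page today;
/dev/graphics and the page layout are made up:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>

/* map the hypothetical page the driver would export */
volatile int *map_vblank_flag(void)
{
        void *page;
        int fd = open("/dev/graphics", O_RDONLY);

        if (fd < 0)
                return NULL;
        page = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
        if (page == MAP_FAILED)
                return NULL;
        return (volatile int *)page;    /* 1 during vertical retrace */
}

void wait_retrace(volatile int *flag)
{
        while (*flag)
                ;       /* inside a retrace: wait for it to end */
        while (!*flag)
                ;       /* busy-wait for the next one to begin  */
}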
-- 
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]



Re: Doesn't need vertical retrace!

1999-10-07 Thread Rubén

On 1999/Oct/06, Andreas Beck wrote:

  screen blinking a lot, I think. Anyway, there is another bigger problem,
  IMO, that switching to kernel mode, copying data structures, and returning
  back into user mode, may be too much time, and maybe when the ioctl returns,
  you haven't enough time to copy your buffer.
 
 No. That shouldn't matter much. I once measured a full ioctl round trip on
 my old 486 to take 600 cycles. Given even that machine (486/66), this gives
 an extra delay of about 10 microseconds. Shouldn't be the crucial point.

If you are drawing objects with small polygons (e.g. a torus at high
polygon resolution), where each polygon can take 300 cycles or less,
it's very crucial, I think.
In fact, in my 2D accel code, I only draw hlines by hardware when they
are more than 100 pixels long, because if they are shorter, drawing by
hardware is much slower (because of the ioctl call?).
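
The dispatch in that 2D accel code is essentially this; the two helpers
are illustrative, and 100 pixels is just my measured break-even point:

#define HW_HLINE_MIN 100        /* empirical break-even length, pixels */

extern void sw_hline(int x, int y, int w);      /* CPU writes to the fb */
extern void hw_hline(int x, int y, int w);      /* ioctl to /dev/fb     */

void draw_hline(int x, int y, int w)
{
        if (w >= HW_HLINE_MIN)
                hw_hline(x, y, w);      /* long spans amortize the ioctl */
        else
                sw_hline(x, y, w);      /* short spans: the ioctl costs
                                         * more than the drawing itself */
}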
-- 
Come to GPUL http://ceu.fi.udc.es/GPUL
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]



Re: Hardware cursor

1999-09-30 Thread Rubén

On 1999/Oct/01, Marcus Sundberg wrote:

  rules) with fbdev and I want to use page flipping, but I don't know how to
  wait for the vertical retrace before flipping; can I do so? and how?
 
 There is support in the fbcon API for that:
 
 #define FB_ACTIVATE_VBL  16   /* activate values on next vbl  */
 but currently no drivers implement it. :(
 You're welcome to hack support for that into KGIcon. ;)

I need it very much, so... I will try. After so much time annoying
you all with questions, it's time to do something useful :)))
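
Once a driver implements it, using the quoted flag should look roughly
like this: pan to the other page and ask for the change to be applied at
the next vertical blank. A sketch, untested since no driver supports it
yet:

#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>

/* flip to the given page (0 or 1) at the next vertical blank */
int flip_on_vbl(int fd, int page, int page_height)
{
        struct fb_var_screeninfo var;

        if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0)
                return -1;
        var.yoffset  = page * page_height;
        var.activate = FB_ACTIVATE_VBL;         /* apply on next vbl */
        return ioctl(fd, FBIOPUT_VSCREENINFO, &var);
}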
-- 
Regards
  _
 /_) \/ / /  email: mailto:[EMAIL PROTECTED]
/ \  / (_/   www  : http://programacion.mundivia.es/ryu
[ GGL developer ]  [ IRCore developer ]  [ GPUL member ]