Hi !
> I am sorry to make trouble for you with my crazy Outlook software. Is this
> message ok?
Perfectly nice. Thanks !
> About my question: look at my code attentively. That piece of shit has to
> run no more and no less than 10 sec. It should not depend on the target.
> What is strange is that the clock function returned a wrong result
> depending on the GGI mode.
Oops - that is weird. I admit I hadn't looked closely enough at your code. I
just saw some timing loop and thought it was a run-this-x-times-and-
measure-the-time type speed check.
Sorry, I have already deleted your mail from the incoming folder, so I can't
look at it in detail anymore, but I can try to give a few possible
explanations. I suppose you do a run-this-for-xx-seconds-and-count-how-often-
you-get-it-done type check - right ?
a) For X in sync mode, we are heavily messing with strange stuff, like sending
signals to the main process from a helper child to allow for redraws that are
asynchronous with respect to program execution.
However, this should not interfere with the clock() function. It does
interfere with sleep() and friends, though.
b) For X in ASYNC mode this cannot be the cause. If we don't have an array
overflow or something, I suspect it is c).
c) from man clock: clock() returns the amount of ***CPU time*** ...
The time reported is the sum of the user and system times of the
calling process ...
In X, we have a client-server model. That means your program no longer gets
the whole CPU, as the X server has to do the actual drawing.
That drawing is accounted to _the_X_server's_ CPU time.
Thus it is possible for the loop to take >10 seconds of _real_ time,
while the process only has the CPU for 10 seconds of that time.
Letting top run in a second xterm should allow you to check that theory.
It should show about 50-70% CPU for the X server and the rest for your
program (and a bit for top and other stuff, of course).
CU, ANdy
--
Andreas Beck | Email : <[EMAIL PROTECTED]>