On 29 Nov 2004, at 09:11, Richard Gaskin wrote:

Scott Rossi wrote:

Both of the above routines provide the same output. However, when viewing
the %CPU use on a Mac OS X system with Activity Monitor, CPU usage is
clearly dependent on the frequency of the "send in" message: at a 100-
millisecond frequency, the first handler runs at about 15% usage, and at a
50-millisecond frequency it runs at about 30% usage (which makes sense).
Amazingly, the "wait x with messages" handler runs at less than 1% usage.
And because "with messages" does not block other messages from being sent,
this seems a very efficient way to run a timer.
Obviously the above is useful only in situations requiring accuracy of 1
second or less, but at first glance I can't see any drawback to using this
method. Can you?
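(The two routines themselves aren't quoted in this message. For readers following along, the pattern being compared is roughly this -- a sketch only, with handler and field names of my own choosing, not Scott's:)

```
-- Polling version: re-send the update message every x milliseconds.
-- CPU use scales with the frequency of the "send in" message.
on updateClockPolled
   put the long time into field "clockDisplay"
   send "updateClockPolled" to me in 100 milliseconds
end updateClockPolled

-- Waiting version: block until the second rolls over, while still
-- processing other messages during the wait.
on updateClockWaited
   put the seconds into tLastSecond
   wait until the seconds > tLastSecond with messages
   put the long time into field "clockDisplay"
   send "updateClockWaited" to me in 0 milliseconds
end updateClockWaited
```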

None that I can see, but I managed to get myself confused on the issue: if you only want the time updated once a second, why not just send the update message in 1 second rather than polling several times a second?


I guess Scott was concerned about the smoothness of the time display ticking over. If you send every 1 second, and there is something holding up message processing, the timer may be late to update. Increasing the frequency increases the chance of getting it right (but doesn't guarantee it).
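(As an aside of my own, not something Richard or Scott raised: if the per-second message is re-armed relative to the next second boundary rather than a fixed 1000 milliseconds after the last update, one late update at least doesn't push every subsequent update later as well. A sketch:)

```
-- Re-arm the timer for the next second boundary, so a delayed
-- update does not accumulate drift into later updates.
on tickClock
   put the long time into field "clockDisplay"
   send "tickClock" to me in (1000 - (the milliseconds mod 1000)) milliseconds
end tickClock
```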

One uncertainty about the "wait <condition> with messages" form is that it isn't documented how frequently the condition is evaluated. I was once told that "wait ... with messages" was inefficient because it constantly re-evaluated the condition. However, this doesn't seem to be the case: from some simple testing (calling a function in the condition), I find that the interval between evaluations varies, ranging from about 5 to 256 milliseconds (when no other activity is taking place).
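(The test was along these lines -- again a sketch, and the handler and variable names are mine:)

```
local sEvalCount

function deadlinePassed pDeadline
   add 1 to sEvalCount  -- count how often the engine evaluates the condition
   return (the milliseconds >= pDeadline)
end deadlinePassed

on testWaitGranularity
   put 0 into sEvalCount
   put the milliseconds + 1000 into tDeadline
   wait until deadlinePassed(tDeadline) with messages
   put "average gap:" && (1000 div sEvalCount) && "ms"
end testWaitGranularity
```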

So for 1-second accuracy, I don't see any drawbacks.

Cheers
Dave

_______________________________________________
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution
