I think a little background would be good.

We write software for vision scientists, where accurate reporting of
stimulus changes (visual, auditory, or otherwise) is critical. Very
often a program will change the display and then wait for an event
before doing something else, such as changing the display again,
playing a sound, or sending a TTL signal to another device. This means
there are periods where the refresh rate of the display can gate the
polling frequency (e.g. calling tick() or dispatch_events), but much
of the time polling can happen many times per millisecond.

This is the type of situation where having a mechanism for accurately
scheduling a callback for a future time is very useful.

I believe the issue with the current implementation of schedule is
that calling schedule_* during a period of low tick() call frequency
affects the scheduler's ability to accurately fire the callback at a
specific time, even when the tick() call frequency at that future time
is very high. Again, this is all worked around in the current
implementation if I call tick() right before calling schedule, so no
big deal here.
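To make the issue concrete, here is a toy model of a scheduler that
measures delays from the time of the last tick() rather than from the
moment schedule_* is called. This is a hypothetical sketch of the
behaviour I'm describing, not pyglet's actual implementation; the
class and attribute names are invented for illustration.

```python
import time

class Clock:
    """Toy scheduler that measures delays from the last tick()
    (hypothetical model of the behaviour described above, not
    pyglet's real code)."""

    def __init__(self):
        self.last_ts = time.monotonic()
        self.scheduled = []  # list of (target_time, callback)

    def schedule_once(self, callback, delay):
        # Target is measured from the last tick, so if tick() has not
        # run recently, the target lands earlier than the caller meant.
        self.scheduled.append((self.last_ts + delay, callback))

    def tick(self):
        now = time.monotonic()
        self.last_ts = now
        due = [(t, cb) for t, cb in self.scheduled if t <= now]
        self.scheduled = [(t, cb) for t, cb in self.scheduled if t > now]
        for _, cb in due:
            cb(now)

# After 0.2 s with no ticks, a 0.15 s delay is already "in the past":
c = Clock()
time.sleep(0.2)              # period of low tick() frequency
fired = []
c.schedule_once(lambda now: fired.append(now), 0.15)
c.tick()                     # fires immediately, ~0.15 s early

# The workaround: refresh the clock with tick() before scheduling.
c2 = Clock()
time.sleep(0.2)
fired2 = []
c2.tick()                    # last_ts is now current
c2.schedule_once(lambda now: fired2.append(now), 0.15)
c2.tick()                    # does not fire; target is genuinely 0.15 s away
```

In this model the extra tick() call simply refreshes the clock's
notion of "now", which is exactly what calling tick() right before
schedule_* accomplishes.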

Thanks.




On Dec 3, 8:30 am, "Alex Holkner" <[EMAIL PROTECTED]> wrote:
> On Dec 4, 2007 12:14 AM, sol <[EMAIL PROTECTED]> wrote:
>
>
>
> > IMHO, the outcome of the current implementation is a bit unintuitive,
> > but that could be fixed by documentation updates that explain the
> > implementation a bit.
>
> I'll revisit the documentation and try to clear it up a little.
>
> > The only other issue I see with the current implementation is that it
> > does not make it easy to accurately schedule an event for a specific
> time in the future, unless you remember to call tick() before calling
> > schedule_*.
>
> I disagree that this is actually an issue.  The current behaviour
> means that the error in calling a scheduled function is within one
> period of the tick() rate.  Assuming you're calling tick() as often as
> you're calling flip() and dispatch_events(), this is the actual limit
> in responsiveness of the application anyway.  The only reason it could
> be noticeable to the user is if the tick() rate is very low, say, below
> 10fps -- in this case the application is likely to respond poorly in
> any case, and would probably be better implemented using separate
> threads or processes.
>
> I get the feeling that you're actually after some very accurate timing
> data, beyond merely scheduling functions for animation and similar
> purposes.  If this is the case then pyglet.clock is probably
> unsuitable -- high resolution timers that exceed the 10ms resolution
> of the system clock are available on Linux and Mac (and, with some
> care, on Windows).
>
> > This could be addressed by adding a schedule_at() method
> > that takes the absolute time that the callback should be accurately
> > called at (subject to tick() call frequency etc).
>
> The current clock class does not expose an absolute time property, so
> that would also need to be added if schedule_at() were considered.
>
> Alex.
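
For reference, a schedule_at() keyed to absolute time might look
something like the following. This is only a sketch of the proposal in
the quoted message; the class, the schedule_at() and time() names, and
the lateness argument are all invented for illustration and are not
existing pyglet API.

```python
import heapq
import time

class AbsClock:
    """Sketch of a proposed schedule_at(): callbacks are keyed to an
    absolute clock time, so scheduling accuracy no longer depends on
    how recently tick() was last called. Hypothetical, not pyglet."""

    def __init__(self):
        self._heap = []  # min-heap of (target_time, seq, callback)
        self._seq = 0    # tie-breaker so callbacks are never compared

    def time(self):
        # The absolute-time property Alex notes would need exposing.
        return time.monotonic()

    def schedule_at(self, callback, when):
        heapq.heappush(self._heap, (when, self._seq, callback))
        self._seq += 1

    def tick(self):
        now = self.time()
        while self._heap and self._heap[0][0] <= now:
            when, _, cb = heapq.heappop(self._heap)
            cb(now - when)  # lateness in seconds, analogous to dt

c = AbsClock()
fired = []
c.schedule_at(lambda late: fired.append(late), c.time() + 0.05)
c.tick()           # target still in the future: nothing fires
time.sleep(0.06)
c.tick()           # target has passed: callback fires with its lateness
```

Because the target is an absolute timestamp rather than a delay from
the last tick, it does not matter how stale the clock was when
schedule_at() was called; accuracy is bounded only by the tick() rate
around the target time.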
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"pyglet-users" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/pyglet-users?hl=en
-~----------~----~----~----~------~----~------~--~---