Both of you are right.
The basis for OSC timetags is of course (NTP) system time, because
that's usually the only shared time source between different apps.
However, if you schedule several events within a DSP tick, you don't
want to query the current system time for each event, because this will
cause unnecessary jitter.
What you can do instead is get the system time *once* per DSP tick and
use that as the basis for scheduling/dispatching events within the tick.
This is more or less what SuperCollider does, BTW.
However, since the timing of Pd's DSP ticks can itself be very jittery
for large hardware buffer sizes, this is not sufficient. There are
basically two solutions, afaict:
a) use some dejittering/smoothing algorithm. Scsynth, for example, uses
a DLL (delay-locked loop) to filter the system time.
b) only get the system time for the very first DSP tick and, for all
subsequent DSP ticks, increment by the *logical* block duration. This
allows for sample-accurate *relative* timing, but the absolute timing
can suffer from clock drift. This is the default behavior of Supernova,
and some people actually experience problems in longer performances.
---
Generally, time synchronization between apps is a fundamental (unsolved)
problem in computer music. See the following discussion for a starter:
https://github.com/supercollider/supercollider/issues/2939.
Christof
On 18.04.2021 22:32, IOhannes m zmölnig wrote:
On 4/18/21 17:06, Martin Peach wrote:
On Sun, Apr 18, 2021 at 6:06 AM IOhannes m zmölnig <zmoel...@iem.at>
wrote:
I don't really like the timestamp implementation in mrpeach (as it
uses real time, rather than logical time), but better this than
nothing...
> Logical time timestamps would only be accurate inside of the Pd
> instance.
i tend to disagree.
there are basically two use-cases for timetags:
- reducing jitter when synthesising events on the receiver
e.g. i want to trigger a drum-synth exactly every 100ms
- reducing jitter when analysing events from the sender
e.g. i want to measure the period between two mocap frames
neither of these use-cases warrants system time.
here's a real world example:
if i use Pd to send events to my drum-synth, and i want these events
to be exactly 100ms apart so I'm driving it with a [metro 100], the
real time of these ticks will be very jittery (depending on all sorts
of things, starting with the audio buffer of Pd), up to dozens of ms.
if i codify this jitter in the timestamps, then any law-abiding
receiver will have to do its best to reproduce this jitter.
what is the value in that?
the only way to schedule two events at exact times I see is to use
some "ideal" time - in Pd this is the logical time.
> but it would not conform to
> any OSC specification.
i checked and double checked the specs but could not find anything
about this.
where do you get the idea that the OSC specs mandate wall clock time?
OSC-1.0 speaks about "NTP format" (but this is just the structure of
the 64-bit data chunk) and "the number of seconds since midnight on
January 1, 1900" (but it doesn't say whether this is supposed to be
wall-clock or idealized time)
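for reference, a minimal sketch of packing a Unix clock reading into
that 64-bit layout (the 2208988800-second offset between the 1900 and
1970 epochs is standard NTP practice; the function name is illustrative):

```c
#include <stdint.h>

/* offset in seconds between the NTP epoch (1900-01-01)
 * and the Unix epoch (1970-01-01) */
#define EPOCH_OFFSET_1900 2208988800ull

/* pack Unix seconds plus a fractional part (0.0 <= frac < 1.0) into the
 * 64-bit OSC/NTP timetag layout: upper 32 bits = seconds since 1900,
 * lower 32 bits = fractional seconds scaled by 2^32 */
uint64_t unix_to_osc_timetag(uint32_t unix_sec, double frac) {
    uint64_t sec = (uint64_t)unix_sec + EPOCH_OFFSET_1900;
    uint32_t fracbits = (uint32_t)(frac * 4294967296.0); /* frac * 2^32 */
    return (sec << 32) | fracbits;
}
```

note that nothing in this layout says *which* clock the seconds come
from, which is exactly the ambiguity discussed above.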
> It could be added as an option
a flag or similar would be great.
there probably are use-cases where real time makes sense, so why not be
able to cater for both?
f,dst
IOhannes
_______________________________________________
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management ->
https://lists.puredata.info/listinfo/pd-list