> >[...]
> >> UST can be used for timestamping, but that's sort of useless, since the
> >> timestamps need to reflect audio time (see below).
> >
> >I'd like to have both a frame count (MSC) and a corresponding system time
> >(UST) for each buffer (the first frame). That way I can predict when (UST)
> >a certain performance time (MSC) will occur and use this to schedule MIDI,
> >i.e. through a MIDI API also supporting UST.
>
> but you also need "transport time". frame count time is generally
> irrelevant. transport time is non-monotonic.
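For scheduling MIDI against audio, a frame count plus a UST anchor is
still what I'd want. Roughly this is what I have in mind (just a sketch;
the per-buffer (msc, ust) anchor pair is hypothetical, since JACK doesn't
provide it today, and I'm taking CLOCK_MONOTONIC as the UST):

    #include <stdint.h>
    #include <time.h>

    struct anchor {
        uint64_t msc;   /* frame count of the first frame in a buffer */
        uint64_t ust;   /* CLOCK_MONOTONIC time of that frame, in ns  */
    };

    /* UST "now", in nanoseconds, with CLOCK_MONOTONIC as the UST. */
    static uint64_t ust_now(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
    }

    /* Extrapolate the UST at which frame 'msc' will be played. */
    static uint64_t msc_to_ust(const struct anchor *a, uint64_t msc,
                               unsigned sample_rate)
    {
        return a->ust + (msc - a->msc) * 1000000000ULL / sample_rate;
    }

A MIDI event at performance time 'msc' then gets sent when ust_now()
approaches msc_to_ust(&a, msc, sample_rate).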
> >> >But JACK doesn't provide timestamps, or does it?
> >>
> >> it doesn't timestamp buffers, because i firmly believe that to be an
> >> incorrect design for streamed data.
> >
> >Why is this an incorrect design? I don't understand.
>
> because it's based on prequeuing data at the driver level,

It prequeues at the API level, not at the driver level.

> which (1) destroys latency

I agree. The way OpenML works, it is harder to get low-latency audio.
For video I think it can provide low latency.

> and (2) puts more complexity into the driver.

That depends on the implementation. Certainly the complexity is not in
the kernel driver, but in the user-space part.

> it's my belief that if you have an essentially real-time streaming
> hardware interface, then the abstraction exported to the application
> should reflect this reality, even if it hides the complexity of
> controlling the hardware. creating an API that lets you queue up
> things to be rendered at arbitrary times certainly seems useful for
> certain classes of application, but to me, it's clearly a high-level
> API and should live "above" an API that forces the programmer to deal
> with the reality of the hardware model.

I agree that, certainly for audio, the OpenML API is fairly high level.

> >[...]
> >> CLOCK_MONOTONIC doesn't change the scheduling resolution of the
> >> kernel. it's not useful, therefore, in helping with this problem.
> >
> >Not useful right now. CLOCK_MONOTONIC scheduling resolution will get
> >better, I hope.
>
> How can it? UST cannot be the clock that is used for scheduling ...

Why not?

> >For MIDI output this resolution is of importance whether you use a
> >UST/MSC approach or not. Is the clock resolution for Linux in
> >clock_gettime() also 10ms right now?
>
> I don't know anybody who uses this call to do timing. clock_gettime()
> could have much better resolution, since it can use the same timebase
> as gettimeofday(), which is based (these days) on the cycle counter.

Then that will give the same result as clock_gettime() using
CLOCK_REALTIME. There is nothing wrong with clock_gettime() and
clock_nanosleep(); they are the modern POSIX clock interfaces and I
think they are best for RT applications.

> >What is the correct clock to use for timestamping if not CLOCK_MONOTONIC?
>
> there isn't one. that's why i agree with you that UST will be a
> valuable addition to linux. i just don't agree on the scope of its
> contribution.

Then what is the difference between an accurate CLOCK_MONOTONIC and
UST? I know a UST isn't necessarily CLOCK_MONOTONIC, but CLOCK_MONOTONIC
can be the UST, given it is accurate enough, right?

[...]

> >No, and I've tried the firm timers patch and it performs great. But it
> >doesn't add CLOCK_MONOTONIC IIRC, and thus using CLOCK_REALTIME you
> >still run the risk of having the clock adjusted.
>
> don't use either. setitimer, or nanosleep; they use the kernel jiffies
> value, and when i last looked, this is monotonic. i could be wrong
> about that, however.

I would rather use an absolute sleep on CLOCK_MONOTONIC with
clock_nanosleep() than a relative nanosleep(). I'm not sure how
nanosleep() is supposed to behave when the system clock is adjusted.
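Something like this is what I mean (a sketch; with TIMER_ABSTIME the
wakeup is pinned to the monotonic timeline, so adjusting the system
clock cannot stretch the sleep, and there is no drift from the time
spent between computing a delay and issuing a relative sleep):

    #include <errno.h>
    #include <stdint.h>
    #include <time.h>

    /* Sleep until an absolute CLOCK_MONOTONIC time, in nanoseconds. */
    static void sleep_until(uint64_t ust_ns)
    {
        struct timespec ts;
        ts.tv_sec  = ust_ns / 1000000000ULL;
        ts.tv_nsec = ust_ns % 1000000000ULL;
        /* clock_nanosleep() returns the error number directly, and
           with an absolute target a signal costs us nothing: just
           sleep again until the same deadline. */
        while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME,
                               &ts, NULL) == EINTR)
            ;
    }

A MIDI scheduler would then sleep_until(msc_to_ust(&a, event_msc,
sample_rate) - output_latency_ns) and write the event, with
output_latency_ns (another made-up name) covering whatever the MIDI
output path costs.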

--martijn