On Sat, 2004-05-29 at 09:31, Tim Goetze wrote:

> depends on what 'system' we're talking about, which isn't really clear
> in the first place (nor is it too important, but here goes anyway ...
> :)
> you can see the whole setup as the system, then latency is the time
> from keypress to voltage change at the DAC out. or you can just look
> at the kernel side as you do. or you can look at the time from MIDI
> interrupt to the audio DAC converting the first affected audio sample.
> all examples of valid 'systems' to look at in this context, depending
> on whether you assume the musician's, the kernel- or the audio
> application-programmer's view.
        Agreed.  But the problem that keeps popping up on the lists is that
people who are not doing live sound, who have cards that do hardware
monitoring, and who don't need a tiny buffer size waste their time
chasing the minimum buffer size because they think they need 2-3 ms
latency.  So they get xruns out the wazoo and wonder how to fix them.
In this case, though, he *needs* a small buffer size, since what he's
doing is essentially a live application.
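For anyone following along: the latency a buffer contributes is just its
length in frames divided by the sample rate, which is why tiny buffers are
pointless unless you actually need the round trip.  A quick sketch (the
numbers below are illustrative, not tied to any particular card):

```python
def buffer_latency_ms(frames, rate_hz):
    """Milliseconds of latency added by a buffer of `frames` frames
    at a sampling rate of `rate_hz`."""
    return 1000.0 * frames / rate_hz

# A 128-frame buffer at 48 kHz adds about 2.7 ms of latency;
# a comfortable 1024-frame buffer adds about 21.3 ms.
print(round(buffer_latency_ms(128, 48000), 1))   # 2.7
print(round(buffer_latency_ms(1024, 48000), 1))  # 21.3
```

So unless you are monitoring through software, the difference between
those two settings is inaudible to you and very audible to your xrun count.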

Alsa-devel mailing list