> I would've thought though that if you had a software buffer of X
> milliseconds, then you should be sending that buffer to the card every X
> milliseconds (according to some other timer in the system.)  If it takes
> longer before the card is ready for the next buffer then you need to
> drop some samples before sending the buffer to the card, and if the card
> wants the next buffer sooner than X milliseconds after the last one
> you'll need to add some samples.

Dropping samples is bad.  It would be acceptable for something like
monitoring, but not for actual recording, mixing and the like.

> Sure, it wouldn't be audiophile-level quality, but it'd be enough for
> anyone who's happy with a couple of $10 sound cards in their system...

It should really resample all streams to the same clock as the CPU, I think.
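
Just to illustrate what I mean by resampling, here's a rough sketch in
C (a made-up helper, nothing from the ALSA API): the simplest possible
converter, plain linear interpolation between input samples.  A real
mixer would use a proper band-limited resampler, but the idea is the
same.

/* Hypothetical helper: linear-interpolation resampler for one mono
 * float stream.  'out' must hold at least
 * in_len * out_rate / in_rate + 1 samples.  Returns samples written. */
#include <stddef.h>

size_t resample_linear(const float *in, size_t in_len,
                       double in_rate, double out_rate, float *out)
{
    double step = in_rate / out_rate;  /* input samples per output sample */
    double pos = 0.0;
    size_t n = 0;

    if (in_len < 2)
        return 0;

    while (pos < (double)(in_len - 1)) {
        size_t i = (size_t)pos;
        double frac = pos - (double)i;
        out[n++] = (float)((1.0 - frac) * in[i] + frac * in[i + 1]);
        pos += step;
    }
    return n;
}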

> If a card is playing a buffer of N samples with actual sampling frequency
> Fs, then the time between interrupts (one per buffer empty) is (N / Fs),
> i.e. for two cards it will be (N / Fs1) and (N / Fs2) respectively.
>
> That is, the average interrupt request frequency will be (Fs1 / N) and
> (Fs2 / N) respectively.
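
(Concretely: with N = 1024 frames and a nominal Fs = 48000 Hz, that's
1024 / 48000 ~= 21.3 ms between interrupts, or about 47 interrupts per
second.  A second card actually running at 48005 Hz interrupts about
0.01% more often, and that tiny difference is what slowly accumulates
into drift between the two streams.)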

Measuring the frequency of the card once isn't enough.  For one thing,
a single measurement from one interrupt to the next wouldn't be
accurate enough.  For another, the frequency drifts and changes over
time.  (For instance, after you turn on your USB audio device, its
power regulators heat up, the temperature inside the box rises and the
crystal oscillates slightly faster or slower.)  For recording or
playback on a single device the effect is negligible, but for matching
up multiple devices you'd have to take it into account.  A system that
does this would have to constantly monitor the interrupt request times
and slowly vary the resampling ratio to keep everything in sync, sort
of like an electronic PLL circuit: the frequency estimate is low-pass
filtered to smooth out measurement irregularities, while still reacting
fast enough to prevent buffer over- or under-runs.
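
In code the tracking might look something like this (just a sketch with
made-up names and constants, not a real driver; a real implementation
would also watch the buffer fill level and clamp the estimate):

/* Hypothetical "software PLL": estimate a card's real sample rate from
 * its period interrupts, low-pass filtered so a single jittery
 * interrupt doesn't yank the estimate around. */
#define N_FRAMES    1024     /* frames per period                        */
#define NOMINAL_FS  48000.0  /* rate the card claims to run at           */
#define ALPHA       0.01     /* smoothing: smaller = slower but steadier */

static double fs_estimate = NOMINAL_FS;

/* Call on every period interrupt with a timestamp (in seconds) taken
 * from the reference clock.  Returns the resampling ratio relative to
 * the nominal rate: > 1 means the card is running fast. */
double on_period_interrupt(double now, double *last_time)
{
    double dt = now - *last_time;
    *last_time = now;

    if (dt > 0.0) {
        double fs_instant = N_FRAMES / dt;                  /* raw guess */
        fs_estimate += ALPHA * (fs_instant - fs_estimate);  /* low-pass  */
    }
    return fs_estimate / NOMINAL_FS;
}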

> The device to measure the time can be the computer RTC (Real Time Clock)
> which is independent from both Fs1, Fs2.

Is it as stable as an audio device's clock, though?  CPU timekeeping
doesn't require low jitter or low drift, but an ADC clock does.  I
imagine all the clock measurements would need a lot of smoothing and
averaging to keep everything well behaved.
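
For example (another made-up sketch), you could log a few thousand
intervals from each clock source and look at their spread; averaging
over K intervals shrinks the random error by roughly sqrt(K), which is
why long-term averaging helps even when single readings are noisy:

#include <math.h>
#include <stdio.h>

/* Print the mean and standard deviation (jitter) of a set of measured
 * intervals, in seconds, to eyeball how noisy a clock source is. */
void interval_stats(const double *intervals, int count)
{
    double sum = 0.0, sumsq = 0.0;

    if (count <= 0)
        return;
    for (int i = 0; i < count; i++) {
        sum   += intervals[i];
        sumsq += intervals[i] * intervals[i];
    }
    double mean = sum / count;
    double var  = sumsq / count - mean * mean;  /* population variance */
    printf("mean %.9f s, jitter (stddev) %.9f s\n",
           mean, sqrt(var > 0.0 ? var : 0.0));
}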

This article has some info about syncing up device clocks.  In this
case it's electronic PLL-style tracking: a TI USB chip that locks onto
the USB bus packet timing, so that it stays in sync with the clock
driving the USB bus (probably the same clock as the CPU?):

http://www.planetanalog.com/features/OEG20020220S0017

I was also reading a little about the equivalent functionality in
CoreAudio on the Apple developer website, but I can't find the links
right now.  It seems that CoreAudio feeds your program the timing data
(interrupt times and such) and your program has to do the resampling
itself?  I'm not sure.  (And I'm not even a programmer, so I don't know
much about this.)  :-)

