You really have to ask yourself what are the goals for a new audio system and 
what "use cases" do you want to cover. I have some experience in this area, but 
I'm not a true expert. Here are five thoughts to consider and an anecdote:

1. audio
I thought Russ's response was pretty good and about what I would do as a first 
step. "High end" or "Pro" audio is more than 16 bit these days, although 16 
bits probably covers at least 80 or 90% of the market.
http://pro.sony.com/bbsc/ssr/cat-audio/cat-recorders/product-PCMD1/
http://pro.sony.com/bbsc/ssr/cat-audio/resource.latest.bbsccms-assets-cat-audio-latest-pcmd50.shtml

2. clocks
This is the most important thing to think about. What is your model for how the 
clocking will work? Beyond the simple case that Russ outlined of sending data 
to an old 16 bit Sound Blaster lie more complex cases that have to do with 
audio data and its associated clocks. If you desire to do complex things with 
audio at some point you will have to consider clocking.
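To make the clocking problem concrete, here is a minimal sketch of why two devices that both claim the same sample rate still drift apart. The numbers (48 kHz, ±50 ppm) are illustrative assumptions, not from the post:

```python
# Two devices both claim 48 kHz, but real oscillators are off by a few
# parts per million (ppm), so the streams slowly drift apart.

def drift_samples(nominal_hz, ppm_a, ppm_b, seconds):
    """Samples of drift accumulated between two clocks after `seconds`."""
    rate_a = nominal_hz * (1 + ppm_a / 1e6)
    rate_b = nominal_hz * (1 + ppm_b / 1e6)
    return (rate_a - rate_b) * seconds

# A capture device at +50 ppm feeding a playback device at -50 ppm
# (100 ppm apart) over one hour:
drift = drift_samples(48000, +50, -50, 3600)
print(round(drift))  # 17280 samples, about 0.36 s of drift per hour
```

This is why an audio system eventually has to resample or slave one clock to another: a buffer between two free-running clocks will either underrun or overflow.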

3. video (when I say video I mean movies and not graphics)
If you think you are ever going to want to use your new audio system with a 
corresponding video system, you need to consider that from the outset. Audio 
and video need to be kept in sync with a common clock and perhaps adjustments 
need to be made for different latencies through each path. The lack of sync 
between audio and video that is so common today is a direct result of engineers 
who either didn't understand the clocking issues, got them wrong, or ignored them. 
I see this over and over again. See the anecdote below.
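The latency-compensation idea can be sketched like this: both streams take timestamps from one master clock, and each path is delayed so that both align on the slower one. The latency figures and names are my illustration, not anything from the post:

```python
# One master clock; each path's known latency is compensated so that a
# frame and its audio, captured at the same instant, also come out at
# the same instant.

AUDIO_PATH_LATENCY = 0.120  # e.g. USB speaker pipeline, seconds (assumed)
VIDEO_PATH_LATENCY = 0.050  # e.g. display pipeline, seconds (assumed)

def submit_time(capture_time, path_latency, max_latency):
    """When to hand data to a path so both paths align on the slower one."""
    return capture_time + max_latency - path_latency

max_lat = max(AUDIO_PATH_LATENCY, VIDEO_PATH_LATENCY)
t = 10.0  # capture timestamp from the shared master clock

audio_out = submit_time(t, AUDIO_PATH_LATENCY, max_lat) + AUDIO_PATH_LATENCY
video_out = submit_time(t, VIDEO_PATH_LATENCY, max_lat) + VIDEO_PATH_LATENCY
assert abs(audio_out - video_out) < 1e-9  # both emerge at the same instant
```

The point is that there is exactly one time base; per-path latencies are offsets against it, never separate counters.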

4. muxing
Once you have audio and video the next thing you have to worry about is muxing. 
I think muxing can be kept out of the kernel but there are subtle interactions 
with clocking and other device stacks (see below) so I am not so sure.
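The basic muxing step can be sketched as interleaving timestamped packets from both streams in presentation order. This is a user-space illustration under the assumption that every packet carries a timestamp from the common clock:

```python
import heapq

def mux(audio_packets, video_packets):
    """Interleave two time-sorted streams of (timestamp, payload) packets."""
    return list(heapq.merge(audio_packets, video_packets))

audio = [(0.00, "a0"), (0.02, "a1"), (0.04, "a2")]
video = [(0.00, "v0"), (0.033, "v1")]
print(mux(audio, video))
# [(0.0, 'a0'), (0.0, 'v0'), (0.02, 'a1'), (0.033, 'v1'), (0.04, 'a2')]
```

The subtle interaction with clocking is hidden in the precondition: the merge is only meaningful if both streams' timestamps come from the same clock.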

5. complexity and modern life
It's one thing to blast some audio out to an old 16 bit Sound Blaster. It's 
another thing to capture A/V data from a Firewire camcorder and run the audio 
out to a set of powered USB speakers and the video to the screen or someplace 
else and have everything be in sync and work properly. 

anecdote:
I once tried to fire an engineer who was writing an A/V driver for Windows. 
The CEO wouldn't let me. The engineer almost had it working, but the root cause 
of all the audio artifacts from that driver was his code keeping two separate 
clocks in software, one for video and one for audio. He was told that, 
told which lines of code to change, and even how to change them, but he refused 
to make the changes. He got so stressed out over all this that he walked off 
the job. He literally said, "I am going out for a haircut" and never came 
back. We changed about 10-15 lines of code in his driver to derive one clock 
from the other and it all worked perfectly. Talk about timing.
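The fix described above, deriving one clock from the other, can be sketched like this. This is my reconstruction of the idea under assumed rates, not the actual driver code:

```python
# Instead of two free-running software counters (which drift), treat the
# video clock as master and derive the audio position from it.

VIDEO_FPS = 30      # master clock ticks (frames), illustrative
AUDIO_RATE = 48000  # audio samples per second, illustrative

def audio_position(video_frame):
    """Audio sample index derived from the master (video) clock."""
    return video_frame * AUDIO_RATE // VIDEO_FPS

# After one second of video (30 frames), audio is at exactly one second
# (48000 samples) -- the two can never drift apart.
assert audio_position(30) == 48000
```

With a fixed ratio like this, the two streams cannot drift, which is presumably why a 10-15 line change was enough to eliminate the artifacts.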
-- 
[email protected]

