Greg, 
just a few thoughts about the things you said:

I do agree that these days it pays off to use a general-purpose PC for DSP 
stuff rather than trying to build expensive custom-tailored hardware solutions, 
especially now that almost all CPUs have SIMD instructions.

The only problem with PCs is that you have to be careful when you want low 
latencies: you won't get single-sample latency (e.g. ADC -> process one 
sample -> DAC).

With PC-based audio solutions you always need to use this approach:
get N samples from the soundcard -> process N samples -> output N samples to 
the soundcard

Because of the PCI bus architecture, transfers always occur in blocks; thus 
even if the operating system could handle single-sample processing, the PCI 
bus would add some latency anyway (I believe it's in the 0.3-0.7 msec 
range).

As for the low-latency capabilities of Linux + lowlat patches: as I said, you 
can achieve sub-2 msec latencies even on a highly loaded system (this means 
high disk activity, etc.).

If you are careful to shut down all other stuff, avoid virtual memory, let 
your application run SCHED_FIFO (realtime scheduling), and use busy-waiting 
memory-mapped audio I/O (which means that your app never calls any syscall 
and never sleeps, but only writes to the soundcard's memory while monitoring 
the capture and playback pointers in order to know where to read/write the 
samples), then I believe you can get pretty close to the lower limit, which I 
estimate at under 1 msec of effective latency.

Not sure if that's enough for you, but since most pro-audio equipment has some 
intrinsic latency of its own, I don't think it's worth the trouble to try to 
get single-sample latency.
In theory it's possible, but then you probably need some special hardware 
that can interrupt you once per sample (e.g. 44100 times per second), possibly 
triggering an RTLinux module (without RTLinux, Linux cannot guarantee that 
your routine gets called in time) that does the actual DSP work.
You can guess the difficulty of achieving this from a programming-effort 
point of view: it would require you to write your own audio drivers, write an 
RTLinux kernel module, be careful what you do with the FPU (FPU state needs 
to be saved/restored while you are in kernel space), etc.

If you choose to stay in userspace (sacrificing single-sample latency), 
things get very comfortable for you:
you write a simple LADSPA host once (e.g. one that simply loads one or more 
plugins and then sits in a loop reading from the soundcard, calling the DSP 
code contained in the plugins, and writing the result back to the soundcard).

For the rest you can focus entirely on writing DSP code without worrying 
about the audio I/O stuff.

I guess there are already some realtime-capable LADSPA hosts out there.
For example: I'm not familiar with ecasound, but I guess that if you run it 
with realtime privileges (SCHED_FIFO) and avoid doing weird things, it is 
suitable for realtime DSP work. (Ask Kai.)
The same applies to the other LADSPA hosts.

PS: Greg, your help with DSP stuff will be very welcome, especially for 
squeezing the maximum performance out of our applications.

BTW: at LinuxTag we did some experiments with Muse driving the disksampler, 
and on Frank's 1 GHz machine (the setting was 2.1 msec latency (3x128)), 
70 voices (sample playback with linear interpolation) caused a CPU load 
of about 30-35%, which is quite acceptable IMHO.

cheers,
Benno.
http://www.linuxaudiodev.org


On Friday 06 July 2001 20:40, Greg Berchin wrote:
> Hi, folks;
>
> I've been subscribed for a few weeks, sitting on the sidelines hoping that
> I would come up to speed with time, but I find that it's not happening.  I
> just recently set up my first Linux system, (finally) installed the ALSA
> drivers, and seem to have everything up and running.  (Some UNIX experience
> from long, long ago helped a lot.)  But all of this is preliminary, and the
> knowledge that I've acquired still hasn't prepared me for what I am
> ultimately trying to do:  incorporate my own realtime audio applications
> into the PC.
>
> Writing the applications is the easy part for me; I have a Master's degree
> in electrical engineering and have been writing DSP-based audio
> applications in assembly and C for nearly a decade.  So I know what I'm
> doing, algorithmically speaking.  But the part I'm having trouble with is
> the specifics of how to retrieve the raw audio samples from the sound card
> and how to send the processed audio samples to the sound card.  I am not
> familiar enough with how a Linux PC operates to even know where to look.  I
> see references to "threaded applications" and "callback routines"; I have
> to admit that I don't know what they mean.  I have read every FAQ and HOWTO
> that I could find, but they either don't contain the information that I
> seek or I don't understand them if they do.
>
> Is there a resource somewhere that describes this process to someone who is
> learning from the ground up?  As I said, I have no trouble with the
> processing routines; it's the I/O that's giving me fits.
>
> Many thanks,
> Greg Berchin
