Nash Nipples wrote:
i have experienced a delay when i was recording voice
and sort of laying it on a track. but removing a few
milliseconds off the beginning didn't make big trouble for
me, and that's when i discovered this phenomenon, but
thought it was rather because my sound card is old. is this
anywhere close to what you mean?
It is related, but what you mentioned is a slightly
different area of computer audio than what I mean with respect to kernels.

An old card is likely to produce greater delays under any
setup, but that is more a factor of its capabilities and the quality of its parts than of its age alone. If you were using JACK (Linux) or ASIO (Windows), the delay could be minimized to some extent, but there are no miracles there, and that is about the most you can do.

I'm only at entry level in audio engineering, but in a
nutshell, here is an overview to explain your delay (latency) and my kernel-related question.

First of all, computers are always prone to latencies, or
delays, because of the time it takes the CPU to process the audio data, read from and write to disk, convert between analog audio signals and digital data, and so on. All of it takes up cycles and valuable milliseconds.

When playing normal music like MP3s in iTunes and such, you
wouldn't notice any delay, because the data is simply read from disk and output through the speakers as it comes, though technically there is a short period between the time the data is read off disk and the time it is heard from the speakers. In audio production, that slight delay, or latency, is everything, especially with multi-track recording and syncing to external gear. (Latencies range from under 10 ms to over 100 ms.)

Latency is more noticeable when recording, and on an average
setup it would sound like an echo if you were talking into a mic while listening through headphones or speakers, roughly because of the time needed to process the signal.
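To put rough numbers on that echo: the monitoring delay is dominated by the capture buffer, the playback buffer, and the converters. A minimal sketch with made-up but typical figures (256-frame buffers at 44.1 kHz; the converter overhead is an assumption, not a measurement):

```python
# Rough round-trip monitoring latency estimate (illustrative numbers only).

def buffer_latency_ms(frames: int, sample_rate: int) -> float:
    """Delay contributed by one audio buffer of `frames` samples."""
    return frames / sample_rate * 1000.0

capture = buffer_latency_ms(256, 44100)    # mic -> computer, ~5.8 ms
playback = buffer_latency_ms(256, 44100)   # computer -> headphones, ~5.8 ms
converters = 1.5                           # assumed AD/DA overhead in ms

print(f"round trip: ~{capture + playback + converters:.1f} ms")
# prints "round trip: ~13.1 ms"
```

Over 10 ms of round trip is already enough to hear yourself "twice" when monitoring through the computer, which is exactly the echo described above.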

Also, if you were to use a MIDI controller/keyboard
connected via MIDI to play a software instrument, you would notice a delay between the time you hit a key on the keyboard and the time you actually heard the sound. The key press transmits MIDI data to the software instrument, and the software triggers the desired sound immediately, but there is still that slight delay to render the sound and push it out the speakers.

Audio interfaces can have a latency of around 5 ms, which is
very good yet may still be faintly noticeable, and that also depends on the power of the PC and the actual load (number of tracks, effects, and instruments playing). Latency is still a *phenomenon* and something you *have to live with* in computer audio production, as opposed to hardware gear like samplers, drum machines, synths, etc. Short of buying a $10K Pro Tools "soundcard", a _hobbyist_ audio interface can be decent, with latencies under 20 ms.

In practice, multi-track recording programs like
Cubase, Logic, Ardour, Ableton, Cakewalk, etc. do fairly well (with a good setup) when all material is confined within the computer and not communicating with the outside world. When you are recording, say, a vocalist, chances are you're playing back all the other tracks (drums, bass, etc.) at the same time for the vocalist to listen and sing to, plus possibly playing back the vocalist with added effects like reverb; a mixture of delays going in, and the same amount going out.

Similarly, if you had one or more audio tracks (playing
back audio recordings or samples off the hard disk), and any MIDI track there being sent to external hardware, like a drum machine playing drum parts or a hardware synth playing a bass or pad track, then the practice is to compensate for latency by having the MIDI track transmit with a 5 ms delay, to give the computer that little bit of time to process the audio, so the whole mix sounds in sync (since MIDI to external gear to sound heard is practically instant). 5 ms for a system with 5 ms latency; 20 ms would require 20; 100 would require 100.
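That compensation step can be sketched in a few lines. A hypothetical example (the event format and the 5 ms figure are assumptions for illustration, not any particular sequencer's API):

```python
# Hypothetical latency compensation: delay outgoing MIDI events by the
# system's audio latency so external hardware lands in sync with the
# software tracks.

def compensate(events, latency_ms):
    """Shift each (time_ms, note) event later by latency_ms."""
    return [(t + latency_ms, note) for t, note in events]

pattern = [(0, "kick"), (500, "snare"), (1000, "kick")]
print(compensate(pattern, 5))
# -> [(5, 'kick'), (505, 'snare'), (1005, 'kick')]
```

The drum machine receives every event 5 ms late, which exactly cancels the 5 ms the computer needs for its own audio path.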

There is a lot more to computer audio production, but
basically the effort is always, or mainly, to reduce that latency as much as possible. Better audio interfaces not only have better components and AD/DA converters, they are also designed to take as much load off the CPU as possible and handle what they can themselves. Also, more computing power never eliminates latency or delay completely; it just generally means you can play more tracks at once and use low latencies more comfortably without dropouts, glitches, clicks, and pops.

(Latency is more or less adjustable by configuring the
audio buffer size and the number of said buffers, to find the best setting on your system. That is, you stop at the lowest setting that works without glitches and gaps in the music playback or recording. A good way to see this in action is with Propellerhead Reason's or ReBirth's settings on Windows. Also, when you install KDE 3 on FreeBSD, you might notice system sounds play with a delay, especially when they are part of an error message with a dialog box; sometimes the sound comes out even a second after the dialog box is displayed. This is mainly because the default buffer settings in KDE 3 are set way too high, using about 10 buffers. From Control Centre you can set the number of buffers to 5 or fewer and it still works well.)
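The arithmetic behind that KDE delay is simple: total buffering delay is buffers × frames per buffer ÷ sample rate. A sketch (the 4096-frame buffer size is an assumed example, not KDE 3's documented default):

```python
# Total buffering delay = number of buffers * frames per buffer / sample rate.

def total_latency_ms(num_buffers: int, frames: int, sample_rate: int) -> float:
    return num_buffers * frames / sample_rate * 1000.0

# ~10 large buffers can add up to nearly a second, which matches the
# "sound arrives after the dialog box" symptom:
print(f"{total_latency_ms(10, 4096, 44100):.0f} ms")  # prints "929 ms"

# Halving the buffer count halves the delay:
print(f"{total_latency_ms(5, 4096, 44100):.0f} ms")   # prints "464 ms"
```

That is why dropping the buffer count in Control Centre makes system sounds line up with their dialog boxes again.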

Further to all that, JACK on Linux/Unix, Core Audio on OS X,
and ASIO on Windows are technologies developed for low-latency audio performance, and they basically handle the communication between software and audio hardware. You would definitely have to use one of these for serious audio use. (Pro Tools also has DAE, but that's beside the point.)
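For example, JACK lets you pick the period size and period count directly when starting the server, which is exactly the buffer trade-off described above. A hypothetical invocation (the device name and the numbers are assumptions for illustration; check jackd's own documentation for your setup):

```shell
# Start jackd with the ALSA backend: 128-frame periods, 2 periods,
# 48 kHz -> 128 * 2 / 48000 ≈ 5.3 ms of buffering latency.
jackd -d alsa -d hw:0 -r 48000 -p 128 -n 2
```

If that setting produces clicks and dropouts, you raise `-p` (or `-n`) until playback is clean, just as with any other buffer setting.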

With common or old audio hardware, you could still get some
benefit from using them, but I think you would be lucky to get anywhere close to 50 ms, with ASIO on Windows at least, as opposed to over 100 ms with the default Windows sound system. Something like that.

Finally, what I was inquiring about was this. In Windows
XP, to optimize the system for better performance and squeeze out more juice, the most you can do is tweak a few system settings, stop services you don't need, and similar stuff. In OS X there is no real need, particularly on Intel Macs. And in Linux there are a few projects that specialize in low-latency/RT kernels, such as Planet CCRMA on Fedora, Studio64 on Ubuntu, and a couple of others on SuSE and Debian or Knoppix that I have heard of.

At face value, most would think they are just distributions
that come prepackaged with all the best open-source audio/video production software available, but their real strength and claim to fame is their RT kernels. Technically, I have no real clue what work goes into these, but I have heard it is something like real-time scheduling (I could be off), and I just imagine they are made to dedicate most of their resources and attention to assuring an uninterrupted audio *path*.

At the moment I have just started to use Studio64 and haven't really been working on audio yet (just checking my mail, in fact). About a week or two ago, on their mailing list, the developers were asking users whether they would prefer to stick with the current RT kernel, or go with the latest and greatest non-RT Intrepid kernel as part of Ubuntu's next major release. This was because the RT kernel development took a lot of work, and RT kernels are not as easy as they sound, from what I can make of it. 100% of the replies were in favour of the current RT kernel.

FreeBSD has most of the software you would need, that Linux
has, in ports; but what I was inquiring about was whether anyone knew of a similar project or group working on FreeBSD, or at least whether the FreeBSD kernel had the potential, in which case I would rather have an audio-wise sluggish FreeBSD and watch it mature than switch to Linux at this point.

I hope all this was not too boring and gives a basic idea.
I just think that if FreeBSD had the potential, and had a project based on it one day, it would be a winner.

I just really don't understand people who try to force capacitors and
disregard the hard disk's seek and write times.

Sound needs to travel some distance over the wire; that takes time.
Sound needs to be processed to apply filters; that takes time.
Something goes in, something goes out, the girl starts to sing.
And who expects that to happen instantly?

I think that a successful developer would be someone who understands the
physical part of it; someone who can make an illustration of the connected devices,
count the numbers, and then actually calculate the timings and synchronize them
with the recording software. The keyword is synchronizing, not minimizing the latency.

There are many ways to go from here. My copy of FreeBSD is open. Some values
can be changed on the fly; everything else can be changed and recompiled.
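On the "changed on the fly" point, FreeBSD's sound(4) driver does expose some latency-related knobs through sysctl. A hypothetical session (the exact OIDs and value ranges vary by FreeBSD version, so treat these as assumptions to verify against `sysctl -a` on your own system):

```shell
# List the sound driver's tunables (names vary by release):
sysctl hw.snd

# On versions that provide it, lower the driver's latency setting on
# the fly (smaller values mean less buffering, at the risk of dropouts):
sysctl hw.snd.latency=2
```

Anything the sysctl interface doesn't cover would indeed mean editing the source and recompiling, as said above.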

Does it have the potential? In my opinion, no. But it surely does not have limits.

Fair enough, and that is all I wanted to know. For the record, it's all going to matter when you're jamming or doing something serious, live or in the studio, not when you're just feeling inspired and artistic. I have had good systems which seem to completely freeze the display, where you can't move the mouse or move any windows for five minutes, but the music keeps going uninterrupted till everything else comes to its senses. Other systems have stopped playing for a while, while his highness writes some cache, checks on memory, and pops by the network card to see how things are doing, or whatever happens.
And speaking of sound cards, most of them still have to be shielded to reduce the noise that other electronic and mechanical components want to add to the process.

I have a TC Electronics 24D that does not know the meaning of noise, and an M-Audio 410 that does have a hint of it, but nothing a gate wouldn't fix; and I could always use S/PDIF and work digital all the way to the amp.

Moving on. I like FreeBSD more than any other OS, but if it can't, it can't.