I would like to clarify a few details on low frequency localization.

Low frequencies are not difficult to localize with regard to natural sounds. 
The brain is perfectly capable of detecting phase differences for sounds 
originating in a real, three-dimensional space and arriving at each ear at 
different times. It’s actually the middle frequencies where the human hearing 
system has difficulty. For an average-sized head, above roughly 800 Hz the 
half wavelength becomes shorter than the spacing between the ears, so the 
interaural phase difference becomes ambiguous and errors occur in the phase 
comparison. From there up to about 1600 Hz, localization is difficult. So, 
we’re directionally deaf for about an octave in the middle, although the 
psychoacoustics are quite complex.
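The 800 Hz figure can be sanity-checked with a back-of-the-envelope 
calculation. This is only a sketch with assumed round numbers for head size 
and the speed of sound, not measured data:

```python
# Back-of-the-envelope check of the ~800 Hz figure above.
# Assumptions (round example values, not measurements):
#   effective interaural path ~0.21 m, speed of sound 343 m/s.

SPEED_OF_SOUND = 343.0   # m/s, at roughly 20 degrees C
EAR_SPACING = 0.21       # m, effective interaural path for an average head

# Maximum interaural time difference: sound arriving from directly to one side.
max_itd = EAR_SPACING / SPEED_OF_SOUND

# Phase differences become ambiguous once a half wavelength fits between the
# ears, because a lead of +x and a lag of (wavelength - x) then produce the
# same phase reading at the two ears.
ambiguity_freq = SPEED_OF_SOUND / (2 * EAR_SPACING)

print(f"max ITD = {max_itd * 1e6:.0f} us")            # about 612 us
print(f"ambiguity starts near {ambiguity_freq:.0f} Hz")  # about 817 Hz
```

With these assumed numbers the ambiguity sets in right around 800 Hz, which 
matches the figure above; real heads vary, which is one reason the 
psychoacoustics are messier than this arithmetic.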

Very low frequencies are difficult to localize, but we’re talking about 
frequencies below 80 Hz, the bottom two octaves of the human hearing range.

Non-natural sounds can be different, especially for music mixed with a 
standard, amplitude-only panning system. Some people refer to this as 
multi-mono recording, even though it’s typically called stereo. This is by far 
the most common way to mix music, and the low frequencies have no 
inter-channel delay at all, only amplitude differences, which are useless as 
low-frequency localization cues. So, in that sense, low frequencies in stereo 
music sources can be impossible to localize because the necessary information 
is simply not present in the signal.
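To make the point concrete, here is a minimal sketch of the standard 
constant-power “pan pot” described above (the function name and pan-law 
mapping are illustrative, not from any particular mixer):

```python
import math

def constant_power_pan(sample, pan):
    """Constant-power amplitude panning, a sketch of a standard pan pot.

    pan runs from -1.0 (hard left) to +1.0 (hard right); 0.0 is center.

    Note what is missing: both channels receive the *same* sample at the
    *same* time. There is no inter-channel delay, so the phase cues the
    ear relies on for low-frequency localization are never created.
    """
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    left = sample * math.cos(angle)
    right = sample * math.sin(angle)
    return left, right

# Center pan: identical timing, equal level (about -3 dB) in each channel.
l, r = constant_power_pan(1.0, 0.0)
```

Whatever the pan position, the only difference between channels is gain, 
which is exactly why amplitude-panned stereo carries no low-frequency 
directional information.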

Finally, to bring this back to the topic of CoreAudio, Apple provides a 3D 
Mixer (deprecated) and a Spatial Mixer AudioUnit, which do allow for 
inter-channel delays. In addition, the output system of CoreAudio will map 
spatial sound sources onto the speakers you have according to their physical 
placement (as set by the user in Audio MIDI Setup). This works for stereo and 
binaural outputs as well as quadraphonic, 5.1 surround, or higher. Thus, for 
audio from video games, or for music produced and mixed with CoreAudio, it 
would indeed be possible to localize low-frequency sounds. Of course, 
frequencies below 80 Hz will still be difficult to localize, but most sound 
sources have relatively little content that low, except for electronic and 
experimental music.

Thus, it is more accurate to say that we only need one subwoofer in a 
surround system because music has been produced for decades with no 
inter-channel delay. But things are changing. Depending upon the frequency 
range of your subwoofer (100 Hz?) and the slope of the crossover filters, 
there could be a reasonable amount of directional information in surround 
content, depending upon the manner in which the audio was produced. Of 
course, this assumes that your main speakers do not extend far enough below 
80 Hz to cover the directional information in that range themselves.
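To get a feel for how much directional content can leak into a subwoofer 
feed, here is a sketch of the attenuation of an idealized Butterworth-style 
crossover low-pass. The 100 Hz corner and 4th order (24 dB/octave) are 
assumed example values, not a description of any particular system:

```python
import math

def lowpass_attenuation_db(f, fc=100.0, order=4):
    """Magnitude attenuation in dB of an idealized Butterworth low-pass
    of the given order at frequency f.

    order=4 gives an ultimate slope of 24 dB/octave; fc=100.0 Hz and
    order=4 are assumed example values for illustration.
    """
    return 10.0 * math.log10(1.0 + (f / fc) ** (2 * order))

# Content in the directionally useful range just above 80 Hz is only
# mildly attenuated, so it still reaches the subwoofer.
for f in (80, 100, 160, 200):
    print(f"{f:>4} Hz: {lowpass_attenuation_db(f):5.1f} dB down")
```

With these values, content at 100 Hz is only about 3 dB down and content at 
160 Hz about 16 dB down, so a subwoofer fed through such a crossover can 
still receive audible directional information from the range above 80 Hz.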

Brian Willoughby
Sound Consulting


On Jan 25, 2018, at 12:47 AM, Richard Dobson <[email protected]> wrote:
> That's a very cool idea ... but also very difficult. The overall topic is 
> generally referred to as "localisation". You are describing "dummy head 
> recording". Spectrum analysis will be an important tool, though in some cases 
> inspecting the waveform may show how phase differences between the ears 
> contribute to localisation. This "inter-aural difference" is an important 
> element.  It is known, for example, that localisation depends not only on 
> distance, but also on pitch - low frequencies are virtually impossible to 
> localise as the phase is not sufficiently different between the ears. Which 
> is why we may need 5 speakers, in just the right positions, to hear music in 
> "surround", but just one sub-woofer, which can be placed just about anywhere.

Coreaudio-api mailing list      ([email protected])