On 02/22/2013 03:25 AM, Eric Carmichel wrote:
> Regarding localization, home theatre sound, and the 80-Hz crossover
> point: I'll confess ignorance when it comes to any separate or
> unique physiological mechanism used to localize (or omni-ize)
> ultra-low frequencies.
same here. if there is indeed a special mechanism at work, i'd like to
learn what it might be.
> Within the context of room reflections, music listening, home
> theatre, and the like, I'm fully aware that frequencies below 80 Hz
> are nearly impossible to localize. Add to the overall auditory scene
> the constituent frequencies that provide unambiguous sound-source
> information, and the need for a surround of subs really goes out the
> window. Some advocate a slightly higher cut-off frequency, but I
> gather that 80 Hz is the accepted standard for home theatre.
if you need to be compatible with the n.1 LFE, your sub should go up to
120 Hz. otherwise, LFE content might be split up between fullrange
speakers and sub. (not that it is a problem as such, it's just that it
complicates the bass management and cinema setups tend to avoid it).
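to make the routing concrete, here's a minimal bass-management sketch
in python/scipy. the filter orders, the 80/120 Hz corners and the
+10 dB LFE in-band gain are illustrative assumptions, not a spec:

  # minimal bass-management sketch; all filter choices are assumptions
  import numpy as np
  from scipy.signal import butter, sosfilt

  fs = 48000
  hp80  = butter(4, 80,  'highpass', fs=fs, output='sos')  # mains above 80 Hz
  lp80  = butter(4, 80,  'lowpass',  fs=fs, output='sos')  # mains bass -> sub
  lp120 = butter(4, 120, 'lowpass',  fs=fs, output='sos')  # LFE band limit

  def bass_manage(mains, lfe):
      """mains: (channels, samples), lfe: (samples,) -> (mains_hp, sub)."""
      mains_hp = np.stack([sosfilt(hp80, ch) for ch in mains])
      sub = sosfilt(lp80, mains.sum(axis=0)) \
          + sosfilt(lp120, lfe) * 10 ** (10 / 20)  # +10 dB LFE convention
      return mains_hp, sub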
> I have seen literature on 5.1 and 7.1 referring to subs and
> low-frequency enhancement for the sole purpose of effects--the
> surround speakers are still operated full-range.
yes, that would be said low-frequency effects channel. orthogonal to the
whole issue of subwoofers really. all main channels are meant to be
fullrange, and the bass management can decide to offload some bass to
dedicated subs.
some people (even classical producers) mix low-frequency content of
the main channels into the LFE, but if you do that, you need to be very
sure of what you are doing and how well it translates to different
systems. most music productions are actually 5.0, maybe with some
unimportant LFE stuff so that the customers don't complain that one
meter stays off (they paid for six channels after all).
> [Although I haven't tried this, I imagine we can accurately
> lateralize a sub-80 Hz tone under earphones. If so, then our
> (in)ability to localize low frequencies in the sound field is mostly
> a consequence of physical variables such as long wavelengths, head
> diffraction, room reflections, etc., and not a unique mechanism or
> deficiency of the brain, mid-brain, or peripheral sensory organ.]
yeah, the "inability to localise bass sounds" is a very persistant urban
myth. "in rooms", ok, but anybody who has been near an open-air
rocknroll stage during subwoofer calibration will have no trouble
localising the sound :)
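if anyone wants to try the headphone check eric proposes, a short
sketch (python; the 60 Hz frequency, the 0.5 ms ITD and the level are
arbitrary choices, and 'soundfile' is just one way to write the file):

  # dichotic 60 Hz tone with a 0.5 ms interaural time difference
  import numpy as np
  import soundfile as sf

  fs, f, itd = 48000, 60.0, 0.0005          # left ear leads by 0.5 ms
  t = np.arange(2 * fs) / fs
  left  = 0.3 * np.sin(2 * np.pi * f * t)
  right = 0.3 * np.sin(2 * np.pi * f * (t - itd))  # delayed -> image left
  sf.write('itd_tone.wav', np.stack([left, right], axis=1), fs)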
> Regardless of accepted protocol, I do have reason for using multiple
> subwoofers, and this reason purposely ignores psychoacoustics.
> <snip>
interesting research project! if you can, please share the results.
> When it comes time to construct a sound system for assessing sound
> quality (or simply for musical enjoyment), I will most certainly use
> a single sub as you suggested. But for my proposed system, the
> subwoofer's crossover frequency and filter order do become choices
> based on speaker performance. Because I will be filtering/processing
> the four B-format wave files before decoding (none of the processing
> will be done in real-time), I have a lot of choices for filter
> types--and perhaps the addition of group delay. I have numerous
> MATLAB Toolboxes for processing wav files in addition to the
> Advanced Signal Processing and Digital Filter Design Toolkits in
> LabVIEW. Thankfully, I'm no longer limited to the *bouncy* 8th-order
> elliptic filters I used to construct. The problem nowadays is that
> there are way too many choices that are relatively easy to
> implement. Responses to my last post provide clearer
> direction--thanks.
why are you doing this off-line? the processing power required will be
modest, and real-time processing will speed up the calibration by at
least a factor of ten.
for the actual experiment, i can see how you would use pre-rendered
files for extra robustness, repeatability and foolproof documentation,
but while finding the setup, i'd really recommend using real-time
filtering. insert usual "linux/jack/ardour/fons' stuff" plug here.
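that said, the off-line route has one nice perk: you can filter with
zero phase. a minimal sketch (python/scipy; the file name, the 80 Hz
corner and the forward-backward butterworth are assumptions for
illustration, not a recommendation):

  # off-line zero-phase lowpass on the W channel of a B-format file
  import soundfile as sf
  from scipy.signal import butter, sosfiltfilt

  bfmt, fs = sf.read('scene_bformat.wav')   # (samples, 4) = W, X, Y, Z
  lp = butter(4, 80, 'lowpass', fs=fs, output='sos')

  # filtfilt runs the filter forward and backward: no group delay,
  # at the cost of squaring the magnitude response
  sub_feed = sosfiltfilt(lp, bfmt[:, 0])    # omni component -> sub
  sf.write('sub_feed.wav', sub_feed, fs)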
> A separate 8-channel A-D could certainly be used with ADAT. I
> suppose there's no reason to worry about inter-channel timing issues
> when using dissimilar components (meaning a MOTU optically linked to
> a D-A device).
usually not. you may have a fixed offset of a few samples. if that's
bothersome to you, it's easily measured and compensated.
that said, i'd suspect _very strongly_ that even the built-in i/o of two
daisy-chained MOTUs will have a relative offset, and if they are
anything like RME stuff, this offset might not even be constant between
reboots.
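measuring it is simple: record the same click through both devices and
cross-correlate. a sketch (python/numpy; file names are hypothetical):

  # fixed inter-device offset via cross-correlation of a shared click
  import numpy as np
  import soundfile as sf

  a, fs = sf.read('click_device_a.wav')
  b, _  = sf.read('click_device_b.wav')
  n = min(len(a), len(b))
  xc = np.correlate(a[:n], b[:n], mode='full')
  offset = np.argmax(np.abs(xc)) - (n - 1)  # >0: a is delayed w.r.t. b
  print(f'{offset} samples ({1000 * offset / fs:.2f} ms)')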
> Similar to the MOTU interface, my M-Audio ProFire 2626 provides a
> lot of input and output options (two ADAT ports), but a D-A
> converter would still be needed for > 8 analog outs.
> I like the robustness (and XLR connectors) of the MOTU 896; I'll
> admit that much of this is a personal choice, but it's not meant to
> promote or discount any single piece of gear. When it comes to
> configuring hardware and software, I don't know whether all DAWs
> provide the option of assigning tracks to all of the physically
> available ports (for example, one of my USB interfaces permits a
> choice of digital OR analog, but not both simultaneously).
that's a hardware limitation, probably due to limited usb bandwidth. DAW
software that is unable to play out to arbitrary hardware channels
should be returned to the maker as defective, together with your
choicest expletives.
> Furthermore, I want the presentation of stimuli to be glitch-free. I
> imagine most modern high-end DAWs and interfaces provide crash-free
> performance, but mixing 48 mono tracks to stereo isn't the same as
> providing 48 discrete analog out channels when it comes to stable
> performance.
from the POV of a piece of software, it's actually quite similar:
either way it renders the same number of streams per period, it just
hands them to a different number of hardware buffers.
> Regarding speaker arrays: Thanks, Jörn, for suggesting a large
> (ear-level) ring with smaller rings above and below the larger ring.
> My original idea (two large rings) stemmed from the notion that the
> 12 speakers would lie on the surface of a large (virtual) sphere
> whose poles would extend beyond the room dimensions, thus giving the
> impression of a *bigger* listening space. Of course, a sense of
> distance and spaciousness is intrinsic to the recording, not how far
> the actual speakers are from the listener (well, we could get into a
> wave-front curvature discussion, but I'm not ready for that).
the thing with dual stacked rings is: sources on the equator have very
low rE (the magnitude of the gerzon energy vector, a rough measure of
localisation sharpness) already, being "vertical phantom sources".
nothing against rings as such, but make sure you have a good part of
your speakers at ear level. it sure is a psychoacoustic decision, but
if it really bothers you, you should go for an even distribution on
the sphere anyways.
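for illustration, the rE of an equator source rendered by two speakers
at +/-30 degrees elevation (python/numpy; the elevations and the equal
gains are assumptions for the example):

  # gerzon energy vector magnitude for a vertical phantom source
  import numpy as np

  elev = np.radians([30.0, -30.0])          # one speaker per ring
  dirs = np.stack([np.cos(elev), np.zeros(2), np.sin(elev)], axis=1)
  g = np.ones(2) / np.sqrt(2)               # equal-power gains

  e = g**2 / np.sum(g**2)                   # energy weights
  rE = np.linalg.norm(e @ dirs)             # |sum_i e_i * u_i|
  print(f'rE = {rE:.3f}')                   # ~0.87; a real source gives 1.0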
> The other reason had to do with placement of video monitors. If the
> video doesn't interfere with the speaker array, I'll make drawings
> for the speaker layout you suggested.
yeah, the eternal problem :(
> RE mics: The idea of going *second-order* occurred to me, but I'm
> too ignorant on this topic to say much. I've overheard discussions
> (mostly at trade shows) where first-order mics could be *stretched*
> to give second-order performance (and further stretched to give HOA
> performance or approximations). As I understand it, Ambisonics can
> be enhanced by the addition of U and V components, and these are
> extensions of the B-format components derived from a first-order mic
> (based on the raw, or A-format, data). From what I've read, U and V
> contribute to horizontal, not vertical, image stability. With regard
> to live recordings, only a HOA-specific mic (e.g., VisiSonics or
> Eigenmike) can provide true HOA performance. Adding more playback
> channels to recordings obtained with a first-order mic may give
> better room coverage/distribution, but nothing is gained in terms of
> accurate *wave-field* reconstruction. Am I correct here? Although I
> could create higher-order stimuli via modeling, the plan is to use
> complex and dynamic stimuli (including moving objects such as
> automobile traffic) obtained via live recordings. The emphasis is on
> *real-world* reconstruction of representative environments, not
> listening assessments based on movie-goers' expectations (movie
> sound designers provide us with ample conditioning; heck, you can
> even hear sounds in the vacuum of deep space!).
but it's quite easy to render, say, a passing car (together with its
characteristic floor reflection, if that's relevant for cars) in
higher order.
sure, getting a complete ambient recording in HOA is hard and involves
something like the eigenmike, but "artificial" stimuli can just be
panned, exploiting the improved focus of HOA.
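to connect this to the U/V question above: horizontal panning in 2nd
order just means multiplying the mono signal by the circular harmonics
cos(m*phi), sin(m*phi) up to m = 2. a sketch (python; the trajectory
and the flat normalisation are illustrative assumptions):

  # 2nd-order horizontal panning of a pass-by; U, V are the m = 2 terms
  import numpy as np

  fs = 48000
  t = np.arange(4 * fs) / fs                # 4 s drive-by
  az = np.arctan2(10.0, 30.0 - 15.0 * t)    # straight line, 10 m to the side

  mono = 0.1 * np.random.randn(len(t))      # stand-in for the car recording
  W = mono                                  # order 0
  X, Y = mono * np.cos(az), mono * np.sin(az)            # order 1
  U, V = mono * np.cos(2 * az), mono * np.sin(2 * az)    # order 2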
best,
jörn
--
Jörn Nettingsmeier
Lortzingstr. 11, 45128 Essen, Tel. +49 177 7937487
Meister für Veranstaltungstechnik (Bühne/Studio)
Tonmeister VDT
http://stackingdwarves.net