Eric Seaberg;205660 Wrote: 
> The difference in 'time' is what gives us STEREO PERCEPTION.  Look at
> the space between your two ears, and I'm NOT saying the empty space
> ;-).  It's the phase relationship between the left and right ear and
> the timing/phase difference that allows you to 'place' a sound you hear
> with your eyes closed. 

Sorry, I think we're miscommunicating.  I'm worried about what happens
when you mix down several mics into one channel, and several more into
the other.  I agree about "placing" a sound, but that's exactly the
point - we only have two ears!  So when you use multiple mics scattered
around the recording venue and then mix them together, I just can't see
how that can be done without badly messing up all that timing
information.
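
To put a number on that worry, here's a quick back-of-the-envelope sketch (plain Python; the mic spacing and speed of sound are just illustrative assumptions, not from any real session). Summing one mic with a delayed copy of the same source into a single channel produces comb-filter nulls at odd multiples of 1/(2×delay):

```python
# Sketch: two mics hear the same source at different distances and are
# mixed into ONE channel. The path difference acts as a delay, and
# summing two equal-level copies cancels completely at odd multiples
# of 1/(2*delay). All numbers here are illustrative.

SPEED_OF_SOUND_FT_S = 1125.0  # approx. speed of sound in air, ft/s

def comb_null_frequencies(path_difference_ft, max_hz=20000):
    """Frequencies fully cancelled when two equal-level copies of a
    signal, offset by the given path difference, are summed."""
    delay_s = path_difference_ft / SPEED_OF_SOUND_FT_S
    nulls = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * delay_s)  # nulls at (2k+1)/(2*delay)
        if f > max_hz:
            break
        nulls.append(f)
        k += 1
    return nulls

# Two mics whose distances to the source differ by just ONE foot:
nulls = comb_null_frequencies(1.0, max_hz=5000)
print(nulls)  # first null is already at 562.5 Hz, squarely in the music
```

So even a one-foot path difference puts cancellation notches right through the musical range, which is exactly the timing damage I mean.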

> 
> The same holds true in multi-microphone recording techniques.  Your
> example of a 20Hz wavelength being recorded to two mics 30-feet apart
> is not really valid, unless you're recording Bach's Concerto for Solo
> 20Hz Oscillator.... music is multi-frequency with varied
> frequencies/wavelengths happening all the time.  

Sorry, it wasn't 20Hz, it was 1kHz.  The point was that the wavelength
of all the most important frequencies in music is LESS than a foot or
two, so mics farther apart than that have an essentially arbitrary
phase relationship for every sound of interest.  Using your example of
30 feet, they would only be in phase for frequencies well below 20Hz -
a 30-foot path difference is already half a wavelength at about 19Hz.
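
The arithmetic behind those numbers is just wavelength = speed of sound / frequency; here's a small sketch (Python, using an assumed ~1125 ft/s for the speed of sound in air):

```python
# Wavelength and inter-mic phase shift, straight-line approximation.
# The speed of sound value is an assumption (~1125 ft/s in air).

SPEED_OF_SOUND_FT_S = 1125.0

def wavelength_ft(freq_hz):
    """Wavelength in feet at a given frequency."""
    return SPEED_OF_SOUND_FT_S / freq_hz

def phase_shift_deg(freq_hz, path_difference_ft):
    """Phase difference between two mics whose distances to a
    source differ by path_difference_ft."""
    return 360.0 * path_difference_ft / wavelength_ft(freq_hz)

print(wavelength_ft(1000))          # ~1.1 ft at 1kHz
print(phase_shift_deg(1000, 30.0))  # 30-ft path difference at 1kHz: 9600 degrees
print(wavelength_ft(18.75))         # 60 ft, i.e. 30 ft is half a wavelength
```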

> IMHO, there is no possible way to have absolute phase coherency at all
> frequencies at all times in an acoustic recording environment. 
> Actually, if you could, it would probably sound lifeless and sterile
> because our ears and brain don't work that way.

Really?  Have you ever listened to a binaural recording (that's a
recording made with two mics, one per channel, spaced about as far
apart as someone's ears)?  Through headphones, a good binaural
recording sounds eerily lifelike - more realistic and alive than
anything I've ever heard played over a stereo system.  I'm wondering
whether these issues are part of the reason.
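
For scale, the timing cue a binaural recording preserves is tiny - here's a rough sketch (straight-line approximation that ignores diffraction around the head; the ~7-inch ear spacing is an assumed typical value):

```python
# Rough upper bound on the interaural time difference (ITD) that
# binaural recording preserves. Straight-line approximation only;
# ear spacing and speed of sound are assumed values.

SPEED_OF_SOUND_FT_S = 1125.0
EAR_SPACING_FT = 7.0 / 12.0  # ~7 inches

max_itd_ms = 1000.0 * EAR_SPACING_FT / SPEED_OF_SOUND_FT_S
print(f"max interaural time difference ~{max_itd_ms:.2f} ms")  # ~0.52 ms
```

Half a millisecond is the entire budget the ear/brain has to work with, which is why mic placement errors of many feet seem so drastic to me.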

> Many studio reference monitor manufacturers have incorporated phase
> coherency (or TIME ALIGNMENT) into their monitors, but it's usually
> done by placing the voice-coil of each individual driver in the same
> physical plane, allowing the combined sound to leave the 'box' as a
> phase aligned source.  What happens in the room before it hits your
> ears is part of the ear/brain experience that can't be compensated for
> UNLESS the music was recorded, mixed and listened to in an anechoic
> chamber.  Doesn't sound very exciting to me.

I don't really see the point in worrying about that when the music your
speakers are playing already has hopelessly distorted phase
information.

I have a small collection of recordings that sound extremely realistic
to me when played through stereo speakers (Patricia Barber is a good
example) - I wonder whether those were made with minimal mic-ing?  But
I'm also confused about why nearly everybody uses many mics when
recording - I'm sure there's a good reason!  Is it just to give the
engineer more control over the balance of the recording?


-- 
opaqueice
------------------------------------------------------------------------
opaqueice's Profile: http://forums.slimdevices.com/member.php?userid=4234
View this thread: http://forums.slimdevices.com/showthread.php?t=35708

_______________________________________________
audiophiles mailing list
[email protected]
http://lists.slimdevices.com/lists/listinfo/audiophiles
