Re: [Sursound] Naive question on MS and Ambisonics

2013-05-22 Thread Robert Greene


Didn't Lauridsen propose and experiment with
stereo playback done this way--with a mono signal
in the center and a difference signal produced by an edge-on mounted
dipole?
Robert

On Wed, 22 May 2013, Jörn Nettingsmeier wrote:


Hi Ray,

On 05/22/2013 01:24 AM, revery wrote:

Hello jörn,

Thinking about what you say here, is this working by having pure M
from the front and S from 90 degrees to the side, effectively
'mixing' the M S signals in the air as they reach the ears/brain?
(Maybe I'm thinking about this too much, my brain is hurting.)


Maybe :)

Think about it this way: MS is a subset of Ambisonics, effectively missing 
the front-back and up-down information. So we can use it as an Ambi mic:
The Mid signal gets panned where I need it. The Side signal is then used to 
give it a bit of width. For a frontal source, it will be fed to the Y channel 
only. Note there is no pressure component W from this signal.
The only slight complication is that your side signal is not coincident with 
the main microphone, so you have to watch out for your overall image.
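As a sketch of that routing (first order only, for clarity), a hypothetical `encode_ms` helper pans the Mid conventionally and feeds the Side to Y alone, with no W contribution; the FuMa-style -3 dB weighting on W is an assumption of this sketch, not something stated above:

```python
import numpy as np

def encode_ms(mid, side, azimuth_deg):
    """Pan a Mid signal into first-order B-format (FuMa WXYZ) and
    add the Side signal to Y only -- no pressure component W from it."""
    az = np.radians(azimuth_deg)
    w = mid / np.sqrt(2.0)        # -3 dB pressure channel (FuMa convention)
    x = mid * np.cos(az)          # front-back component of the Mid signal
    y = mid * np.sin(az) + side   # left-right; Side adds width here only
    z = np.zeros_like(mid)        # an MS pair carries no height information
    return w, x, y, z
```

For a frontal source (azimuth 0), Y carries the Side signal alone, matching the description above.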



If so,
is there significant distortion/corruption of the effect from the two
ears receiving different variations of the M and S signals?


There is no coloration other than what's inherent in Ambisonics.


I realise
that ear crosstalk effect is an issue with standard two speaker
stereo as well, but the consequences with this kind of signal
presentation seem to me to be quite different. As part of this, if
the head turns say 45 degrees to the left, the ear difference would
seem to be at a maximum, with the left ear receiving a significant
amount of the opposite lobe of the figure 8 with little cancellation
effect from the M in front. Perhaps this is all part of the plan...?


Like I said, there is no magic mixing in the air.
Ear crosstalk is not an issue in Ambisonics - we try to recreate a sound 
field, and the head is in there like it would be in the original field at the 
concert. So "ear crosstalk" is very much part of the experience.


What you will hear is pretty much a widened version of the M signal. It's not 
strictly orthodox, but it works, as long as you don't overdo it and you get 
your delays right.


For a less confusing way of mixing MS spots into Ambisonics, you can render 
them to Left-Spot and Right-Spot with a conventional MS matrix, and then pan 
those individually. Because they are coincident, there will be no comb filter 
artefacts, no matter how close you pan them.
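That decode is the standard sum/difference matrix. A minimal sketch (the `width` knob is an addition of this sketch, not part of the textbook matrix):

```python
def ms_to_spots(mid, side, width=1.0):
    """Standard MS decode: Left = M + S, Right = M - S.
    The two outputs stay coincident, so they can be panned
    arbitrarily close together without comb filtering."""
    left = mid + width * side
    right = mid - width * side
    return left, right
```

Each spot is then panned as an ordinary mono source.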


The reason I treat the S signal separately is because I usually work in third 
order, so the M mic is pretty sharp, and the S mic is then fed to first order 
only.
The problem with this kind of mixed-order hackery is that the sound might 
shift when you truncate orders on playback (if no 3rd order system is 
available), but I already have this problem: my main mic is a first-order 
tetrahedron anyways. So I just watch out for it while mixing and frequently 
cross-check at lower orders to arrive at a useful middle ground.


Best,

Jörn




--
Jörn Nettingsmeier
Lortzingstr. 11, 45128 Essen, Tel. +49 177 7937487

Meister für Veranstaltungstechnik (Bühne/Studio)
Tonmeister VDT

http://stackingdwarves.net

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound




Re: [Sursound] [allowed] Re: Recreating a 3d soundfield with lots of mics.....

2013-05-22 Thread Robert Greene

Sorry! I read the wrong volume! RFH is actually 21,960 cu m.
This gives a critical distance of ~7 meters.
(not that this changes my basic point but just for the record)
Robert

On Wed, 22 May 2013, Robert Greene wrote:



No. But the fact that a hall sounds
anechoic or nearly so does not mean it is!
To the extent that I could find out on line
in a quick search, it seems that the
reverb time was about 1.4 seconds. This
is much too short to sound satisfactory and
moreover the rise of RT in the bass was
not much--this is something that makes
a hall sound thin and cold (like Disney in LA--
there is a lot of reverb there, a 2 sec time, but
it is uniform with respect to frequency--
the thing sounds like a bad audio system)

However, while this is surely too dry to be
a good hall, such a reverb time will still
lead to the reverberant sound field dominating
the total energy received. One just has to back up
a bit further before this happens--but it will
still happen at all but extremely close locations.

For a fixed volume, the critical distance (beyond which
reverb is more than half the sound) varies reciprocally with
the square root of the reverb time. If the hall had a
reverb time of 2.8 seconds (super wet) then the critical distance
would change only by a factor of 1.4. All halls that
are not open to the out of doors have a critical distance
smaller than the distance to most audience locations.

A quick seat-of-the-pants calculation for RFH (volume 11,600
cu m, RT 1.4 s) gives a critical distance of around 5.5 meters.
Not that far! Beyond that distance, the reverberant field exceeds the direct arrival.
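The back-of-envelope number comes from the Sabine-based approximation d_c ≈ 0.057 · √(V / RT60); a minimal sketch:

```python
import math

def critical_distance(volume_m3, rt60_s):
    """Sabine-based approximation for the critical distance in meters:
    the radius beyond which reverberant energy exceeds direct sound."""
    return 0.057 * math.sqrt(volume_m3 / rt60_s)
```

With the corrected RFH volume of 21,960 cu m and RT 1.4 s this gives about 7.1 m, and doubling the reverb time to 2.8 s shrinks it only by √2 ≈ 1.4, as stated above.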

Because of the precedence effect, the sound seems to come straight from
the players. But it is an illusion!
Cf.
www.regonaudio.com
"Records and Reality"

The relevance to the live-versus-speaker demo is that at a distance,
the power response of the speaker dominates the scene--the specific
radiation pattern is not so important in detail. Which is why
the AR demos worked! (and presumably the Wharf. ones as well)

Robert

On Tue, 21 May 2013, David Pickett wrote:


At 12:16 21-05-13, Robert Greene wrote:


Even "dead" concert halls in the relative sense
have a lot of reverberation. A really dead hall
still has a 1 second reverberation time say
and most of what you hear in the audience is still
reverberant sound.


Did you ever hear an orchestra playing in the RFH pre 1960???

David

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound




Re: [Sursound] Adding stereo to monophonic audio

2013-05-22 Thread Rev Tony Newnham
Hi

I seem to remember reading a review of kit that could do this - and/or
construct 5.1 from stereo - aimed at TV broadcasters.  That was just a
couple of years ago - not the inverse comb filter system that was sometimes
(mis)used to convert mono to stereo in the early days of stereo records.

Every Blessing

Tony

> -Original Message-
> From: sursound-boun...@music.vt.edu [mailto:sursound-boun...@music.vt.edu]
On
> Behalf Of Andrew Castiglione
> Sent: 22 May 2013 17:49
> To: Surround Sound discussion group
> Subject: [Sursound] Adding stereo to monophonic audio
> 
> Interesting
http://hackaday.com/2013/05/22/adding-stereo-to-monophonic-
> audio/

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound



[Sursound] Measurement, Analysis, and System Implementation of the Head-Related Transfer Function

2013-05-22 Thread Andrew Castiglione
http://people.ece.cornell.edu/land/courses/ece5030/FinalProjects/s2013/pmd68_ecs227_hl577/pmd68_ecs227_hl577/index.html

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


[Sursound] Adding stereo to monophonic audio

2013-05-22 Thread Andrew Castiglione
Interesting   
http://hackaday.com/2013/05/22/adding-stereo-to-monophonic-audio/

-Original Message-
From: Hack a Day [mailto:comment-re...@wordpress.com] 
Sent: Wednesday, May 22, 2013 8:00 AM

Subject: [New post] Adding stereo to monophonic audio

Post   : Adding stereo to monophonic audio
URL: http://hackaday.com/2013/05/22/adding-stereo-to-monophonic-audio/
Posted : May 22, 2013 at 8:00 am
Author : Brian Benchoff
Tags   : binaural recording, binaural, head transfer function, recording, 
stereo
Categories : digital audio hacks

http://hackadaycom.files.wordpress.com/2013/05/board.jpg

A lot of awesome stuff happened up in [Bruce Land]'s lab at Cornell this last 
semester. Three students - [Pat], [Ed], and [Hanna] - put in hours of work to 
come up with a few algorithms that are able to simulate stereo audio with 
monophonic sound 
(http://people.ece.cornell.edu/land/courses/ece5030/FinalProjects/s2013/pmd68_ecs227_hl577/pmd68_ecs227_hl577/index.html).
It's enough work for three semesters of [Dr. Land]'s ECE 5030 class, and 
while it's impossible to truly appreciate this project with a YouTube video, 
we're assuming it's an awesome piece of work.

The first part of the team's project was to gather data about how the human ear 
hears in 3D space. To do this, they mounted microphones in a team member's ears, 
sat him down on a rotating stool, and played a series of clicks. Tons of 
MATLAB later, the team had an average of how their team members' heads heard 
sound. Basically, they created an algorithm of how binaural recording 
(http://en.wikipedia.org/wiki/Binaural_recording) works.

To prove their algorithm worked, the team took a piece of music, squashed it 
down to mono, and played it through an MSP430 microcontroller. With a good pair 
of headphones, they're able to virtually place the music in a stereo space.
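The playback side of such a system boils down to convolving the mono signal with a measured head-related impulse-response pair for the desired direction. A minimal sketch (function name and array shapes are assumptions of this sketch, not taken from the project):

```python
import numpy as np

def spatialize_mono(mono, hrir_left, hrir_right):
    """Place a mono signal at a virtual direction by convolving it with
    the left/right head-related impulse responses measured for that
    direction. Assumes equal-length HRIRs; returns a (samples, 2) array."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)
```

On headphones, the result carries the interaural time and level differences that are baked into the HRIR pair.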

The video below covers the basics of their build but because of the limitations 
of [Bruce]'s camera and YouTube you won't be able to experience the team's 
virtual stereo for yourself. You can, however, put on a pair of headphones and 
listen to this (http://www.youtube.com/watch?v=IUDTlvagjJA), a good example of 
what can be done with this sort of setup.

Read more of this post 
(http://hackaday.com/2013/05/22/adding-stereo-to-monophonic-audio/#more-98294) 


___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound