Re: [Sursound] A higher standard of standardness

2013-07-03 Thread David Pickett

At 06:31 3/7/2013, Robert Greene wrote:


Variations from reality ought surely to be based on knowing
how to reproduce the reality first and then introducing the
variations. One does not bend pitches for artistic effect
until one is able to play in tune, so to speak.


Yes, indeed; but such question begging exposes the problem per analogiam. What does one define as "in tune"? What you are asking for is the ability to reproduce a complete soundfield with 100% accuracy, and then to introduce variations. We have not yet progressed to this level.



If people want to treat recording as a pure art form
where one simply judges the results on aesthetic grounds,
it would be hard to say that was wrong. But it surely
takes recording out of the realm of science.


I am not sure that many of its practitioners (even Blumlein) regarded recording as a science: it is rather an exercise in engineering combined with aesthetics, and as such intrinsically hard to theorize about.



To my mind, offensive or no, it remains startling to me
that there is no recorded demo of how various stereo mike
techniques reproduce the sound of a pink noise source at
various spots around the recording stage, for example.


I cannot imagine that anyone would want to listen to a CD of pink noise, or that anyone can believe that objective determinations can be made by doing so for longer than a few minutes. The ear adjusts to what it is hearing, as the eye does to colours under different lighting conditions, and there is no audio equivalent of the grey card for white balance. Even doing A/B comparisons with the flick of a switch is fraught with self-deception, unless the levels are controlled and enough time is allowed to accustom oneself to A before assessing B.
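(The level-matching half of that precaution is easy to automate. A minimal sketch, not anything discussed in the thread, assuming the two versions under comparison exist as mono WAV files and that Python with numpy and soundfile is to hand:

# Match the RMS level of stimulus B to stimulus A before an A/B comparison,
# so loudness differences don't masquerade as quality differences.
# Assumes two mono WAV files at the same sample rate; file names are placeholders.
import numpy as np
import soundfile as sf

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

a, fs_a = sf.read("version_a.wav")
b, fs_b = sf.read("version_b.wav")
assert fs_a == fs_b, "sample rates must match"

gain = rms(a) / rms(b)                      # scale B so its RMS equals A's
sf.write("version_b_matched.wav", b * gain, fs_b)
print(f"applied gain: {20 * np.log10(gain):.2f} dB")

Time-aligning the two takes, and leaving enough listening time between them, is of course the part no script can do.)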



Surely people might want to know whether the mike
technique was changing the perceived frequency response of sources
depending on where the sources were?
How can people NOT want to know this?


There is a book by Jürgen Meyer (Acoustics and the Performance of Music). The blurb on Amazon says: "This classic reference on musical acoustics and performance practice begins with a brief introduction to the fundamentals of acoustics and the generation of musical sounds. It then discusses the particulars of the sounds made by all the standard instruments in a modern orchestra as well as the human voice, the way in which the sounds made by these instruments are dispersed, and how the room into which they are projected affects the sounds."


I have had this book for over 30 years. It contains polar diagrams of most orchestral instruments, plotted for different frequencies. Nobody that I know has ever found much use for the data in making a recording, beyond those generalizations that are obvious to the ear.



I agree with EC that a complete analysis of
the relationship between recording and musical sound
 would be a tremendous
task, perhaps one that is not even well defined.


I think that is a conceit: there are far too many independent variables, and the exercise would probably become what Glenn Gould would describe as centipedal.



This is how science works. One works out simple cases
first. The fact that no one knows if there are infinitely
many prime pairs with difference 2 (e.g. 17 and 19) does
not make it irrelevant to know that there are infinitely many
primes. One answers simple questions first.


Again: recording is not a science. If anything it is a craft with elements of engineering. I have been teaching it for over 30 years at university level, and the textbooks that are of any use whatsoever, and those only with caveats, can be counted on one hand. Take, for instance, the excellent book on stereo by Streicher: most of the information is either theoretical (e.g. the combination of unrealizable polar diagrams) or else cannot be used without extensive empirical experimentation.



Personally, I would just like to know which mike technique
does what to the tonal character of sources at different
locations around the recording stage. If you don't care, you
don't care. But I wish I had a disc where I could listen
and find out. I find it hard to believe that other people
are not interested in this.


As I am sure you know, active listening is a very tiring process that most people are not trained to participate in. If one cannot identify differences within seconds, it is best to take a long rest and try again much later. Few have the patience for this, and professionals cannot afford the time when musicians are waiting to perform.



Years ago I decided to learn the piano(I am a violinist!)
just to see how it would go, by learning the Rachmaninoff 3rd
piano concerto --a measure at a time. As you can imagine I
did not get very far! (the first statement of the theme
went ok but soon, no soap). Of course this was a joke!
I knew from experience of learning to play the violin
that one learns the basics step by step and builds
up to the complex 

Re: [Sursound] A higher standard of standardness

2013-07-03 Thread Dave Malham
On 3 July 2013 07:37, David Pickett d...@fugato.com wrote:

 At 06:31 3/7/2013, Robert Greene wrote:

  Variations from reality ought surely to be based on knowing
 how to reproduce the reality first and then introducing the
 variations. One does not bend pitches for artistic effect
 until one is able to play in tune, so to speak.


 Yes, indeed; but such question begging exposes the problem per analogiam.
  What does one define as in tune?  What you are asking for is the ability
 to reproduce a complete soundfield with 100% accuracy, and then to
 introduce variations.  We have not yet progressed to this level.


And I doubt if we will ever even begin to be able to do this. Ambisonics can theoretically get it exactly right at just one point (and does get close-ish), but it is still relatively far off when you factor in the timbral effects of the microphones (because of the individual characters of the capsules), the disturbance of the sound field in the original space by the presence of the microphone, and the non-coincidence of the capsules. Then, of course, there are the limitations of the loudspeakers and the playback space acoustics.


If you want to assess, in a controlled manner, which recording technique is best, this would be a huge experiment, since the answer would surely be different for every type of music - and for every different ethnic group, multiplied by their different life experiences - stay-at-home, traveller, immigrant (multiplied by their generation), refugee, etc., differentiated by their experience of music, both short term (did they go to a nice concert last night?) and long term (do they go to concerts/gigs/discos regularly?). This doesn't even begin to include things like the fact that the current generation of students find it difficult to hear (some of the) obvious defects in recorded sounds that we antiques find glaringly obvious, because they have grown up listening to mp3's and have had to learn how to tune out the rubbish generated by the encoding/decoding algorithms.

But, assuming someone manages to get a sufficiently, stupendously, ginormously huge grant and actually wants to take this on, here are some suggestions:

Under no circumstances use artificial test signals like pink noise for the main work - use these only for later, detailed work, if absolutely necessary. Record real sounds anechoically and replay them from the best possible loudspeakers, one per sound source** on stage in an appropriate venue (a concert hall for DWMM, outside in a square for Gamelan, in a pokey little club for jazz, and so on) which, importantly, will also have to have an excellent replay room attached to it.

In turn, set up the differing record/replay set-ups, recording the same piece of music each time. Have your experimental subjects listen to the live replay in the venue and to the recording of it in the replay room. Now repeat the experiment with the same venues, music and equipment but different replay rooms, in order to remove the effects of the replay room acoustics and other perceptual effects like comfort and lighting. Repeat until you can't stand it any more, then analyse listener preferences. After five years and several million euros you might have some answers.


Dave.

PS I am available for consultancy if one of you gets the grant, at
appropriate rates, of course :-)

** I am aware this isn't perfect as, ideally, the speakers should also reproduce the directional effects of the instruments, but it is at least (a) reproducible and (b) not a totally artificial signal.



-- 
As of 1st October 2012, I have retired from the University.

These are my own views and may or may not be shared by the University

Dave Malham
Honorary Fellow, Department of Music
The University of York
York YO10 5DD
UK

'Ambisonics - Component Imaging for Audio'


Re: [Sursound] A higher standard of standardness

2013-07-03 Thread Richard G Elen

On 03/07/2013 05:31, Robert Greene wrote:

If people want to treat recording as a pure art form
where one simply judges the results on aesthetic grounds,
it would be hard to say that was wrong. But it surely
takes recording out of the realm of science.


I am not sure that recording is a science per se. That's not to say that 
there isn't, or cannot be, or shouldn't be, such a thing as a science 
of recording, but it's not what most of us actually do.


What we actually do is fundamentally artistic, though it uses an array of more or less technical tools and relies on a good deal of engineering to produce those tools. This of course is true of virtually any art: all rely on some kind of technology, whether it's what makes a hammer hit a tuned string or the materials that are combined to make a paint of a certain colour. But technology is not simply applied science, and in these areas we are not, generally, interested in how it works (the science), though those who make the tools no doubt are: we are interested in how it can be worked - how you use the tools to get what you are looking for - and then, most importantly, the art of using them to get something that communicates emotionally and effectively at the end.


If we are communicating emotion, there is a path along which that 
emotion travels. Perhaps it is from the performance of musicians in a 
certain acoustic environment, captured in a certain way and designed to 
be listened to in a certain way, as determined at least partially by the 
musicians and the team in the control room. If they decide, arbitrarily 
or otherwise, that what they are hearing (and thus, ideally, what you 
will hear at home - the destination of that emotional communication) 
communicates the emotion they wish to communicate, then that's it - its 
closeness to what you might hear acoustically in the vicinity of the 
musicians is irrelevant as far as the emotion is concerned (although it 
might be relevant to the techniques used to create and capture the 
performance).


--R



[Sursound] Ambisonics - Decoding 16 channels in DAW

2013-07-03 Thread Moritz Fehr
Dear Members of Sursound,

I am using the VVMicVst plugin in Reaper for mixing and decoding my B-format recordings. The plugin is limited to an output of 8 channels. For a new sound installation, I would like to decode to 16 channels (two circles of 8 speakers, stacked). I know that I could use ICST for Max, but if at all possible I would like to keep on working in a DAW. Are there any other plugins or tools available for this purpose (OSX)?

Any help would be greatly appreciated!

Best,
Moritz 






Re: [Sursound] Ambisonics - Decoding 16 channels in DAW

2013-07-03 Thread Michael Chapman
 Dear Members of Sursound,

 i am using the VVMicVst Plugin in Reaper for mixing and decoding my
 B-Format recordings. The plugin is limited to an output of 8 channels. For
 a new sound installation, I would like to decode to 16 channels (two
 circles of 8 speakers stacked). I know that I could use ICST for Max, but
 if possible in any way, I would to keep on working in a DAW. Are there any
 other plugins or tools available for this purpose (OSX) ?


OSX : Ambdec ... ?

Michael


You can input / output through a DAW if you use Jack.







Re: [Sursound] Ambisonics - Decoding 16 channels in DAW

2013-07-03 Thread Moritz Fehr
hi everyone,

thank you very much for your replies -- what I would like to achieve is playing a mix of a B-format recording combined with several mono and stereo files (I have been doing this a lot, but only with a maximum of 8 channels). My mixing platform is Reaper on OSX.

I am going to record a space with a Soundfield mic, and I would then like to make a simulation of it by setting up an array of 16 speakers: one speaker circle at ear level, the other one above. I would like to use the second circle to add height information to the ambisonic soundfield.

As far as I can see, adding a second instance of VVMic or Harpex might not be suitable, as it would generate two separate soundfields (not sure if I am right here...). The B2X plugins seem to have a maximum of 12 outputs. ...I will look at AmbDec, but it does seem to need a lot of routing using Jack.

Would the DecoPro VST plugin (http://www.gerzonic.net/) be a good choice for this purpose?

Thank you!
Moritz




Am 03.07.2013 um 15:38 schrieb Matthias Kronlachner:

 hi!
 
 you may just add an additional 8 channel track for a second instance of 
 vvmicvst in reaper.
 send the 4 channel ambisonics signal to this newly created instance hosting 
 vvmicvst, and route the outputs as you like.
 
 but whether this approach gives you good decoding is another issue..
 
 matthias
 
 

*
Moritz Fehr
mobil: 01749231733
moritzf...@web.de
www.moritzfehr.de



Re: [Sursound] Ambisonics - Decoding 16 channels in DAW

2013-07-03 Thread Michael Chapman
  ...i will look at
 ambdec but it does seem to need a lot of routing using jack.


Sixteen speakers need a lot of routing whatever you use ... said not to be
unfriendly, just to emphasise I'm not sure I understand ;-)

If you are worried about repeatedly having to connect everything, then
IIRC the GUIs for Jack allow for a 'save this configuration' option.
Even without that, the AmbDec configuration allows for named connections
(sorry, it's a long time since I set one up).

Michael


Re: [Sursound] Ambisonics - Decoding 16 channels in DAW

2013-07-03 Thread ThomasChen
I have done this -- ambisonic decoded to 2 hexagons, one above the other. I
have also added a stereo mix into the decode. It is known as B+.
ThomasChen
 
 


Re: [Sursound] Ambisonics - Decoding 16 channels in DAW

2013-07-03 Thread Martin Leese
Moritz Fehr wrote:
...
 i am going to record a space with a soundfield mic and i would like to then
 make a simulation of it by setting up an array of 16 speakers. one speaker
 circle is on ear level, the other one above.
 i would like to use the second circle above to add height information to the
 ambisonic soundfield.

You will have more success if one speaker
ring is *below* ear level and the other above.
Alternatively, if you need a ring at ear level,
try three rings of, say, 4, 6, and 4 speakers.

Regards,
Martin
-- 
Martin J Leese
E-mail: martin.leese  stanfordalumni.org
Web: http://members.tripod.com/martin_leese/
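(Purely to illustrate the ring layouts being discussed, a small Python sketch that prints speaker directions for the two-rings-of-8 layout and for the 4-6-4 alternative. The elevation angles and azimuth offsets are arbitrary example values, not recommendations from this thread.

# Illustration only: (azimuth, elevation) pairs for (a) two rings of 8 speakers,
# one above and one below ear level, and (b) a 4-6-4 three-ring layout.
# The elevations (+/-30 and +/-45 degrees) are placeholder values.
import numpy as np

def ring(n, elev_deg, offset_deg=0.0):
    """n equally spaced speakers at one elevation; returns (az, el) in degrees."""
    az = (np.arange(n) * 360.0 / n + offset_deg) % 360.0
    return [(float(a), elev_deg) for a in az]

layouts = {
    "2 x 8, rings above/below ear level": ring(8, +30.0) + ring(8, -30.0),
    "4-6-4, middle ring at ear level":
        ring(4, +45.0, 45.0) + ring(6, 0.0) + ring(4, -45.0, 45.0),
}

for name, spk in layouts.items():
    print(f"{name}: {len(spk)} speakers")
    for az, el in spk:
        print(f"  az {az:6.1f}  el {el:+5.1f}")
)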


Re: [Sursound] Ambisonics - Decoding 16 channels in DAW

2013-07-03 Thread David McGriffy
as i can see now, adding a second instance of vvmic or harpex might not 
be suitable as it would generate two separate soundfields. (not sure if 
i am right here...)


I can't speak for Harpex (and I can imagine reasons why it might not 
work), but it should be no problem to use two copies of VVMicVST. Each 
output depends only on the inputs and the parameters for that output.  A 
given output will not be affected by the azi/elevation/etc. of the other 
mics. Of course, what is correct is a question of the whole set, but 
that set can span several instances of the plugin.


David

P.S. The standalone Windows program VVMic supports 32 outputs, if this helps.
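(To put the point about per-output independence in concrete terms, a rough Python sketch of a first-order virtual-mic decode. The gain and pattern conventions are assumptions for illustration, not VVMicVST's actual internals; the point is only that each output is a fixed linear combination of W, X, Y and Z, so a set of 16 outputs can be computed in two groups of 8 with identical results.

# Sketch only -- not VVMicVST's internals. Each "virtual mic" output is a fixed
# linear combination of the B-format channels, so splitting the outputs across
# two plugin instances fed the same W/X/Y/Z signal changes nothing.
import numpy as np

def virtual_mic(bformat, az_deg, el_deg, pattern=0.5):
    """bformat: (nsamples, 4) array in W, X, Y, Z order.
    pattern: 1.0 = omni, 0.5 = cardioid, 0.0 = figure-of-eight (assumed convention)."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    d = np.array([np.cos(az) * np.cos(el),      # unit vector the mic points at
                  np.sin(az) * np.cos(el),
                  np.sin(el)])
    return pattern * bformat[:, 0] + (1.0 - pattern) * (bformat[:, 1:4] @ d)

rng = np.random.default_rng(0)
b = rng.standard_normal((48000, 4))             # stand-in for one second of B-format

# "Instance 1": lower ring of 8 at ear level; "instance 2": upper ring of 8.
# The 35-degree elevation is an arbitrary example value.
lower = [virtual_mic(b, az, 0.0) for az in np.arange(8) * 45.0]
upper = [virtual_mic(b, az, 35.0) for az in np.arange(8) * 45.0]
print(len(lower) + len(upper), "independent speaker feeds")
)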


Re: [Sursound] Ambisonics - Decoding 16 channels in DAW

2013-07-03 Thread Aaron Heller
Hi Moritz,

I've been building Ambisonic decoders in Faust, which can then be compiled
into a variety of plugins, including VST, PureData, SuperCollider, and so
forth.  What you need sounds easy to do.   Contact me directly (
hel...@ai.sri.com) and we can work out the details.

Info about Faust here:
   http://faust.grame.fr

Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US





[Sursound] Round Arrays in Square Spaces: A Case of Iteracy

2013-07-03 Thread Eric Carmichel
I was one of those kids who put round pegs in square holes. Out-of-the-box 
thinking didn't apply. Now I'm one of those adults...
Regarding recent posts: I don't think anybody wants to listen to pink noise 
unless you're performing the exercises in Dave Moulton's Golden Ear training. 
But recordings of Gaussian, weighted, and band-limited noise are highly 
purposeful--we all know this.

Digital recordings of pink noise are even better than the old days of analog 
noise generators because we have a replicable reference that we can overlay or 
compare measurements to. On average, pink noise gives a predictable spectrum, 
but without a recording and known time reference, we can't repeat the EXACT 
same signal over and over--again, no news here. But here's something I wish to 
try (I've touched on this in past posts, but now my design is more concrete).

Briefly, I propose a recording of a recording in order to validate *accuracy* 
of spatial reproduction. A human element need not be present (this ain't social 
science). By rotating my TetraMic on a fixture that permits rotation on its 
central axis (see figure in link below)**, I can use a single loudspeaker to 
create the equivalent of a circular array of n loudspeakers playing bursts of 
narrowband noise (or music, if you prefer). I use narrowband (octave or 
third-octave) noise in lieu of pink noise to improve the SNR. This recording 
will provide the initial B-formatted files of noise bursts. I'll arbitrarily 
rotate the mic in 60-degree increments for a total of 6 positions. Because a 
single speaker is being used, I only have to calibrate one speaker at one 
location. Regardless, I now have an equivalent recording of a 6-speaker, 
horizontal-only array.
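(As a rough sketch of the sort of test signal described above -- a third-octave band of noise gated into a burst -- the following Python/scipy fragment may help; the centre frequency, burst length and fade times are arbitrary example values, not part of the design.

# Rough sketch of a third-octave noise burst. fc, burst length and fade time
# are arbitrary example values.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
fc = 1000.0                                    # example centre frequency, Hz
lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)  # third-octave band edges

sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
rng = np.random.default_rng(1)
noise = sosfilt(sos, rng.standard_normal(fs))  # 1 s of band-limited noise

burst = noise[: int(0.5 * fs)].copy()          # 500 ms burst
nfade = int(0.01 * fs)                         # 10 ms raised-cosine fades
fade = 0.5 * (1 - np.cos(np.linspace(0, np.pi, nfade)))
burst[:nfade] *= fade
burst[-nfade:] *= fade[::-1]
burst /= np.max(np.abs(burst))                 # normalise to full scale
print(f"band {lo:.0f}-{hi:.0f} Hz, {len(burst) / fs:.2f} s burst")
)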
For playback, I will use a cubical array that consists of eight loudspeakers: 
four below the horizontal plane (plane as it passes through the mic) and four 
above this plane. Four of the speakers are inverted so that the speakers above 
mirror the speakers below. I am building a frame that permits easy mounting of 
the speakers. Each speaker has its own *shelf* that angles the speaker toward 
the center of the cube. The frame can be transported out-of-doors and away from 
reflecting surfaces (other than ground reflections). I work on a ranch 
(part-time), which is why I have ready access to an open space.

Next I play the initial recording that consisted of noise bursts emanating from six virtual speakers, but the processed recording is played thru the cubic arrangement. At the center of the cube is the TetraMic. This time there is no speaker (or speakers) on the horizontal plane passing thru the mic, but the initial recording was made from a virtual array of speakers lying on this plane. If the playback provides a true physical replication of the original recording, the resulting B-formatted files of the recording-of-a-recording should closely match the B-formatted files from the first recording in both level and spectral make-up. To a listener, the virtual surround (first recording) should appear as speakers in a circular array, each equally spaced 60 degrees apart and at ear level, when played through the cubic array. Of course, I'm assuming the listener is positioned such that his/her ears lie on the horizontal plane that passes thru the center of the cubic array. But when we replace the listener with the mic, the physical wave fronts will provide objective evidence of *accuracy* in terms of spatial orientation at the listening position. If the radius of the virtual (circular) array is greater than the distance to the faces of the cube, we might also get a sense of sound-to-source distance that goes beyond the (imaginary) sides formed by the cubical array. But because distance-to-source judgments depend on familiarity with a sound or SNR, I'd rather rely on objective results obtained via this proposed iterative recording process.
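(One possible way to quantify that match, sketched in Python under the assumption that the two passes are available as time-aligned 4-channel B-format WAV files; the file names and band centres below are placeholders, not part of the proposal.

# Compare octave-band RMS levels of each B-format channel (W, X, Y, Z) between
# the original recording and the recording-of-a-recording.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

def band_rms_db(x, fs, fc):
    lo, hi = fc / np.sqrt(2), fc * np.sqrt(2)      # octave band around fc
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    y = sosfilt(sos, x)
    return 20 * np.log10(np.sqrt(np.mean(y ** 2)) + 1e-12)

first, fs = sf.read("pass1_bformat.wav")           # shape (nsamples, 4): W, X, Y, Z
second, _ = sf.read("pass2_bformat.wav")

centres = [125, 250, 500, 1000, 2000, 4000]
for ch, name in enumerate("WXYZ"):
    diffs = [band_rms_db(second[:, ch], fs, fc) - band_rms_db(first[:, ch], fs, fc)
             for fc in centres]
    print(name, " ".join(f"{d:+5.1f}" for d in diffs), "dB re pass 1")
)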
Maybe my idea is not original (though it is independently conceived), or even 
the bestest of ideas. But then, it isn't beneath me to put round pegs in square 
holes and do my own experimentation. Note that this experiment is void of 
music and doesn't require human subjects, but it is all about Ambisonics.

Best to All,
Eric 'Blockhead' C.

**URL to photo is www.cochlearconcepts.com/for_sursound/tetra_mount.jpg


Re: [Sursound] A higher standard of standardness

2013-07-03 Thread Jörn Nettingsmeier

On 07/03/2013 06:31 AM, Robert Greene wrote:


I apologize if people took offense.


fwiw, i did not take offense at your clear preference for realistic 
recordings (which i share and aspire to as well). i do object to 
hand-wavey cultural pessimism that postulates the end of scientific 
thinking.


stereophonic techniques have been scrutinized and researched in very 
great depth and detail, and test recordings of the sort you were 
alluding to are routinely done by sound engineering students and 
seasoned recordists alike. the papers and data are out there.


stating otherwise doesn't change that fact. let's not make sursound into 
a boring solipsistic debate club that negates everything which hasn't 
been discussed here before.


snip

Except in audio, where no simple question ever seems to
get definitively answered and almost every discussion turns into
mush by means of enlarging the complexity of the situation
to the point that there are so many variables that no analysis is
possible without wild difficulties, if at all.

Personally, I would just like to know which mike technique
does what to the tonal character of sources at different
locations around the recording stage. If you don't care, you
don't care. But I wish I had a disc where I could listen
and find out. I find it hard to believe that other people
are not interested in this.


that's because they demonstrably _are_ interested in this.

it's just not as easy as you make it sound.

let's begin with the simple definition of tonal character.
you won't be able to separate tonal character from spatial rendition. 
coloration and comb filtering are a fact of life, and a perfectly 
uncolored monophonic source will often sound less pleasing than a 
comb-filtered stereo reproduction (unless your listening room helps a 
bit). moreover, the brain is able to extrapolate from severely 
comb-filtered sensory input and gives us the impression of hearing an 
uncolored auditory event. good luck simplifying that :) i'm looking 
forward to hearing about your test design.
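(A tiny numerical illustration of the comb filtering mentioned above, with an arbitrary 1 ms path difference -- roughly 34 cm -- as the example delay:

# Summing a signal with a delayed copy of itself notches the spectrum at
# regular intervals (a comb filter). 1 ms is an arbitrary example delay.
import numpy as np

delay = 0.001                                   # seconds (~34 cm path difference)
f = np.linspace(20.0, 20000.0, 2000)            # audio band, Hz
# magnitude of H(f) = 1 + exp(-j 2 pi f delay); nulls at f = (2k + 1) / (2 delay)
mag_db = 20 * np.log10(np.abs(1 + np.exp(-2j * np.pi * f * delay)) + 1e-12)
nulls = (2 * np.arange(40) + 1) / (2 * delay)

print("first nulls (Hz):", nulls[nulls < 20000][:5].astype(int))   # 500 1500 2500 ...
print(f"peak boost: {mag_db.max():.1f} dB")                        # about +6 dB
)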



Science works like that:one step at a time. Assuming that
people are interested in science.


yeah, that's why we have complete understanding of the human brain. 
because it's sooo easy to understand, if only people would read more 
sursound and not add needless complications. come on!



Years ago I decided to learn the piano(I am a violinist!)
just to see how it would go, by learning the Rachmaninoff 3rd
piano concerto --a measure at a time. As you can imagine I
did not get very far!


q.e.d.

your approach to scientific evaluation of recording techniques seems 
similar.



Audio seems to be missing a lot of the basics.


yes, because psychoacoustics is _hard_.


PS There is a good bit of this sort of thing about
LOCALIZATION. But not so much about timbre.


check out for example theile's spectral objection to summing 
localization, but do get a case of wine and cigars before you dig in, 
because it's going to be a loong and very interesting night if you 
follow through some more papers.


best,


jörn



--
Jörn Nettingsmeier
Lortzingstr. 11, 45128 Essen, Tel. +49 177 7937487

Meister für Veranstaltungstechnik (Bühne/Studio)
Tonmeister VDT

http://stackingdwarves.net



Re: [Sursound] Ambisonics - Decoding 16 channels in DAW

2013-07-03 Thread Fons Adriaensen
On Wed, Jul 03, 2013 at 01:15:58PM -0400, Daniel Courville wrote:

 Two instances of Harpex-B, each one on a separate output bus,
 in shotgun mode, each decoding to eight shotguns, using the
 Cube preset as a starting point.


Readers of this list will know me as one of those who, whenever
Ambisonic file formats etc. are discussed, will spoil the fun
by stating that there's no life below third order or so. And of
course I'm still of that opinion - if you want a system able to
emulate whatever speaker layout and to work over an extended
listening area, things start to work at third order.

But that doesn't mean that first order doesn't work. It can work
incredibly well in good conditions.

Ten days ago I spent an extended weekend at the music conservatory
of Pesaro (Italy), where David Monacchi (who is a teacher at the
conservatory) has built an electronic music studio featuring a
3rd order periphonic Ambisonic system using 21 speakers. The room
has had extensive acoustical treatment; the only remaining problems
are some low frequency room modes (bass traps are being installed
to deal with those). I got involved in specifying the Ambisonic
speaker layout and decoder.

David has also made field recordings (Ambisonic, stereo and
binaural) in primary forests in various places around the globe.
These are absolutely fascinating - if you were at the first AMB
convention in Graz you will remember his presentation.

I had already made a third order Ambdec preset for this room.
But since we had little real third order material to test with,
we spent a lot of time listening to David's field recordings
and to some others he made recently using an ST450. David was
using several instances of Harpex to render those. This worked,
but neither of us were really satisfied with the results. So 
I created a first order Ambdec preset using a subset (12) of
the available speakers. The results were astonishing. Suddenly
there was depth, perspective, involvement, and an uncanny sense
of realism. It took both of us half a minute or so to adapt to
it - something that has been reported before. But after that
short time it was really a completely different experience.

In short: the Ambisonic magic only works when things are done
right. Using virtual shotgun mics pointing at some arbitrary
collection of speakers (or even a near optimal one as in this
case) may produce some effect, but it doesn't even come close
to what can be achieved. And in fact it has little or nothing
to do with real Ambisonic reproduction.

Ciao,

-- 
FA

A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
