Re: [Sursound] does anyone have harry f. olson, gradient microphones, jasa 1946?

2010-12-20 Thread Aaron Heller
I sent it to Jörn directly.  RCA made a 2nd-order microphone at one point:

http://www.coutant.org/bk10a/

Aaron

2010/12/20 Jörn Nettingsmeier netti...@stackingdwarves.net:
 it's cited in philip cotterell's thesis as the original source of the
 second-order gradient microphone. i can't believe it's not on the web,
 given its ripe old age.
 any pointers would be welcome. (and no, i didn't find it in the
 motherlode either.)

 thanks in advance,

 jörn


 --
 Jörn Nettingsmeier
 Lortzingstr. 11, 45128 Essen, Tel. +49 177 7937487

 Meister für Veranstaltungstechnik (Bühne/Studio), Elektrofachkraft
 Audio and event engineer - Ambisonic surround recordings

 http://stackingdwarves.net

 ___
 Sursound mailing list
 Sursound@music.vt.edu
 https://mail.music.vt.edu/mailman/listinfo/sursound



Re: [Sursound] Domestic speaker layout (was Re: Minim AD7 for sale)

2011-05-02 Thread Aaron Heller
On Mon, May 2, 2011 at 4:25 PM, Stefan Schreiber st...@mail.telepac.pt wrote:
 Jörn Nettingsmeier wrote:


 for larger audiences, the game seems to be a bit different, although i
 don't quite understand why. i find 1st order over eight speakers covers
 a larger area more easily and uniformly than six, but the theory says it
 shouldn't, because outside the sweet spot, rV reconstruction is mostly
 not happening and everything relies on rE, which is degraded by using
 more speakers than necessary...

r_E is not degraded. r_E remains the same for any uniform layout with
more than the minimum number of speakers.
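This invariance is easy to check numerically. Below is a minimal sketch (my own illustration, not code from this thread), assuming a plain first-order sampling ("basic") decode G_j = (1 + 2 cos(theta - phi_j))/N over a regular N-gon; |rE| comes out the same for every N at or above the minimum of four:

```python
import math

def rE_magnitude(n_spkrs, src_az=0.3):
    """Energy-vector magnitude |rE| for a first-order 'basic' decode
    over a regular polygon of n_spkrs speakers (the decode
    normalization here is an assumption for illustration)."""
    phis = [2 * math.pi * j / n_spkrs for j in range(n_spkrs)]
    gains = [(1 + 2 * math.cos(src_az - p)) / n_spkrs for p in phis]
    e = sum(g * g for g in gains)                       # total energy
    ex = sum(g * g * math.cos(p) for g, p in zip(gains, phis))
    ey = sum(g * g * math.sin(p) for g, p in zip(gains, phis))
    return math.hypot(ex, ey) / e

for n in (4, 6, 8, 12):
    print(n, round(rE_magnitude(n), 4))   # 0.6667 for every n
```

With three speakers the third-harmonic terms no longer cancel and the identity breaks down, which is consistent with the four-speaker 2-D minimum Aaron cites elsewhere in this archive.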

 But why?

 It is important for theoretical reasons to discuss this a bit more; it is
 not at all self-evident, I would think.


I believe the reasoning is that when there are many speakers producing
almost the same signal, you get more comb filtering effects when
moving out of the sweet spot.  This is discussed in

 Solvang, Audun, "Spectral Impairment of Two-Dimensional Higher-Order
Ambisonics," JAES, Volume 56, Issue 4, pp. 267-279; April 2008.

However, the paper doesn't specify the decoder used.  I've seen many
reports of comb-filter effects and then found that "matching" (aka
velocity, basic) decoders were being used at HF.  (We discuss this in
BLaH3.)  In fact, the only time I've ever heard comb filtering in
Ambisonic playback is with incorrect decoders.

I will say that when Eric, Richard, and I set up a demo during the
2008 AES in San Francisco, using the 24-speaker hemisphere array at
the Bubble, there was a distinct HF dip at the sweet spot, but we
didn't have precise info about the locations of the speakers and
didn't have a lot of time to make measurements or tweak the decoder
before the demo.  Moving 10-20cm away from the sweet spot in any
direction restored the HF response, but there were no comb filtering
effects.  The configuration was 3 rings of eight speakers, with some
of the speakers behind a projection screen.

For the informal listening tests reported in our papers, the speaker
location error is less than 1 cm. This is verified by placing an omni
microphone at the listening spot and driving the speakers with
impulses.
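The time-of-flight arithmetic behind that verification is straightforward; in this sketch the sample rate and speed of sound are assumed values, not details from the papers:

```python
C = 343.0     # speed of sound, m/s at ~20 C (assumed)
FS = 48000.0  # sample rate, Hz (assumed)

def distance_m(arrival_samples):
    """Speaker-to-microphone distance implied by an impulse arriving
    after the given number of samples."""
    return arrival_samples * C / FS

# One sample of arrival-time error is ~7 mm of distance error,
# so sub-centimeter verification is practical at 48 kHz.
print(round(distance_m(1) * 1000, 2), "mm per sample")
```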

Since the topic is domestic speaker layouts, I have a permanent four
speaker rectangle set up in our living room, with two more speakers on
stands that can be put in place to make a hexagon.  I've also
experimented with an eight-speaker bi-rectangle for 1st-order
periphony, but that stretched the limits of domestic acceptability.

Aaron Heller hel...@ai.sri.com
Menlo Park, CA  US


Re: [Sursound] Minim AD7 for sale

2011-05-02 Thread Aaron Heller
2011/5/1 Jörn Nettingsmeier netti...@stackingdwarves.net:

 i've been flabbergasted time
 and again how people could be totally unimpressed by first-order
 ambisonic systems that to me were between pretty good and totally
 awesome.
 it's still a conjecture, and i haven't tried to confirm it
 experimentally, but i'm convinced that lower-order ambisonic listening
 takes training - when your brain has learned to discard all the bogus
 cues, the curtain opens.

 that could explain why many people are perfectly content with their own
 FOA systems, and also why they have so few friends to share their passion.

I think Jörn has made several important points here about learning to listen.

The benefits of good 1st-order playback for me are the sense of
envelopment and the accuracy of timbre.  Those take a while to
appreciate and I have the advantage of having many hours of b-format
recordings made in halls I know very well, so I have an absolute
reference.

 Many people are accustomed to hearing sounds come out of individual
speakers, like on _Kind of Blue_ or _Sgt. Pepper_.  I put on a demo
here at work (one of our conference rooms has a squashed hexagon
array).  People were generally impressed by the sound, but a number of
people walked over to the individual speakers and were disappointed
that they could hear the violins in the front-right speaker and the
basses in the front-left speaker.

Also... I've noticed accommodation effects (for lack of a better term)
when listening to panned test signals, both positive and negative.

  .  after 15-20 minutes of listening to panned noise and switching
between different arrays and decoders, I find the localization gets
completely ambiguous.  Taking a short break restores my localization
abilities.

  .  during the listening tests for BLaH4, with some decoders and
listening to eight directions, localization was indistinct to the
direct left and right, until I turned and looked in that direction
during the announcement, at which point the localization in that
direction became distinct and precise, and remained so after turning
back to the front for the remainder of the session.

Aaron Heller hel...@ai.sri.com
Menlo Park, CA  US


Re: [Sursound] Domestic speaker layout (was Re: Minim AD7 for sale)

2011-05-03 Thread Aaron Heller
2011/5/2 Jörn Nettingsmeier netti...@stackingdwarves.net:

 those slightly more speakers than necessary cases are a bit tricky...
 first order over a 24 hemisphere is horrible,

At the 2008 demo I wrote about, other than the anomaly at the exact
center, I thought it sounded pretty good.  So did most of the 60 or so
people who came through during the evening.  Definitely not horrible.

I played excerpts of some of my recordings that are on Ambisonia:
Stravinsky's Pulcinella Suite, Beethoven's 4th Symphony and
Appassionata Sonata, Dvorak Violin Concerto, Paul Doornbusch's Hampi
Bazaar, some of John Leonard's recordings, Jeffrey Silberman's too.

There was a quite convincing sense of the space in which the
recordings were made.   In the Stravinsky recording, you hear the
reverberation of the brass instruments moving around the hall, as it
actually does.  In the recording of the piano recital, you can hear a
slight slap echo from the front of the balcony above and behind the
microphone.   The contrast from indoors to outdoors is especially
striking.  As I said earlier, the envelopment and accuracy of timbre
are the keys for me -- they draw you into the performance.

I would have been happy to play some HOA recordings, but I don't have any.


 the most striking experience of horizontal first-order degradation over
 eight speakers was in the sala bianca in parma, using virtual ambisonic
 speakers on their wfs system. fons demo'ed ambi rendering, and we switched
 between six and eight sources.

Could you describe what you heard?  I'm genuinely curious.

 i'm sorry to muddy the clear and abundant spring of rigorous ambisonic
 evaluation with my shoddy anecdotal evidence and promise to either chase 50
 subjects through a bullet-proof listening test before speaking (like
 everyone here), or shut up forthwith. ;)

 for larger audiences, the game seems to be a bit different, although i
 don't quite understand why.

 Maybe the decoder is different?

 Don't get me wrong, but sometimes it is productive to pull someone's
 leg... O:-)

 well pulled, but no, the decoder was identical.

 --
 Jörn Nettingsmeier
 Lortzingstr. 11, 45128 Essen, Tel. +49 177 7937487

 Meister für Veranstaltungstechnik (Bühne/Studio)
 Tonmeister VDT

 http://stackingdwarves.net



Re: [Sursound] Minim AD7 for sale

2011-05-03 Thread Aaron Heller
On Tue, May 3, 2011 at 10:14 AM, Fons Adriaensen f...@linuxaudio.org wrote:
 On Tue, May 03, 2011 at 07:15:29PM +0530, umashankar mantravadi wrote:

 in fact angelo recommended that i arrange the eight speakers as two crossed 
 squares. two speakers in front and back, and four speakers mid bottom left 
 and right and mid top left and right. the only problem is i do not see a
 readymade decoder

 A variation on this is a horizontal rectangle, 1 unit wide and 1.73 deep,
 and a vertical rectangle in the YZ plane 1 unit high and 1.73 wide.
 Or the same rotated 90 degrees. Looking from above you see a hexagon.

 This somewhat improves the rE for horizontal directions (not much), at
 the expense of all others. Anything outside the +/- 30 degrees elevation
 region will become very fuzzy.

I had one of these set up at home for a couple of days and found it
better than a horizontal hexagon.  The impression of height is a
welcome addition to the sense of envelopment.   I had a fader set up
in Bidule so you could change the Z gain to compare horizontal-only
with periphonic. The entire listening panel (my son and I) preferred
having the height info, but the rest of the family didn't appreciate
having a couple of ladders in the living room.  I also tried it with
the vertical rectangle in the XZ plane.  That works too.  Contact me
off list if you want the Ambdec config files I used.
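For reference, the geometry Fons describes works out as follows (a sketch under assumed conventions: X forward, Y left, Z up, and 1.73 read as sqrt(3)). The horizontal rectangle's corners land at azimuths +/-30 and +/-150 degrees and the vertical rectangle's at +/-90 degrees with +/-30 degrees elevation, giving the hexagonal plan view:

```python
import math

s3 = math.sqrt(3) / 2   # half of the 1.73 side, reading 1.73 as sqrt(3)

# Horizontal rectangle in the XY plane: 1 unit wide (Y), 1.73 deep (X).
horiz = [(s3, 0.5, 0), (s3, -0.5, 0), (-s3, 0.5, 0), (-s3, -0.5, 0)]
# Vertical rectangle in the YZ plane: 1.73 wide (Y), 1 unit high (Z).
vert = [(0, s3, 0.5), (0, s3, -0.5), (0, -s3, 0.5), (0, -s3, -0.5)]

for x, y, z in horiz + vert:
    az = math.degrees(math.atan2(y, x))
    el = math.degrees(math.atan2(z, math.hypot(x, y)))
    print(f"azimuth {az:7.1f}   elevation {el:6.1f}")
# Plan view: azimuths +/-30 and +/-150 (horizontal ring) plus +/-90
# twice (vertical ring, at +/-30 elevation) -- a hexagon from above.
```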

Aaron Heller hel...@ai.sri.com
Menlo Park, CA  US


Re: [Sursound] Distance perception

2011-07-26 Thread Aaron Heller
Some papers that may be of interest:

Takahashi, "A Novel View of Hearing in Reverberation," Neuron, Volume
62, Issue 1, pp. 6-7, 16 April 2009.
doi:10.1016/j.neuron.2009.04.004

Devore, et al., "Accurate Sound Localization in Reverberant
Environments Is Mediated by Robust Encoding of Spatial Cues in the
Auditory Midbrain," Neuron, Volume 62, Issue 1, pp. 123-134, 16 April 2009.
doi:10.1016/j.neuron.2009.02.018

Antje Ihlefeld and Barbara G. Shinn-Cunningham, "Effect of source
spectrum on sound localization in an everyday reverberant room," J.
Acoust. Soc. Am., Volume 130, Issue 1, pp. 324-333 (2011).
http://dx.doi.org/10.1121/1.3596476

--
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US


Re: [Sursound] new technology?

2011-08-31 Thread Aaron Heller
On Wed, Aug 31, 2011 at 6:06 PM, umashankar mantravadi
umasha...@hotmail.com wrote:


 just read about this in the asa newsletter. anyone knows anything about it? 
 http://www.visisonics.com/Products/AudioCamera.html

It's a commercial spinoff of work by Ramani Duraiswami and Adam
O'Donovan at the University of Maryland.

   more here:  http://www.umiacs.umd.edu/~ramani/

The 64 capsules are positioned at the Fliege points on the sphere.

   http://www.personal.soton.ac.uk/jf1w07/nodes/nodes.html


--
Aaron J. Heller hel...@ai.sri.com
Menlo Park, CA  US


Re: [Sursound] new technology?

2011-08-31 Thread Aaron Heller
Here's a more direct URL:  http://www.umiacs.umd.edu/~odonovan/Audio_Camera/

Aaron



Re: [Sursound] problem with jconvolver on osx

2011-10-12 Thread Aaron Heller
For N x M channel convolutions on MacOS (or Windows), there's
BrahmaVolver.  It is a VST plugin that I've used with Bidule on MacOS
with good results.

  http://www.aurora-plugins.com/Public/Brahma/Brahmavolver/

It comes up as a 2x2 initially, but you can adjust that in the Setup
panel.  Then you have to delete that instance and make another one.


As far as multithreaded, cross-platform stuff goes, I've switched to
using the Intel Threading Building Blocks (TBB) for my image processing
and machine learning work.  It lets you code at a higher level of
abstraction (functors, multidimensional iterators, parallel for-loops,
flow graphs, and so forth), and then manages the threads and
synchronization for you.  There is a free and open-source version,
licensed under GPLv2 with the runtime exception.  See

   http://threadingbuildingblocks.org/
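The attraction is the level of abstraction: you state what is parallel and the runtime handles scheduling. A rough Python analogue of TBB's parallel map (my illustration only; TBB itself is C++):

```python
from concurrent.futures import ThreadPoolExecutor

def process(block):
    """Stand-in for per-block work, e.g. one partition of a long convolution."""
    return sum(x * x for x in block)

blocks = [list(range(i, i + 1000)) for i in range(0, 8000, 1000)]

# Like TBB's parallel_for, we express the data-parallel loop and let
# the executor manage the threads and synchronization.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(process, blocks))
print(len(results))   # 8 results, one per block
```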

I haven't studied zita-convolver in any detail, so I don't know how
amenable it would be to that kind of approach.

Best...

Aaron Heller hel...@ai.sri.com
Menlo Park, CA  US


Re: [Sursound] problem with jconvolver on osx

2011-10-12 Thread Aaron Heller
Umashankar is correct.  I meant X-volver.  Sorry for the confusion.

   http://www.ramsete.com/Public/Xvolver/

There is a screenshot here:
   http://www.desknotes.it/siten/Software_files/XVolver8x8.png


Aaron



On Wed, Oct 12, 2011 at 7:30 PM, umashankar mantravadi
umasha...@hotmail.com wrote:

 brahmavolver is a standalone program, not a vst plug-in. or so i thought (i
 have version .71 for windows with me)  umashankar

 i have published my poems. read (or buy) at http://stores.lulu.com/umashankar


Re: [Sursound] online multichannel release

2011-11-25 Thread Aaron Heller
Marinos Koutsomichalis mari...@agxivatein.com wrote:
 but still I'm not quite sure about the most important issue:
 which is the most 'common' file-format for such things ?

In terms of installed base of players, AC3 and DTS are the most common
formats for delivery of surround audio.  VLC player can decode either
one, as can the DVD playing software preinstalled on many PCs.
Ambisonia and Nimbus have distributed 4-channel G-format ('speaker
feed') files in DTS-WAV format, which is DTS encoded audio in a
RIFF/WAV wrapper that can be burnt to a CD and played in most home
theater setups.  Judging from the limited statistics I had access to
and the comments on the site, many people downloaded, played
successfully, and enjoyed the DTS-WAV files distributed on Ambisonia.
If you need help with any of this, feel free to ask.

--
Aaron Heller hel...@ai.sri.com
Menlo Park, CA  US


Re: [Sursound] Can anyone having ISO 9613-1 check this ?

2011-11-27 Thread Aaron Heller
For what it's worth, there's an implementation in the MATLAB File
Exchange with a comment claiming that it is an accurate implementation
of ISO 9613-1.  See

http://www.mathworks.com/matlabcentral/fileexchange/6000-atmospheric-attenuation-of-sound

It cites:

  Bass, et al., Journal of the Acoustical Society of America, (97),
  pg 680, January 1995.
  Bass, et al., Journal of the Acoustical Society of America, (99),
  pg 1259, February 1996.
  Kinsler, et al., Fundamentals of Acoustics, 4th ed., pg 214,
  John Wiley & Sons, 2000.
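For readers without MATLAB, the commonly published form of the ISO 9613-1 pure-tone attenuation formula can be sketched in Python. This is my transcription of the widely reprinted equations, not the File Exchange code; verify it against the standard before relying on it:

```python
import math

def iso9613_alpha(f, temp_c=20.0, rel_hum=70.0, pa_kpa=101.325):
    """Pure-tone atmospheric absorption in dB/m, after the commonly
    published form of ISO 9613-1 (verify against the standard)."""
    T, T0, T01, pr = temp_c + 273.15, 293.15, 273.16, 101.325
    p = pa_kpa / pr
    # Molar concentration of water vapour (%), from relative humidity.
    psat = 10 ** (-6.8346 * (T01 / T) ** 1.261 + 4.6151)
    h = rel_hum * psat / p
    # Relaxation frequencies of oxygen and nitrogen (Hz).
    frO = p * (24 + 4.04e4 * h * (0.02 + h) / (0.391 + h))
    frN = p * (T / T0) ** -0.5 * (
        9 + 280 * h * math.exp(-4.170 * ((T / T0) ** (-1 / 3) - 1)))
    return 8.686 * f ** 2 * (
        1.84e-11 / p * (T / T0) ** 0.5
        + (T / T0) ** -2.5 * (
            0.01275 * math.exp(-2239.1 / T) / (frO + f ** 2 / frO)
            + 0.1068 * math.exp(-3352.0 / T) / (frN + f ** 2 / frN)))

for f in (125, 1000, 4000, 8000):
    print(f, "Hz:", round(iso9613_alpha(f) * 1000, 2), "dB/km")
```

At 20 C and 70% RH this gives roughly 5 dB/km at 1 kHz, rising steeply with frequency, which is the behavior the cited references describe.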

Aaron
--
Aaron Heller hel...@ai.sri.com
Menlo Park, CA  US


Re: [Sursound] UHJ decoding and shelf filters

2011-12-16 Thread Aaron Heller
On Fri, Dec 16, 2011 at 3:33 AM, Fons Adriaensen f...@linuxaudio.org wrote:
 The matrix columns are W,X,Y.

I gather that Ambdec version 3 files (v. 0.5.1) use ACN for the
coefficient ordering, which would make it W, Y, X.

For example, the LF coefficients you give for the CE speaker (dir=[1,0,0]) are

   add_row     0.102380  0.00  0.311540
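The reordering is mechanical; in this sketch (the channel labels are the only assumption), the row above, read in the file's ACN order (W, Y, X), puts 0.311540 into the X slot when converted back to FuMa (W, X, Y), as a center speaker requires:

```python
# AmbDec v3 rows in ACN order (W, Y, X); the older FuMa order is (W, X, Y).
ACN, FUMA = ("W", "Y", "X"), ("W", "X", "Y")

def acn_to_fuma(row):
    """Reorder one first-order decoder row from ACN to FuMa column order."""
    d = dict(zip(ACN, row))
    return [d[ch] for ch in FUMA]

# The CE-speaker LF row from the preset file, as it appears (ACN):
ce_acn = [0.102380, 0.000000, 0.311540]
print(acn_to_fuma(ce_acn))   # [0.10238, 0.31154, 0.0]
```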



It's in the high-50s (F) and about 40% RH here in Menlo Park.

--
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US



On Fri, Dec 16, 2011 at 3:33 AM, Fons Adriaensen f...@linuxaudio.org wrote:
 On Fri, Dec 16, 2011 at 09:34:04AM -, Richard Lee wrote:

 Does anyone have Fons' 5.1 matrices handy that they could
 send me?  I can't promise anything quick as it's 35C in the
 shade and 100% RH in Cooktown and my small brain is boiling.

 Here (Parma) it's around 3C and 100% RH (thick mist) in the
 evening. I'd prefer Cooktown (provided there's some sea nearby).

 This is the complete preset file:

 -

 # AmbDec configuration
 # Written by Makedec-0.8.0 at Sun Sep  4 22:29:44 2011

 /description      ITU 5.1 decoder

 /version          3

 /dec/chan_mask    b
 /dec/freq_bands   2
 /dec/speakers     5
 /dec/coeff_scale  fuma

 /opt/input_scale  fuma
 /opt/nfeff_comp   input
 /opt/delay_comp   off
 /opt/level_comp   off
 /opt/xover_freq    600
 /opt/xover_ratio   0.0

 /speakers/{
 add_spkr    LS     1.500    110.0      0.0    system:playback_1
 add_spkr    LF     1.500     30.0      0.0    system:playback_2
 add_spkr    CE     1.500      0.0      0.0    system:playback_3
 add_spkr    RF     1.500    -30.0      0.0    system:playback_4
 add_spkr    RS     1.500   -110.0      0.0    system:playback_5
 /}

 /lfmatrix/{
 order_gain     1.0  1.0  1.0  1.0
 add_row     0.512590  0.414680 -0.396620
 add_row     0.143330  0.220650  0.240850
 add_row     0.102380  0.00  0.311540
 add_row     0.143330 -0.220650  0.240850
 add_row     0.512590 -0.414680 -0.396620
 /}

 /hfmatrix/{
 order_gain     2.05000  1.13000  1.0  1.0
 add_row     0.312680  0.252960 -0.241940
 add_row     0.172000  0.264780  0.289010
 add_row     0.071660  0.00  0.218080
 add_row     0.172000 -0.264780  0.289010
 add_row     0.312680 -0.252960 -0.241940
 /}


 /end


 -

 The matrix columns are W,X,Y. W is multiplied
 by the first order_gain, X and Y by the second.

 Ciao,

 --
 FA

 Vor uns liegt ein weites Tal, die Sonne scheint - ein Glitzerstrahl.



Re: [Sursound] approximate solutions to the ambisonic decoding problem

2011-12-29 Thread Aaron Heller
On Thu, Dec 29, 2011 at 3:06 PM, Franz Zotter zot...@iem.at wrote:

 ... see papers of last Ambisonics Symposia ...

Are these available?  The conference site still shows "To be determined."

  http://www.vis.uky.edu/ambisonics2011/proceedings.php


Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US


Re: [Sursound] approximate solutions to the ambisonic decoding problem

2012-01-07 Thread Aaron Heller
On Sat, Jan 7, 2012 at 1:48 AM, Franz Zotter zot...@iem.at wrote:
 Hi,

 On Friday 30 December 2011 03:37:42 Aaron Heller wrote:
 Are these available?  The conference site still shows To be determined.
   http://www.vis.uky.edu/ambisonics2011/proceedings.php

 I asked the authors for their papers to provide a replacement solution on our
 webspace:
 http://ambisonics-symposium.org/proceedings-of-the-ambisonics-symposium-2011

Thanks, Franz.  I've been reading through your Acta Acustica paper.
Nice work, well written.  I'll have some questions at some point, as I
have access to two arrays that would benefit from the techniques you
outline.

For others, it is in a special issue on Ambisonics and Spherical
Acoustics.  Lot's of relevant papers.

  http://www.ingentaconnect.com/content/dav/aaua/2012/0098/0001

Best...

Aaron  (hel...@ai.sri.com)
Menlo Park,  CA  US


Re: [Sursound] Decoding coefficients for non symmetrical setups

2012-02-29 Thread Aaron Heller
The code that goes with the LAC2012 conference paper does 3-D and
higher orders.  In fact, we used it to make a new 3rd-order Ambdec
config for CCRMA's 22-speaker array.  It's written in MATLAB/GNU
Octave, and it's not a lot of code, so there is plenty of opportunity
for tinkering with the goal functions.

One comment that is not in the LAC paper: with fewer than 50-70
parameters, the non-linear optimizer works quite well and converges
quickly (in less than a few minutes).  Five speakers at first order in
2-D is 15 parameters.  Twenty-two speakers at 3rd order is 352
parameters, so some strategy is needed to guide it; the decoder for
CCRMA took about 2 hours.  Constraints and exploiting symmetries to
reduce the number of parameters would help, but we haven't
experimented with that yet.

The bulk of the computation is matrix multiplies, so it would be
amenable to a CUDA/GPU implementation.
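The parameter counts quoted above follow directly from the decode-matrix shape; a sketch (assuming a dense matrix of N_speakers x N_channels coefficients):

```python
def n_parameters(n_speakers, order, dims=3):
    """Free coefficients in a dense ambisonic decoder matrix."""
    n_channels = (order + 1) ** 2 if dims == 3 else 2 * order + 1
    return n_speakers * n_channels

print(n_parameters(5, 1, dims=2))   # 15: five speakers, 1st order, 2-D
print(n_parameters(22, 3))          # 352: CCRMA's 22 speakers at 3rd order
```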

Aaron  (hel...@ai.sri.com)
Menlo Park, CA  US


Re: [Sursound] Naive HOA Questions

2012-03-04 Thread Aaron Heller
On Fri, Mar 2, 2012 at 4:40 PM, Jörn Nettingsmeier
netti...@stackingdwarves.net wrote:
 On 03/02/2012 08:02 PM, j.l.ander...@phonecoop.coop wrote:
 A few naive HOA Qs I'm hoping to gain some insight on...
 1) Order (M) is related to minimum number of loudspeakers by:

 (M + 1)^2 for 3D
 2M + 1 for 2D

These are not correct.  To get the HF localization (rE) aligned with
the LF localization (rV), you need at least four speakers for 2-D and
six for 3-D, for first-order playback.  For an illustration of the
1st-order, 3-D case, see

http://www.ai.sri.com/ajh/ambisonics/tetra-v-cube.html
http://www.ai.sri.com/ajh/ambisonics/tetra-rVrE/tetra.pdf
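Note the distinction being drawn: (M + 1)^2 and 2M + 1 give the number of transmission channels at order M, while the rV/rE-alignment argument asks for more speakers than that at first order (at least 4 in 2-D, 6 in 3-D). A sketch of the channel-count side:

```python
def channels(order, dims=3):
    """Ambisonic transmission-channel count at a given order."""
    return (order + 1) ** 2 if dims == 3 else 2 * order + 1

# First order: 3 channels in 2-D, 4 in 3-D -- but per the tetra-vs-cube
# illustration, aligning rE with rV wants 4 (2-D) or 6 (3-D) speakers.
for dims in (2, 3):
    print(f"{dims}-D, 1st order: {channels(1, dims)} channels")
```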

[...]
 if you have to play back lower order material as well (such as a native
 soundfield recording), i found it is advisable to have a separate low-order
 decoder which uses fewer speakers, for better clarity and less phasing.

 aaron heller disputes this, he claims to have observed no detrimental
 effects in vastly over-specified systems, and if you look at simulations
 where N-oo, he should be right, but in practice i have found systems with
 many more speakers than strictly necessary to be significantly worse in
 terms of phasiness... maybe others can comment and clarify.

I would not say that I've heard no detrimental effects on large arrays
-- I've heard all sorts of problems, but in every case I've been able
to attribute them to other causes: a bad decoder, errors in speaker
positions, feeding 1st-order signals into a HOA decoder, and so forth.

For example, decoders where the HF/LF balance is set using the
"conservation of total energy" approach from Daniel's thesis
emphasize HF more and more as the number of speakers goes up.  I find
this causes the tonal balance on large arrays to be wrong, as well as
creating near-head artifacts (indicating that the sound at the left
and right ears is very different).  Reducing the HF/LF balance by 2 to 4
dB fixes this. I use the Stravinsky and Beethoven recordings I posted
to Ambisonia to judge this, as I am quite familiar how they sound when
reproduced correctly.

In reports in the literature, I find that either an incorrect decoder
has been used or there is no succinct statement of the decoding
criteria employed, e.g., Solvang's 2008 JAES paper (v. 56, #4, p. 267).
Same with anecdotal reports.

Aaron (hel...@ai.sri.com)
Menlo Park, CA  US


Re: [Sursound] blindly identifying matrix encodings

2012-03-10 Thread Aaron Heller
Following up, I made a UHJ version of Eight Directions.  Grab it and the
MATLAB code that made it from

  http://ambisonics.dreamhosters.com/  (at the bottom of the page)

As you can see from the code, it is easy to plug in any coefficients needed.
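For the curious, here is the shape of that computation in Python rather than MATLAB. The 2-channel UHJ coefficients below are the commonly published ones (verify before use), and the 90-degree phase shift ("j") is realized for a single sinusoid by its quadrature component, so no Hilbert transform is needed. This is my illustration, not the code on the site:

```python
import math

# Classic 2-channel UHJ encoding from horizontal B-format (W, X, Y):
#   Sigma = 0.9397*W + 0.1856*X
#   Delta = j*(-0.3420*W + 0.5099*X) + 0.6555*Y   (j = +90 deg phase shift)
#   Left = (Sigma + Delta)/2,  Right = (Sigma - Delta)/2

def uhj_encode_sine(az_deg, n=64):
    """Encode one cycle of a sinusoid panned to azimuth az_deg (degrees)."""
    az = math.radians(az_deg)
    w_g, x_g, y_g = 1 / math.sqrt(2), math.cos(az), math.sin(az)  # FuMa pan gains
    left, right = [], []
    for i in range(n):
        ph = 2 * math.pi * i / n
        s, q = math.sin(ph), math.cos(ph)   # signal and its +90 deg shift
        sigma = (0.9397 * w_g + 0.1856 * x_g) * s
        delta = (-0.3420 * w_g + 0.5099 * x_g) * q + 0.6555 * y_g * s
        left.append((sigma + delta) / 2)
        right.append((sigma - delta) / 2)
    return left, right

l, r = uhj_encode_sine(90)    # hard-left source
print(sum(v * v for v in l) > sum(v * v for v in r))   # True
```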


On Fri, Mar 9, 2012 at 2:07 PM, Aaron Heller hel...@ai.sri.com wrote:

 Perhaps a useful first step would be to take some B-format recordings from
 Ambisonia and encode them into the various systems, so we could listen to
 the differences in some controlled manner.   That would also comprise a
 reference data set to test automated algorithms.

 Sorry if this has been answered before, but is there a concise summary of
 the encoding and decoding equations for the formats?  With that in hand, it
 should be a quick MATLAB/Octave exercise to encode some B-format recordings.


 On Fri, Mar 9, 2012 at 10:22 AM, Eero Aro eero@dlc.fi wrote:

 Richard Lee:

  I'm actually listening to the stereo presentation of the reverb
 rather than individual sources.


 My respect!
 But I know what you are talking about. However, your skills need a lot of
 listening experience.


  You'll probably find a stereo fig-8 @ 90 decode from the B-format
 gives sharper images than UHJ encoding listened in stereo of the
 same B-format.


 Naturally.

 Eero






Re: [Sursound] Can anyone help with my dissertation please?

2012-04-02 Thread Aaron Heller
Um.  Every single recording on Ambisonia was available as a
DTS-CD RIFF/WAV file of a 4.0 decode (that is, Center and LFE were
silent).  All one needed to do was burn them to a CD and play it in a
DVD player connected to a 5.1 home theater setup.  See Richard Elen's
article "Getting Ambisonics Around" for the technical details of the
process: http://www.ambisonic.net/pdf/ambisonics_around.pdf

I know there were several hundred downloads of my recordings in that
format -- Stravinsky, Beethoven, Brahms, Dvorak, recorded with my
Soundfield MkIV for NPR's Performance Today.

--
Aaron Heller hel...@ai.sri.com
Menlo Park, CA  US


On Sun, Apr 1, 2012 at 4:28 PM, Neil Waterman
neil.water...@asti-usa.com wrote:
 I agree totally with Robert here.

 Most of my work mates have 5.1 set-ups at home, but would never be bothered
 to have anything that required more thought, so bring on the 5.1 mixes of
 ambisonic source material and at least let the masses get a listen.

 Cheers, Neil


 On 4/1/2012 6:44 PM, Robert Greene wrote:


 I don't think anyone thinks that! What people do think
 is that Ambisonics needs some sort of commercial accessibility--
 which it could get if discs were put out that provided not
 abstract Ambisonics as it were but Ambisonics as decoded
 to the 5.1 set up. The message was that no one (statistically speaking)
 in the real world wants anything that requires thought
 and effort.
 Given that Ambisonics can be decoded to any speaker setup
 (even if the result is not ideal), why are there no
 5.1 SACDs that show how Ambisonics works on a 5.1 setup?
 One cannot expect people to be interested in something they
 cannot hear in demo form.
 Robert

 On Sun, 1 Apr 2012, Augustine Leudar wrote:

 again to anyone who says things like ambisonics cant compete with 5.1
 please bear in mind this is like saying amplitude panning can't
 compete with 5.1 - it doesnt make any sense at all. You mix your
 tracks horizontally ,without elevation, using ambisonics plugins and
 burn your ac3/dts file like any other surround mix. Ambisonics is an
 approach to creating a soundfield it does not require any special
 hardware it can be done with software. The new 22.4 (or something)
 sound systems that cinemas are launching soon will allow height
 information as well. You could mix a lot of films using ambisonics
 when this happens.


Re: [Sursound] Can anyone help with my dissertation please?

2012-04-02 Thread Aaron Heller
Sorry I got bogged down in technical details.  Thanks for pointing
that out to me.

What I should have said was that every recording on Ambisonia (~250,
iirc) was available as a file that could be downloaded, burnt to CD,
and played on a plain old 5.1 home theater system -- presumably just
the kind that you, Neil, and his work mates have, as do many others.
No special playback setup needed -- just the same set of skills and
computer setup that were needed to get files with Napster and make CDs
from them, and I seem to recall that plenty of people were able to do
that.

Anyway, I put some files at

   http://ambisonics.dreamhosters.com/DTS/

There are a few more and some discussion at

   http://www.ambisonic.net/decodes.html

If needed, some instructions for burning and playing are on pages
10-13 of the SurCode manual

   
http://www.minnetonkaaudio.com/info/PDFs/Manuals/SurCode%20DTS%20CD%20Manual.pdf


Let me know what you hear...

Aaron




On Mon, Apr 2, 2012 at 8:25 AM, Robert Greene gre...@math.ucla.edu wrote:

  "All you need to do is..." is the end of the line here.
 Commercially, you might as well try to sell  a car
 where all you need to do to start it is to type
 in a ten digit code, sing Mary had a little lamb three times,
 and notify the post office.
 No one is going to go through this sort of thing in
 the statistical sense of no one.
 Most people do not even know what these words mean
 RIFF/WAV file ,4.0 decode etc
 Why would they want to find out?
 Robert


 On Sun, 1 Apr 2012, Aaron Heller wrote:

 Um.  Every single recording on Ambisonia was available as a
 DTS-CD RIFF/WAV file of a 4.0 decode (that is, Center and LFE were
 silent).  All one needed to do was burn them to a CD and play it in a DVD
 player connected to a 5.1 home theater setup.  See Richard Elen's
 article "Getting Ambisonics Around" for the technical details of the
 process.  http://www.ambisonic.net/pdf/ambisonics_around.pdf

 I know there were several hundred downloads of my recordings in that
 format -- Stravinsky, Beethoven, Brahms, Dvorak, recorded with my
 Soundfield MkIV for NPR's Performance Today.

 --
 Aaron Heller hel...@ai.sri.com
 Menlo Park, CA  US


 On Sun, Apr 1, 2012 at 4:28 PM, Neil Waterman
 neil.water...@asti-usa.com wrote:

 I agree totally with Robert here.

 Most of my work mates have 5.1 set-ups at home, but would never be
 bothered
 to have anything that required more thought, so bring on the 5.1 mixes of
 ambisonic source material and at least let the masses get a listen.

 Cheers, Neil


 On 4/1/2012 6:44 PM, Robert Greene wrote:



 I don't think anyone thinks that! What people do think
 is that Ambisonics needs some sort of commercial accessibility --
 which it could get if discs were put out that provided not
 abstract Ambisonics, as it were, but Ambisonics as decoded
 to the 5.1 set up. The message was that no one (statistically speaking)
 in the real world wants anything that requires thought
 and effort.
 Given that Ambisonics can be decoded to any speaker setup
 (even if the result is not ideal), why are there no
 5.1 SACDs that show how Ambisonics works on a 5.1 setup?
 One cannot expect people to be interested in something they
 cannot hear in demo form.
 Robert

 On Sun, 1 Apr 2012, Augustine Leudar wrote:

 again, to anyone who says things like "ambisonics can't compete with
 5.1": please bear in mind this is like saying amplitude panning can't
 compete with 5.1 -- it doesn't make any sense at all. You mix your
 tracks horizontally, without elevation, using ambisonics plugins and
 burn your ac3/dts file like any other surround mix. Ambisonics is an
 approach to creating a soundfield; it does not require any special
 hardware and can be done with software. The new 22.2 (or something)
 sound systems that cinemas are launching soon will allow height
 information as well. You could mix a lot of films using ambisonics
 when this happens.
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] OT: Spatial music

2012-04-04 Thread Aaron Heller
The 2011 paper by Nachbar, et al., "ambiX - A Suggested Ambisonics
Format," specifies SN3D as the normalization scheme.  (See eqn 3 in
section 2.1: "The normalization that seems most agreeable is SN3D...")
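
As an aside for readers converting material between conventions: going from
SN3D to the N3D normalization used in the earlier proposal is a per-degree
scaling by sqrt(2l+1).  A minimal sketch of that factor (the relation follows
from the standard definitions, not from the paper's text):

```python
import math

def sn3d_to_n3d_factor(degree):
    # Gain applied to every component of spherical-harmonic degree l
    # when converting SN3D-normalized signals to N3D.
    return math.sqrt(2 * degree + 1)

for l in range(4):
    print(l, round(sn3d_to_n3d_factor(l), 4))  # 1.0, 1.7321, 2.2361, 2.6458
```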

The papers are here
   http://ambisonics.iem.at/proceedings-of-the-ambisonics-symposium-2011

--
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US

On Wed, Apr 4, 2012 at 1:48 AM, Michael Chapman s...@mchapman.com wrote:


 Unless of course they publish a file format for it

 Want a minimal and purposely highly (even overtly) extensible one? That
 I can design. In fact I've meant to do something like this from teenage
 up. :)

 Please do!


 A group of us proposed a CAF based file format at Graz (in 2009)
 http://mchapman.com/amb/reprints/AFF.pdf
 It had a mixed response ;-)

 It has though been taken forward and a further proposal was
 made at the US Ambisonics symposium by Christian Nachbar (Graz)
 and colleagues. (N3D instead of SN3D, being one major change.)

 Time has brought greater agreement and stability.

 As I wasn't at York, and as the Graz folks are on this List, I
 won't give a reference as it would probably be out-of-date,
 anyway.

 So problem solved 

 Michael



___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] Blumlein versus ORTF

2012-04-04 Thread Aaron Heller
Last night I coded up a quick experiment using the KEMAR mannequin
HRTFs published by Algazi, et al. at UC Davis.
  http://interface.cipic.ucdavis.edu/sound/hrtf.html

It is very simple: simulate Blumlein pickup and playback through
speakers at +/- 45 degrees, low-pass at 800 Hz, use cross-correlation
to estimate the ITDs, and compare to the natural hearing case.  The
result is here

   http://www.ai.sri.com/ajh/ITDs_Blumlein_vs_Natural/

I was pleasantly surprised that such a simple experiment yielded such
a clean result.

The 800 Hz LPF is needed to get reliable results from the
cross-correlation.  As you point out, humans may use the higher
frequency information (I'd like to see a citation for that), but the
point here is that accurate ITDs are present in the ear signals in
stereo playback even though the soundfield has been sampled at a
single point in space.

The MATLAB code is here:
   http://www.ai.sri.com/ajh/ITDs_Blumlein_vs_Natural/itd_blumlein_plot.m
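
For readers without MATLAB, here is a stdlib-only Python sketch of just the
lag-estimation step: brute-force cross-correlation of a synthetic delayed
signal (the 48 kHz rate, 500 Hz tone, and 10-sample delay are illustrative,
not taken from the experiment):

```python
import math

def estimate_delay_samples(left, right):
    # Brute-force cross-correlation; returns the lag (in samples) at
    # which `right` best matches `left` (positive: right lags left).
    n = len(left)
    best_lag, best_val = 0, -float("inf")
    for lag in range(-n + 1, n):
        lo, hi = max(0, -lag), min(n, n - lag)
        s = sum(left[i] * right[i + lag] for i in range(lo, hi))
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag

fs = 48000
tone = [math.sin(2 * math.pi * 500 * t / fs) * math.exp(-t / 100)
        for t in range(400)]                 # decaying 500 Hz burst
left = tone
right = [0.0] * 10 + tone[:-10]              # delay by 10 samples (~208 us)
print(estimate_delay_samples(left, right))   # 10
```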

As Eric has pointed out, Braasch (and many others) have done much more
on this, for example:

Braasch, Jonas, A Binaural Model to Predict Position and Extension of
Spatial Images Created with Standard Sound Recording Techniques, AES
Preprint 6610.  http://www.aes.org/e-lib/browse.cfm?elib=13352



--
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US


On Tue, Apr 3, 2012 at 3:01 PM, Robert Greene gre...@math.ucla.edu wrote:

 Thanks everybody for the links and in particular the
 calculation of models link. I shall work on that one
 I know the Lipshitz paper well, but it seems that
 experts disagree. James Johnston has told me
 a number of times for example that he thinks
 getting those time cues from ORTF is really
 important and that pure Blumlein is really
 not the way to go because they are missing.

 So... in this corner expert 1, Stanley L and
 in the opposite corner expert 2 ,JJ. What's a body to do?

 Thanks again
 Robert
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


[Sursound] DTS files (was Re: Can anyone ...)

2012-04-04 Thread Aaron Heller
Hi David,

Thanks for listening and writing.  All these recordings were made at
the Troy Savings Bank Music Hall in upstate NY and broadcast on NPR's
Performance Today about 8-10 years ago.

As for the distortion, frankly I have not listened to the DTS versions
that carefully. Last night, I decoded the Brahms using VLC Player and
noted that the DTS version does sound coarser than the original. The
masters are 48kHz, so the DTS encoding also includes a sample rate
conversion to 44.1 kHz, and I'm not sure about the quality of the SRC
in the Surcode DTS encoder.

I've uploaded the B-format files from which the DTS files were made,
if you'd like to listen to those

  http://ambisonics.dreamhosters.com/AMB/

The free Harpex player makes that particularly easy (and you can play
with different virtual mic arrays).  http://harpex.net/

In my humble opinion, the Stravinsky Pulcinella recording is the best
of the lot.  It was made with my MkIV (#99) when it still had the
original Calrec capsules and alignment.  The Beethoven is from the
same concert and is the one I listen to the most often.  The Dvorak
recording was made after an overhaul by Soundfield Research that
included a capsule replacement, and the Brahms after further tweaking
by Richard Lee and Eric Benjamin.

Thanks

Aaron


On Tue, Apr 3, 2012 at 6:12 PM, David Pickett d...@fugato.com wrote:
 At 14:01 02/04/2012, Aaron Heller wrote:

 I put some files at

   http://ambisonics.dreamhosters.com/DTS/

 I downloaded, cut onto CD and listened to the finale of Brahms I, which I
 have conducted several times (where was this recorded?). It is the first
 time I have heard 4.0 from a CD and for some reason it took me a long time
 to establish a volume level. The wide dynamic range is nice. The
 instrumental timbres are realistic, and it is terrific to hear the applause
 from all around -- something that one unfortunately doesn't get with the DVD
 recordings of the Sylvester concert from the Musikverein. The image seemed
 stable. The worst aspect was the distortion (most noticeable just after
 Letter N from 12:10), which I take to be the 16-bit granularity. I will
 listen to more of these.

 Thanks!

 David
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


[Sursound] multichannel audio over HDMI

2012-05-25 Thread Aaron Heller
Since Bo-Erik mentioned it, I've been wondering about multichannel
computer audio over HDMI to a home theater amp.

Does anyone know if ALSA on Linux or Core Audio on MacOS support this?
 In other words, if I'm running Linux on a motherboard with HDMI
output, do the eight channels of audio show up as an ALSA device that
I could use with Jack?   Same question for MacOS, if I plug the HDMI
output of my MacMini into a home theater amp, do I see an 8 channel
audio device in Plogue Bidule (for example)?

I know this sort of stuff should work, but I'd like to know if anyone
has tried it.

Along these lines, it has been a while since I've bought any home
audio gear.  Does anyone have a suggestion for an HT receiver where
one can get at all eight channels carried over an HDMI connection?

Thanks

Aaron
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] microphone epiphany ?

2012-05-28 Thread Aaron Heller
I've simulated the diffuse field response for the Velan design using
the A-to-B matrix I gave earlier.  The analysis and graph are at

  http://www.ai.sri.com/ajh/ambisonics/velan_sim_dfr.html

Note that in practice these simulations are only a starting point for
EQ.  Real capsules deviate significantly from the idealized math
models used here.  In addition, the capsules shadow each other and the
space in the middle acts as a resonant cavity.  So you really have to
measure the finished array and then design the EQ.

--
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US



On Sun, May 27, 2012 at 7:38 PM, umashankar mantravadi
umasha...@hotmail.com wrote:

 dear aaron thank you umashankar

 i have published my poems. read (or buy) at http://stores.lulu.com/umashankar
   Date: Sat, 26 May 2012 14:09:28 -0700
 From: hel...@ai.sri.com
 To: sursound@music.vt.edu
 Subject: Re: [Sursound] microphone epiphany ?

 One set of gains that works is

     W:   0.2357   0.2357   0.2357   0.2357   0.2357   0.2357
     X:   1.0     -1.0      0.0      0.0      0.0      0.0
     Y:   0.0      0.0      1.0     -1.0      0.0      0.0
     Z:   0.0      0.0      0.0      0.0      1.0     -1.0
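
As an illustration only, here is a small Python sketch applying those gains,
assuming the capsules are ordered +X, -X, +Y, -Y, +Z, -Z (the actual Velan
wiring may differ):

```python
def a_to_b(capsules):
    # Apply the 4x6 A-to-B gain matrix above; `capsules` is a list of
    # six per-sample capsule values ordered [+X, -X, +Y, -Y, +Z, -Z].
    w = 0.2357 * sum(capsules)
    x = capsules[0] - capsules[1]
    y = capsules[2] - capsules[3]
    z = capsules[4] - capsules[5]
    return w, x, y, z

# Ideal cardioids 0.5*(1 + cos(angle)) and a source on the +X axis:
print(a_to_b([1.0, 0.0, 0.5, 0.5, 0.5, 0.5]))  # W ~ 0.707, X = 1, Y = Z = 0
```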

 I put a write up on how to derive those at:

   http://www.ai.sri.com/ajh/ambisonics/umashankar_velan.html

 --
 Aaron (hel...@ai.sri.com)
 Menlo Park, CA  US


 On Sat, May 26, 2012 at 12:41 PM, umashankar mantravadi
 umasha...@hotmail.com wrote:
 
  assuming all six capsules have same gain, and all are contributing to W, 
  how much should be the attenuation? umashankar
 
  i have published my poems. read (or buy) at 
  http://stores.lulu.com/umashankar
    Date: Sat, 26 May 2012 18:25:04 +0200
  From: netti...@stackingdwarves.net
  To: sursound@music.vt.edu
  Subject: Re: [Sursound] microphone epiphany ?
 
  On 05/26/2012 04:51 PM, Augustine Leudar wrote:
   Hi Michael,
   thanks for the reply - there is no doubt Ive missed something !
   I am aware of the need for W coordinate and have included it in my 
   encoder
   as instructed although I have to admit I am still not clear on its
    function  - but my question specifically relates to the information
    gained from the 3 fig-of-eight patterns from a mic design such as this
    one:
  
   http://www.shapeways.com/model/143678/velan-140-internals.html
  
   Am I right in thinking the W component gives you enhanced distance
   information for a given sound source ?
 
  we cannot hear absolute phase, so if you are listening to a figure of
  eight microphone, there is no way to tell whether the sound you're
  hearing was coming into the (positive) frontal lobe or the (inverted)
  rear lobe. hence, the only information we can get from a lonely fig8 is
  it's obviously not coming from the side.
 
  now if you add an omni to the equation, all of a sudden you have the
  possibilty to discern between the two lobes of the fig8: one is in-phase
  with the omni, the other isn't.
 
  thus, the 0th order component helps you resolve the ambiguities of all
  the 1st order signals.
 
  btw, if you move to second order, the situation is the same: with a
  cloverleaf directivity, you only know it's either coming from
  front-or-back, or from left-or-right. only with the first-order
  information can the ambiguity be resolved.
 
  W also has a subtle role in distance coding, but its main job is really
  to help us make sense of the fig8s.
 
 
  to wrap up, three real fig8s won't ever give you ambisonics. but if you
  obtain the fig8s by subtracting two back-to-back cardioids, you can also
  add them, which then gives you a nice omnidirectional component. note
  that you will have to attenuate the resulting omni signal according to
  the number of capsules you used.
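
That last point is easy to verify numerically: for ideal back-to-back
cardioids, the difference is exactly a figure-of-eight (cos theta) and the sum
is exactly an omni (a sketch with idealized polar patterns, not real capsule
data):

```python
import math

def cardioid(theta):
    # Ideal first-order cardioid, on-axis at theta = 0
    return 0.5 * (1.0 + math.cos(theta))

for deg in (0, 90, 180):
    th = math.radians(deg)
    front, back = cardioid(th), cardioid(math.pi - th)
    fig8 = front - back   # equals cos(theta): a figure-of-eight
    omni = front + back   # equals 1.0: an omni (attenuate for W)
    print(deg, round(fig8, 3), round(omni, 3))
```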
 
 
  best,
 
 
  jörn
 
 
 
  --
  Jörn Nettingsmeier
  Lortzingstr. 11, 45128 Essen, Tel. +49 177 7937487
 
  Meister für Veranstaltungstechnik (Bühne/Studio)
  Tonmeister VDT
 
  http://stackingdwarves.net
 
 ___
 Sursound mailing list
 Sursound@music.vt.edu
 https://mail.music.vt.edu/mailman/listinfo/sursound

Re: [Sursound] Setting up my first ambisonic system

2012-06-10 Thread Aaron Heller
On Fri, Jun 8, 2012 at 7:59 AM, Dave Malham dave.mal...@york.ac.uk wrote:
 The BLaH papers are available from http://www.ai.sri.com/ajh/ambisonics/
 though currently there seem to be problems streaming the actual papers from
 Scribd.

Thanks for the heads up, Dave.  Not sure what's going on with Scribd,
but I've updated the page with direct links to PDFs of all the papers.
 (BLaH3 has several updates over previous versions).

  http://www.ai.sri.com/ajh/ambisonics/

which is mirrored at

  http://ambisonics.dreamhosters.com/

-- 
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] preferred (small) linux distro for audio?

2012-07-04 Thread Aaron Heller
I can second the recommendation for Planet CCRMA.  I've used it
for audio work for a decade (since Red Hat 7 days) on various mini-ITX
based systems and laptops.   'Nando Lopez-Lezcano (the maintainer) and
others on the mailing list are helpful and quick to reply.

On Wed, Jul 4, 2012 at 9:51 AM, Eric Benjamin eb...@pacbell.net wrote:
 I have almost no experience in this, but it seems like this discussion would 
 be
 incomplete without mentioning the Planet CCRMA distribution:

 http://ccrma.stanford.edu/planetccrma/software/

 written with too much blood in my caffeine stream.

 Eric
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] Demo / test piece

2012-09-27 Thread Aaron Heller
Very good idea.  OK with me.

Aaron (hel...@ai.sri.com)
Menlo Park, CA  US

On Thu, Sep 27, 2012 at 3:43 AM, Michael Chapman s...@mchapman.com wrote:

 Earlier today, I referred to a slow pan of a symphonic piece.

 Perhaps it would be of interest to others:
 -as a demo of ambisonic panning (yawing)
 -as a 'test' of how smoothly it reproduces on rigs.

 Would be happy to post it on Ambisonia, if
 -there is any interest
 -Aaron is agreeable(*), and obviously under his
 original copyright.

 Michael

 *It is a derivative of
 http://www.ambisonia.com/Members/ajh/ambisonicfile.2007-07-20.2994617157/
 Johannes Brahms: Sym No. 1 in c minor, Op. 68, 4th mvt.
 Easy enough to 'cook' at home, but it might help newbies ...


___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] Hybrid Hi-Fi (HyFi?), IRs, etc.

2012-10-05 Thread Aaron Heller
A bit of a warning...  Richard Furse's Ambisonic Decoding Equations
page is a useful resource, but is a bit vague on how the different
matrices should be used (or adapted).

Specifically, the coefficients listed as "Rig Decode Matrix to
Reproduce Spherical Harmonics" (which have also been called
"matching," "system," "basic," etc.) should not be used above roughly
400 Hz.  (See BLaH3 for a discussion of this.)

Also, the controlled opposites matrices given are useful for large
areas, but are not optimal for smaller setups, for listening by one or
two people.  These matrices minimize opposite polarity signals.

Decode matrices that maximize the energy localization vector
(rE_max) lie between these two.  My understanding is that Richard's
Blue Ripple uses rE_max decoders.  Appropriate gain adjustments for
1st to 5th order are in Tables 1 and 2 in BLaH6; formulas are given in
Appendix A.
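
For a concrete feel for these numbers, the horizontal-only (2-D) case has a
simple closed form, g_m = cos(m*pi/(2N+2)) for a decoder of order N (this is
the formula from Jérôme Daniel's thesis, not a reproduction of the BLaH6
tables, which also treat 3-D):

```python
import math

def max_re_gain_2d(m, order):
    # Per-degree gain for a 2-D max-rE decoder of the given order.
    return math.cos(m * math.pi / (2 * order + 2))

for n in (1, 2, 3):
    print(n, [round(max_re_gain_2d(m, n), 4) for m in range(n + 1)])
# order 1: [1.0, 0.7071]; order 2: [1.0, 0.866, 0.5]; ...
```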

Some of the difficulties and tradeoffs in designing a first-order
decoder for irregular layouts like the ITU 5.1 layout are discussed in
BLaH4.

The BLaH papers can be downloaded at
  http://www.ai.sri.com/ajh/ambisonics/

Also, as discussed in BLaH3, we've found that the quality of software
ambisonic decoders varies widely.  I urge thorough testing before
using any of them for research work, and when writing up the work,
please specify precisely how the decoding was done.

--
Aaron (hel...@ai.sri.com)
Menlo Park, CA  US


On Thu, Oct 4, 2012 at 11:20 AM, Michael Chapman s...@mchapman.com wrote:
 Martin Leese wrote :
 The Ambi-5 Auditorium Decoder.  I have a
 PDF of an Audio + Design leaflet which
 somebody sent me.  If people want it I can
 place it on my Google Site for download.
 However, it just says:

 The Ambi-5 produces five loudspeaker feeds
 arranged as a regular pentagon with

 See also

 Pentagon [ . . . ] This rig configuration produces a strict idealised
 response that satisfies the Ambisonic matching equations. Generally this
 type of configuration produces a relatively small stable listening area.
 (See controlled opposites below.)
 http://www.muse.demon.co.uk/ref/speakers.html ( Richard Furse)

 Michael

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] Trans-Dimensional Portal

2012-10-11 Thread Aaron Heller
On Thu, Oct 11, 2012 at 9:55 AM, Charlie Richmond charlie@gmail.com wrote:
 On Thu, Oct 11, 2012 at 9:51 AM, Stefan Schreiber 
 st...@mail.telepac.pt wrote:

 Sampo Syreeni wrote:


 How about a simple group called ambisonic or Ambisonics, in FB?


 There already is an ambisonics group in fb:

 https://www.facebook.com/pages/Ambisonics/108150639213488?fref=ts

There's a FB page for Abramowitz & Stegun too, but these are simply
auto-generated from the corresponding Wikipedia article.

However, I do see there is a Surround Music group on FB that looks
pretty active.  https://www.facebook.com/groups/472608276094365/

Aaron (hel...@ai.sri.com)
Menlo Park, CA  US
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] Ambisonic 'File' Formats

2012-10-30 Thread Aaron Heller
On Tue, Oct 30, 2012 at 9:01 AM, Michael Chapman s...@mchapman.com wrote:
 'Native' CAF also has the possibility for 'W,X,Y,Z' so (again without
 acknowledgment) I suspect that could be counted as a *.amb
 variant.

Has anyone had any success with this?  It's in the Core Audio header
files, but a couple of years ago, I tried writing out some B-format
files and got an 'unimplemented' error. (I forget the exact error, but
could recreate it if anyone is interested).

Also, I've encoded 4-channel B-format files with Ogg Vorbis and MPEG-4
AAC at rates around 160-256 kbps, and wavpack lossy (which is around
350kbps, iirc) and they decoded correctly without spatial artifacts,
so at moderate bit rates these codecs preserve phase information.  At
very low bit rates, Ogg Vorbis switches to square polar mapping mode
which will corrupt the phase relationships in stereo signals. I don't
know how they handle multichannel files.  See
http://xiph.org/vorbis/doc/stereo.html

--
Aaron (hel...@ai.sri.com)
Menlo Park, CA  US
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] Array for sound field recording and extend the sound image

2013-01-07 Thread Aaron Heller
Without more details, it's hard to speculate about problems, but I'll note
that subtracting the outputs of two omnis to get a fig-8 response will
result in a frequency response that rises 6 dB/octave with a 90-degree phase
shift relative to the sum.  Unless that is corrected, these signals are not
suitable for B-format.
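
The slope is easy to see numerically: for two omnis spaced d apart, the
on-axis magnitude of the difference is 2|sin(pi*f*d/c)|, which doubles per
octave (about +6 dB) while the spacing is small compared to the wavelength
(the 2 cm spacing below is hypothetical):

```python
import math

C = 343.0   # speed of sound, m/s
D = 0.02    # capsule spacing, m (hypothetical)

def diff_mag(f):
    # On-axis magnitude response of (omni1 - omni2) for spacing D
    return 2.0 * abs(math.sin(math.pi * f * D / C))

for f in (125.0, 250.0, 500.0):
    print(f, round(20 * math.log10(diff_mag(f)), 2))  # rises ~6 dB/octave
```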


On Mon, Jan 7, 2013 at 11:31 AM, Martin Leese 
martin.le...@stanfordalumni.org wrote:

 Chenrilin wrote:

  Hello, all
  I have designed an array of four microphones to get B-format signal.
 ...
  Then I decoded
  them to get the
  feeding signal for four loudspeakers, and used HRTFs to get the left and
  right signal for headphone.
 
  1.   Furthermore, I made an experiment in which a person walked
  around the array saying something in an office. After processing
  with the method above, I find there is a problem:
  the speech sounds like a person talking and moving around my head,
  but very near to my ears and head. What are the main reasons for
  this problem?

 What happens when you decode a walkaround
 downloaded from Ambisonia.com?  That is to
 say, a walkaround recorded with somebody
 else's mic.

 This would allow you to determine whether the
 problem is with your decoder, or with the way
 you calculate your B-Format components.

 Regards,
 Martin
 --
 Martin J Leese
 E-mail: martin.leese  stanfordalumni.org
 Web: http://members.tripod.com/martin_leese/
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] KEMAR, Neumann, Zwislocki

2013-03-29 Thread Aaron Heller
I'll add that the Neumann KU81 and KU100 dummy heads (Kunstkopf) are
designed to sound good over loudspeakers as well.  There's a paper by
Stephan Peus about this that you can download from Neumann at

http://www.neumann.com/?lang=en&id=current_microphones&cid=ku100_publications

Look for "Stephan Peus, 1985: Natural Listening with a Dummy Head,"
English, 6 pages.

Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] Surround formats and lossy compression

2013-04-07 Thread Aaron Heller
Sorry to get pedantic, but "K" is in common usage to mean 2^10, and there
has been a lot of confusion about "mega" meaning 2^20 vs 10^6 for storage.
(In fact, I think someone sued Seagate or Western Digital over the
discrepancy.)

To fix this, IEC 60027-2 (2000) defines a set of prefixes and abbreviations
to use with binary quantities.  For example, 2^10 is a "kibi," abbreviated
"Ki," with "bi" pronounced "bee."  2^20 is a "mebi," abbreviated "Mi."
They're not SI, and I'll be the first to admit that I've never seen these
outside of a standards doc.
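
The discrepancy that prompted the lawsuits is a one-liner to reproduce (the
500 GB figure is just an example):

```python
KIB, MIB, GIB = 2 ** 10, 2 ** 20, 2 ** 30   # kibi, mebi, gibi (IEC 60027-2)

# A drive marketed as "500 GB" (decimal gigabytes), reported in binary units:
marketed = 500 * 10 ** 9
print(round(marketed / GIB, 2))  # 465.66 -- the "missing" capacity
```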

Summary here:
  http://physics.nist.gov/cuu/Units/binary.html

some discussion here:

http://searchstorage.techtarget.com/definition/Kibi-mebi-gibi-tebi-pebi-and-all-that


Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] Optimised Decoder matrix (Ambdec)

2013-04-19 Thread Aaron Heller
Hi Paul (et al.)...

Over the last few months, I put together an Ambisonic Decoder Toolbox for
MATLAB and GNU Octave that implements the AllRAD design technique outlined
in

F. Zotter and M. Frank, “All-Round Ambisonic Panning and Decoding,” J.
Audio Eng Soc, vol. 60, no. 10, pp. 807–820, Nov. 2012.
http://www.aes.org/e-lib/browse.cfm?elib=16554

and

F. Kaiser, “A Hybrid Approach for Three-Dimensional Sound Spatialization,”
Algorithmen in Akustik und Computermusik 2, SE, May 2011.
http://iaem.at/kurse/winter-10-11/aac02se/2011_Kaiser_SeminararbeitVersatileAmbisonicDecoder.pdf


The toolkit reads speaker locations from CSV files (and other formats,
including ambdec presets) and writes out preset files for Fons' AmbDec
decoder.  There's also an initial implementation of a Faust backend that
produces decoders that can be compiled to VST, SuperCollider, Pd, Max/MSP,
... (see http://faust.grame.fr/ for more about Faust).

AllRAD is a hybrid ambisonic/vbap technique, especially suited to irregular
arrays.   The idea is you design a decoder for a regular array (in this
case a 240 virtual speaker spherical design) and then map those signals to
the real array using Pulkki's VBAP.
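
A toy sketch of that composition in Python (2-D, first order, made-up speaker
angles; a real AllRAD design uses a dense spherical virtual layout and handles
coverage gaps with imaginary speakers, so this shows only the matrix algebra,
not the toolbox's method in full):

```python
import math

def vbap2d(theta, s1, s2):
    # 2-D VBAP: invert the 2x2 basis of the active speaker pair,
    # then normalize the gains to unit amplitude.
    det = math.cos(s1) * math.sin(s2) - math.cos(s2) * math.sin(s1)
    g1 = (math.cos(theta) * math.sin(s2) - math.sin(theta) * math.cos(s2)) / det
    g2 = (math.cos(s1) * math.sin(theta) - math.sin(s1) * math.cos(theta)) / det
    n = math.hypot(g1, g2)
    return g1 / n, g2 / n

real = [math.radians(a) for a in (-45, 45)]        # the physical pair
virtual = [math.radians(a) for a in (-30, 0, 30)]  # regular "virtual" speakers

# Toy first-order 2-D sampling decoder for the virtual array (channels W, X, Y)
D_virt = [[1.0 / 3, 2.0 / 3 * math.cos(t), 2.0 / 3 * math.sin(t)]
          for t in virtual]

# VBAP gains mapping each virtual speaker onto the real pair
G = [[vbap2d(t, *real)[i] for t in virtual] for i in range(2)]

# AllRAD composition: decode to virtual speakers, then re-pan with VBAP
D_real = [[sum(G[i][j] * D_virt[j][k] for j in range(3)) for k in range(3)]
          for i in range(2)]
print(D_real)
```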

Fernando Lopez-Lezcano (at CCRMA) used it recently to generate a decoder
for a 24-speaker tilted dome at Stanford's new Bing Concert Hall, with very
good results.   We've also done some listening tests, comparing the decoder
for CCRMA's Listening Room described in our LAC2012 paper to an AllRAD
decoder with favorable results.  The former took about 2 hours of optimizer
time, and the latter a few seconds.

There are still a few loose ends -- the performance plots (rE, directional
error) work well only in MATLAB, and there needs to be a bit of sanity
checking on the speaker locations -- but it is quite usable.  I have NFC
and phase-matched crossover filters working in Faust, but not integrated
yet.

If anyone would like to try it, contact me off list (hel...@ai.sri.com) and I'll
send you a beta copy.  I'll be doing a general release as soon as I get
some instructions written up.


Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] Overzealous Underthinking

2013-04-23 Thread Aaron Heller
There are dual-diaphragm (von Braunmühl & Weber design) mics where each
signal is brought out separately, such as the Pearl TL44 or Neumann QSM69,
giving you back-to-back cardioids.  By taking the sum and difference you
can get a fig-8 and omni simultaneously from the same mic.   Add a second
mic at right angles and you have a natural horizontal B-format array (WXY).
  Martin Kantola's Panphonic microphone works this way:
http://www.panphonic.com/

Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US


On Tue, Apr 23, 2013 at 12:45 PM, Eric Carmichel e...@elcaudio.com wrote:

 This post refers to Sursound Digest Vol 57 Issue 16

 (from) Eric:

 A highly-directional mic can be created using omnis and beam forming, but
 not a *series* of directions at a given instant.

 (response from) Fons:

 ??? What would stop anyone from using whatever beamforming algorithm twice
 (or more times) in parallel, using the same mic signals as input?

 New thoughts...

 Hi Fons, It’s not uncommon for me to *underthink* things. I sort-of
 realized that electrical buffering would allow any of the mics to be used
 in parallel, even if their respective signals were mixed electrically in
 any possible combination (to include polarity inversion) or digitally
 offline. But, I have considered a mic technique that *might* benefit from
 multiplexing (or its signal processing equivalent).
 Briefly, I’m a big fan of the Blumlein technique because it gives a
 wonderful front stage when played through a basic stereo setup. The inherent
 problem of this technique comes from source-sounds that emanate from behind
 the mic arrangement (two figure-of-eights, of course). We can’t selectively
 choose front from back and then swap the rearward sounds’ L-R orientations.
 The sum and differences of the two bi-directional mics could be manipulated
 if we got a positive output from both the front and rear lobes
 simultaneously. This may sound trivial, but this can’t be done in parallel
 because we don’t have independent outputs for each of the *lobes*. In other
 words, getting a negative output from a compression to the front could be
 accomplished via polarity inversion, but this automatically leads to a
 positive output for a rarefaction to the rear. It *could* (?) be
 accomplished with the addition of a second pair of mics (starting to sound
 Ambisonic),
  but their differences (physical spacing and performance), when compared
 to the first pair, would create some error, though perhaps not by much. Two
 *virtual* mics could, in real-time, be created via multiplexing (same as
 separating odd- from even-numbered samples of a digitized signal?). This
 leads to a four-channel output from two figure-of-8 mics, which, for the
 time-being doesn’t get us anywhere. But if the R1, R2, L1, and L2 signals
 were *appropriately* mixed (e.g. adding R2 to –L1), maybe there’s a way to
 *get back* the rearward sounds’ proper L-R orientation. As I think about
 it, the mics would have to digitally swap L and R intermittently (one swap
 per sample), which won’t work because they have to be physically facing L
 or R as is required for the Blumlein technique.
 Well, now that I’ve proven myself wrong (again) while jotting down ideas,
 I’m going to post this anyway so that others will steer clear from the
 foibles of poorly conceived ideas. Or, maybe I actually am onto something
 (unlikely). When I consider the elegant *simplicity* of Ambisonics, it
 really is a very cool topic: Four mics, and a lot of positive directions!
 Eric C.
 -- next part --
 An HTML attachment was scrubbed...
 URL: 
 https://mail.music.vt.edu/mailman/private/sursound/attachments/20130423/4c67f3e8/attachment.html
 
 ___
 Sursound mailing list
 Sursound@music.vt.edu
 https://mail.music.vt.edu/mailman/listinfo/sursound



Re: [Sursound] Optimised Decoder matrix (Ambdec)

2013-04-23 Thread Aaron Heller
and colleagues at IEM get credit for publishing a succinct description of
what they did and the results of their analysis.

If you don't want to download and install a couple thousand lines of MATLAB
code, spend me some speaker coordinates in a CSV file and I'll send you
some decoders to try out.  My only request is that they be for real arrays
you have access to (vs. pathological examples), so you can listen and
report back on what you hear.

Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US


Re: [Sursound] What does a mic with more than 4 channels give you?

2013-04-26 Thread Aaron Heller
On Thu, Apr 25, 2013 at 7:50 PM, Sampo Syreeni de...@iki.fi wrote:

 On 2013-04-25, Robert Greene wrote

  How does anyone think that this is enough to record a soundfield in the
 neighborhood of a point?


 It necessarily is if you think purely about the pressure field. There the
 pointwise pressure plus three velocity components always dictate the
 close-by pressure gradient as well. But while they do so, they don't
 dictate the velocity field. That can be freely added on subject to the
 condition that it agrees at the coincident point where you chose to put
 your Soundfield in. If you do the math, that leaves you three full degrees
 of freedom in velocity undetermined outside of your measuring point, and
 the classical ambisonic decoding solution then takes full and unabashed
 advantage of that latitude.


In slightly more layman's terms, recall that above 400Hz or so, Ambisonics
switches from attempting to recreate the pressure and velocity, to simply
concentrating the energy flux (and hence transients) in the direction of
the source -- a 'velocity' decode vs. a 'max-rE' decode.  In fact, if you
don't adjust the decoding in this way, you get nasty comb filtering
artifacts and confusing localization cues as you move your head.  This
switchover corresponds to the frequency regime where ITD cues become unreliable
and we start to use ILD cues. Is it as good as in natural hearing? No,
but it's arguably the best you can do with four channels, if you want all
directions to be treated equally. (And I acknowledge that not everyone
feels that is desirable.)

In BLaH5, aka "Why Ambisonics Works", we looked at how well (or not)
Ambisonics reproduces ITD and ILD cues.  (get it here:
http://www.ai.sri.com/~heller/ambisonics/)

At first order, the best you can do is concentrate as much of the energy as
you can into the hemisphere centered on the source direction, so from a
virtual-microphone perspective, a supercardioid (maximum
front-hemisphere/back-hemisphere ratio). There's still a back lobe, but it
is much smaller.

One of the reasons Ambisonics starts working much better at third order is
that the side lobes get much smaller and the energy is more focused in the
correct direction, roughly correlating with our directional acuity.
Zotter and Frank have a nice graphical interpretation of rE in Fig. 7 of
their JAES paper. (It's worth looking at, regardless of what you think
about hybrid ambi/VBAP schemes.)

One other thing, Sampo mentioned that four speakers is the maximum that
should be used for first-order horizontal playback. Most people who have
done the listening tests acknowledge that a six-speaker hexagon is an
improvement over a square. Some people have reported good results with 8;
others report spectral contamination. Mathematical analysis supports the
latter.
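The claim that |rE| does not depend on the speaker count for a uniform layout is easy to check numerically. Below is a minimal sketch (my own illustration, not from the thread; the function name and the matched virtual-microphone decode are assumptions) that computes the magnitude of the energy vector rE for a first-order source decoded to a uniform circular array:

```python
import math

def rE_magnitude(n_speakers, g1=1.0, theta=0.3):
    """|rE| for a first-order source panned to angle theta (radians),
    decoded to n_speakers spaced uniformly on a circle.

    g1 scales the first-order components of the matched virtual-microphone
    decode: 1.0 gives a basic (velocity) decode, cos(pi/4) the 2D max-rE decode.
    """
    ex = ey = energy = 0.0
    for i in range(n_speakers):
        phi = 2.0 * math.pi * i / n_speakers
        # speaker feed gain: omni term plus scaled first-order term
        gain = 1.0 + 2.0 * g1 * math.cos(theta - phi)
        energy += gain * gain
        ex += gain * gain * math.cos(phi)
        ey += gain * gain * math.sin(phi)
    return math.hypot(ex, ey) / energy
```

For any uniform layout of four or more speakers this returns 2/3 for the basic decode and 1/sqrt(2) ≈ 0.707 for the 2D max-rE decode, independent of the speaker count; the off-center comb filtering reported with 8 speakers is a separate effect not captured by rE.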

Aaron Heller
Menlo Park, CA  US


Re: [Sursound] Naive question on MS and Ambisonics

2013-05-09 Thread Aaron Heller
Helmut Wittek gave a paper Tonmeistertagung 2006 that discusses the
relationship between Schoeps Double-MS and Ambisonic B-format.

 http://hauptmikrofon.de/HW/TMT2006_Wittek_DoubleMS_neutral.pdf

English translation here:

 http://www.schoeps.de/documents/SCHOEPS_Double_MS_paper_E_2010.pdf


Re: [Sursound] theatrical ambisonics

2013-05-12 Thread Aaron Heller
On Sun, May 12, 2013 at 3:59 AM, Iain Mott m...@reverberant.com wrote:

 The Earfilms link is interesting. I wonder if people on the list have
 other references or links on ambisonics applied in theatrical
 productions, either traditional theatre or theatrical installation? Your
 own productions or the work of others.

Two chamber operas by Jonathan Berger presented recently at Stanford used
ambisonics (22 speakers, 3rd-order).  More here:

  http://live.stanford.edu/event.php?code=OPER

and


http://www.icareifyoulisten.com/2013/04/5-question-to-jonathan-berger-composer-founder-music-brain-symposium/

Also, a recent performance by Cappella Romana used ambisonics to place the
live singers in the acoustic of the Hagia Sophia in Istanbul.


http://www.sfcv.org/reviews/stanford-live/cappella-romana-time-travel-to-constantinople

Fernando Lopez-Lezcano presented a paper last Thursday at LAC2013 about
it, "Byzantium in Bing: Live Virtual Acoustics Employing Free Software."
Conference proceedings at

http://lac.linuxaudio.org/2013/download/lac2013_proceedings.pdf

Aaron (hel...@ai.sri.com)
Menlo Park, CA  US


Re: [Sursound] [allowed] Re: Recreating a 3d soundfield with lots of mics.....

2013-05-19 Thread Aaron Heller
On Sat, May 18, 2013 at 3:01 AM, Rev Tony Newnham 
revtonynewn...@blueyonder.co.uk wrote:

 Indeed - there's a picture or two in one of his books, which I have here
 somewhere.  I don't think he tried to mimic the positions of instruments
 within an ensemble though - except maybe the piano.  No time to look it up
 at present 

There's a photo of the set up at Royal Festival Hall, about 1/3 down on
this page

http://www.gearplus.com.au/products/wharfedale/history/0-history-wharfedale.htm


Re: [Sursound] a query

2013-06-23 Thread Aaron Heller
The term Ambisonics does not appear at all in Fellgett's 9/72 article
[1], but is in the title in 11/73 [2].

[1] P. Fellgett, “Directional information in reproduced sound,” Wireless
World, vol. 78, no. 1443, pp. 413–417, 1972.

[2] P. Fellgett, “Ambisonic reproduction of sound,” Electronics and Power,
vol. 19, no. 20, pp. 492–494, 1973.


Re: [Sursound] Giving Precedence to Ambisonics

2013-06-26 Thread Aaron Heller
Ron Streicher has written about using a Soundfield as the middle mic in a
Decca tree

   http://www.wesdooley.com/pdf/Surround_Sound_Decca_Tree-urtext.pdf

and Tom Chen has a system he calls B+ Format, which augments first-order
B-format from a Soundfield mic with a forward ORTF pair.   I've heard it on
orchestral recordings at his studio in Stockton and it sharpens up the
orchestra image nicely.

Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US


On Wed, Jun 26, 2013 at 10:02 AM, Eric Carmichel e...@elcaudio.com wrote:

 Greetings All,
 I have a friend who's an advocate of the Decca Tree mic arrangement. Many
 of his recordings (a lot of choir and guitar) sound quite nice, so I looked
 into aspects of the Decca Tree technique. For those who may not be
 familiar, the *traditional* Decca Tree arrangement consists of three
 spaced omnidirectional mics. A center microphone is spaced slightly
 forward. From what I've read thus far (Spatial Audio by Francis Rumsey,
 Focal
 Press; and selected articles in the AES Stereophonic Techniques
 Anthology), the slightly advanced time-of-arrival for the center mic
 stabilizes the central image due to the precedence effect. However, the
 existence of the third (center) mic can result in exacerbated
 comb-filtering effects that can arise with spaced pairs. So, to avoid these
 filtering effects, bring on a Soundfield / Ambisonic mic...??
 As I understand it, Ambisonics already takes into consideration known
 psychoacoustical principles, which is why shelving is used to *optimize* ILDs
 and ITDs above and below 700 Hz, respectively. But as many readers may
 know, there are some nearly unpredictable ILD/ITD effects at approx. 1.7
 kHz (for example, see Mills, 1972, Foundations of Modern Auditory Theory).
 Creating a virtual Decca Tree seems straightforward. To move the center
 channel, or a virtual mic *forward* would require little more than offline
 processing. I wonder whether anybody has tried the following: Slightly
 delay all channels except the signal (or feeds) that make up the
 forward-most (central) channel. Using an Ambisonic mic would eliminate
 combing effects. I realize a number of Ambisonic plug-ins have built-in
 crossed-cardiod, Blumlein, and spaced omni functions, but not sure I've
 seen any of them give *precedence* to the precedence effect or Decca Tree
 arrangement.
 Two-channel playback (both conventional and binaural) is here to stay for a
 while, so optimizing Ambisonics for stereo is desirable to me. In fact, one
 of my favorite recordings from the late 80s was made with the band (The
 Cowboy Junkies) circled around a Calrec Soundfield mic. I've never heard
 whether the Trinity Session recording was released in a surround format, or
 if the mic's hardware decoder converted straight to stereo from the get-go.
 That particular recording made me aware of the Soundfield mic, though
 surround sound wasn't an interest for me at that time.
 If anybody has attempted the Decca Tree using an Ambisonic mic (even
 with the addition of a separate, forward omni mic), I'd be interested in
 knowing what your experiences were.
 Many thanks for your time.
 Best,
 Eric C. (the C continues to remind readers that this post submitted by the
 *off-the-cuff* Eric)



Re: [Sursound] Ambisonics - Decoding 16 channels in DAW

2013-07-03 Thread Aaron Heller
Hi Moritz,

I've been building Ambisonic decoders in Faust, which can then be compiled
into a variety of plugins, including VST, PureData, SuperCollider, and so
forth.  What you need sounds easy to do.   Contact me directly (
hel...@ai.sri.com) and we can work out the details.

Info about Faust here:
   http://faust.grame.fr

Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US



On Wed, Jul 3, 2013 at 7:33 AM, Moritz Fehr m...@moritzfehr.de wrote:

 hi everyone,

 thank you very much for your replies -- what i would like to achieve is
 playing a mix of a b-format recording combined with several mono- and
 stereofiles (have been doing this a lot, but only with a maximum of 8
 channels). my mixing platform is reaper on osx.

 i am going to record a space with a soundfield mic and i would like to
 then make a simulation of it by setting up an array of 16 speakers. one
 speaker circle is on ear level, the other one above.
 i would like to use the second circle above to add height information to
 the ambisonic soundfield.

 as i can see now, adding a second instance of vvmic or harpex might not be
 suitable as it would generate two separate soundfields. (not sure if i am
 right here...)
 the b2x plugins seem to have a maximum of 12 outputs. ...i will look at
 ambdec but it does seem to need a lot of routing using jack.

 would the decopro vst plugin (http://www.gerzonic.net/) be a good choice
 for this purpose?

 thank you !
 moritz




 Am 03.07.2013 um 15:38 schrieb Matthias Kronlachner:

  hi!
 
  you may just add an additional 8 channel track for a second instance of
 vvmicvst in reaper.
  send the 4 channel ambisonics signal to this newly created instance
 hosting vvmicvst, and route the outputs as you like.
 
  but whether this approach gives you good decoding is another issue..
 
  matthias
 
  On 7/3/13 1:37 PM, Moritz Fehr wrote:
  Dear Members of Sursound,
 
  i am using the VVMicVst Plugin in Reaper for mixing and decoding my
 B-Format recordings. The plugin is limited to an output of 8 channels. For
 a new sound installation, I would like to decode to 16 channels (two
 circles of 8 speakers stacked). I know that I could use ICST for Max, but
 if possible in any way, I would to keep on working in a DAW. Are there any
 other plugins or tools available for this purpose (OSX) ?
 
  Any help would be greatly appreciated!
 
  Best,
  Moritz
 
 
 
 
 

 *
 Moritz Fehr
 mobil: 01749231733
 moritzf...@web.de
 www.moritzfehr.de




Re: [Sursound] Two new approaches for the distribution of surround sound/3D audio

2013-08-02 Thread Aaron Heller
Hi Stephan,

Please note:

AAC/HE-AAC profile 1 uses Spectral Band Replication, which means that top
octave information is generated from lower frequency content using hints.
 I'm unsure of the impact this would have on ambisonic decoding.   I guess
one could filter out the replicated contents and treat it as a band-limited
channel.

AAC/HE-AAC profile 2 uses parametric stereo, which is similar to Ogg Vorbis
Square Polar Mapping (described here http://xiph.org/vorbis/doc/stereo.html).
 This destroys phase information and I think would be unstable for
ambisonic content.  Can it be turned off in the encoder?

Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US


On Fri, Aug 2, 2013 at 9:39 AM, Stefan Schreiber st...@mail.telepac.ptwrote:

 Martin Leese wrote:

  Stefan Schreiber wrote:
 ...


 To offer a backward-compatible extension of a "UHJ extended" AAC
 stereo file, you would have to include the T and Q audio channels as a 3rd
 or 4th audio stream, somewhere. (Probably you could label such a file
 as stereo, the first 2 channels being L and R. Include some tags/flags
 in the header that there are one or two further "extension" audio
 channels, which would have to be decoded by a UHJ decoder. The decoder
 could be an app running on a smartphone, and the output could be a
 binaural version of the surround or actual LRTQ 3D audio recording.)

 If this audio-channels approach doesn't work, use the data
 extensions of .mp4. (T and Q are not direct audio channels, so this
 might actually be the formally correct approach, because T and Q go
 into some decoder as "extension data".)







 Somebody would need to produce AAC test
 files containing T and T+Q, and see what
 existing stereo decoders actually do.  If existing
 decoders cannot be made to ignore T and Q
 (by fiddling with the file format) then the idea of
 including T and Q is dead.



 Certainly, but I see many ways to achieve this.

 Note that .aac is one thing, and .m4a and .m4p as container formats are
 something different. (Because Apple seems to mix these things up a bit, a
 decoder will play an AAC stereo file in any of these variants, and it will
 be the same thing anyway. Speaking of extensions, it is not always the same
 thing.)


  ...


 - The UHJ article already mentions that the T channel could be
 bandwidth-limited.



 Geoffrey Barton said some time ago that a
 bandwidth-limited T-channel resulted in some
 unwelcome compromises in the design of the
 3-channel UHJ decoder.  This may not be
 such a problem with software decoders as you
 could just include two separate decoders, one
 for 2.5 channels and another for 3.  However,
 this would mean a lot more work.

 I question whether the gain from band-limiting
 T is worth the pain.



 No, I already wrote it is not worth it. (Better to use a lower AAC/HE-AAC
 bitrate for the full T/Q channel/channels, IMO.)


 Best,

 Stefan

 P.S.: Of course you would have to prove such a concept. If you have at
 least three ways to fiddle and two ways don't use hidden audio channels
 at all, things should really work.

 http://en.wikipedia.org/wiki/MPEG-4_Part_14

 The existence of two different filename extensions, .MP4 and .M4A, for
 naming audio-only MP4 files has been a source of confusion among users and
 multimedia playback software. Some file managers, such as Windows Explorer,
 look up the media type and associated applications of a file based on its
 filename extension. But since MPEG-4 Part 14 is a container format, MPEG-4
 files may contain any number of audio, video, and even subtitle streams,
 making it impossible to determine the type of streams in an MPEG-4 file
 based on its filename extension alone. In response, Apple Inc. started
 using and popularizing the .m4a filename extension, which is used for MP4
 containers with audio data in the lossy Advanced Audio Coding (AAC) or its
 own lossless Apple Lossless (ALAC) formats. Software capable of audio/video
 playback should recognize files with either .m4a or .mp4 filename
 extensions, as would be expected, since there are no file format
 differences between the two.

  Almost any kind of data can be embedded in MPEG-4 Part 14 files through
 private streams. A separate hint track is used to include streaming
 information in the file.



 Which is the option which leads to = 320kbps mode, as well. (I could
 figure this out. Not necessary as response to your posting.)


Re: [Sursound] Two new approaches for the distribution of surround sound/3D audio

2013-08-14 Thread Aaron Heller
On Sun, Aug 11, 2013 at 9:21 PM, Stefan Schreiber st...@mail.telepac.ptwrote:


 Again, the real problem seems to be the lack of available B format
 decoders.


I may be able to help here, as I've recently written a full-featured (dual
band, NFC, blah, blah...) Ambisonic decoder engine in Faust, as well as a
toolkit to design the decoder configurations (written in MATLAB/Octave).
 Several sursounders have been beta testing the toolkit and Faust backend
with some success.  It's all open source, licensed under GNU Affero General
Public License version 3, but I could be persuaded to change that in the
interest of wider adoption.

Faust is a DSP specification language, which compiles to highly optimized
C++, and then to VST, AU, LADSPA, Jack, MaxMSP, csound, SuperCollider, etc.
I believe it can also target Android and IOS, but I haven't confirmed that
personally.

Contact me directly (hel...@ai.sri.com) if you want to try it out.

Aaron  Heller
Menlo Park, CA  US


Re: [Sursound] Esfera Mic: IBC launch

2013-09-14 Thread Aaron Heller
I don't know if it is relevant, but I found this patent application

  http://www.faqs.org/patents/app/20110135098

Patent application title: METHODS AND DEVICES FOR REPRODUCING SURROUND
AUDIO SIGNALS

Inventors:  Markus Kuhr (Wedemark, DE)  Jurgen Peissig (Wedemark, DE)  Axel
Grell (Wedemark, DE)  Gregor Zielinsky (Wedemark, DE)  Juha Merimaa (Menlo
Park, CA, US)  Veronique Larcher (Palo Alto, CA, US)  David Romblom (San
Francisco, CA, US)  Bryan Cook (Silver Spring, MD, US)  Heiko Zeuner
(Bernau Bei Berlin, DE)
Assignees:  Sennheiser electronic GmbH & Co. KG; Sennheiser Electronic
Corporation
IPC8 Class: AH04R500FI
USPC Class: 381 17
Class name: Electrical audio signal processing systems and devices binaural
and stereophonic pseudo stereophonic
Publication date: 2011-06-09
Patent application number: 20110135098


Re: [Sursound] Esfera Mic: IBC launch

2013-09-14 Thread Aaron Heller
Also here, with PDF

  http://www.google.com/patents/US20110135098

(I see Google Patents now has a "Find prior art" button.)




On Sat, Sep 14, 2013 at 8:36 AM, Aaron Heller hel...@ai.sri.com wrote:

 I don't know if it is relevant, but I found this patent application

   http://www.faqs.org/patents/app/20110135098

 Patent application title: METHODS AND DEVICES FOR REPRODUCING SURROUND
 AUDIO SIGNALS

 Inventors:  Markus Kuhr (Wedemark, DE)  Jurgen Peissig (Wedemark, DE)
  Axel Grell (Wedemark, DE)  Gregor Zielinsky (Wedemark, DE)  Juha Merimaa
 (Menlo Park, CA, US)  Veronique Larcher (Palo Alto, CA, US)  David Romblom
 (San Francisco, CA, US)  Bryan Cook (Silver Spring, MD, US)  Heiko Zeuner
 (Bernau Bei Berlin, DE)
 Assignees:  Sennheiser electronic GmbH & Co. KG; Sennheiser Electronic
 Corporation
 IPC8 Class: AH04R500FI
 USPC Class: 381 17
 Class name: Electrical audio signal processing systems and devices
 binaural and stereophonic pseudo stereophonic
 Publication date: 2011-06-09
 Patent application number: 20110135098





Re: [Sursound] Volume question WRT 7.1 sound recorded at listening position.

2013-09-23 Thread Aaron Heller
On Mon, Sep 23, 2013 at 3:55 PM, Andy Furniss adf.li...@gmail.com wrote:



 I don't quite understand the in phase though, are you saying that they
 artificially adjust phase for the same sound that comes out of more than
 one speaker to affect the mixdown?


The Recording Academy recommendations for surround sound say (sec 4.3)

One potential problem that can arise from routing a signal into two or more
speakers is the danger of increased, and increasingly complex, comb
filtering. This problem multiplies as more speakers are engaged and can
become critical if downmixing is ever employed by the playback system.
Therefore, many experienced surround mixers selectively turn off channels
when bringing a sound inside the surround bubble or when dynamically
panning a sound from one area in the surround space to another.

It is recommended that whenever signal is placed into three, four, or five
speakers, it be decorrelated.



http://www2.grammy.com/PDFs/Recording_Academy/Producers_And_Engineers/SurroundRecommendations.pdf
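The comb filtering this warns about is easy to quantify: summing a signal with a copy of itself delayed by tau seconds has magnitude response |1 + e^(-j2*pi*f*tau)| = 2|cos(pi*f*tau)|, with nulls at odd multiples of 1/(2*tau). A small sketch (the function names are my own, for illustration):

```python
import math

def comb_gain(f_hz, delay_s):
    """Magnitude response at f_hz of summing a signal with a copy of itself
    delayed by delay_s seconds: |1 + exp(-j*2*pi*f*tau)| = 2*|cos(pi*f*tau)|."""
    return abs(2.0 * math.cos(math.pi * f_hz * delay_s))

def comb_nulls(delay_s, f_max_hz):
    """Null frequencies up to f_max_hz: odd multiples of 1/(2*tau)."""
    f0 = 1.0 / (2.0 * delay_s)
    nulls, k = [], 0
    while (2 * k + 1) * f0 <= f_max_hz:
        nulls.append((2 * k + 1) * f0)
        k += 1
    return nulls
```

A 1 ms path difference (about 34 cm of extra travel to one speaker) already puts the first null at 500 Hz, which is why the recommendation is to decorrelate or mute redundant feeds.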


--
Aaron (hel...@ai.sri.com)
Menlo Park, CA  US


Re: [Sursound] A-format panner.

2013-09-26 Thread Aaron Heller
A first-order B-format panner needs to implement the equations

  W = S * sqrt(1/2)
  X = S * cos(az) * cos(el)
  Y = S * sin(az) * cos(el)
  Z = S * sin(el)

where S is the signal being panned and az and el define the direction.
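As a sketch, those equations translate directly to code (a hypothetical helper, not part of any existing tool; W carries the conventional 1/sqrt(2) weight per the formula above):

```python
import math

def bformat_pan(s, az, el):
    """Pan a mono sample (or gain) s to first-order B-format (W, X, Y, Z).

    az, el are azimuth and elevation in radians.
    """
    w = s * math.sqrt(0.5)               # omni component, -3 dB weight
    x = s * math.cos(az) * math.cos(el)  # front-back
    y = s * math.sin(az) * math.cos(el)  # left-right
    z = s * math.sin(el)                 # up-down
    return w, x, y, z
```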

The Calrec Soundfield MkIV controller box has analog circuitry for
something like this in the Soundfield Controls section.  Take a look at
   http://ambisonics.dreamhosters.com/schematic-4.pdf
 and
   http://ambisonics.dreamhosters.com/MkIV-Tech-Manual.pdf

to see how Ken Farrar and Richard Lee did it.


Aaron (hel...@ai.sri.com)
Menlo Park, CA  US




On Thu, Sep 26, 2013 at 10:20 AM, Kan Kaban kanka...@alivecinema.orgwrote:

 Thanks for your reply Jörn. You'll have to excuse my basic English
 sentences, as it's not my primary language.

  A-format panning as you call it has nothing to do with ambisonics per
 se, and the term frankly doesn't make much sense.
  the module you mention is a simple amplitude panner.
 
  so for your purposes, you might want to look into amplitude panning.
  but this has none of the characteristics of ambisonics.

 Yes, I know there's no ambisonics until there is. The idea of amplitude
 panning during the A-format stage is to simplify an analogue path before
 B-format exists, without plugins if possible. Soundfield B-format to 5.1
 / 7.1 converter (http://www.soundfield.com/products/sp451.php) is
 actually hardware, so it seems interesting to make analogue panners
 (anywhere, for amplitude or B signals) to keep audio in an analogue path like
 this:

 Quadraphonic (same as A-format but without height?) keyboard or synth
 signals /// then an amplitude panner /// then the A-to-B converter /// then
 a B-format mixing stage /// the whole B-format mix to a Soundfield converter
 for 5.1 - 7.1 or anything else.

  now you _could_ implement an ambisonic with-height b-format panner in
 analog. likewise, i could have faxed you my reply in 2's complement, and
 you could have OCR'ed it back into your computer. it's needlessly
 complicated, a waste of resources, and unless the equipment is constantly
 being re-calibrated to perfect accuracy, the result will be so full of
 errors that it's pretty much pointless.

 Well, if there is some kind of B-format analogue panner we would also like
 to learn about its design, so we can actually see how pointless it is. The
 idea of panning quadraphonic packs seems easier to implement BEFORE an A-to-B
 converter.
 Does that exist? I suppose something had to exist before DSPs brought these
 possibilities.
 Thanks again.
 Gino.

 El 26/09/2013, a las 7:01, Jörn Nettingsmeier 
 netti...@stackingdwarves.net escribió:

  On 09/26/2013 09:21 AM, Kan Kaban wrote:
  Greetings to all Sursound list.
 
  First of all, thanks for all these years supporting Ambisonics. We're
  a collective of audiovisual artists preparing for our next project.
  We have been researching ambisonics for the last few months, exploring
  the possibility of its implementation. There are still some concerns
  regarding workflow. Our initial idea is to keep signals / conversions
  into the analogue domain.
 
  erm?
 
  For example, analogue panners. I´ve stepped
  with very few B-format ideas on the internet, so I was wondering if
  there´s a case considering simple A-format panners, before AB
  converters. This eurorack module seems right for the task, right?:
  http://www.intellijel.com/eurorack-modules/planar/
 
  A-format panning as you call it has nothing to do with ambisonics per
 se, and the term frankly doesn't make much sense.
  the module you mention is a simple amplitude panner.
 
  so for your purposes, you might want to look into amplitude panning.
  but this has none of the characteristics of ambisonics.
 
  But height
  information? Are B-format panners somehow better? We know of the
  multiple ambisonics ITB plugins available, but again, we love ancient
  analogue qualities. Any info will be very appreciated; we're really
  excited about this and will love to share the results somewhere in the
  near future. Best regards. Gino.
 
  for height, you will need vbap, and it's totally insane to even consider
 doing this in analog (just think of the operation to find the three nearest
 speakers for a given direction).
 
  unless it's actually pain you love, in which case you've struck a gold
 mine.
 
  now you _could_ implement an ambisonic with-height b-format panner in
 analog. likewise, i could have faxed you my reply in 2's complement, and
 you could have OCR'ed it back into your computer. it's needlessly
 complicated, a waste of resources, and unless the equipment is constantly
 being re-calibrated to perfect accuracy, the result will be so full of
 errors that it's pretty much pointless.
 
  --
  Jörn Nettingsmeier
  Lortzingstr. 11, 45128 Essen, Tel. +49 177 7937487
 
  Meister für Veranstaltungstechnik (Bühne/Studio)
  Tonmeister VDT
 
  http://stackingdwarves.net
 

Re: [Sursound] A-format panner.

2013-09-27 Thread Aaron Heller
On Thu, Sep 26, 2013 at 12:15 PM, Eero Aro eero@dlc.fi wrote:

 The Soundfield microphone directional controls aren't exactly panning.


Sorry to quibble, but if I feed a signal into the W and X inputs (with
appropriate scaling) and ground Y and Z, then the soundfield controls on a
Mk4 behave like a B-format panner.  Right?

Thanks for the schematics.  The magic happens in VR9, labeled FSCB22A,
which is a sine/cosine pot.  Looks like you can still source them.  The
application is a shaft encoder for servos.

   http://www.meditronik.com.pl/doc/bourns/syp078085.pdf
   http://sakae-tsushin.co.jp/eng_page/pdf/pot/e_FSCB22A_FSCB30A_FSCB50A.pdf

The soundfield controller uses two switches to select the quadrant and then
a conventional pot.  It does elevation too.

Also, I notice that the transcoder has LF, RF, LB, RB inputs.  I assume
those are used to transcode quad to UHJ.

Aaron (hel...@ai.sri.com)
Menlo Park, CA  US


[Sursound] two new BBC Research white papers on Ambisonics

2013-10-08 Thread Aaron Heller
[1] P. Power, C. Dunn, B. Davies, and J. Hirst, “Localisation of Elevated
Sources in Higher-Order Ambisonics,” BBC R&D, WHP 261, Oct. 2013.
http://www.bbc.co.uk/rd/publications/whitepaper261



[2] D. Satongar, C. Dunn, Y. Lan, and F. Li, “Localisation Performance of
Higher-Order Ambisonics for Off-Centre Listening,” BBC RD, WHP 254, Oct.
2013.

http://www.bbc.co.uk/rd/publications/whitepaper254


Re: [Sursound] two new BBC Research white papers on Ambisonics

2013-10-08 Thread Aaron Heller
The second one uses basic decoding (aka velocity matching, rV=1) over the
entire frequency range, which means, among other things, that the ILDs are
not as large as they would be with max-rE decoding.

Aaron (hel...@ai.sri.com)
Menlo Park, CA US
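For a concrete feel for the rV/rE distinction, here is a toy numerical check (my own sketch, not from either paper): a 2-D first-order decode to a square array, comparing the basic (velocity) weighting against the standard 2-D first-order max-rE weight of cos(45 degrees).

```python
import math

def rE_magnitude(gains, angles):
    """Magnitude of the energy vector rE for real speaker gains
    at the given azimuths (radians)."""
    e = sum(g * g for g in gains)
    ex = sum(g * g * math.cos(a) for g, a in zip(gains, angles))
    ey = sum(g * g * math.sin(a) for g, a in zip(gains, angles))
    return math.hypot(ex, ey) / e

# Square array; source panned straight ahead (azimuth 0).
angles = [math.radians(a) for a in (45, 135, 225, 315)]

def decode_gains(order1_weight):
    # 2-D first-order decode: g_i = (1 + 2*w*cos(theta_i)) / N
    return [(1 + 2 * order1_weight * math.cos(a)) / len(angles)
            for a in angles]

basic = rE_magnitude(decode_gains(1.0), angles)                    # rV = 1 decode
maxre = rE_magnitude(decode_gains(math.cos(math.pi / 4)), angles)  # max-rE weight
```

Here `basic` comes out at 2/3, while the max-rE weighting raises it to 1/sqrt(2), about 0.707; a larger high-frequency rE goes along with larger ILDs.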


On Tue, Oct 8, 2013 at 3:30 PM, Peter Lennox p.len...@derby.ac.uk wrote:

 The first paper basically concludes that higher order (3rd) produces
 better elevation discrimination. Given that elevation (or rather, vertical)
 perception is thought to rely largely on pinna effects, this is hardly
 surprising (since, for a given order, wavefront reconstruction errors
 increase with frequency).
 One conclusion, therefore, might be that it would be advisable to
 concentrate limited resources on providing better resolution in the up-down
 axis, in preference over the horizontal axis. This is in contrast to a
 strand of thinking that advocates concentrating on the horizontal axis at
 the expense of the vertical, on the basis that we really don't perceive
 up/down all that well because interaural differences are more important
 than other cues.
 So I hope we've laid that ghost to rest.
 Dr Peter Lennox

 School of Technology,
 Faculty of Arts, Design and Technology
 University of Derby, UK
 e: p.len...@derby.ac.uk
 t: 01332 593155
 
 From: Sursound [sursound-boun...@music.vt.edu] On Behalf Of Aaron Heller [
 hel...@ai.sri.com]
 Sent: 08 October 2013 23:10
 To: Surround Sound discussion group
 Subject: [Sursound] two new BBC Research white papers on Ambisonics

 [1] P. Power, C. Dunn, B. Davies, and J. Hirst, “Localisation of Elevated
 Sources in Higher-Order Ambisonics,” BBC RD, WHP 261, Oct. 2013.
 http://www.bbc.co.uk/rd/publications/whitepaper261



 [2] D. Satongar, C. Dunn, Y. Lan, and F. Li, “Localisation Performance of
 Higher-Order Ambisonics for Off-Centre Listening,” BBC RD, WHP 254, Oct.
 2013.

 http://www.bbc.co.uk/rd/publications/whitepaper254




[Sursound] Acoustic echoes reveal room shape

2013-10-17 Thread Aaron Heller
Interesting paper in PNAS, from July. I believe it is open access, so
anyone can read/download.   Aaron

http://www.pnas.org/content/110/30/12186.short

The supplemental information (SI) shows some of the equipment and more math.

I. Dokmanić, R. Parhizkar, A. Walther, Y. M. Lu, and M. Vetterli, “Acoustic
echoes reveal room shape,” Proceedings of the National Academy of Sciences,
vol. 110, no. 30, pp. 12186–12191, Jul. 2013.


Abstract

Imagine that you are blindfolded inside an unknown room. You snap your
fingers and listen to the room’s response. Can you hear the shape of the
room? Some people can do it naturally, but can we design computer
algorithms that hear rooms? We show how to compute the shape of a convex
polyhedral room from its response to a known sound, recorded by a few
microphones. Geometric relationships between the arrival times of echoes
enable us to “blindfoldedly” estimate the room geometry. This is achieved
by exploiting the properties of Euclidean distance matrices. Furthermore,
we show that under mild conditions, first-order echoes provide a unique
description of convex polyhedral rooms. Our algorithm starts from the
recorded impulse responses and proceeds by learning the correct assignment
of echoes to walls. In contrast to earlier methods, the proposed algorithm
reconstructs the full 3D geometry of the room from a single sound emission,
and with an arbitrary geometry of the microphone array. As long as the
microphones can hear the echoes, we can position them as we want. Besides
answering a basic question about the inverse problem of room acoustics, our
results find applications in areas such as architectural acoustics, indoor
localization, virtual reality, and audio forensics.
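Not from the paper itself, but a toy illustration of the first-order-echo geometry it builds on: with a co-located source and microphone, an echo arriving t seconds after the direct sound implies a reflecting wall at distance c*t/2 (image-source model).

```python
SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees C

def wall_distance(echo_delay_s, c=SPEED_OF_SOUND):
    """Distance to a reflecting wall from a co-located source/mic,
    given the extra travel time of the first-order echo relative to
    the direct sound (the echo path goes out and back)."""
    return c * echo_delay_s / 2.0

# An echo 10 ms after the direct sound puts the wall about 1.7 m away.
d = wall_distance(0.010)
```

The paper's contribution is the harder inverse step: assigning many such echoes to the correct walls, which it does via Euclidean distance matrices.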


Re: [Sursound] Hector bird recording - SoundCloud

2013-11-21 Thread Aaron Heller
I took the liberty of merging them into 4-channel files and putting them on
my server, which might be easier to access than the SkyDrive (the UI was in
Japanese for me; fortunately I recognized the character for 'down').

  http://ambisonics.dreamhosters.com/01-Birds_WXYZ-110425_0119.wav
  http://ambisonics.dreamhosters.com/05-Music_WXYZ-110425_0127.wav

They sound quite nice.  In Harpex, you can clearly see the locations of the
singers, percussion, and birds.  Impressive!

Thanks...

Aaron (hel...@ai.sri.com)
Menlo Park, CA  US



On Thu, Nov 21, 2013 at 10:09 AM, Michael Chapman s...@mchapman.com wrote:

  seemed to have messed up - deleted the files trying to allow downloads. I
  am uploading them again, and here is the link http://sdrv.ms/1baeBAI
 
 Sorry, something went wrong
  Please try again. If you keep seeing this message, go to Service status
 to check whether there's a problem with SkyDrive or to report the issue.

 Michael ;-(

  umashankar
 
  I did not want to process them at all because this is primarily for
 people
  to judge the 14 mm capsules in a tetrahedral array.
 
  umashankar
 
  Date: Thu, 21 Nov 2013 16:22:23 +
  From: augustineleu...@gmail.com
  To: sursound@music.vt.edu
  Subject: Re: [Sursound] Hector bird recording - SoundCloud
 
  strange bird - and nice spacious feel to the recording - Id be tempted
  to
  stick a high pass filter on it though - or maybe use a wind shield lots
  of
  low frequency booming,
  best,
  Gus
 
 
  On 21 November 2013 16:08, umashankar manthravadi
  umasha...@hotmail.comwrote:
 
   okay here is the link to the B-format files
   https://skydrive.live.com/redir?resid=3AEEEA022E2AD294%21227 all
 eight
    files in the folder are by Hector Centeno, for whom I built a Brahma
    microphone some years ago. Four are of music, four of bird song. They
    were processed from the original A-format to B-format using generic
    filters. Fons Adriaensen had later provided Hector with the correct
    filters, but I do not remember if he sent me new versions of the files.
  
   Umashankar
  
From: umasha...@hotmail.com
To: s...@mchapman.com; sursound@music.vt.edu
Date: Thu, 21 Nov 2013 21:34:52 +0530
Subject: Re: [Sursound] Hector bird recording - SoundCloud
   
the B-format files files are in my skydrive folder called brahma
  140.
   they are named birds etc . I have just sent a link from skydrive to
   sursound. if it does not show up I will try to copy and paste the link
   
umashankar
   
 Date: Thu, 21 Nov 2013 15:45:58 +
 From: s...@mchapman.com
 To: sursound@music.vt.edu
 Subject: Re: [Sursound] Hector bird recording - SoundCloud

  dear Augustine
 
  I sent the message from soundcloud so I thought it would include
  a
   link
 automatically.  Here it is
 
 http://soundcloud.com/umashankar-manthravadi/hector-bird-recording
 
 Thanks.

  It probably did, but as a <a href=url>click here</a> which did
  not come out in the pure text version of/on the list.

 Michael

 PS Any chance of downloading the B-format ???


  umashankar
 
  Date: Thu, 21 Nov 2013 12:39:11 +
  From: augustineleu...@gmail.com
  To: sursound@music.vt.edu
  Subject: Re: [Sursound] Hector bird recording - SoundCloud
 
  can't see the link ?
 
 
  On 21 November 2013 03:31, umashankar manthravadi
  umasha...@hotmail.comwrote:
 
    this is a link to a one minute recording by Hector Centeno using
    the
 Brahma + Zoom H2n. this  is for the many who had asked how it
  sounds
  for
   nature sound recordings. the new H2n has better preamps, and
  you
   can
  now
   use the microphone with other recorders.
  
  
   Umashankar
  
 
 
 
  --
  07580951119
 
  augustine.leudar.com
 

Re: [Sursound] VR and cheap headtracking in 2013...

2013-12-02 Thread Aaron Heller
Not cheap, but the Smyth Research Realiser does do head tracking with any
headphones.  It also measures your HRTFs and your headphones' response.  I
played with it at Burning Amp this year and it was quite effective at
externalizing the sound from headphones.

  http://www.smyth-research.com/products.html




On Mon, Dec 2, 2013 at 1:08 PM, Andy Furniss adf.li...@gmail.com wrote:

 Matthias Kronlachner wrote:

 On 12/2/13 6:41 PM, Andy Furniss wrote:


  I've been waiting to see if this will appear.

 http://www.matthiaskronlachner.com/?p=1723

 and hoping it will work with -

 http://www.matthiaskronlachner.com/?p=624

  Sorry for keeping you waiting...

 If you are on OSX just send me an email and I will send you a link to a
 beta installer. (5th order VST Plug-ins)


 NP about the waiting :-) it's good to know that the project is still alive
 and thanks for the offer but I am using Linux.


  Actually my rotation plug-in has OSC integrated (which listens exactly
 to the Kinect head tracking app), but with Reaper you can also route OSC
  messages to control plug-ins. (It's not very nice for the plug-in to
  have its own communication with the outer world, I would say.)


 Sounds good, being just a home user I am not used to pro audio apps with
 plugins etc. Something like this would certainly make me take the plunge.


  Although I warn you, the Kinect head tracking takes a lot of CPU power
 and the convolutions for the binaural decoder as well (if you choose eg.
 the IEM Cube or Mumuth as virtual venue). My binaural plug-in is very
 uneconomic in this sense. It will simply convolve every loudspeaker
 signal with the related binaural room impulse responses. For the late
 reverberation this is probably an overkill but well, it's straight
 forward...
  The presets with Kemar HRIRs are much cheaper in terms of CPU, but I
  prefer not to listen in a dead room.


 Hmm, I guess I won't know about CPU without testing - though I have a quad
 core 3.4Ghz it is getting a bit old now.


  I recommend using something like this for head tracking which save your
 CPU power for real time audio:
 http://www.rcgroups.com/forums/showthread.php?t=1677559


 Interesting and cheap - not so sure about the magnetometer near speakers.


  And I am also waiting for headphones with integrated head tracking to
 appear.


 Yea, but only if Sennheiser will do it and part exchange my current ones
 :-)


  Hope to be able to post a download link and a link to the source
 repository of the ambix plug-ins soon.


 Great, will keep a look out and thanks for this work.




[Sursound] Ambisonic depending Aural recognition,

2013-12-15 Thread Aaron Heller
The wikipedia article on Ambisonics has the following paper under Source
texts on Ambisonics – basic theory.   I can't seem to find a copy from my
usual sources.  Do any of you have a copy?

W.C.Clarck, K.Alimi, B.Spendor: Ambisonic depending Aural recognition,
International Institute of Inuitive Audio research, IIAR 1205, pp 15–32,
May 2008

Also on the second author's Linkedin page as

Ambisonic depending aural recognition

IIAR Journal for Psychoacoustics

May 2008

Direction perception in binaural hearing systems stems from minor Phase
shifts of the dual received input material. The auricle curvature, ethmoid
bone and nasal septum shape the perceived sonar soundscape for each sample
individual. In headphone and in-ear monitoring conditions sonar soundscape
is distorted or incomplete due to the absence of these three functors. This
paper's intention is to develop a framework of all possible functor
structures for modeling different types of functor attributes in an in ear
monitoring system in order to reproduce the lost ambisonic effect of
various listeners using a rounded statistical morphology of 32 basic types
of anatomic features.



Aaron (hel...@ai.sri.com)
Menlo Park, CA  US


Re: [Sursound] Upcoming Android apps ambisonic related

2013-12-20 Thread Aaron Heller
On Fri, Dec 20, 2013 at 7:40 PM, David Worrall worr...@avatar.com.auwrote:

 I remember reading that, with exposure, humans' audio-processing
 hardware can adapt to/learn how to use a non-optimal HRTF, given a bit of
 time.
 Does anyone have a reference for this?


I don't know about 'non-optimal', but we can learn new ones by cross
calibration with other senses, and apparently we don't forget the old ones.


Aaron (hel...@ai.sri.com)

Nature Neuroscience 1, 417–421 (1998)
doi:10.1038/1633

Relearning sound localization with new ears

Paul M. Hofman, Jos G.A. Van Riswick & A. John Van Opstal

University of Nijmegen, Department of Medical Physics and Biophysics, Geert
Grooteplein 21, NL-6525 EZ Nijmegen, The Netherlands

Correspondence should be addressed to A. John Van Opstal joh...@mbfys.kun.nl

Because the inner ear is not organized spatially, sound localization relies
on the neural processing of implicit acoustic cues. To determine a sound's
position, the brain must learn and calibrate these cues, using accurate
spatial feedback from other sensorimotor systems. Experimental evidence for
such a system has been demonstrated in barn owls, but not in humans. Here,
we demonstrate the existence of ongoing spatial calibration in the adult
human auditory system. The spectral elevation cues of human subjects were
disrupted by modifying their outer ears (pinnae) with molds. Although
localization of sound elevation was dramatically degraded immediately after
the modification, accurate performance was steadily reacquired.
Interestingly, learning the new spectral cues did not interfere with the
neural representation of the original cues, as subjects could localize
sounds with both normal and modified pinnae.

Full text at:
  http://www.nature.com/neuro/journal/v1/n5/full/nn0998_417.html








Re: [Sursound] Motion-Tracked Binaural

2013-12-28 Thread Aaron Heller
Dick Duda and Ralph Algazi gave a talk and demo at a San Francisco AES
meeting at Dolby Labs a few years ago.  At that time, they were recording
with a head-sized sphere with either 8 or 16 microphones around the
equator.  They imagined that 8 would be used for teleconferencing and 16
for music recording.

The headphones used a Polhemus tracker to determine orientation.  At low
frequencies, multiple mics were processed to produce the ear signals, and at
high frequencies (where spatial aliasing on the sphere becomes a
consideration) they simply selected the closest microphone to each ear
location.  Then generic pinna filtering was applied to improve front-back
discrimination.  The immediate impression is the externalization and
solidity of the image.
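A sketch of the high-frequency mic-selection step as I understood the demo (my own reconstruction, not Duda and Algazi's code, and the counter-clockwise azimuth convention is an assumption): with N microphones evenly spaced around the equator, pick the capsule nearest each ear for the current head azimuth.

```python
import math

def nearest_mic(head_azimuth_deg, ear_offset_deg, n_mics=8):
    """Index of the equatorial microphone closest to one ear.

    ear_offset_deg is +90 for the left ear, -90 for the right
    (assuming azimuth increases counter-clockwise from mic 0).
    """
    ear_angle = (head_azimuth_deg + ear_offset_deg) % 360.0
    spacing = 360.0 / n_mics
    return int(round(ear_angle / spacing)) % n_mics

# Head facing forward: left ear at 90 degrees -> mic index 2 of 8,
# right ear at 270 degrees -> mic index 6 of 8.
left = nearest_mic(0.0, +90.0)
right = nearest_mic(0.0, -90.0)
```

As the head turns, the selected index steps around the ring, which is what keeps the image stable at frequencies where beamforming on the sphere breaks down.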

There is some more recent material here:

   http://www.ece.ucdavis.edu/binaural/

By the way, when Dick was at SRI, he occupied my current office or the one
next door.  At that time, he was working on vision for Shakey the robot,
and co-authored with Peter Hart the classic text _Pattern Classification
and Scene Analysis_.

Aaron (hel...@ai.sri.com)
Menlo Park, CA  US


On Sat, Dec 28, 2013 at 1:13 PM, dw d...@dwareing.plus.com wrote:

 http://www.google.com/patents/US20040076301




[Sursound] Sensory evaluation of concert halls

2014-01-17 Thread Aaron Heller
Jan 2014 Physics Today just landed on my desk and the cover article is by
Tapio Lokki on evaluation of concert halls.

Tasting music like wine: Sensory evaluation of concert halls

http://scitation.aip.org/content/aip/magazine/physicstoday/article/67/1/10.1063/PT.3.2242

It is labeled free content, so should be available to anyone.

Aaron (hel...@ai.sri.com)
Menlo Park, CA  US


Re: [Sursound] Sensory evaluation of concert halls

2014-01-17 Thread Aaron Heller
Also their POMA paper from ICA 2013, which is free (I think):

Spatio-temporal energy measurements in renowned concert halls with a
loudspeaker orchestra

Sakari Tervo, Jukka Pätynen and Tapio Lokki

POMA 19, 015019 (2013); http://dx.doi.org/10.1121/1.4799424

http://scitation.aip.org/content/asa/journal/poma/19/1/10.1121/1.4799424


On Fri, Jan 17, 2014 at 12:28 PM, Jörn Nettingsmeier 
netti...@stackingdwarves.net wrote:

 On 01/17/2014 07:29 PM, Aaron Heller wrote:

 Jan 2014 Physics Today just landed on my desk and the cover article is by
 Tapio Lokki on evaluation of concert halls.

 Tasting music like wine: Sensory evaluation of concert halls


http://scitation.aip.org/content/aip/magazine/physicstoday/article/67/1/10.1063/PT.3.2242

 It is labeled free content, so should be available to anyone.



 interesting. what puts me off a little is that they used an ad-hoc mic
array of six spaced omnis, without giving any details whatsoever.
 how good can direction estimates be with that kind of setup? and how do
you ensure the reproduced diffuse sound field is accurate?

 the actual meat is in here, it seems:
 http://scitation.aip.org/content/asa/journal/jasa/133/2/10.1121/1.4770260
 but sadly that one is not freely available. can't see myself paying $30
for a casual read...


 --
 Jörn Nettingsmeier
 Lortzingstr. 11, 45128 Essen, Tel. +49 177 7937487

 Meister für Veranstaltungstechnik (Bühne/Studio)
 Tonmeister VDT

 http://stackingdwarves.net



Re: [Sursound] Construction of purpose built ambisonic studio.

2014-03-08 Thread Aaron Heller
Steve,

I'm not sure I follow everything you're saying about angle errors, but
there are a few installations that work well here in the SF Bay area that I
have personal experience with. The Listening Room at Stanford's CCRMA
is a 3rd-order periphonic facility, described here

   https://ccrma.stanford.edu/room-guides/listening-room/

The others are in private homes, so I'll let the owners chime in if they
please. They're good sounding rooms, but without special acoustic
treatment.  (unlike my living room, which is glass on three sides).  There
are several accounts of Ambisonic reproduction not working well in very
dead rooms, such as an anechoic chamber.

Also, for 3rd order periphonic you need to place a number of speakers below
the listener, which can be a challenge.  The acoustically transparent floor
in CCRMA's Listening Room is one solution.  Eric Benjamin and I have a
paper in the upcoming Linux Audio Conference on designing HOA decoders for
partial coverage speaker arrays, such as domes and rings.

Aaron (hel...@ai.sri.com)
Menlo Park, CA  US


Re: [Sursound] BBC Radio Three Surround Streaming Trial (15. to 31.March)

2014-03-17 Thread Aaron Heller
I have a GigaPort AG connected by USB to my MacBook Pro and it works
correctly with Chrome Version 33.0.1750.152

I used Audio MIDI Setup (in /Applications/Utilites) and selected
Multichannel/5.1 Surround for the speaker configuration of the GigaPort
device.

Aaron Heller  (hel...@ai.sri.com)
Menlo Park, CA  US


Re: [Sursound] Periphonic Irregular HO Ambisonics Decoder

2014-03-17 Thread Aaron Heller
When I run it in Python 2.7, thetaTest is an array, not a list as in Fons'
example, so the comparison works as expected and produces a boolean array

   >>> array([.1, .2, .3, .4]) > .25
   array([False, False,  True,  True], dtype=bool)

and boolean types behave like the constants 0 and 1 under multiplication

   >>> array([1,2,3,4]) * (array([.1, .2, .3, .4]) > .25)
   array([0, 0, 3, 4])

I don't know much about Python 3.
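For the record, a self-contained sketch of the difference being discussed (NumPy arrays compare elementwise and the resulting booleans act as 0/1 gains, which is presumably what the code intends; the variable names here are illustrative, not taken from the IDHOA source):

```python
import numpy as np

theta_test = np.array([0.1, 0.2, 0.3, 0.4])
threshold = 0.25

# Elementwise comparison on an ndarray gives a boolean mask...
mask = theta_test > threshold          # [False, False, True, True]

# ...and the booleans multiply like 0 and 1, zeroing masked entries.
gains = np.array([1, 2, 3, 4]) * mask  # [0, 0, 3, 4]

# A plain Python list has no such elementwise semantics: Python 2
# compared list > float by type name (always True here), and
# Python 3 raises TypeError, which is the traceback quoted earlier.
```

So on NumPy arrays the expression is fine; it only misbehaves if a plain list sneaks in.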

Aaron


On Mon, Mar 17, 2014 at 7:07 PM, Marc Lavallée m...@hacklava.net wrote:


 Fons, here's my little code review:
 The bug is harmless, because the WbinVec variable is used only if
 WBIN is considered True, and WBIN is actually a constant set to 0.

 --
 Marc


 Le Tue, 18 Mar 2014 01:01:25 +,
 Fons Adriaensen f...@linuxaudio.org a écrit :

  On Mon, Mar 17, 2014 at 05:06:32PM -0700, Aaron Heller wrote:
On Mon, Mar 17, 2014 at 1:09 PM, Fons Adriaensen
   f...@linuxaudio.orgwrote:
  
On Mon, Mar 17, 2014 at 06:05:11PM +0100, /dav/random wrote:
   
 The project is called IDHOA and the code is hosted here [1]
 under GPL .
   
(after automatic conversion to python3)
   
Traceback (most recent call last):
  File ./main.py, line 32, in module
from constants import *
  File /data/build/idhoa/constants.py, line 106, in module
WbinVec = fu.Wbinary()
  File /data/build/idhoa/functions.py, line 525, in Wbinary
    return  thetaTest > thetaThreshold
TypeError: unorderable types: list() > float()
   
  
  
   It runs fine in Python 2.7 with NLOpt 2.4.1
  
   It took about 370 seconds to solve the example speaker array at
   3rd-order. From a quick look at the usual performance metrics,  the
   resulting coefficients look pretty good for a challenging array.
 
  It turns out that Python 2 allows to compare a list of floats
  to a float. But the result is probably not what the authors
  assumed it to be:
 
  Python 2.7.6 (default, Nov 26 2013, 12:52:49)
  [GCC 4.8.2] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> A = [0.1, 0.2, 0.3]
  >>> A > 1000
  True
  >>> A > -1000
  True
  >>> A < 1000
  False
  >>> A < -1000
  False
  
 
  In other words, the compare that Python 3 refuses will always
  return True in Python 2. I suspect this is a bug.
 
  Ciao,



Re: [Sursound] Auro 3D

2014-03-18 Thread Aaron Heller
There were three 'with height' workshops at the 2012 AES meeting in San
Francisco* that featured an Auro-3D playback setup.   David Bowles
(Swineshead) and Paul Geluso (NYU) played some recordings that were quite
nice -- height in front, with conventional surrounds -- but the remainder,
including recordings from NHK, Dolby, DG, and 2L, were either terribly
distorted or a phasey mess.   I recall Kimio Hamasaki apologizing, saying
that the mix down from 22.x to Auro-3D didn't work well.  The recordings
that Morten Lindberg (2L) played were pleasant sounding, but it seemed as
if you were sitting underneath the musicians, which was odd. I was sitting
directly behind Martha de Francisco (Polyhymnia) who thought they sounded
wonderful.

Aaron


* http://www.aes.org/events/133/workshops/?ID=3239
  http://www.aes.org/events/133/workshops/?ID=3236
  http://www.aes.org/events/133/workshops/?ID=3241


On Tue, Mar 18, 2014 at 7:54 AM, Ronald C.F. Antony r...@cubiculum.com
wrote:

 Looks like the 5.1 insanity put on steroids...
 ...but I'd love to be surprised; not betting on it however, given how
thin on theory and how thick on hype the site is.

 Sent from a crippled mobile device

  On 18 Mar 2014, at 15:39, John Leonard j...@johnleonard.co.uk wrote:
 
  Anyone experienced/used/bought this yet?
 
  http://www.auro-3d.com/
 
  Regards,
 
  John


Re: [Sursound] NAB2014 Sighting; 360 degree camera arrangement very like a Soundfield microphone

2014-04-10 Thread Aaron Heller
I recall this camera cluster being used for the Lincoln Sound and Vision
production with that odd four-faced binaural dummy head.

   http://now.lincoln.com/2013/02/an-entirely-new-sound-and-vision-2/


On Thu, Apr 10, 2014 at 10:39 AM, mgra...@mstvp.com wrote:


 While at NAB2014 in Las Vegas earlier this week I stumbled upon a small
 company (http://www.video-stitch.com/)  with a 360 degree video product
 offering. Their product was actually software running on a computer that
 stitches together the streams from four video cameras.

 The resulting stream is fed into an Oculus Rift VR headset. You can look
 around the scene in a very natural way, the systems rendering the stream
 that most correctly reflect your current perspective.

 The camera arrangement very closely resembled the  tetrahedral array of
 microphones common to Soundfield mics. I took a pic with my phone.

 http://www.mgraves.org/wp-content/uploads/2014/04/2014-04-07-11.31.15.jpg

 The company had no knowledge of Ambisonics, but said that processing audio
 corresponding to the spherical video was on their wish list. I gave them a
 brief introduction and showed them a picture of a Core Sound Tetra mic to
 get them started down the path.

 They seemed to be quite enthusiastic about it.

 Michael


Re: [Sursound] BBC Proms in 4.0

2014-07-19 Thread Aaron Heller
On Sat, Jul 19, 2014 at 9:47 AM, m...@superorg.com wrote:

On 19 Jul 2014, at 10:13, Ralf R Radermacher wrote:

Am 18.07.14 15:21, schrieb Rupert Brun:
 The BBC will make the BBC Proms Concerts available in 4.0 using
MPEG-DASH. The stream will be available internationally.

Does anyone have the first idea how to record this stuff on a Mac?

Easy to record stream on a Mac

 Install Cycling74's Soundflower; it's free and doesn't interfere with anything.
 Install Audacity; it's a free recorder.

 Set your Mac's Sound Output to Soundflower16 - at this point you won't hear
 anything from Chrome.
 At this point launch Soundflowerbed and allocate tracks to play out of
 your outputs - you'll hear again.
 Set Audacity to record a 5/6 channel input from Soundflower16.


I can't get Soundflowerbed to work on Mavericks -- it appears in the menu
bar, but crashes as soon as I select an input channel.  Instead, I created
an aggregate device (using Audio MIDI Setup) with the Soundflower 16ch
device as channels 1-16 and Built-in Output as 17 and 18.  You then
select Aggregate Device as the MacOS Output Device.

I record to a 4-channel FLAC file using Plogue Bidule, routing channels
1,2,4,5 from Soundflower to a recorder, and 1 and 2 to 17 & 18 to play over
the builtin speakers to monitor.   Unfortunately the volume control in the
Menu Bar also controls the level of the signal that appears in Soundflower,
so I turn that all the way up (to 11) and put a gain block in the path to
Built-in Output to control the levels to the speakers.

I put the Bidule setup, a screen grab of the Aggregate Device setup, and
the 4-channel FLAC and Ogg Vorbis (q=6) files of the the Channel ID
announcements and the last 20 minutes of Saturday's broadcast here:


https://drive.google.com/folderview?id=0B1DUyjAHI9QkajFqNi1PcVlYbmc&usp=sharing

Aaron (hel...@ai.sri.com)
Menlo Park, CA, US


Re: [Sursound] AES London Lecture

2014-12-31 Thread Aaron Heller
On Wed, Dec 31, 2014 at 3:20 AM, Dave Malham dave.mal...@york.ac.uk wrote:

 I wonder how closely this is related to the paper he was one of the
authors
 of at the 2010 Ambisonics Symposium? Anyone have it handy?

Here's the URL:
  http://ambisonics10.ircam.fr/drupal/files/proceedings/poster/P6_41.pdf

Also an IEEE paper from 2013

 http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6508825

This paper presents a systematic framework for the analysis and design of
circular multichannel surround sound systems. Objective analysis based on
the concept of active intensity fields shows that for stable rendition of
monochromatic plane waves it is beneficial to render each such wave by no
more than two channels. Based on that finding, we propose a methodology for
the design of circular microphone arrays, in the same configuration as the
corresponding loudspeaker system, which aims to capture inter-channel time
and intensity differences that ensure accurate rendition of the auditory
perspective. The methodology is applicable to regular and irregular
microphone/speaker layouts, and a wide range of microphone array radii,
including the special case of coincident arrays which corresponds to
intensity-based systems. Several design examples, involving first and
higher-order microphones are presented. Results of formal listening tests
suggest that the proposed design methodology achieves a performance
comparable to prior art in the center of the loudspeaker array and a more
graceful degradation away from the center.


  On 31 Dec. 2014 at 00:08, John Leonard j...@johnleonard.co.uk wrote:
 
   This looks interesting:
  
   Upcoming Lectures
  
   London: Tuesday 13th January
  
   Perceptual Sound Field Reconstruction and Coherent Synthesis
  
   Zoran Cvetkovic, Professor of Signal Processing at King's College
London
  
    Imagine a group of fans cheering their team at the Olympics from a
   local pub, who want to feel transposed to the arena by experiencing a
   faithful and convincing auditory perspective of the scene they see on
   the screen. They hear the punch of the player kicking the ball and are
   immersed in the atmosphere as if they are watching from the sideline.
   Alternatively, imagine a small group of classical music aficionados
   following a broadcast from the Royal Opera at home, who want to have
   the experience of listening to it from the best seats at the opera
   house. Imagine having, finally, a surround sound system with room
   simulators that actually sound like the spaces they are supposed to
   synthesise, or watching a 3D nature film in a home theatre where the
   sound closely follows the movements one sees on the screen. Imagine
   also a video game capable of providing a convincing dynamic auditory
   perspective that tracks a moving game player and responds to his
   actions, with virtual objects moving and acoustic environments
   changing. Finally, place all this in the context of visual technology
   that is moving firmly in the direction of 3D capture and rendering,
   where enhanced spatial accuracy and detail are key features. In this
   talk we will present a technology that enables all these spatial sound
   applications using low-count multichannel systems.
    This month's lecture is being held at King's College London, Nash
   Lecture Theatre, K2.31, Strand, London, WC2R 2LS. 6:30pm for 7:00pm
   start.
  
   I'll be there if I can.
  
   John


Re: [Sursound] YouTube adds ambisonics support

2016-01-14 Thread Aaron Heller
Hi Dillon,

A further correction, the argument to P is the elevation, and the argument
to T should be azimuth, so

  N(l, abs(m)) * P(l, abs(m), sin(E)) * T(m, A)
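To make the corrected definition concrete, here is a small self-contained
Python sketch (my own, not from the RFC). The SN3D normalization factor N is
my assumption, and the hand-rolled associated Legendre recurrence includes the
Condon-Shortley phase, which is cancelled below so the result matches the
usual ambisonic conventions:

```python
from math import factorial, sqrt, sin, cos, pi

def assoc_legendre(l, m, x):
    """P_l^m(x) for m >= 0, including the Condon-Shortley phase."""
    # P_m^m(x) = (-1)^m (2m-1)!! (1 - x^2)^(m/2)
    pmm = (-1) ** m * (1.0 - x * x) ** (m / 2.0)
    for k in range(1, 2 * m, 2):        # multiply by (2m-1)!!
        pmm *= k
    if l == m:
        return pmm
    pmm1 = x * (2 * m + 1) * pmm        # P_{m+1}^m
    if l == m + 1:
        return pmm1
    for ll in range(m + 2, l + 1):      # upward recurrence in degree
        pmm, pmm1 = pmm1, ((2 * ll - 1) * x * pmm1 - (ll + m - 1) * pmm) / (ll - m)
    return pmm1

def real_sh(l, m, az, el):
    """N(l,|m|) * P(l,|m|, sin(el)) * T(m, az), SN3D-normalized (assumption)."""
    am = abs(m)
    n = sqrt((2.0 if am else 1.0) * factorial(l - am) / factorial(l + am))
    p = (-1) ** am * assoc_legendre(l, am, sin(el))  # cancel the C-S phase
    t = sin(-m * az) if m < 0 else cos(m * az)       # T(m, x) from the thread
    return n * p * t

# ACN 3 (l=1, m=1) is cos(az)*cos(el): 1.0 straight ahead.
print(round(real_sh(1, 1, 0.0, 0.0), 6))  # prints 1.0
```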


Aaron (hel...@ai.sri.com)
Menlo Park, CA  US


On Thu, Jan 14, 2016 at 9:27 AM, Dillon Cower <dco...@google.com> wrote:

> Hi Aaron,
>
> Thanks for pointing that out; just a series of bad (horrible, really!)
> typos from transcribing code. :) The equations should be corrected now.
>
> Dillon
>
> On Thu, Jan 14, 2016 at 9:13 AM Aaron Heller <hel...@ai.sri.com> wrote:
>
> > Unfortunately, the definition of the spherical harmonics given in the
> > document is wrong.  The argument to the associated Legendre polynomial
> > should not be multiplied by abs(m).  The definition of T is sloppy as
> > well.  Using the document's notation it should be
> >
> >   N(l, abs(m)) * P(l, abs(m), sin(A)) * T(m, E)
> >
> > with T(m,x) is sin(-m*x) for m<0, cos(m*x) otherwise.
> >
> > The way it is specified in the document, m is always positive inside T.
> >
> > As an exercise in learning SymPy (symbolic python), I've written a
> library
> > that derives the polynomials for the spherical harmonics in cartesian and
> > spherical coordinates.   I'll post a link later today.
> >
> >
> > Aaron Heller (hel...@ai.sri.com)
> > Menlo Park, CA  US
> >
> > On Thu, Jan 14, 2016 at 5:55 AM, Marc Lavallée <m...@hacklava.net>
> wrote:
> >
> > >
> > > It's about time!
> > >
> > > I made my little effort last year, with an experiment on
> > > http://ambisonic.xyz, but I have not updated it since. It gives me hope
> > > that ambisonics could become the "de facto" standard for 360 videos, or
> > > at least a viable and supported option. This is a serious opportunity to
> > > promote ambisonics, and the sursound community should help to define
> > > the RFC.
> > >
> > > --
> > > Marc
> > >
> > > Le Thu, 14 Jan 2016 08:13:51 -0500,
> > > Ben Bloomberg <b...@mit.edu> a écrit :
> > >
> > > > Check it out!
> > > >
> > >
> >
> https://github.com/google/spatial-media/blob/master/docs/spatial-audio-rfc.md
> > > >
> > > > Ben


Re: [Sursound] Help me with ADT

2016-02-29 Thread Aaron Heller
Hi Martin (and everyone else),

Sorry you're having a frustrating time with my toolbox.   What most users
do is create a SPKR_ARRAY_*.m file with the speaker names and geometry and
then a 'run_*' file  with the invocation of the decoder design function.
See

  adt/examples
 SPKR_ARRAY_Shoebox.m
 run_dec_Shoebox.m

for an example of this.  You can also put the speaker array definition in
the run_* file, as in run_dec_birectangle.m

The interactive.m and run_dec_interactive.m scripts you found are a work in
progress.  I worked on them a bit today and they should be usable.  Do a
"git pull" to get the updates and delete the defaults.mat file.  It will
create a new one.  As you note, the two scripts communicate through the file
defaults.mat.  Run 'interactive' first to enter your choices and then
'run_dec_interactive' to create the decoder.  It assumes that the speaker
coordinates are in the last three columns of the csv file.

If you are still having trouble, please send me the speaker coordinates (in
private email) and I will be happy to create the files for you.

Best...

Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA, US



On Sun, Feb 28, 2016 at 2:30 PM, Martin Dupras <martindup...@gmail.com>
wrote:

> Hi,
>
> I've been trying to use ADT for a fortnight now, but I'm kind of stuck.
>
> First of all, I have no experience with Matlab (or Octave) other than
> trying ADT, so I'm sure that many of my woes are because I do not
> understand. I have however read the ADT README files several times,
> and I've been able to run some of the example scripts.
>
> 1) In the very first days that I tried ADT, when I ran the
> interactive.m script, somehow created some decoder files in the
> decoder directory. (I'm not dreaming; I have those files, they got
> generated somehow.) Since then, no matter what I do, the interactive.m
> does not seem to write anything to the decoders folder.
>
> Is it supposed to write files? Why would it work sometimes, and not some
> others?
>
> 2) There seems to be a new run_dec_interactive.m script. It's not
> interactive, so I'm wondering if it's supposed to take the values
> written by interactive.m and then calculate a decoder? I've been
> trying to make sense of that script, but it seems to be getting its
> information from the "defaults.mat". But that is a binary file, so I
> really don't understand how to read it or how to change it.
>
> If anyone can shed some light, I'd be very, very grateful.
>
> Many thanks,
>
> - martin


Re: [Sursound] grambilib~ for Pd

2016-03-10 Thread Aaron Heller
Hi Ricky,

I took a quick look at grambipan.c.  For FuMa U you've written

189:   (*APout4++) = sample1 * (2 * cosf(sample2)) * cosf(sample3) * cosf(sample3); //U

whereas the correct expression (found on Richard Furse's webpage, for
example) is

U: cos(2A) cos(E) cos(E)

Please note that cos(2x) does not equal 2cos(x).  I see similar problems in
other definitions.
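The difference is easy to demonstrate numerically; this little sketch (mine,
for illustration, not grambilib code) evaluates both expressions at 45 degrees
azimuth, where the correct U component has a null:

```python
from math import cos, pi

az, el = pi / 4, 0.0   # 45 degrees azimuth, on the horizon

u_correct = cos(2 * az) * cos(el) * cos(el)   # U = cos(2A) cos(E) cos(E)
u_buggy   = 2 * cos(az) * cos(el) * cos(el)   # the (2 cos A) version

print(round(u_correct, 12))  # 0.0: U is zero at 45 degrees
print(round(u_buggy, 4))     # 1.4142: nowhere near the correct value
```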


You might want to take a look at my Python library,
symbolic_spherical_harmonics.  It derives and writes out expressions for
the spherical harmonics in a number of different languages.

$ python SymYlm.py --spherical --FuMa --four_pi --translation c 2 2
pow(cos(phi), 2)*cos(2*theta)

$ python SymYlm.py --cartesian --FuMa --four_pi --translation c 2 2
pow(x, 2) - pow(y, 2)

 Here's the git repo:
   https://bitbucket.org/ajheller/symbolic_spherical_harmonics

--
Aaron (hel...@ai.sri.com)
Menlo Park, CA  US


On Wed, Mar 9, 2016 at 5:15 PM, Richard Graham 
wrote:
>
> Hi all,
>
> I’ve been working on a simple ambisonics library for Pd.
>
> Here’s v1:
>
> https://github.com/rickygraham/grambilib <
https://github.com/rickygraham/grambilib>
>
> I would love to hear your thoughts!
>
> Ricky


Re: [Sursound] OT Stereo stage width - Was: Static stereo source in rotating soundfield, possible?

2016-03-31 Thread Aaron Heller
Marc Lavallée, Eric Benjamin, and I put together a Trifield (three-speaker
stereo) plugin and demo'ed it at Burning Amp last fall. It is hosted at

   https://bitbucket.org/ajheller/trifield/overview

It is written in Faust, so it can be compiled for a number of different
hosts, but we provide precompiled VST plugins for Windows and MacOS in the
download folder.

There are also some plots that use the Gerzon velocity and energy
localization vectors (rV and rE) to analyze +/-45 deg stereo vs. Trifield
vs. +/-30 deg stereo, which sheds some light on why "the +/-30 deg stereo
triangle" works well.
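As a rough illustration of what the rE analysis measures (a sketch of mine,
not the plugin code): the Gerzon energy vector is rE = sum(g_i^2 u_i) /
sum(g_i^2), where u_i are unit vectors toward the speakers. A center image on
a +/-30 deg pair reaches |rE| = cos(30 deg) ~ 0.87, while a +/-45 deg pair
only reaches cos(45 deg) ~ 0.71:

```python
from math import cos, sin, radians, hypot

def energy_vector(gains, azimuths_deg):
    """Magnitude of the Gerzon energy localization vector rE
    (horizontal-only sketch): |sum(g_i^2 u_i)| / sum(g_i^2)."""
    e = sum(g * g for g in gains)
    rx = sum(g * g * cos(radians(a)) for g, a in zip(gains, azimuths_deg)) / e
    ry = sum(g * g * sin(radians(a)) for g, a in zip(gains, azimuths_deg)) / e
    return hypot(rx, ry)

# Center image, equal gains on the two speakers of each pair:
print(round(energy_vector([1, 1], [-30, 30]), 3))  # 0.866
print(round(energy_vector([1, 1], [-45, 45]), 3))  # 0.707
```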

Aaron (hel...@ai.sri.com)
Menlo Park, CA  US

On Wed, Mar 30, 2016 at 5:59 PM, Stefan Schreiber 
wrote:

> David Pickett wrote:
>
>
>>
>> Michael Gerzon, "Three Channels.  The Future of Stereo?", Studio Sound,
>> vol. 32
>> no. 6, pp. 112, 114, 117, 118, 120, 123 & 125 (1990 June) (An account of
>> Ambisonic ideas applied to 3-speaker frontal stereo.)
>>
>
>
> http://www.audiosignal.co.uk/Resources/Three_channels_A4.pdf
>
>
>
>> MAG: "Optimal Reproduction Matrices for Multispeaker Stereo"  AES, NY,
>> OCt 1991.
>>
>> David
>>
>>


Re: [Sursound] Re-Routing VST Plugin

2016-04-12 Thread Aaron Heller
It's not hard to make a custom plugin for this in Faust.  Here's an example
that specifies a reordering of the inputs with some null outputs
interspersed.  Inputs are the arguments to process(...) and outputs are on
the right-hand side of the "=".

declare name "rerouter";

process(W1, X1, Y1, Z1,
W2, X2, Y2, Z2,
W3, X3, Y3, Z3)
  =
   (W1, W2, W3, 0, 0,
X1, X2, X3, 0, 0,
Y1, Y2, Y3, 0, 0,
Z1, Z2, Z3, 0, 0 );

Drop this into the online Faust compiler at

  http://faust.grame.fr/onlinecompiler/

click the "Exec File" tab, specify "windows" and "vst" and download the VST
plugin.  (or AU, or LADSPA, or MaxMSP, or ...).

More on Faust at
   http://faust.grame.fr/Documentation/

--
Aaron Heller  (hel...@ai.sri.com)
Menlo Park, CA  US




On Tue, Apr 12, 2016 at 1:30 AM, Jörn Nettingsmeier <
netti...@stackingdwarves.net> wrote:

> On 04/11/2016 10:40 PM, Sönke Pelzer wrote:
>
>> True that... mighty Reaper.
>>
>> However, life is not perfect until a small Load/Save button shows up
>> there.
>> :)
>>
>
> agreed. but another shot in the dark (from ancient memory): doesn't reaper
> have track templates somewhere? maybe they include plugin and matrix patch
> setup?
>
>
>
> --
> Jörn Nettingsmeier
> Lortzingstr. 11, 45128 Essen, Tel. +49 177 7937487
>
> Meister für Veranstaltungstechnik (Bühne/Studio)
> Tonmeister VDT
>
> http://stackingdwarves.net
>


Re: [Sursound] Furse-Malham to ACN conversion

2016-03-24 Thread Aaron Heller
Answering my own question...  the latest version of "Encoder Input Format
for MPEG-H..." says N3D/ACN, as Richard Furse said.  The earlier version of
that document (1/2013) said N3D/SID, which is what I recalled reading.


http://mpeg.chiariglione.org/standards/mpeg-h/3d-audio/encoder-input-format-mpeg-h-3d-audio
(1/2013)
  Table 3 – Ordering of HOA components shows SID.

appears to be superseded by


http://mpeg.chiariglione.org/standards/mpeg-h/3d-audio/n15268-encoder-input-format-mpeg-h-3d-audio
(2/2015)
  End of Section 4.2 says ACN.

Aaron



On Thu, Mar 24, 2016 at 10:51 AM, Aaron Heller <hel...@ai.sri.com> wrote:
>
>
>
> On Thu, Mar 24, 2016 at 10:47 AM, Aaron Heller <hel...@ai.sri.com> wrote:
>>
>> ISO/IEC 14496-11 "Information technology — Coding of audio-visual
objects —Part 11: Scene description and application engine" (second
edition, dated 11/2013; see Table 9 in page 36)
>>
>>   and
>>
>> ISO/IEC JTC1/SC29/WG11 MPEG2015//N15268 "Encoder Input Format for MPEG-H
3D Audio" (w15286, dated 2/2015; see Section 4.1),
>>
>> show HOA components in Daniel's SID order. (0,0; 1,1; 1,-1; 2,2; 2,-2,
2,1, 2,-1, 2,0; ...)
>
>
> Should be...
>
>   (0,0;   1,1; 1,-1; 1,0;   2,2; 2,-2; 2,1; 2,-1; 2,0; ...)
>
>
>>
>>
>> I'll admit I don't follow MPEG activities closely.  Have those been
superseded?
>>
>> Best...
>>
>> Aaron (hel...@ai.sri.com)
>> Menlo Park, CA   US
>>
>>
>>
>> On Thu, Mar 24, 2016 at 5:20 AM, Richard Furse <rich...@muse440.com>
wrote:
>> >
>> > N3D/ACN
>> >
>> >
http://mpeg.chiariglione.org/standards/mpeg-h/3d-audio/dis-mpeg-h-3d-audio
>> >
>> > Best wishes,
>> >
>> > --Richard
>> >
>> >
>> > > -Original Message-
>> > > From: Sursound [mailto:sursound-boun...@music.vt.edu] On Behalf Of
>> > > Sönke Pelzer
>> > > Sent: 24 March 2016 11:55
>> > > To: Surround Sound discussion group
>> > > Subject: Re: [Sursound] Furse-Malham to ACN conversion
>> > >
>> > > Hi Aaron,
>> > >
>> > > I looked up some recent MPEG-H papers, but couldn't find information
about
>> > > their HOA channel ordering and normalization scheme.
>> > > Could you please point me to these?
>> > >
>> > > Thank you,
>> > > Sönke
>> > >
>> > >
>> > > 2016-03-24 7:25 GMT+01:00 Aaron Heller <hel...@ai.sri.com>:
>> > >
>> > > > Martin,
>> > > >
>> > > > Note that while AmbDec can accommodate FuMa normalization on
input, it
>> > > > still makes the connections to jack in ACN order so the inputs
will often
>> > > > appear in Jack in ACN order (and never in FuMa order).  I say
"often"
>> > > > because the Jack API does not have any notion of the 'order' of an
>> > > > application's ports.  To make matters worse, some jack control
clients,
>> > > > like qjackctl, sort the ports by name, so they appear in the GUI
in FuMa
>> > > > order, but if you simply do a 'bulk connect' you'll find the
individual
>> > > > connections were made in ACN order.  You really have to check the
port
>> > > > names carefully when connecting.
>> > > >
>> > > > The ADT can generate decoders in Faust with any combination of the
the
>> > > > channel order and normalization conventions that I'm aware of
(FuMa,
>> > > ACN,
>> > > > MPEG-H, ...).  Those can be completed to VST and used in Reaper.
>> > > >
>> > > > There are also 3rd-order FuMa and 5th-order panners for Ambix/ACN
(the
>> > > > latter thanks to Florian Grond).   They are in the faust directory,
>> > > > called ambi_panner_fms.dsp and  ambi_panner_5_ACN.dsp.  They have a
>> > > built
>> > > > in pink noise source with is useful for testing.
>> > > >
>> > > > It can also generate adapter matrices to convert signals between
any of the
>> > > > order and normalization conventions. An example of an Ambix to FuMa
>> > > > converter is in ambix2fuma.dsp .
>> > > >
>> > > > Also  feel free to send me the ADT setup you're using, if
you'd like me
>> > > > to check it.
>> > > >
>> > > > Aaron
>> > > >
>> > > > On Wed, Mar 23, 2016 at 2:30 PM, Dave Malham
>> > > <dave.mal...@york.ac.

Re: [Sursound] Furse-Malham to ACN conversion

2016-03-24 Thread Aaron Heller
ISO/IEC 14496-11 "Information technology — Coding of audio-visual objects
—Part 11: Scene description and application engine" (second edition, dated
11/2013; see Table 9 on page 36)

  and

ISO/IEC JTC1/SC29/WG11 MPEG2015//N15268 "Encoder Input Format for MPEG-H 3D
Audio" (w15286, dated 2/2015; see Section 4.1),

show HOA components in Daniel's SID order. (0,0; 1,1; 1,-1; 2,2; 2,-2, 2,1,
2,-1, 2,0; ...)

I'll admit I don't follow MPEG activities closely.  Have those been
superseded?

Best...

Aaron (hel...@ai.sri.com)
Menlo Park, CA   US


On Thu, Mar 24, 2016 at 5:20 AM, Richard Furse <rich...@muse440.com> wrote:
>
> N3D/ACN
>
> http://mpeg.chiariglione.org/standards/mpeg-h/3d-audio/dis-mpeg-h-3d-audio
>
> Best wishes,
>
> --Richard
>
>
> > -Original Message-
> > From: Sursound [mailto:sursound-boun...@music.vt.edu] On Behalf Of
> > Sönke Pelzer
> > Sent: 24 March 2016 11:55
> > To: Surround Sound discussion group
> > Subject: Re: [Sursound] Furse-Malham to ACN conversion
> >
> > Hi Aaron,
> >
> > I looked up some recent MPEG-H papers, but couldn't find information
about
> > their HOA channel ordering and normalization scheme.
> > Could you please point me to these?
> >
> > Thank you,
> > Sönke
> >
> >
> > 2016-03-24 7:25 GMT+01:00 Aaron Heller <hel...@ai.sri.com>:
> >
> > > Martin,
> > >
> > > Note that while AmbDec can accommodate FuMa normalization on input, it
> > > still makes the connections to jack in ACN order so the inputs will
often
> > > appear in Jack in ACN order (and never in FuMa order).  I say "often"
> > > because the Jack API does not have any notion of the 'order' of an
> > > application's ports.  To make matters worse, some jack control
clients,
> > > like qjackctl, sort the ports by name, so they appear in the GUI in
FuMa
> > > order, but if you simply do a 'bulk connect' you'll find the
individual
> > > connections were made in ACN order.  You really have to check the port
> > > names carefully when connecting.
> > >
> > > The ADT can generate decoders in Faust with any combination of the the
> > > channel order and normalization conventions that I'm aware of (FuMa,
> > ACN,
> > > MPEG-H, ...).  Those can be completed to VST and used in Reaper.
> > >
> > > There are also 3rd-order FuMa and 5th-order panners for Ambix/ACN (the
> > > latter thanks to Florian Grond).   They are in the faust directory,
> > > called ambi_panner_fms.dsp and  ambi_panner_5_ACN.dsp.  They have a
> > built
> > > in pink noise source with is useful for testing.
> > >
> > > It can also generate adapter matrices to convert signals between any
of the
> > > order and normalization conventions. An example of an Ambix to FuMa
> > > converter is in ambix2fuma.dsp .
> > >
> > > Also  feel free to send me the ADT setup you're using, if you'd
like me
> > > to check it.
> > >
> > > Aaron
> > >
> > > On Wed, Mar 23, 2016 at 2:30 PM, Dave Malham
> > <dave.mal...@york.ac.uk>
> > > wrote:
> > >
> > > > Hi Martin (and Eric!),
> > > >  One very simple thing I would do, before doing anything else,
with
> > > any
> > > > system that's playing, as we say, silly bu..ers, is just to play a
well
> > > > localisable sound out of each speaker (on its own) in turn and
check that
> > > > (a) it's coming out of the speaker it should (all connections are
> > > correct)
> > > > and that it sounds like it's coming from the direction you think it
> > > should
> > > > (acoustics not too disruptive). If you really want to be picky,
stick a
> > > > soundfield type mic at the nominal centre point and check correct B
> > > format
> > > > signals are produced for each speaker location at the same time.
Only
> > > then
> > > > start worrying about decoders, plugin connections and the rest. I
once
> > > > worked out that in a simple 1st order system driving a cube of
speakers,
> > > > there are 16 million ways of it going wrong, without counting
individual
> > > > component failures in amps, etc. Of course, lots of these ways of
going
> > > > wrong are self cancelling (*both* ends of speaker cable can be
> > connected
> > > > wrongly, cancelling out the polarity inversion, for instance) which
is a
> > > > darn good job otherwise our job would be near impossible. So,
checking
> > 

Re: [Sursound] Furse-Malham to ACN conversion

2016-03-24 Thread Aaron Heller
On Thu, Mar 24, 2016 at 10:47 AM, Aaron Heller <hel...@ai.sri.com> wrote:

> ISO/IEC 14496-11 "Information technology — Coding of audio-visual objects
> —Part 11: Scene description and application engine" (second edition, dated
> 11/2013; see Table 9 in page 36)
>
>   and
>
> ISO/IEC JTC1/SC29/WG11 MPEG2015//N15268 "Encoder Input Format for MPEG-H
> 3D Audio" (w15286, dated 2/2015; see Section 4.1),
>
> show HOA components in Daniel's SID order. (0,0; 1,1; 1,-1; 2,2; 2,-2,
> 2,1, 2,-1, 2,0; ...)
>

Should be...

  (0,0;   1,1; 1,-1; 1,0;   2,2; 2,-2; 2,1; 2,-1; 2,0; ...)
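For reference, a small sketch (mine) that generates both orderings: ACN
assigns index l*l + l + m with m running from -l to l, while SID interleaves
+m and -m downward from m = l, reproducing the corrected listing above:

```python
def acn_order(max_degree):
    """ACN: for each degree l, m runs from -l to l; index = l*l + l + m."""
    return [(l, m) for l in range(max_degree + 1) for m in range(-l, l + 1)]

def sid_order(max_degree):
    """SID: for each degree l, m runs l, -l, l-1, -(l-1), ..., 1, -1, 0."""
    out = []
    for l in range(max_degree + 1):
        for am in range(l, 0, -1):
            out.extend([(l, am), (l, -am)])
        out.append((l, 0))
    return out

print(sid_order(2))
# [(0, 0), (1, 1), (1, -1), (1, 0), (2, 2), (2, -2), (2, 1), (2, -1), (2, 0)]
```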



>
> I'll admit I don't follow MPEG activities closely.  Have those been
> superseded?
>
> Best...
>
> Aaron (hel...@ai.sri.com)
> Menlo Park, CA   US
>
>
>
> On Thu, Mar 24, 2016 at 5:20 AM, Richard Furse <rich...@muse440.com>
> wrote:
> >
> > N3D/ACN
> >
> >
> http://mpeg.chiariglione.org/standards/mpeg-h/3d-audio/dis-mpeg-h-3d-audio
> >
> > Best wishes,
> >
> > --Richard
> >
> >
> > > -Original Message-
> > > From: Sursound [mailto:sursound-boun...@music.vt.edu] On Behalf Of
> > > Sönke Pelzer
> > > Sent: 24 March 2016 11:55
> > > To: Surround Sound discussion group
> > > Subject: Re: [Sursound] Furse-Malham to ACN conversion
> > >
> > > Hi Aaron,
> > >
> > > I looked up some recent MPEG-H papers, but couldn't find information
> about
> > > their HOA channel ordering and normalization scheme.
> > > Could you please point me to these?
> > >
> > > Thank you,
> > > Sönke
> > >
> > >
> > > 2016-03-24 7:25 GMT+01:00 Aaron Heller <hel...@ai.sri.com>:
> > >
> > > > Martin,
> > > >
> > > > Note that while AmbDec can accommodate FuMa normalization on input,
> it
> > > > still makes the connections to jack in ACN order so the inputs will
> often
> > > > appear in Jack in ACN order (and never in FuMa order).  I say "often"
> > > > because the Jack API does not have any notion of the 'order' of an
> > > > application's ports.  To make matters worse, some jack control
> clients,
> > > > like qjackctl, sort the ports by name, so they appear in the GUI in
> FuMa
> > > > order, but if you simply do a 'bulk connect' you'll find the
> individual
> > > > connections were made in ACN order.  You really have to check the
> port
> > > > names carefully when connecting.
> > > >
> > > > The ADT can generate decoders in Faust with any combination of the
> the
> > > > channel order and normalization conventions that I'm aware of (FuMa,
> > > ACN,
> > > > MPEG-H, ...).  Those can be completed to VST and used in Reaper.
> > > >
> > > > There are also 3rd-order FuMa and 5th-order panners for Ambix/ACN
> (the
> > > > latter thanks to Florian Grond).   They are in the faust directory,
> > > > called ambi_panner_fms.dsp and  ambi_panner_5_ACN.dsp.  They have a
> > > built
> > > > in pink noise source with is useful for testing.
> > > >
> > > > It can also generate adapter matrices to convert signals between any
> of the
> > > > order and normalization conventions. An example of an Ambix to FuMa
> > > > converter is in ambix2fuma.dsp .
> > > >
> > > > Also  feel free to send me the ADT setup you're using, if you'd
> like me
> > > > to check it.
> > > >
> > > > Aaron
> > > >
> > > > On Wed, Mar 23, 2016 at 2:30 PM, Dave Malham
> > > <dave.mal...@york.ac.uk>
> > > > wrote:
> > > >
> > > > > Hi Martin (and Eric!),
> > > > >  One very simple thing I would do, before doing anything else,
> with
> > > > any
> > > > > system that's playing, as we say, silly bu..ers, is just to play a
> well
> > > > > localisable sound out of each speaker (on its own) in turn and
> check that
> > > > > (a) it's coming out of the speaker it should (all connections are
> > > > correct)
> > > > > and that it sounds like it's coming from the direction you think it
> > > > should
> > > > > (acoustics not too disruptive). If you really want to be picky,
> stick a
> > > > > soundfield type mic at the nominal centre point and check correct B
> > > > format
> > > > > signals a

Re: [Sursound] Flac for FOA or amb files?

2016-03-25 Thread Aaron Heller
From an existing AMB file, any of these will work:

- flac --channel-map=none AJH_eight-positions.amb

- sox AJH_eight-positions.amb AJH_eight-positions.flac

- open the AMB file in Audacity and then export, selecting FLAC format

Note that FLAC is limited to eight channels, so this will work for
first-order files only.
I'm curious, how do you use VLC Player with VST plugins?

Best...

Aaron (hel...@ai.sri.com)
Menlo Park, CA  US

On Fri, Mar 25, 2016 at 3:14 PM, Bo-Erik Sandholm 
wrote:
>
> Does anyone have a procedure or description of how to create 4-channel
> FLAC files?
>
> I want to use VLC as a player on windows to start a vst chain for FOA to
> binaural processing.
>
> Best Regards
> Bo-Erik


Re: [Sursound] Furse-Malham to ACN conversion

2016-03-24 Thread Aaron Heller
Martin,

Note that while AmbDec can accommodate FuMa normalization on input, it
still makes the connections to jack in ACN order so the inputs will often
appear in Jack in ACN order (and never in FuMa order).  I say "often"
because the Jack API does not have any notion of the 'order' of an
application's ports.  To make matters worse, some jack control clients,
like qjackctl, sort the ports by name, so they appear in the GUI in FuMa
order, but if you simply do a 'bulk connect' you'll find the individual
connections were made in ACN order.  You really have to check the port
names carefully when connecting.

The ADT can generate decoders in Faust with any combination of the
channel order and normalization conventions that I'm aware of (FuMa, ACN,
MPEG-H, ...).  Those can be compiled to VST and used in Reaper.

There are also 3rd-order FuMa and 5th-order panners for Ambix/ACN (the
latter thanks to Florian Grond).  They are in the faust directory,
called ambi_panner_fms.dsp and ambi_panner_5_ACN.dsp.  They have a built-in
pink noise source, which is useful for testing.

It can also generate adapter matrices to convert signals between any of the
order and normalization conventions. An example of an Ambix to FuMa
converter is in ambix2fuma.dsp .
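At first order the adapter is just a channel reorder plus one gain. A sketch
of the AmbiX (ACN/SN3D) to FuMa direction, under the usual convention that
FuMa W carries a 1/sqrt(2) (-3 dB) factor and that X, Y, Z coincide with SN3D
at first order; this is my summary, so verify it against the ambix2fuma.dsp
file the toolbox generates:

```python
from math import sqrt

def ambix_to_fuma_1st(ambix):
    """AmbiX [W, Y, Z, X] (ACN 0..3, SN3D) -> FuMa [W, X, Y, Z]."""
    w, y, z, x = ambix
    # FuMa W is -3 dB relative to SN3D W; X, Y, Z pass through reordered.
    return [w / sqrt(2.0), x, y, z]

print([round(v, 4) for v in ambix_to_fuma_1st([1.0, 0.2, 0.3, 0.4])])
# [0.7071, 0.4, 0.2, 0.3]
```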

Also, feel free to send me the ADT setup you're using, if you'd like me
to check it.

Aaron

On Wed, Mar 23, 2016 at 2:30 PM, Dave Malham  wrote:

> Hi Martin (and Eric!),
>  One very simple thing I would do, before doing anything else, with any
> system that's playing, as we say, silly bu..ers, is just to play a well
> localisable sound out of each speaker (on its own) in turn and check that
> (a) it's coming out of the speaker it should (all connections are correct)
> and that it sounds like it's coming from the direction you think it should
> (acoustics not too disruptive). If you really want to be picky, stick a
> soundfield type mic at the nominal centre point and check correct B format
> signals are produced for each speaker location at the same time. Only then
> start worrying about decoders, plugin connections and the rest. I once
> worked out that in a simple 1st order system driving a cube of speakers,
> there are 16 million ways of it going wrong, without counting individual
> component failures in amps, etc. Of course, lots of these ways of going
> wrong are self cancelling (*both* ends of speaker cable can be connected
> wrongly, cancelling out the polarity inversion, for instance) which is a
> darn good job otherwise our job would be near impossible. So, checking the
> simple things first is a good way to avoid delving around the complex..
>
> Good luck
>  Dave
>
>
> On 23 March 2016 at 18:13, Eric Benjamin  wrote:
>
> > "In both cases the sound was coming from seemingly random places, and a
> > number of positions went practically silent."
> > What is needed, not just for you, but for everyone, is a comprehensive
> set
> > of test files. It may be that your loudspeakers aren't where the system
> > thinks they are (wrong speaker assignments), or it may be that the
> > decoder is doing the wrong thing. I have more extensive versions of test
> > files, including "with height" like the eight directions files on
> > Ambisonia, featuring the voice of the lovely Haley Jo. I could upload
> > those if anyone would like them.
> > You can then use metering to determine if the specific sounds light up
> the
> > speaker(s) that they should.
> >
> > Sent from Yahoo Mail on Android
> >
> >   On Tue, Mar 22, 2016 at 1:18 PM, Jörn Nettingsmeier<
> > netti...@stackingdwarves.net> wrote:   On 03/22/2016 07:49 PM, Martin
> > Dupras wrote:
> > > Today I tried playback sources in third order Ambisonics on a 8+6+1
> > > hemispheric speaker array using Reaper. It didn't quite work as
> > > intended so I'm trying to figure out where I've gone wrong.
> > >
> > > I was using the Blue Ripple TOA-Core panner plugin to position the
> > > sound. I understand that Blue Rippler plugins use the Furse-Malham
> > > convention.
> > >
> > > The only decoders that I could find to decode to my specific array
> > > (using coefficients that I calculated using the Ambisonics Decoder
> > > Toolkit) were the Ambix Plug-ins and AmbDec.
> > >
> > > I tried Ambix first, which I understand uses the ACN ordering
> > > convention. I tried re-ordering the channels based on information that
> > > I found here:
> > https://en.wikipedia.org/wiki/Ambisonic_data_exchange_formats#ACN.
> > > But that didn't really work.
> > >
> > > I then tried to run 16 outputs out of Reaper into Jack, and from Jack
> > > into AmbDec, again using my ADT-calculated coefficients. I understand
> > > that AmbDec uses the Furse-Malham convention, so I would have thought
> > > it was compatible with the output of the Blue Rippler plugins. But
> > > again, that didn't really work well at all.
> > >
> > > In both cases the sound was coming from seemingly 

Re: [Sursound] Furse-Malham to ACN conversion

2016-03-24 Thread Aaron Heller
Make that "compiled to VST"...

  http://faust.grame.fr/onlinecompiler/


On Wed, Mar 23, 2016 at 11:25 PM, Aaron Heller <hel...@ai.sri.com> wrote:

> Martin,
>
> Note that while AmbDec can accommodate FuMa normalization on input, it
> still makes the connections to jack in ACN order so the inputs will often
> appear in Jack in ACN order (and never in FuMa order).  I say "often"
> because the Jack API does not have any notion of the 'order' of an
> application's ports.  To make matters worse, some jack control clients,
> like qjackctl, sort the ports by name, so they appear in the GUI in FuMa
> order, but if you simply do a 'bulk connect' you'll find the individual
> connections were made in ACN order.  You really have to check the port
> names carefully when connecting.
>
> The ADT can generate decoders in Faust with any combination of the
> channel order and normalization conventions that I'm aware of (FuMa, ACN,
> MPEG-H, ...).  Those can be completed to VST and used in Reaper.
>
> There are also a 3rd-order FuMa panner and a 5th-order Ambix/ACN panner
> (the latter thanks to Florian Grond).  They are in the faust directory,
> called ambi_panner_fms.dsp and ambi_panner_5_ACN.dsp.  They have a
> built-in pink noise source which is useful for testing.
>
> It can also generate adapter matrices to convert signals between any of
> the order and normalization conventions. An example of an Ambix to FuMa
> converter is in ambix2fuma.dsp .
>
> Also  feel free to send me the ADT setup you're using, if you'd like
> me to check it.
>
> Aaron
>
> On Wed, Mar 23, 2016 at 2:30 PM, Dave Malham <dave.mal...@york.ac.uk>
> wrote:
>
>> Hi Martin (and Eric!),
>>  One very simple thing I would do, before doing anything else, with
>> any
>> system that's playing, as we say, silly bu..ers, is just to play a well
>> localisable sound out of each speaker (on its own) in turn and check that
>> (a) it's coming out of the speaker it should (all connections are correct)
>> and that it sounds like it's coming from the direction you think it should
>> (acoustics not too disruptive). If you really want to be picky, stick a
>> soundfield type mic at the nominal centre point and check correct B format
>> signals are produced for each speaker location at the same time. Only then
>> start worrying about decoders, plugin connections and the rest. I once
>> worked out that in a simple 1st order system driving a cube of speakers,
>> there are 16 million ways of it going wrong, without counting individual
>> component failures in amps, etc. Of course, lots of these ways of going
>> wrong are self cancelling (*both* ends of speaker cable can be connected
>> wrongly, cancelling out the polarity inversion, for instance) which is a
>> darn good job otherwise our job would be near impossible. So, checking the
>> simple things first is a good way to avoid delving around the complex..
>>
>> Good luck
>>  Dave
>>
>>
>> On 23 March 2016 at 18:13, Eric Benjamin <eb...@pacbell.net> wrote:
>>
>> > "In both cases the sound was coming from seemingly random places, and a
>> > number of positions went practically silent."
>> > What is needed, not just for you, but for everyone, is a comprehensive
>> set
>> > of test files. It may be that your loudspeakers aren't where the system
>> > thinks they are (wrong speaker assignments), or it may be that the
>> > decoder is doing the wrong thing. I have more extensive versions of test
>> > files, including "with height" like the eight directions files on
>> > Ambisonia, featuring the voice of the lovely Haley Jo. I could upload
>> > those if anyone would like them.
>> > You can then use metering to determine if the specific sounds light up
>> the
>> > speaker(s) that they should.
>> >
>> > Sent from Yahoo Mail on Android
>> >
>> >   On Tue, Mar 22, 2016 at 1:18 PM, Jörn Nettingsmeier<
>> > netti...@stackingdwarves.net> wrote:   On 03/22/2016 07:49 PM, Martin
>> > Dupras wrote:
>> > > Today I tried playback sources in third order Ambisonics on a 8+6+1
>> > > hemispheric speaker array using Reaper. It didn't quite work as
>> > > intended so I'm trying to figure out where I've gone wrong.
>> > >
>> > > I was using the Blue Ripple TOA-Core panner plugin to position the
>> > > sound. I understand that Blue Rippler plugins use the Furse-Malham
>> > > convention.
>> > >
>> > > The only decoders that I could find to 

Re: [Sursound] Different usages, different spaces, different decoders?

2016-03-03 Thread Aaron Heller
In BLaH11 (AES 137, 10/2014, Los Angeles), Eric and I compared horizontal
FOA over a 2-meter radius 4-speaker diamond vs. 8-speaker octagon with
binaural dummy head measurements and listening tests.  (classic decoding:
2-band, rV=1 at LF, rE=sqrt(1/2) at HF, 400 Hz xover, NFC).

The TL;DR summary is yes, with 8 speakers what Solvang/2008 calls spectral
impairment is clearly audible in the vicinity of the sweet spot as an HF
rolloff or dullness of the sound, but as you move away from the central
location, the soundfield collapses to the nearest loudspeaker much faster
with only 4 loudspeakers.

Here's the AES permalink for the paper.

  http://www.aes.org/e-lib/browse.cfm?elib=17452
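As a side note on the rE = sqrt(1/2) HF target above: for a horizontal (2D) decode of order N, the largest achievable energy-vector magnitude is cos(pi/(2N+2)) (Daniel's result), which gives exactly sqrt(1/2) at first order. A quick check (`max_re_2d` is an illustrative name, not from any decoder):

```python
import math

# Maximum |rE| for a 2D (horizontal-only) ambisonic decode of order N:
#   rE_max = cos(pi / (2N + 2))
def max_re_2d(order):
    return math.cos(math.pi / (2 * order + 2))

print(max_re_2d(1))   # 0.7071... = sqrt(1/2), the first-order HF target
print(max_re_2d(3))   # ~0.9239 at third order
```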

--
Aaron (hel...@ai.sri.com)
Menlo Park, CA  US

On Thu, Mar 3, 2016 at 11:32 AM, Martin Leese <
martin.le...@stanfordalumni.org> wrote:

> Martin Leese wrote:
>
> > Peter Lennox wrote:
> >>  Following on from discussions of decoder solutions: Forgive me if I've
> >> missed this (I've been watching sursound for about 20 years, or so - but
> >> I
> >> just may have missed the odd discussion!)
> >>
> >> Has anyone systematically studied the interactions between decoders,
> >> speaker
> >> layouts and particular rooms?
> >
> > Dermot Furlong looked at the last two in the
> > early 1990s.  He made a lengthy post to
> > "sursound" in June 1996 describing his work.
> > This post used to be available in my area on
> > the Ambisonia.com site, but it seems to have
> > been deleted.  I still have the files, but am not
> > sure of the best way for making them available.
> >
> > ...
> >> (and I haven't even mentioned the possible
> >> variety of speaker dispersion characteristics!)
> >
> > Dermot also looked at this.
>
> I have made the research of Dermot Furlong
> available on one of my Google Sites at:
> https://sites.google.com/site/mytemporarydownloads/
>
> Scroll down to the section "Ambisonic stuff"
> and look for the file "dermot.zip".
>
> Regards,
> Martin
> --
> Martin J Leese
> E-mail: martin.leese  stanfordalumni.org
> Web: http://members.tripod.com/martin_leese/
> ___
> Sursound mailing list
> Sursound@music.vt.edu
> https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here,
> edit account or options, view archives and so on.
>
-- next part --
An HTML attachment was scrubbed...
URL: 

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


Re: [Sursound] YouTube now supports Ambisonics (warning....part advertisement..)

2016-04-25 Thread Aaron Heller
Is there some trick to uploading these files?  The files I make upload, but
then fail during processing.

 I make the files like this:

$ /opt/local/bin/ffmpeg -loop 1 -framerate 30 -i cube-1920x960.png -i
> AJH_eight-positions-ambix.wav -map 0:v -map 1:a -c:a pcm_s16le
> -channel_layout quad  -c:v libx264 -preset medium -tune stillimage
> -shortest -pix_fmt yuv420p output-png-30fps-pcm_s16le-quad.mov



then use Dillon's "Spatial Media Metadata Injector" (SMMI for short) like
this:

$ python spatialmedia --inject --spatial-audio
> output-png-30fps-pcm_s16le-quad.mov
> output-png-30fps-pcm_s16le-quad-inject-spatial-audio.mov
> Processing: output-png-30fps-pcm_s16le-quad.mov
> Saved file settings
> Track 0
> Spherical = true
> Stitched = true
> StitchingSoftware = Spherical Metadata Tool
> ProjectionType = equirectangular
> Track 1
> Ambisonic Type: periphonic
> Ambisonic Order: 1
> Ambisonic Channel Ordering: ACN
> Ambisonic Normalization: SN3D
> Number of Channels: 4
> Channel Map: [0, 1, 2, 3]


and then use Google Chrome to upload
 output-png-30fps-pcm_s16le-quad-inject-spatial-audio.mov to my account on
YouTube and the file uploads, but then YouTube reports " Upload failed:
Can't process file"

If I don't specify --spatial-audio on the SMMI, the resulting file uploads
and processes just fine.  The spherical video works, but no spatial audio.
You can find it here:  https://youtu.be/dyf_BXpqeMg

I also tried different audio codecs  '-c:a libfdk_aac -b:a 512k' and '-c:a
libvorbis -b:a 512k' and different containers 'mp4' and 'mkv' -- no joy.

Any ideas anyone???  (Am I on double-secret probation?  Do I need upload
insurance?)


Aaron (hel...@ai.sri.com)
Menlo Park, CA  US


On Fri, Apr 22, 2016 at 12:06 PM, Bruce Wiggins 
wrote:
>
> We've been running a few tests and I'll be putting more details up on my
> website when we're happy with the results.
>
> 360 Ambisonic VR Tests:
> http://www.youtube.com/playlist?list=PL6JlYpSUt3kgtQJou2AfgAVtWA5a6SMO1
>
> The use of wav over aac, by the way, is more to do with ffmpeg messing
> with the channel order, which is a pain.  Much easier to use pcm.
>
> Cheers
>
> Bruce
>
> On Fri, 22 Apr 2016 17:07 Marc Lavallee,  wrote:
>
> > On Fri, 22 Apr 2016 17:42:34 +0200
> > Bo-Erik Sandholm  wrote:
> >
> > > Hi
> > > I suspect it could be possible to create a 360 video containing  one
> > > circular/ball panorama with FOA sound and play that via youtube?
> >
> > But only using their Youtube Android app... Those using a
> > non-google-appified CyanogenMod as their Android OS cannot use the
> > Youtube app, because all the google-apps must be installed first.
> > Google is breaking EU antitrust rules:
> > http://europa.eu/rapid/press-release_MEMO-16-1484_en.htm
> >
> > > Is this possible? if so, which tools to use to create a video to
> > > upload.
> >
> > Check:
> >
> > https://support.google.com/youtube/answer/6395969?hl=en_topic=2888648
> > Then:
> > https://trac.ffmpeg.org/wiki/Encode/YouTube
> >
> > The video must be in equirectangular 2:1 format.
> >
> > > I have foa recordings of my own and panoramas from the recording.
> > > Bo-Erik
> >
> > I hope to soon provide a browser based alternative to the Youtube app.
> > --
> > Marc


Re: [Sursound] YouTube now supports Ambisonics (warning....part advertisement..)

2016-04-21 Thread Aaron Heller
On Thu, Apr 21, 2016 at 12:14 AM, Trond Lossius <trond.loss...@bek.no>
wrote:

> > On 20 Apr 2016, at 21:16, Marc Lavallee <m...@hacklava.net> wrote:
> >
> > I wonder why using uncompressed PCM instead of compressed AAC...
>
> Is there a risk of compressed audio altering the phase between the
> channels, affecting the spatial image?
>

Marc and I looked at this informally when he was developing ambisonic.xyz.
We took panned first-order B-format (e.g., AJH-eight-positions.amb) through
an encode/decode cycle with candidate codecs, and then looked at the
spatial spreading of energy with a simple parametric decoder.  No listening
tests, just visual comparison of plots of spatial energy.

We found very little spreading with low-complexity AAC, but a fair amount
with HE-AAC.

Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US


Re: [Sursound] B-Format encoding equations

2016-04-18 Thread Aaron Heller
My symbolic_spherical_harmonics library will print out the expressions in
Cartesian or spherical form with various normalizations and in your choice
of programming languages.  Here's the repository with some example
invocations:

  https://bitbucket.org/ajheller/symbolic_spherical_harmonics

I wrote it as an exercise to learn Symbolic Python (SymPy) and to show how the
spherical harmonics are derived and checked.

For example, here's a 5th-order ambiX (ACN/SN3D) panner in Faust; the list
of expressions for the spherical harmonics was produced by the library:

https://bitbucket.org/ambidecodertoolbox/adt/src/6c6bdc2460352cd4b72d2dde6928fcb5141d1976/faust/ambi_panner_ambix_o5.dsp?fileviewer=file-view-default
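The two conventions involved are compact enough to restate here; a minimal Python sketch of the ACN index formula and the SN3D normalization factor (function names are illustrative, not from the library):

```python
from math import factorial, sqrt

# ACN channel index for spherical-harmonic degree l and order m (-l..l):
def acn(l, m):
    return l * (l + 1) + m

# SN3D normalization factor (Condon-Shortley phase omitted, as in ambiX):
def sn3d(l, m):
    delta = 1 if m == 0 else 0
    return sqrt((2 - delta) * factorial(l - abs(m)) / factorial(l + abs(m)))

print([acn(1, m) for m in (-1, 0, 1)])  # the familiar Y, Z, X slots: [1, 2, 3]
print(acn(3, 3))                        # 15: last channel of a 3rd-order set
print(sn3d(0, 0), sn3d(1, 1))           # both 1.0 at zeroth and first order
```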

Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA   US


On Mon, Apr 18, 2016 at 9:12 AM, Courville, Daniel <courville.dan...@uqam.ca
> wrote:

> Jan wrote:
>
> >how about http://ambisonics.ch/standards/channels/
> >click each channel number to see the Spherical Harmonics in N3D and
> conversion factors to SN3D
>
> Ah, yes... I knew that page, but never (or at least not recently) clicked
> on the ACN number.
>
> Thanks,
>
> Daniel
> ___
> Sursound mailing list
> Sursound@music.vt.edu
> https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here,
> edit account or options, view archives and so on.
>


Re: [Sursound] Help: what am I doing wrong?

2017-07-05 Thread Aaron Heller
Hi Martin,

A few things...

1. You should use a first-order decoder to play first-order sources. That's
not the same as playing a first-order file into the first-order inputs of a
third-order decoder.

2. 1st-order periphonic (3D) ambisonics on a full 3D loudspeaker array gets
the energy correct, and hence the sense of envelopment; localization is not
that precise.  The magnitude of the energy localization vector, rE, in this
situation is only sqrt(3)/3, which Gerzon noted is “perilously close to
being unsatisfactory." [1]

3. The decoders in the AmbiX plugins are single-band rE_max decoders; a
dual-band decoder will improve localization for central listeners a bit.
Both Ambdec and the FAUST decoders produced by the ADT (the ".dsp" files)
support 2-band decoding.

4. If you really want more precise localization, consider parametric
decoding using Harpex or the Harpex-based upmixer plugin from Blue Ripple
Sound. In my experience, it works very well with panned sources and
acoustic recordings in dry environments (outdoors, dry hall). For
recordings in very reverberant halls (like my recordings), the improvement
is not that great.

Aaron (hel...@ai.sri.com)
Menlo Park, CA  US


[1]  Michael A. Gerzon. Practical Periphony: The Reproduction of
Full-Sphere Sound. Preprint 1571
from the 65th Audio Engineering Society Convention, London, February 1980.
AES E-lib http://www.aes.org/e-lib/browse.cfm?elib=3794.



On Wed, Jul 5, 2017 at 3:10 PM, Martin Dupras 
wrote:

> I've deployed a 21-speaker near spherical array a few days ago, which
> I think is working ok, but I'm having difficulty with playing back
> some first order A-format recordings on it. They sound really very
> diffuse and not very localised at all. I figured that some of you good
> people on here might have some idea of where I might be going wrong or
> what is not right.
>
> At the moment I'm using Reaper, and for decoding I'm using Matthias
> Kronlachner's Ambix decoder plug-in, with a configuration that I've
> calculated with Aaron Heller's Ambisonics Decoder Toolbox. I think the
> decoder configuration is right. I've calculated it with ambix ordering
> and scaling, and third order in H and V.  The speaker array has six
> speakers at floor level (-22 degrees elevation), eight at ear level at
> 1m70 (0 degrees elevation), six at 45 degrees elevation and one at the
> apex.
>
> Now: if I pan monophonic sources using a panner (e.g. o3a panner, 3rd
> order), the localisation is pretty good. I've tested that with several
> people by panning to random places and asking to blindly point out to
> where they hear the source. Generally, they're in about the right
> place (say within 45 degrees on average.)
>
> On the other hand, if I play 1st order A-format recordings (mostly
> that I've made using our Core TetraMic), the localisation of sources
> is pretty poor. I also tried with the "xyz.wav" example file from Core
> (https://www.vvaudio.com/downloads) with the same results. To convert
> from A-format to B-format, I've tried using Core's VVtetraVST plugin
> with the calibration files for the mic (followed by the o3a FuMa to
> Ambix converter), and the Sennheiser Ambeo plugin (which does the
> same job, but in Ambix form already.)
>
> So what am I doing wrong? I've spent the last couple of days checking
> everything thoroughly. I've calibrated all the speakers to within 1dB
> SPL for the same signal received with an omni mic at the centre of the
> sphere. I've triple-checked that the encoder is in the right channel
> numbering:
>
> //--- decoder information ---
> // decoder file =
> ../decoders/BSU_Array_6861_RAE1_3h3v_allrad_5200_rE_max_2_band.config
> // speaker array name = BSU_Array_6861_RAE1
> // horizontal order   = 3
> // vertical order = 3
> // coefficient order  = acn
> // coefficient scale  = SN3D
> // input scale= SN3D
> // mixed-order scheme = HV
> // input channel order: W Y Z X V T R S U Q O M K L N P
> // output speaker order: S01 S02 S03 S04 S05 S06 S07 S08 S09 S10 S11
> S12 S13 S14 S15 S16 S17 S18 S19 S20 S21
>
> I'll welcome any suggestion or advice!
>
> Thanks,
>
> - martin
> ___
> Sursound mailing list
> Sursound@music.vt.edu
> https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here,
> edit account or options, view archives and so on.
>


Re: [Sursound] Help: what am I doing wrong?

2017-07-05 Thread Aaron Heller
Forgot the URL...

http://www.ai.sri.com/~heller/ambisonics/index.html#test-files

On Wed, Jul 5, 2017 at 6:15 PM, Aaron Heller <hel...@ai.sri.com> wrote:

> I have some first-order test files that you can try. They're FuMa
> order/normalization. There's "eight directions" and some pink noise pans.
> With a good decoder, localization should be pretty good with these --
> better in the front than the back in my experience.
>
> Aaron
>
> On Wed, Jul 5, 2017 at 5:53 PM, Aaron Heller <hel...@ai.sri.com> wrote:
>
>> Hi Martin,
>>
>> A few things...
>>
>> 1. You should use a first-order decoder to play first-order sources.
>> That's not the same as playing a first-order file into the first-order
>> inputs of a third-order decoder.
>>
>> 2. 1st-order periphonic (3D) ambisonics on a full 3D loudspeaker array
>> gets the energy correct, and hence the sense of envelopment; localization
>> is not that precise.  The magnitude of the energy localization vector, rE,
>> in this situation is only sqrt(3)/3, which Gerzon noted is "perilously
>> close to being unsatisfactory." [1]
>>
>> 3. The decoders in the AmbiX plugins are single-band rE_max decoders, a
>> dual-band decoder will improve localization for central listeners a bit.
>> Both Ambdec and the FAUST decoders produced by the ADT (the ".dsp" files)
>> support 2-band decoding.
>>
>> 4. If you really want more precise localization, consider parametric
>> decoding using Harpex or the Harpex-based upmixer plugin from Blue Ripple
>> Sound. In my experience, it works very well with panned sources and
>> acoustic recordings in dry environments (outdoors, dry hall). For
>> recordings in very reverberant halls (like my recordings), the improvement
>> is not that great.
>>
>> Aaron (hel...@ai.sri.com)
>> Menlo Park, CA  US
>>
>>
>> [1]  Michael A. Gerzon. Practical Periphony: The Reproduction of
>> Full-Sphere Sound. Preprint 1571
>> from the 65th Audio Engineering Society Convention, London, February
>> 1980. AES E-lib http://www.aes.org/e-lib/browse.cfm?elib=3794.
>>
>>
>>
>> On Wed, Jul 5, 2017 at 3:10 PM, Martin Dupras <martindup...@gmail.com>
>> wrote:
>>
>>> I've deployed a 21-speaker near spherical array a few days ago, which
>>> I think is working ok, but I'm having difficulty with playing back
>>> some first order A-format recordings on it. They sound really very
>>> diffuse and not very localised at all. I figured that some of you good
>>> people on here might have some idea of where I might be going wrong or
>>> what is not right.
>>>
>>> At the moment I'm using Reaper, and for decoding I'm using Matthias
>>> Kronlachner's Ambix decoder plug-in, with a configuration that I've
>>> calculated with Aaron Heller's Ambisonics Decoder Toolbox. I think the
>>> decoder configuration is right. I've calculated it with ambix ordering
>>> and scaling, and third order in H and V.  The speaker array has six
>>> speakers at floor level (-22 degrees elevation), eight at ear level at
>>> 1m70 (0 degrees elevation), six at 45 degrees elevation and one at the
>>> apex.
>>>
>>> Now: if I pan monophonic sources using a panner (e.g. o3a panner, 3rd
>>> order), the localisation is pretty good. I've tested that with several
>>> people by panning to random places and asking to blindly point out to
>>> where they hear the source. Generally, they're in about the right
>>> place (say within 45 degrees on average.)
>>>
>>> On the other hand, if I play 1st order A-format recordings (mostly
>>> that I've made using our Core TetraMic), the localisation of sources
>>> is pretty poor. I also tried with the "xyz.wav" example file from Core
>>> (https://www.vvaudio.com/downloads) with the same results. To convert
>>> from A-format to B-format, I've tried using Core's VVtetraVST plugin
>>> with the calibration files for the mic (followed by the o3a FuMa to
>>> Ambix converter), and the Sennheiser Ambeo plugin (which does the
>>> same job, but in Ambix form already.)
>>>
>>> So what am I doing wrong? I've spent the last couple of days checking
>>> everything thoroughly. I've calibrated all the speakers to within 1dB
>>> SPL for the same signal received with an omni mic at the centre of the
>>> sphere. I've triple-checked that the encoder is in the right channel
>>> numbering:
>>>
>>> //--- decoder information ---

Re: [Sursound] Help: what am I doing wrong?

2017-07-06 Thread Aaron Heller
The decoders produced by my toolbox in FAUST (the ".dsp" files) have
distance, level, and near-field compensation up to 5th-order (and more
soon). Those can be compiled to a large number of plugin types, including
VST, AU, MaxMSP, ...

   https://bitbucket.org/ambidecodertoolbox/adt
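The distance and level part of that compensation is simple enough to sketch (an illustration under the usual 1/r free-field assumptions, not the ADT's actual code):

```python
# Time-align and level-match loudspeakers at unequal distances r_i (meters)
# from the sweet spot: delay each by (r_max - r_i)/c and scale by r_i/r_max
# so every arrival coincides and matches in level at the center.
C = 343.0  # speed of sound in m/s at roughly 20 degrees C

def align(radii):
    r_max = max(radii)
    delays_ms = [1000.0 * (r_max - r) / C for r in radii]
    gains = [r / r_max for r in radii]
    return delays_ms, gains

delays, gains = align([2.0, 1.7, 1.5])
print([round(d, 2) for d in delays])  # [0.0, 0.87, 1.46] milliseconds
print(gains)                          # [1.0, 0.85, 0.75]
```

The frequency-dependent near-field part needs shelving filters on the spherical-harmonic signals and is not captured by this sketch.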

Aaron

On Thu, Jul 6, 2017 at 1:53 PM, Sampo Syreeni  wrote:

> On 2017-07-05, Martin Dupras wrote:
>
> I've deployed a 21-speaker near spherical array a few days ago, which I
>> think is working ok, but I'm having difficulty [...]
>>
>
> Oh, and by the way, *please* compensate each speaker for 1) its
> propagation delay to the central sweet spot, and also 2) its frequency and
> distance dependent proximity effect. Both compensations can be done
> analytically, with the second one being par of the course for close range,
> domestic POA setups of the old kind. In that circuit the first one is more
> or less subsumed or at least approximated by the second one already.
> However if you *do* happen to use speakers at widely varying distances from
> the sweet spot, and you *do* happen to be able to do modern digital
> correction, *do* correct for absolute delay as well. It *will* make a
> difference, especially at the lowest orders. After all, you just said
> you're working with a "near-spherical array"; pretty much by definition
> that means not all of the speakers experience equal propagation delay
> towards the sweet spot...
> --
> Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
> +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
> ___
> Sursound mailing list
> Sursound@music.vt.edu
> https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here,
> edit account or options, view archives and so on.
>


Re: [Sursound] multichannel VST recorder os x

2017-06-20 Thread Aaron Heller
Oli... Perhaps more than you wanted, but Plogue Bidule has a 16-channel
recorder (up to 32 ch) and is available as VST and AU plugins. It will also
host VST and AU plugins, and do many other types of processing. There is a
standalone version as well that interfaces to audio devices directly.

https://www.plogue.com/products/bidule/

I have used it for years on Macs and it is has been very dependable.

Aaron (hel...@ai.sri.com)
Menlo Park, CA  US

On Tue, Jun 20, 2017 at 2:23 PM, Oliver Larkin 
wrote:

> audiounit also acceptable
>
> > On 20 Jun 2017, at 22:14, Oliver Larkin 
> wrote:
> >
> > hello,
> >
> > does anyone know of a VST plug-in that will record a valid 16 channel
> wav file on os x? would rather not join mono files manually
> >
> > thanks,
> >
> > oli
>
> ___
> Sursound mailing list
> Sursound@music.vt.edu
> https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here,
> edit account or options, view archives and so on.
>


Re: [Sursound] multichannel VST recorder os x

2017-06-21 Thread Aaron Heller

> On Jun 21, 2017, at 1:13 PM, Oliver Larkin  wrote:
> 
> 
> I had tried bidule, but it looked like it was going to record 16 mono files

The setting in Bidule's Audio File Recorder is a bit confusing here... if you 
set "Channels per File" to "Stereo" it records all the channels in a single 
file, regardless of the number.




Re: [Sursound] getting the peak frequency from microphone input

2017-10-25 Thread Aaron Heller
Doing this was one of the programming assignments in the course "Audio
Signal Processing for Music Applications"

   https://www.coursera.org/learn/audio-signal-processing

Excellent course. Tools here:

   https://www.upf.edu/web/mtg/sms-tools

   https://github.com/MTG/sms-tools


On Tue, Oct 24, 2017 at 3:17 PM, Sampo Syreeni  wrote:
>
> On 2017-10-24, Pierre Alexandre Tremblay wrote:
>
>> if display, check the spectrumdraw~ object from the HIRT:
>> http://eprints.hud.ac.uk/id/eprint/14897/
>> You can select and display the peak
>
>
> Can it somehow be tracked programmatically, and e.g. turned into a
> controller for further processing? I'd be interested in the primitives
> used for that as well.
> --
> Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
> +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
>
> ___
> Sursound mailing list
> Sursound@music.vt.edu
> https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here,
edit account or options, view archives and so on.


Re: [Sursound] Simple Software to Play a 6-channel WAV File

2017-10-25 Thread Aaron Heller
I use it extensively as well. Easy to control from a midi control surface
if you need physical controls. Glad to help with any questions.

Aaron

On Wed, Oct 25, 2017 at 11:58 AM, Fons Adriaensen 
wrote:

> On Wed, Oct 25, 2017 at 02:43:23PM -0400, len moskowitz wrote:
>
> > I'll try Plogue Bidule next .  It's a bit more complicated that we'd
> > like, but if it works, maybe we can work around the complexity.
>
> It's probably the best solution, and quite easy to use.
> My collegues at work use it all the time to play and/or
> record multichannel files.
>
> Ciao,
>
> --
> FA
>
> A world of exhaustive, reliable metadata would be an utopia.
> It's also a pipe-dream, founded on self-delusion, nerd hubris
> and hysterically inflated market opportunities. (Cory Doctorow)
>


Re: [Sursound] getting the peak frequency from microphone input

2017-10-25 Thread Aaron Heller
Marc... As I recall, this is the first assignment that requires some
experimentation to determine the correct parameters. Good luck with it.

Regarding the original poster's question ... as you learn in this course,
the perceived pitch is not necessarily the strongest frequency in the
signal -- you need to analyze the harmonic structure of the signal. In many
instruments (trumpet, trombone, low-string on a violin, bottom octave of a
piano), the overtones are stronger than the fundamental.

Similar analysis is needed for parametric decoding of Ambisonics signals,
such as is done by Harpex or DirAC.
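To make that concrete, here is a toy numpy sketch (my own code, not from the course materials): the strongest spectral bin picks out a harmonic, while a crude autocorrelation estimate, restricted to a plausible pitch range to avoid octave errors, recovers the fundamental.

```python
import numpy as np

def spectral_peak_hz(x, sr):
    """Strongest bin of the magnitude spectrum -- not necessarily the pitch."""
    mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    peak_bin = 1 + np.argmax(mag[1:])  # skip the DC bin
    return peak_bin * sr / len(x)

def autocorr_f0_hz(x, sr, fmin=150.0, fmax=1000.0):
    """Crude autocorrelation pitch estimate, searched over a restricted
    lag range to avoid octave errors; robust to a weak fundamental."""
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

# A signal whose 2nd and 3rd harmonics are stronger than its 220 Hz fundamental
sr = 4000
t = np.arange(sr) / sr
x = (0.2 * np.sin(2 * np.pi * 220 * t)
     + 1.0 * np.sin(2 * np.pi * 440 * t)
     + 0.8 * np.sin(2 * np.pi * 660 * t))

print(spectral_peak_hz(x, sr))  # → 440.0, a harmonic, not the pitch
print(autocorr_f0_hz(x, sr))    # → ~222 Hz, close to the true fundamental
```

Real pitch trackers do much better than this, but it shows why peak-picking alone is not a pitch detector.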

Aaron

On Wed, Oct 25, 2017 at 8:09 AM, Marc Lavallée <m...@hacklava.net> wrote:

> True! This is actually the assignment for this week. :-)
> I could post the solution here after I succeed,
> but I’m afraid that would be against the “Coursera honor code”...
> —
> Marc
>
> > On Oct 25, 2017, at 10:59 AM, Aaron Heller <ajhel...@gmail.com> wrote:
> >
> > Doing this was one of the programming assignments in the course "Audio
> > Signal Processing for Music Applications"
> >
> >   https://www.coursera.org/learn/audio-signal-processing
> >
> > Excellent course. Tools here:
> >
> >   https://www.upf.edu/web/mtg/sms-tools
> >
> >   https://github.com/MTG/sms-tools
> >
> >
> > On Tue, Oct 24, 2017 at 3:17 PM, Sampo Syreeni <de...@iki.fi> wrote:
> >>
> >> On 2017-10-24, Pierre Alexandre Tremblay wrote:
> >>
> >>> if display check the spectrumdraw~ object from the HIRT:
> > http://eprints.hud.ac.uk/id/eprint/14897/ <
> > http://eprints.hud.ac.uk/id/eprint/14897/> You can select and display
> the
> > peak
> >>
> >>
> >> Can it somehow be tracked programmatically, and e.g. turned into a
> > controller for further processing? I'd be interested in the primitives
> used
> > for that as well.
> >> --
> >> Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
> >> +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
> >>


Re: [Sursound] octofile release

2018-07-29 Thread Aaron Heller
There's a IETF proposal from folks at Google for "Ambisonics in an Ogg Opus
Container", based on

  Nachbar, et al., Ambix - A Suggested Ambisonics Format. 3rd International
Symposium on Ambisonics and Spherical Acoustics, Lexington, KY (2011)

and the idea of a default stereo decode from Etienne Deleflie's Universal
Ambisonic work



   https://tools.ietf.org/html/draft-ietf-codec-ambisonics-07

Martin Leese has posted pointers to it here from time to time. Early
versions had errors, which I reported and which were fixed in later
versions.


Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA



On Sun, Jul 29, 2018 at 4:45 PM, Stefan Schreiber 
wrote:
>
> Short comments (to text below):
> (1)
>
> The ambisonics input channels can’t be coded in some 7.1 (channel
coupling, LFE) style, agreed.
>
> With opus you seem to need channel mapping #255, not #1 - the latter
 corresponds to (classical) 2D surround sound layouts.
>
> (2) I believe that  FB is already using up to 11 channels coded with
opus, although I am not absolutely sure about. (Could say more about this
in a few weeks, hopefully.)
>
> (3) Multi-channel != (classical) 5.1/7.1 surround sound.  In fact
surround sound is not a synonym for 5.1/7.1.
>
> We are on the surround list, so should know about this!
>
> But now we are getting slightly confused: Xiph.org’s Opus or Flac don’t
have to care about Dolby Digital or DD+ 5.1/7.1, right? So maybe you mean
5.1/7.1 channel mapping?
>
> To claim that 5.1 is “Dolby” doesn’t make sense. (There is an official
ITU layout standard, and many versions implemented/defined by DTS, Mpeg,
Sony and “anybody else”.)
>
>
https://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.775-3-201208-I!!PDF-E.pdf
>
> So (righteously) you can implement 5.1 and 7.1 audio tracks since
Vorbis...
>
> (4) A lossless compression format could be used for mastering. I meant
this.
>
> Best,
>
> Stefan
>
> - - - - - - -
>
> Citando Marc Lavallée :
>
>> Le 2018-07-29 à 05:56 PM, Stefan Schreiber a écrit :
>>
>>> 1. I believe that the opus encoders/decoders have always supported more
than 8 channels.
>>
>>
>> Correct, but when encoding 8 or less channels, correlation is applied in
ways that are incompatible with Ambisonics; for example, the LFE channel is
filtered... With more than 8 channels, Opus don't correlate channels, but
it does now if the input stream is Ambisonics (and if the Ambisonics mode,
disabled by default, is compiled in).
>>
>>> 2. The next question is what ogg channel mapping and consequently
real-world browsers allow...
>>>
>>>
>>>
>>>  But in some sense the hack you did is known. (More complicated is
maybe to make it work...)
>>
>>
>> I tried only with 4 channels. It worked. I don't know if browsers are
now capable to support more than 8 channels. If the Octomic is getting
popular with VR content producers, maybe browsers will start supporting
streams with more than 8 channels (without systematically down-mixing them
to stereo).
>>
>>> 3. If they already plan to issue some ogg ambisonics standard (using
ogg opus of course) since at least 2016: You also need an associated
mastering standard, which would not change or compress any audio data.
Correct?
>>>
>>>
>>>
>>>  So what is “political” about extending the channel count of FLAC?
>>
>>
>> Multi-channel still mean Dolby 5.1 or 7.1. There's an inertia because
"standards" were designed as vendor lock-ins.
>>
>>> Compromise proposal:
>>>
>>>
>>>
>>>  4. So let’s maybe use .wav or .caf for the “mastering format”.
Microsoft and Apple already allow more than 8 channels...
>>
>>
>> Sure. Lossy codecs are not suited for mastering.
>>
>>> P.S.: “Joint stereo” you could classify as parametric coding.
>>
>>
>> Ok.
>>
>>
>>
>>  Marc


Re: [Sursound] oktava 1st order mic

2018-03-12 Thread Aaron Heller
The radius of the tetrahedral array determines the frequency at which the
B-format polar patterns start to break down. The formula given by Gerzon is
c/(pi*r), where c is the speed of sound and r is the radius of the
array[1]. Depending on the design, the acoustic radius is about 10% larger
than the physical radius, because the sound has to diffract around the
structures. So, in round numbers, 10/r kHz, with r in cm. In a Soundfield
mic, the physical radius is 1.47 cm, so around 6.8 kHz. The Oktava is over 4
cm, so less than 2.5 kHz.  Note that very small capsules tend to be noisy,
so there is a tradeoff between noise and integrity of the patterns at high
frequencies.
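A quick back-of-the-envelope check of those numbers (a sketch of my own; the 10% acoustic-radius factor is the approximation mentioned above):

```python
import math

def breakdown_freq_hz(radius_m, c=343.0, acoustic_factor=1.1):
    """Gerzon's f = c / (pi * r), evaluated at the acoustic radius
    (taken here as ~10% larger than the physical radius)."""
    return c / (math.pi * radius_m * acoustic_factor)

print(round(breakdown_freq_hz(0.0147)))  # Soundfield, ~6.8 kHz
print(round(breakdown_freq_hz(0.04)))    # a 4 cm array, ~2.5 kHz
```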

In many of the 3D printed designs, the array is not open enough and the
interior space behind the capsules becomes a resonant chamber. This causes
peaks, dips and phase shifts in the response of the individual capsules
that are difficult to correct and affect the resulting patterns. There is
also the general geometry of the microphone body that tells you how much
care went into the design in terms of acoustic shadowing, reflections, and
diffraction. The large flat surface on the top of the preamp enclosure in
the Octava does not look good to me.

Part of the magic of a tetrahedral microphone is that the free- and
diffuse-field responses track each other. To achieve this, it is important
that the directivities of the four capsules are well matched [2].
Calibration can compensate for this to some degree, but the better the
capsules match, the better the result will be.  The only way to do this is
have a large collection of capsules, measure them individually, pick sets
of four, and then calibrate the entire array.  I know that Core Sound does
this (and Calrec did this). I don't know about other companies. In general,
I am suspicious of any tetrahedral mic that uses generic A-to-B conversion,
with no individual calibration.

[1] M. A. Gerzon, "The Design of Precisely Coincident Microphone Arrays for
Stereo and Surround Sound," 50th AES Convention Preprints, London, no. 20,
1975.

[2] A. J. Heller and E. M. Benjamin, "Calibration of Soundfield Microphones
using the Diffuse-Field Response," 133rd AES Convention Preprints, San
Francisco, no. 7811, 2012.


On Sun, Mar 11, 2018 at 3:13 PM, Peter P.  wrote:
>
> * Len Moskowitz  [2018-03-11 18:48]:
> > Gerard Lardner wrote:
> >
> > > Fons Adriaensen in Italy calibrated my Oktava. I believe Richard Lee
in
> > > Australia might still offer a calibration service, though he appears
to
> > > be less active on the internet these days, and I think Core Sound in
the
> > > USA also will do it - they used to say it on their website, but I
> > > haven't checked lately.
> >
> > We could, but in general we can confidently state that Oktava doesn't
> > understand how to build a first-order ambisonic microphone, and the
cost and
> > effort to calibrate it is not worthwhile.
>
> Thank you for your opinion Len. I am tempted to ask 'why' but let me
> ask instead what are the most difficult things to get right when
> building a first-order microphone.
>
> best, P


Re: [Sursound] Sursound Digest, Vol 116, Issue 3

2018-03-08 Thread Aaron Heller
I have a "Mid-2010" Mac Mini connected to a Sherwood Newcastle R-972 via
HDMI. The "direct" input to the receiver shows up on the Mac Mini as an
8-channel audio output device. I'm not a MaxMSP user but it works fine with
Plogue Bidule.  You set up the channel mappings with the Audio MIDI Setup
program.

Aaron Heller
Menlo Park, CA  US

On Thu, Mar 8, 2018 at 5:54 AM, Augustine Leudar <augustineleu...@gmail.com>
wrote:
>
> Hi Marc,
> The processing involved here is not too CPU intensive - rather it's whether
> the graphics card is capable of transmitting 8 channels of audio over HDMI
> that I am wondering about.
> Marc - it's a commercial project and we will be connected online to
> update/make changes if necessary. As mentioned, it's possible to
> automatically tell the Max patch which drivers to use etc. I totally agree
> with you on "if it can go wrong it will" and it has to be rock solid - the
> last permanent install we did, the cleaners kept unplugging it - we'd get
> the call "it's not working!!" - we'd drive two hours to get there and switch
> it back on. Lessons learnt. Linux would be my preferred choice for
> stability but it doesn't have the software tools to do what is needed in
> this instance (I'm a big fan of SoundScape Renderer on Linux), nor are those
> involved in the project proficient in Linux. I have however found Windows
> 10 to be a complete pain in the ass for anything that's installed for more
> than a couple of days. This will be for a year.
> best
> Gus
>
> On 8 March 2018 at 09:50, Dave Hunt <davehuntau...@btinternet.com> wrote:
>
> > Hi,
> >
> > Mac Minis are not that expensive, though whether they are adequate
depends
> > on how much audio processing is required.
> >
> > After set up you may not need a monitor, keyboard or mouse, though these
> > are fairly cheap and readily available now. These are only required if
> > something goes wrong, usually when an audio interface or other external
> > unit is not powered up when the computer boots. They can be set up to boot
> > and run a program automatically at specified times, or to run a program
> > when booted.
> >
> > If it is a long term install, the cost of this part of the system
becomes
> > very low compared to other installation costs, and the daily management
of
> > the whole.
> >
> > Ciao,
> >
> > Dave Hunt
> >
> >
> > On 8 Mar 2018, at 01:34, sursound-requ...@music.vt.edu wrote:
> >
> > > From: Augustine Leudar <augustineleu...@gmail.com>
> > > Subject: Re: [Sursound] realtime 5.1 streaming over hdmi ?
> > > Date: 7 March 2018 18:37:46 GMT
> > > To: Surround Sound discussion group <sursound@music.vt.edu>
> > >
> > >
> > > Yes we got it working immediately on the Mac - one channel on
> > speaker.bBut
> > > obviously there's an extra cist involved for client if they have to
buy a
> > > Mac! Its a longterm install.
> > >
> > > On Wednesday, 7 March 2018, jim moses <jmo...@brown.edu> wrote:
> > >
> > >> The Mac HDMI out sounds like your best bet. You may need to work with
> > the
> > >> audio-midi-setup app to make it work (seeting the # of channels and
then
> > >> assignments in 'configure speakers').
> > >>
> > >> jim
> >
>
>
>
> --
> Dr. Augustine Leudar
> Artistic Director Magik Door LTD
> Company Number : NI635217
> Registered 63 Ballycoan rd,
> Belfast BT88LL


Re: [Sursound] Twirling 720 Microphone

2018-03-08 Thread Aaron Heller
A friend bought one, so I've been doing some reverse engineering.

In the 96kHz stereo file, the samples are interleaved

L: 1313131313
R: 2424242424

litepc2aformat reformats this to a 48kHz, 4-channel file

1: 1
2: 2
3: 3
4: 4

I also measured the impulse responses of Twirling720 Studio.app when
converting from 4-channel A-format to 4-channel B-format FuMa. It is just a
4x4 matrix, no coincidence correction or any other filters.  (view this in
a fixed-width font)

          1         2         3         4
        FRU       BRD       FLD       BLU
W    0.0354    0.0354    0.0354    0.0354
X    0.0107   -0.1220    0.1220   -0.0107
Y   -0.1220   -0.0107    0.0107    0.1220
Z    0.0866   -0.0866   -0.0866    0.0866


If you invert that matrix and convert the velocity components to polar,
they assume the mics are pointed at,

az    -85       -175          5         95
el     35.2644   -35.2644   -35.2644    35.2644

so the whole mic array is rotated 40 degrees clockwise. The elevations are
correct for a tetrahedral array.

Also, the Euclidean length of the X, Y, Z components equals W*sqrt(2), so
the gains are for perfect cardioid capsules.

Looking at some sample recordings that were made well into the diffuse
field of a fairly reverberant hall, W looks about 3 dB too low, so I'm
guessing the capsules are more directional than cardioids, something
between a cardioid and supercardioid (which would make W 4.6 dB too low).

Without coincidence correction filters, the diffuse field response will be
wrong.
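For anyone who wants to reproduce those angles, a small numpy sketch: for this symmetric design, the per-capsule velocity components can be read straight off the matrix columns, so no inversion is needed.

```python
import numpy as np

# A-to-B gains as measured (rows W, X, Y, Z; columns FRU, BRD, FLD, BLU)
M = np.array([
    [ 0.0354,  0.0354,  0.0354,  0.0354],  # W
    [ 0.0107, -0.1220,  0.1220, -0.0107],  # X
    [-0.1220, -0.0107,  0.0107,  0.1220],  # Y
    [ 0.0866, -0.0866, -0.0866,  0.0866],  # Z
])

# Each column's X, Y, Z entries point along that capsule's look direction.
for name, (w, x, y, z) in zip(["FRU", "BRD", "FLD", "BLU"], M.T):
    az = np.degrees(np.arctan2(y, x))
    el = np.degrees(np.arctan2(z, np.hypot(x, y)))
    print(f"{name}: az {az:8.2f}  el {el:8.2f}")
# → az ≈ -85, -175, +5, +95 degrees; el ≈ ±35.26 degrees
```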

Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US



On Thu, Mar 8, 2018 at 4:27 PM, Marc Lavallée <m...@hacklava.net> wrote:
>
> Hi Steven, and thanks for the useful info.
>
> The fact that the Twirling720 presents itself as a stereo 96Khz
> device and the possibility that the 4 channels are "encoded" in 2
> channels is very interesting, because it could potentially be used with
> any computer. I still don't use mine because the Android app is asking
> for too many permissions on my phone. I'm waiting for the official SDK,
> hoping I can program simple and safe custom apps. I also plan to build
> a mount because holding it only from its USB plug is very risky; a
> modified phone shell could be a good start.
>
> Le Thu, 8 Mar 2018 18:40:20 +
> Steven Boardman <boardroomout...@gmail.com> a écrit:
>
> > I have one too. Been using it successfully for a while with my android
> > phone.
> > It does seem to present itself to other audio apps as  2 channel 96khz
> > device.
> >  When using the apk it records either stereo, or 4 channel  A and B
> > format at 48kHz. I think there's some sort of matrixing going on.
> > I haven't done any vertical tests, but the horizontal works well for
> > the price.
> > It is also pretty easy to rotate the capsule spindle, so not sure how
> > accurate the positioning is. Mine is also not quite perpendicular!
> > The manual is useless.
> > They are quick to fix bugs, and implement suggestions though. (the
> > A-Format one was mine.)
> > It's way better than an H2n in my opinion, and really easy to carry, as
> > I always have my phone anyway.
> > Because of this I use it a lot, as I carry it at all times.
> > I just have to make a mount for use with my Samsung gear 360.
> >
> > Best
> >
> > Steve
> >
> > On 8 Mar 2018 16:05, "John Leonard Main" <j...@johnleonard.uk> wrote:
> >
> > > Mine (pre-ordered for some small amount) arrived a couple of days
> > > ago and I’ve got it hooked up to my MacBook via a suitable USB
> > > adapter and an old Apple keyboard extension cable. At first, I
> > > couldn't get a sensible signal out of it, but then discovered that
> > > it needs to be connected via USB3, or it won’t work. Then I took a
> > > look at the capsule orientation, which, although it is indeed a
> > > tetrahedral array, seems to be skewed by 45º off centre, but as the
> > > output is encoded in some way into two channels, this may not be a
> > > problem. By using their 720 Studio app, I can get a sort-of
> > > surround signal out of it, although it appears to have no vertical
> > > information. The skimpy on-line manual is pretty useless for Mac
> > > users, so I wonder if anyone else has had better or more consistent
> > > results?
> > >
> > > Bruce - I could send it to you for chamber analysis, if you’re
> > > interested.
> > >
> > > All the best,
> > >
> > > John

Re: [Sursound] Twirling 720 Microphone

2018-03-08 Thread Aaron Heller
One correction: before you invert the A-to-B gain matrix, you have to
account for the -3 dB gain on W in B-format FuMa.

On Thu, Mar 8, 2018 at 8:35 PM, Aaron Heller <ajhel...@gmail.com> wrote:

> A friend bought one, so I've been doing some reverse engineering.
>
> In the 96kHz stereo file, the samples are interleaved
>
> L: 1313131313
> R: 2424242424
>
> litepc2aformat reformats this to a 48kHz, 4-channel file
>
> 1: 1
> 2: 2
> 3: 3
> 4: 4
>
> I also measured the impulse responses of Twirling720 Studio.app when
> converting from 4-channel A-format to 4-channel B-format FuMa. It is just a
> 4x4 matrix, no coincidence correction or any other filters.  (view this in
> a fixed-width font)
>
> 1 2 3 4
> FRU   BRD   FLD   BLU
> W   0.03540.03540.03540.0354
> X   0.0107   -0.12200.1220   -0.0107
> Y  -0.1220   -0.01070.01070.1220
> Z   0.0866   -0.0866   -0.08660.0866
>
>
> If you invert that matrix and convert the velocity components to polar,
> they assume the mics are pointed at,
>
> az
>-85.
>   -175.
>  5.
> 95.
>
> el
> 35.2644
>-35.2644
>-35.2644
> 35.2644
>
> so the whole mic array is rotated 40 degrees clockwise. The elevations are
> correct for a tetrahedral array.
>
> Also the euclidean length of the X, Y, Z components equals W*sqrt(2), so
> the gains are for perfect cardioid capsules.
>
> Looking at some sample recordings that were made well into the diffuse
> field of a fairly reverberant hall, W looks about 3 dB too low, so I'm
> guessing the capsules are more directional that cardioids, something
> between a cardioid and supercardoid (which would make W 4.6 dB too low).
>
> Without coincidence correction filters, the diffuse field response will be
> wrong.
>
> Aaron Heller (hel...@ai.sri.com)
> Menlo Park, CA  US
>

Re: [Sursound] Sursound Digest, Vol 116, Issue 3

2018-03-08 Thread Aaron Heller
One more detail... I had to create an Aggregate audio device to get it
working with Jack. I don't remember the details, but could refresh my
memory if anyone needs them.

On Thu, Mar 8, 2018 at 8:59 PM, Aaron Heller <ajhel...@gmail.com> wrote:

> I have a "Mid-2010" Mac Mini connected to a Sherwood Newcastle R-972 via
> HDMI. The "direct" input to the receiver shows up on the Mac Mini as an
> 8-channel audio output device. I'm not a MaxMSP user but it works fine with
> Plogue Bidule.  You set up the channel mappings with the Audio MIDI Setup
> program.
>
> Aaron Heller
> Menlo Park, CA  US
>


[Sursound] Into the Soundfield

2018-10-14 Thread Aaron Heller
A new site hosting Stephen Thorton's diaries and a new film about Michael
Gerzon and Ambisonics at Oxford

 https://intothesoundfield.music.ox.ac.uk

All very nicely done.

Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA  US


Re: [Sursound] Octomic Impulses Up North

2018-09-22 Thread Aaron Heller
This is known as the "substitution method" in microphone calibration
literature.

On Sat, Sep 22, 2018 at 9:07 AM Fons Adriaensen  wrote:

> On Sun, Sep 16, 2018 at 12:18:54PM -0600, Jonathan Kawchuk wrote:
>
> > I'm also curious if anyone has experimented with running an inverse EQ
> > calibration curve for your your sweep speakers in order to compensate for
> > inherent peaks and valleys in the speaker’s frequency response during
> > impulse response capture. Not sure if this is needed or if a sweep
> > generally irons out these inconsistencies?
>
> The way this is usually done is to record a sweep using an omni
> measurement mic, compute the inverse EQ from that and apply it
> to the measured IRs, not to the speaker signal (the net effect
> is the same).
>
> --
> FA
>
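A minimal numpy sketch of the approach Fons describes (the function name and regularization constant are my own, not from any particular tool): divide each measured IR's spectrum by the reference measurement made with the omni mic.

```python
import numpy as np

def apply_inverse_eq(ir, ref_ir, n_fft=8192, eps=1e-3):
    """Apply the inverse of the speaker response (as captured by an omni
    reference mic) to a measured IR. eps regularizes deep nulls so the
    inverse doesn't boost them into ringing."""
    IR = np.fft.rfft(ir, n_fft)
    REF = np.fft.rfft(ref_ir, n_fft)
    inv = np.conj(REF) / (np.abs(REF) ** 2 + eps)  # regularized 1 / REF
    return np.fft.irfft(IR * inv, n_fft)
```

Applying the correction to the measured IRs, rather than pre-EQing the sweep, has the same net effect, as noted above.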


Re: [Sursound] Anyone ever tried to bypass youtube/facebook360 player Ambisonics decoder?

2019-02-20 Thread Aaron Heller
If it is any help, the script I wrote to make YouTube videos from AMB files
is here:

   https://bitbucket.org/ambidecodertoolbox/amb2yt/src

Some samples that might help you reverse engineer the format

https://youtu.be/eY9DMn8pgGA

https://youtu.be/RC4ptd9B-NA

You could make a file with isolated W, X, Y, and Z content, upload, then
download and see where the channels end up.
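Along those lines, a short Python sketch (file name is arbitrary) that writes a 6-channel WAV with a distinct tone per channel, so each channel can be identified by frequency after the upload/download round trip:

```python
import wave

import numpy as np

# One distinct tone per channel, so after a round trip each channel
# can be identified by its frequency.
sr, dur = 48000, 2.0
t = np.arange(int(sr * dur)) / sr
freqs = [440, 550, 660, 770, 880, 990]   # channels 1..6
sig = np.stack([np.sin(2 * np.pi * f * t) for f in freqs], axis=1)
pcm = (sig * 0.5 * 32767).astype('<i2')  # 16-bit PCM at -6 dBFS

with wave.open('channel_id_test.wav', 'wb') as w:
    w.setnchannels(6)
    w.setsampwidth(2)
    w.setframerate(sr)
    w.writeframes(pcm.tobytes())
```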

Aaron

On Mon, Feb 18, 2019 at 11:08 AM David Pickett  wrote:

> Why not make up a signal from six totally
> different mono wavefiles and see where they land after decoding?
>
> David
>
> At 17:33 18-02-19, you wrote:
>6 channel format on YT:
> >
> >
> https://github.com/google/spatial-media/blob/master/docs/spatial-audio-rfc.md
> >
> >So channel ordering (normally) is W, Y, Z, X, L, R.
> >
> >It is possible to change the channel layout, which might be a
> >problematic feature...
> >
> >“For example, a channel layout of 4, 5, 0, 1, 2, 3 indicates that the
> >layout of the stored audio is /L/, /R/, /W/, /Y/, /Z/, /X/.”
> >
> >Best,
> >
> >Stefan
>
>