Re: [Sursound] Help: what am I doing wrong?

2017-08-11 Thread Steven Boardman
Hi Martin

Did you ever get to the source of the problem?

Best

Steve

On 5 Jul 2017 23:11, "Martin Dupras"  wrote:

I've deployed a 21-speaker near spherical array a few days ago, which
I think is working ok, but I'm having difficulty with playing back
some first order A-format recordings on it. They sound really very
diffuse and not very localised at all. I figured that some of you good
people on here might have some idea of where I might be going wrong or
what is not right.

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


Re: [Sursound] Help: what am I doing wrong?

2017-07-15 Thread Bo-Erik Sandholm
https://ambisonic.info/info/ricardo/decoder.html

Good info, compiled to refresh the basics of FOA decoding.

On 7 Jul 2017 19:20, "Sampo Syreeni"  wrote:

> On 2017-07-06, Aaron Heller wrote:
>
> The decoders produced by my toolbox in FAUST (the ".dsp" files) have
>> distance, level, and near-field compensation up to 5th-order (and more
>> soon). Those can be compiled to a large number of plugin types, including
>> VST, AU, MaxMSP, ...
>>
>
> ...and we like it. ;)
> --
> Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
> +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2



Re: [Sursound] Help: what am I doing wrong?

2017-07-07 Thread Sampo Syreeni

On 2017-07-06, Aaron Heller wrote:

The decoders produced by my toolbox in FAUST (the ".dsp" files) have 
distance, level, and near-field compensation up to 5th-order (and more 
soon). Those can be compiled to a large number of plugin types, 
including VST, AU, MaxMSP, ...


...and we like it. ;)
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Help: what am I doing wrong?

2017-07-07 Thread Matthias Kronlachner

Hi Aaron,

yes, this is true.

However, there is mcfx_gain_delay, which can be used for delay and gain 
compensation (and for sending test signals to cross-check).


The reason I keep this separate is that I think of speaker adjustment 
as a distinct topic/step from generating the decoder.


In practice you might not know the exact xyz coordinates of your 
speakers; rather, you have a more-or-less accurate estimate of azimuth 
and elevation to generate your decoder. Furthermore, the 
speakers/amps/DSPs/converters in your playback system might not have 
equal gain (and of course there are the particularities of the room at 
each speaker position...).
Therefore, using Cartesian/spherical coordinates as the basis for your 
gain and delay compensation might not be the best choice in the real world.


I usually do a measurement with an omni mic in the sweet spot to get 
gain and delay corrected, and of course cross-check with your ears. 
(Being overly precise with delay compensation might also not be the 
best choice, though, and can increase coloration.)
In my installations there is often a digital mixer or separate computer 
involved feeding the loudspeakers, so I set the correction, including 
some filtering, there.
This is the step that does not depend on the spatialization method, and 
it should be done anyway for any loudspeaker setup. That way I don't 
have to worry about speaker calibration afterwards when using different 
computers to play back on the array with whatever fancy spatialization 
sound designers and composers come up with.
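As a sketch of that omni-mic measurement step: per-speaker delay and gain could be estimated from a recording of a known test signal played through each speaker. This is a hypothetical helper (not part of mcfx or spat~), using cross-correlation for the delay and an RMS ratio for the gain:

```python
import numpy as np

def delay_and_gain(recorded, reference, fs):
    """Estimate one speaker's arrival delay (s) and broadband gain from an
    omni-mic recording of a known test signal (hypothetical helper)."""
    # The lag of the cross-correlation peak is the propagation delay.
    corr = np.correlate(recorded, reference, mode="full")
    lag = int(np.argmax(np.abs(corr))) - (len(reference) - 1)
    # RMS ratio as a crude broadband gain estimate.
    gain = np.sqrt(np.mean(recorded ** 2) / np.mean(reference ** 2))
    return lag / fs, gain

# Toy check: the test click comes back 50 samples later at half amplitude.
fs = 48000
ref = np.zeros(1024); ref[0] = 1.0
rec = np.zeros(1024); rec[50] = 0.5
delay, gain = delay_and_gain(rec, ref, fs)
```

In practice you would use a sweep rather than a click, and deconvolve, but the lag-of-the-peak idea is the same.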


spat~ has some nice tools that help you measure gain and delay very 
quickly.


That said, I released the first-order version of ambix mainly for 
completeness. I don't really recommend first order if you want to 
create a wow effect, as it is way too blurry. Active upmixing or 
higher-order mics are the way to go; the current hype has not caught 
up with this yet.


Best,
Matthias

On 07.07.2017 04:56, Aaron Heller wrote:

The
Ambix decoder is a matrix multiplication only, with no distance
compensation (please correct me if I'm wrong, Matthias).




Re: [Sursound] Help: what am I doing wrong?

2017-07-06 Thread Aaron Heller
The decoders produced by my toolbox in FAUST (the ".dsp" files) have
distance, level, and near-field compensation up to 5th-order (and more
soon). Those can be compiled to a large number of plugin types, including
VST, AU, MaxMSP, ...

   https://bitbucket.org/ambidecodertoolbox/adt

Aaron

On Thu, Jul 6, 2017 at 1:53 PM, Sampo Syreeni  wrote:

> On 2017-07-05, Martin Dupras wrote:
>
> I've deployed a 21-speaker near spherical array a few days ago, which I
>> think is working ok, but I'm having difficulty [...]
>>
>
> Oh, and by the way, *please* compensate each speaker for 1) its
> propagation delay to the central sweet spot, and also 2) its frequency and
> distance dependent proximity effect. Both compensations can be done
> analytically, with the second one being par for the course for close range,
> domestic POA setups of the old kind. In that circuit the first one is more
> or less subsumed or at least approximated by the second one already.
> However if you *do* happen to use speakers at widely varying distances from
> the sweet spot, and you *do* happen to be able to do modern digital
> correction, *do* correct for absolute delay as well. It *will* make a
> difference, especially at the lowest orders. After all, you just said
> you're working with a "near-spherical array"; pretty much by definition
> that means not all of the speakers experience equal propagation delay
> towards the sweet spot...
> --
> Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
> +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2



Re: [Sursound] Help: what am I doing wrong?

2017-07-06 Thread Sampo Syreeni

On 2017-07-05, Martin Dupras wrote:

I've deployed a 21-speaker near spherical array a few days ago, which 
I think is working ok, but I'm having difficulty [...]


Oh, and by the way, *please* compensate each speaker for 1) its 
propagation delay to the central sweet spot, and also 2) its frequency 
and distance dependent proximity effect. Both compensations can be done 
analytically, with the second one being par for the course for close 
range, domestic POA setups of the old kind. In that circuit the first 
one is more or less subsumed or at least approximated by the second one 
already. However if you *do* happen to use speakers at widely varying 
distances from the sweet spot, and you *do* happen to be able to do 
modern digital correction, *do* correct for absolute delay as well. It 
*will* make a difference, especially at the lowest orders. After all, 
you just said you're working with a "near-spherical array"; pretty much 
by definition that means not all of the speakers experience equal 
propagation delay towards the sweet spot...
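In code, the purely geometric part of that compensation (leaving out the frequency-dependent proximity effect, which needs a filter) can be sketched as follows. The speed of sound and the align-to-the-farthest-speaker convention are assumptions:

```python
import numpy as np

C = 343.0  # assumed speed of sound in m/s at room temperature

def delay_gain_trims(radii_m):
    """Per-speaker trims that time- and level-align a ragged rig to its
    farthest speaker, as heard at the central sweet spot."""
    r = np.asarray(radii_m, dtype=float)
    r_max = r.max()
    delays = (r_max - r) / C   # delay the nearer speakers (seconds)
    gains = r / r_max          # attenuate the nearer speakers (1/r law)
    return delays, gains

# Example: a "near-spherical" rig with radii between 1.8 m and 2.4 m.
delays, gains = delay_gain_trims([2.4, 2.0, 1.8])
```

The farthest speaker gets zero delay and unity gain; everything else is pulled back to match it.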

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Help: what am I doing wrong?

2017-07-06 Thread Sampo Syreeni

On 2017-07-05, Aaron Heller wrote:

1. You should use a first-order decoder to play first-order sources. 
That's not the same as playing a first-order file into the first-order 
inputs of a third-order decoder.


What Aaron said. The optimum decoders at different orders aren't 
comparable to each other. You'd like to think so, but unfortunately it 
really isn't the case. It isn't even the case that you can pseudo-invert 
UHJ into pantophonic B-format without needing a separate decoder matrix.


The worst thing is that the lower your limiting system order, the more 
you're relying on the precise psychoacoustics of an optimal decoder. 
Straying from it by naïvely feeding first order B-format into a second 
order decoder will be *much* worse, relatively speaking, than feeding 
second order into a third order one. Not to mention something which came 
from UHJ.


2. 1st-order periphonic (3D) ambisonics on a full 3D loudspeaker array 
gets the energy correct, and hence the sense of envelopment; 
localization is not that precise. The magnitude of the energy 
localization vector, rE, in this situation is only sqrt(3)/3, which 
Gerzon noted is “perilously close to being unsatisfactory." [1]


Feeding something like pantophony into a periphonic rig or vice versa is 
also suspect. Such setups lead to confusion between cylindrical and 
spherical harmonics, which means that the average intensity falloff gets 
mangled as well. While it remains sensible in angle, in radius around 
the sweet spot it doesn't. You can't rectify that problem even 
theoretically if you mix 2D and 3D ambisonic setups, with their 
topologically differing basis functions.


3. The decoders in the AmbiX plugins are single-band rE_max decoders, 
a dual-band decoder will improve localization for central listeners a 
bit. Both Ambdec and the FAUST decoders produced by the ADT (the 
".dsp" files) support 2-band decoding.


...and as I said above, at low orders we're relying more on the optimum 
psychoacoustic decode. A single-band rE_max just won't do there.


4. If you really want more precise localization, consider parametric 
decoding using Harpex or the Harpex-based upmixer plugin from Blue 
Ripple Sound.


And even before going with something like Harpex, which is essentially 
an attempt at dual-source active decoding, at least put in one of the 
newer, numerically optimized passive decoders, such as (was it?) Bruce 
Wiggins's Tabu-search-derived framework. Then, after trying that and 
Harpex, try out something like DirAC, from the newer active decoder 
family.


In my experience, it works very well with panned sources and acoustic 
recordings in dry environments (outdoors, dry hall). For recordings in 
very reverberant halls (like my recordings), the improvement is not 
that great.


Harpex does a limited number of direct sources rather well and stably. 
DirAC, on the other hand, does a higher number of sources, combined with 
ambience separation and spatial whitening. The two approaches appear to 
be complementary, but as of yet, I've never seen anybody implement them 
in the same active decoder. Nor have I seen anybody really take heed of 
the older passive decoder ideas in combination with any active decoder 
concept.



As for what someone said down this thread about the optimum number of 
speakers in an old discussion... That one started out with, was it, 
Furse's or Leese's "Giant Geese". The undeniable percept in wide area 
reproduction that sound sources just sound *way* too big, even if 
well localizable within the first order framework.


Correct me if I'm wrong, because I don't think anybody's put all of the 
pieces together in any one post, but... I believe that, especially after 
the NFC-HOA work, the many listening tests on sparse first-order 
reproduction arrays of various cardinalities boiled down to a couple of 
points.


First, optimum ambisonic playback with any rig isn't just dependent on 
angle, but on rig diameter as well. That's first seen in how near-field 
compensation makes the transmission format depend on the intended 
diameter. It was presaged by the original distance compensation 
circuitry of Plain Old Ambisonics of the Gerzon vein, which is precisely 
the first-order, rig-dependent part of NFC-HOA, just placed on the 
decoder side. Where it then can't fully compensate for...


...secondly, spatial aliasing caused by the sparseness of the rig. The 
compensation can hold at a single sweet point at the center, for a 
dual-band decoder. But over the whole audible band, the sweet spot is so 
small in the first-order case that the compensation necessarily falls 
short even at inter-ear distances. Suddenly we start to hear combing 
from the several speakers out there on the rim...


...leading to, third, some psychoacoustics which we didn't really 
expect. We always assumed that more speakers, and hence a denser rig, 
would automatically make for a better sound stage, because it comes 
closer to the ideal holophonic limit. But that's not really true
Re: [Sursound] Help: what am I doing wrong?

2017-07-06 Thread Steven Boardman
Or:

Ambisonic spherical convention is left-oriented. 
The +ve X axis points in the 0-degree direction for both azimuth and 
elevation.  
The azimuth increases counterclockwise towards the +ve Y axis.  
Elevation is +ve above the XY plane, and -ve below.  
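In that convention, the spherical-to-Cartesian conversion works out as below (a sketch; the function name is made up):

```python
import math

def sph_to_cart(azimuth_deg, elevation_deg, radius=1.0):
    """Ambisonic convention: +X front, +Y left, +Z up; azimuth counter-
    clockwise from +X, elevation up from the XY plane."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (radius * math.cos(az) * math.cos(el),
            radius * math.sin(az) * math.cos(el),
            radius * math.sin(el))
```

So azimuth 90, elevation 0 lands on the +Y axis (due left), and elevation 90 lands on +Z (straight up), regardless of azimuth.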

I got this wrong when I first used the ADT, but Aaron Heller (thanks), spotted 
it.
Dual-band decoding with parametric up-mixing (Harpex) is definitely 
recommended for first order on a 3rd-order rig.

You could also decode to fewer speakers, or to binaural, to check?

Best

Steve
> On 6 Jul 2017, at 00:39, Steven Boardman  wrote:
> 
> Check what convention the ADT uses for speaker co-ordinates. It may be that 
> you have some of your + or -  the wrong way round, or assigned to the 
> incorrect axis.  
> For Ambisonics convention, 
>  X is front and back, Y is left and right, Z is up and down. 
> It is a 90 degree rotation from graphics convention (+x right, +y forward, +z 
> up). 
> This would obviously make a big difference to the decode, and one that i have 
> been stumped by myself many times.. 
> 
> Steve 



Re: [Sursound] Help: what am I doing wrong?

2017-07-06 Thread Bo-Erik Sandholm
I remember a thread discussing  decoding and listening tests from a number
of years back.

For FOA and dual-band decoding, the best results were with not too many
speakers in the array.

My memory of the result, and my interpretation of the discussion, is that
6 speakers in the horizontal ring and 4 speakers each in the +60 and -60
degree rings would be ideal for FOA.
That is 14 channels total. More speakers gave a more diffuse result.

Bo-Erik

On 6 Jul 2017 03:16, "Aaron Heller"  wrote:

Forgot the URL...

http://www.ai.sri.com/~heller/ambisonics/index.html#test-files

On Wed, Jul 5, 2017 at 6:15 PM, Aaron Heller  wrote:

> I have some first-order test files that you can try. They're FuMa
> order/normalization. There's "eight directions" and some pink noise pans.
> With a good decoder, localization should be pretty good with these --
> better in the front than the back in my experience.
>
> Aaron
>
> On Wed, Jul 5, 2017 at 5:53 PM, Aaron Heller  wrote:
>
>> Hi Martin,
>>
>> A few things...
>>
>> 1. You should use a first-order decoder to play first-order sources.
>> That's not the same as playing a first-order file into the first-order
>> inputs of a third-order decoder.
>>
>> 2. 1st-order periphonic (3D) ambisonics on a full 3D loudspeaker array
>> gets the energy correct, and hence the sense of envelopment; localization
>> is not that precise.  The magnitude of the energy localization vector,
rE,
>> in this situation is only sqrt(3)/3, which Gerzon noted is “perilously
>> close to being unsatisfactory." [1]
>>
>> 3. The decoders in the AmbiX plugins are single-band rE_max decoders, a
>> dual-band decoder will improve localization for central listeners a bit.
>> Both Ambdec and the FAUST decoders produced by the ADT (the ".dsp" files)
>> support 2-band decoding.
>>
>> 4. If you really want more precise localization, consider parametric
>> decoding using Harpex or the Harpex-based upmixer plugin from Blue Ripple
>> Sound. In my experience, it works very well with panned sources and
>> acoustic recordings in dry environments (outdoors, dry hall). For
>> recordings in very reverberant halls (like my recordings), the
improvement
>> is not that great.
>>
>> Aaron (hel...@ai.sri.com)
>> Menlo Park, CA  US
>>
>>
>> [1]  Michael A. Gerzon. Practical Periphony: The Reproduction of
>> Full-Sphere Sound. Preprint 1571
>> from the 65th Audio Engineering Society Convention, London, February
>> 1980. AES E-lib http://www.aes.org/e-lib/browse.cfm?elib=3794.
>>
>>
>>
>> On Wed, Jul 5, 2017 at 3:10 PM, Martin Dupras 
>> wrote:
>>
>>> I've deployed a 21-speaker near spherical array a few days ago, which
>>> I think is working ok, but I'm having difficulty with playing back
>>> some first order A-format recordings on it. They sound really very
>>> diffuse and not very localised at all. I figured that some of you good
>>> people on here might have some idea of where I might be going wrong or
>>> what is not right.
>>>
>>> At the moment I'm using Reaper, and for decoding I'm using Matthias
>>> Kronlachner's Ambix decoder plug-in, with a configuration that I've
>>> calculated with Aaron Heller's Ambisonics Decoder Toolbox. I think the
>>> decoder configuration is right. I've calculated it with ambix ordering
>>> and scaling, and third order in H and V.  The speaker array has six
>>> speakers at floor level (-22 degrees elevation), eight at ear level at
>>> 1m70 (0 degrees elevation), six at 45 degrees elevation and one at the
>>> apex.
>>>
>>> Now: if I pan monophonic sources using a panner (e.g. o3a panner, 3rd
>>> order), the localisation is pretty good. I've tested that with several
>>> people by panning to random places and asking to blindly point out to
>>> where they hear the source. Generally, they're in about the right
>>> place (say within 45 degrees on average.)
>>>
>>> On the other hand, if I play 1st order A-format recordings (mostly
>>> that I've made using our Core TetraMic), the localisation of sources
>>> is pretty poor. I also tried with the "xyz.wav" example file from Core
>>> (https://www.vvaudio.com/downloads) with the same results. To convert
>>> from A-format to B-format, I've tried using Core's VVtetraVST plugin
>>> with the calibration files for the mic (followed by the o3a FuMa to
>>> Ambix converter), and the Sennheiser Ambeo plugin (which does the
>>> same job, but in Ambix form already.)
>>>
>>> So what am I doing wrong? I've spent the last couple of days checking
>>> everything thoroughly. I've calibrated all the speakers to within 1dB
>>> SPL for the same signal received with an omni mic at the centre of the
>>> sphere. I've triple-checked that the encoder is in the right channel
>>> numbering:
>>>
>>> //--- decoder information ---
>>> // decoder file =
>>> ../decoders/BSU_Array_6861_RAE1_3h3v_allrad_5200_rE_max_2_band.config
>>> // speaker array name = BSU_Array_6861_RAE1
>>> // horizontal order   = 3

Re: [Sursound] Help: what am I doing wrong?

2017-07-05 Thread Aaron Heller
Forgot the URL...

http://www.ai.sri.com/~heller/ambisonics/index.html#test-files

On Wed, Jul 5, 2017 at 6:15 PM, Aaron Heller  wrote:

> I have some first-order test files that you can try. They're FuMa
> order/normalization. There's "eight directions" and some pink noise pans.
> With a good decoder, localization should be pretty good with these --
> better in the front than the back in my experience.
>
> Aaron
>
> On Wed, Jul 5, 2017 at 5:53 PM, Aaron Heller  wrote:
>
>> Hi Martin,
>>
>> A few things...
>>
>> 1. You should use a first-order decoder to play first-order sources.
>> That's not the same as playing a first-order file into the first-order
>> inputs of a third-order decoder.
>>
>> 2. 1st-order periphonic (3D) ambisonics on a full 3D loudspeaker array
>> gets the energy correct, and hence the sense of envelopment; localization
>> is not that precise.  The magnitude of the energy localization vector, rE,
>> in this situation is only sqrt(3)/3, which Gerzon noted is “perilously
>> close to being unsatisfactory." [1]
>>
>> 3. The decoders in the AmbiX plugins are single-band rE_max decoders, a
>> dual-band decoder will improve localization for central listeners a bit.
>> Both Ambdec and the FAUST decoders produced by the ADT (the ".dsp" files)
>> support 2-band decoding.
>>
>> 4. If you really want more precise localization, consider parametric
>> decoding using Harpex or the Harpex-based upmixer plugin from Blue Ripple
>> Sound. In my experience, it works very well with panned sources and
>> acoustic recordings in dry environments (outdoors, dry hall). For
>> recordings in very reverberant halls (like my recordings), the improvement
>> is not that great.
>>
>> Aaron (hel...@ai.sri.com)
>> Menlo Park, CA  US
>>
>>
>> [1]  Michael A. Gerzon. Practical Periphony: The Reproduction of
>> Full-Sphere Sound. Preprint 1571
>> from the 65th Audio Engineering Society Convention, London, February
>> 1980. AES E-lib http://www.aes.org/e-lib/browse.cfm?elib=3794.
>>
>>
>>
>> On Wed, Jul 5, 2017 at 3:10 PM, Martin Dupras 
>> wrote:
>>
>>> I've deployed a 21-speaker near spherical array a few days ago, which
>>> I think is working ok, but I'm having difficulty with playing back
>>> some first order A-format recordings on it. They sound really very
>>> diffuse and not very localised at all. I figured that some of you good
>>> people on here might have some idea of where I might be going wrong or
>>> what is not right.
>>>
>>> At the moment I'm using Reaper, and for decoding I'm using Matthias
>>> Kronlachner's Ambix decoder plug-in, with a configuration that I've
>>> calculated with Aaron Heller's Ambisonics Decoder Toolbox. I think the
>>> decoder configuration is right. I've calculated it with ambix ordering
>>> and scaling, and third order in H and V.  The speaker array has six
>>> speakers at floor level (-22 degrees elevation), eight at ear level at
>>> 1m70 (0 degrees elevation), six at 45 degrees elevation and one at the
>>> apex.
>>>
>>> Now: if I pan monophonic sources using a panner (e.g. o3a panner, 3rd
>>> order), the localisation is pretty good. I've tested that with several
>>> people by panning to random places and asking to blindly point out to
>>> where they hear the source. Generally, they're in about the right
>>> place (say within 45 degrees on average.)
>>>
>>> On the other hand, if I play 1st order A-format recordings (mostly
>>> that I've made using our Core TetraMic), the localisation of sources
>>> is pretty poor. I also tried with the "xyz.wav" example file from Core
>>> (https://www.vvaudio.com/downloads) with the same results. To convert
>>> from A-format to B-format, I've tried using Core's VVtetraVST plugin
>>> with the calibration files for the mic (followed by the o3a FuMa to
>>> Ambix converter), and the Sennheiser Ambeo plugin (which does the
>>> same job, but in Ambix form already.)
>>>
>>> So what am I doing wrong? I've spent the last couple of days checking
>>> everything thoroughly. I've calibrated all the speakers to within 1dB
>>> SPL for the same signal received with an omni mic at the centre of the
>>> sphere. I've triple-checked that the encoder is in the right channel
>>> numbering:
>>>
>>> //--- decoder information ---
>>> // decoder file =
>>> ../decoders/BSU_Array_6861_RAE1_3h3v_allrad_5200_rE_max_2_band.config
>>> // speaker array name = BSU_Array_6861_RAE1
>>> // horizontal order   = 3
>>> // vertical order = 3
>>> // coefficient order  = acn
>>> // coefficient scale  = SN3D
>>> // input scale= SN3D
>>> // mixed-order scheme = HV
>>> // input channel order: W Y Z X V T R S U Q O M K L N P
>>> // output speaker order: S01 S02 S03 S04 S05 S06 S07 S08 S09 S10 S11
>>> S12 S13 S14 S15 S16 S17 S18 S19 S20 S21
>>>
>>> I'll welcome any suggestion or advice!
>>>
>>> Thanks,
>>>
>>> - martin

Re: [Sursound] Help: what am I doing wrong?

2017-07-05 Thread Aaron Heller
Hi Martin,

A few things...

1. You should use a first-order decoder to play first-order sources. That's
not the same as playing a first-order file into the first-order inputs of a
third-order decoder.

2. 1st-order periphonic (3D) ambisonics on a full 3D loudspeaker array gets
the energy correct, and hence the sense of envelopment; localization is not
that precise.  The magnitude of the energy localization vector, rE, in this
situation is only sqrt(3)/3, which Gerzon noted is “perilously close to
being unsatisfactory." [1]
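Gerzon's figure can be reproduced numerically. A sketch, assuming a toy 8-speaker cube layout and a first-order max-rE sampling decode (both the layout and the gain formula are illustrative, not Martin's rig):

```python
import numpy as np

def energy_vector(speaker_dirs, gains):
    """Gerzon's energy localization vector rE for one decoded direction."""
    g2 = gains ** 2
    return (g2[:, None] * speaker_dirs).sum(axis=0) / g2.sum()

# Toy rig: 8 speakers at the vertices of a cube (a uniform 3D layout).
dirs = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)],
                dtype=float) / np.sqrt(3.0)

# First-order max-rE sampling decode of a plane wave from direction s:
# per-speaker gain 1 + 3*g1*(u.s), with the 3D order-1 weight g1 = 1/sqrt(3).
s = np.array([1.0, 0.0, 0.0])
gains = 1.0 + 3.0 * (1.0 / np.sqrt(3.0)) * (dirs @ s)

rE = energy_vector(dirs, gains)   # magnitude sqrt(3)/3 ~ 0.577
```

The magnitude comes out to sqrt(3)/3 in every direction, which is exactly the number Gerzon was worried about.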

3. The decoders in the AmbiX plugins are single-band rE_max decoders, a
dual-band decoder will improve localization for central listeners a bit.
Both Ambdec and the FAUST decoders produced by the ADT (the ".dsp" files)
support 2-band decoding.
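The idea behind a dual-band decode is to split the B-format feed with a crossover and use "basic" (rV) weights below it and max-rE weights above. A minimal numpy sketch for first order, 3D; the 400 Hz crossover and the simple complementary one-pole filter are stand-ins (real decoders use phase-matched crossovers):

```python
import numpy as np

# Per-order weights for a first-order 3D dual-band decode (illustrative):
# low band "basic" (rV) weights, high band max-rE weights.
WEIGHTS = {"low": (1.0, 1.0), "high": (1.0, 1.0 / np.sqrt(3.0))}

def one_pole_crossover(x, fs, fc=400.0):
    """Complementary first-order crossover: lp + hp reconstructs x exactly."""
    a = np.exp(-2.0 * np.pi * fc / fs)
    lp = np.empty_like(x)
    acc = 0.0
    for n, v in enumerate(x):
        acc = a * acc + (1.0 - a) * v
        lp[n] = acc
    return lp, x - lp

def dual_band_first_order(bformat, fs):
    """Weight ambiX-ordered WYZX channels per band, before the decode matrix."""
    out = np.empty_like(bformat)
    for ch in range(4):
        order = 0 if ch == 0 else 1      # ACN 0 is W; ACN 1..3 are order 1
        lp, hp = one_pole_crossover(bformat[ch], fs)
        out[ch] = WEIGHTS["low"][order] * lp + WEIGHTS["high"][order] * hp
    return out
```

The weighted signals then go through the same speaker matrix as a single-band decode; only the per-order weighting differs between the bands.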

4. If you really want more precise localization, consider parametric
decoding using Harpex or the Harpex-based upmixer plugin from Blue Ripple
Sound. In my experience, it works very well with panned sources and
acoustic recordings in dry environments (outdoors, dry hall). For
recordings in very reverberant halls (like my recordings), the improvement
is not that great.

Aaron (hel...@ai.sri.com)
Menlo Park, CA  US


[1]  Michael A. Gerzon. Practical Periphony: The Reproduction of
Full-Sphere Sound. Preprint 1571
from the 65th Audio Engineering Society Convention, London, February 1980.
AES E-lib http://www.aes.org/e-lib/browse.cfm?elib=3794.



On Wed, Jul 5, 2017 at 3:10 PM, Martin Dupras 
wrote:

> I've deployed a 21-speaker near spherical array a few days ago, which
> I think is working ok, but I'm having difficulty with playing back
> some first order A-format recordings on it. They sound really very
> diffuse and not very localised at all. I figured that some of you good
> people on here might have some idea of where I might be going wrong or
> what is not right.
>
> At the moment I'm using Reaper, and for decoding I'm using Matthias
> Kronlachner's Ambix decoder plug-in, with a configuration that I've
> calculated with Aaron Heller's Ambisonics Decoder Toolbox. I think the
> decoder configuration is right. I've calculated it with ambix ordering
> and scaling, and third order in H and V.  The speaker array has six
> speakers at floor level (-22 degrees elevation), eight at ear level at
> 1m70 (0 degrees elevation), six at 45 degrees elevation and one at the
> apex.
>
> Now: if I pan monophonic sources using a panner (e.g. o3a panner, 3rd
> order), the localisation is pretty good. I've tested that with several
> people by panning to random places and asking to blindly point out to
> where they hear the source. Generally, they're in about the right
> place (say within 45 degrees on average.)
>
> On the other hand, if I play 1st order A-format recordings (mostly
> that I've made using our Core TetraMic), the localisation of sources
> is pretty poor. I also tried with the "xyz.wav" example file from Core
> (https://www.vvaudio.com/downloads) with the same results. To convert
> from A-format to B-format, I've tried using Core's VVtetraVST plugin
> with the calibration files for the mic (followed by the o3a FuMa to
> Ambix converter), and the Sennheiser Ambeo plugin (which does the
> same job, but in Ambix form already.)
>
> So what am I doing wrong? I've spent the last couple of days checking
> everything thoroughly. I've calibrated all the speakers to within 1dB
> SPL for the same signal received with an omni mic at the centre of the
> sphere. I've triple-checked that the encoder is in the right channel
> numbering:
>
> //--- decoder information ---
> // decoder file =
> ../decoders/BSU_Array_6861_RAE1_3h3v_allrad_5200_rE_max_2_band.config
> // speaker array name = BSU_Array_6861_RAE1
> // horizontal order   = 3
> // vertical order = 3
> // coefficient order  = acn
> // coefficient scale  = SN3D
> // input scale= SN3D
> // mixed-order scheme = HV
> // input channel order: W Y Z X V T R S U Q O M K L N P
> // output speaker order: S01 S02 S03 S04 S05 S06 S07 S08 S09 S10 S11
> S12 S13 S14 S15 S16 S17 S18 S19 S20 S21
>
> I'll welcome any suggestion or advice!
>
> Thanks,
>
> - martin
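A side note on the quoted config: the "input channel order: W Y Z X V T R S U Q O M K L N P" line is just the ACN sequence spelled out in Furse-Malham letters, where channel index acn = l*l + l + m for degree l and order m. A quick sketch to invert that:

```python
import math

def acn_to_degree_order(acn):
    """Invert the ACN index: acn = l*l + l + m  ->  (degree l, order m)."""
    l = math.isqrt(acn)
    return l, acn - l * l - l
```

ACN 0..15 covers third order, i.e. 16 channels, matching the 16 letters in the config.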



Re: [Sursound] Help: what am I doing wrong?

2017-07-05 Thread Steven Boardman
Check what convention the ADT uses for speaker co-ordinates. It may be that
you have some of your + or -  the wrong way round, or assigned to the
incorrect axis.
For Ambisonics convention,
 X is front and back, Y is left and right, Z is up and down.
It is a 90 degree rotation from graphics convention (+x right, +y forward,
+z up).
This would obviously make a big difference to the decode, and one that I
have been stumped by myself many times.
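That 90-degree rotation is a one-liner; a sketch (function name made up):

```python
def graphics_to_ambisonics(x_right, y_forward, z_up):
    """Map graphics coordinates (+x right, +y forward, +z up) onto the
    ambisonic axes (+X front, +Y left, +Z up)."""
    return (y_forward, -x_right, z_up)
```

A point one metre to the right in graphics terms ends up at -Y in ambisonic terms, which is exactly the kind of sign flip that silently wrecks a decode.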

Steve

On 6 Jul 2017 00:02, "Martin Dupras"  wrote:


I'm not entirely sure I follow the speaker coordinates question. The
way I've done it is by putting the radius, azimuth and elevation for each
speaker in a CSV file for the ADT, which then calculates the right decoder
configuration for that particular array. I trust that the ADT and ambix
decoder are doing the right things, e.g. that "speaker 1 is at radius
4.5 m, 0 degrees azimuth and -22.5 degrees elevation" is actually
receiving the right mix of signals.

But this is useful, thank you. I wouldn't be surprised to find that
I've overlooked something, so any more things to check are welcome!

- martin



Re: [Sursound] Help: what am I doing wrong?

2017-07-05 Thread Martin Dupras
I've used the VVTetraVST plugin too, with FuMa->Ambix conversion,
using the actual calibration files for the mic. The Ambeo seems to
give slightly better results, but I'm willing to have more goes with
VVTetraVST if you think it's significant.

The way I have the routing in Reaper is that the track with the
4-channel content sends to parent channels 1-4 (but I've tried 1-16
too, just for luck, not that it should make any difference), to the
master which is 16-channels in, 21 out. I'm using a MadiFace
interface, and each output in Reaper is definitely going to the right
channel and the right speaker. (I've tested this by having a mono
source and routing it individually to each speaker.)

The speakers are all powered Genelecs. I've calibrated them such that
the same signal routed to any of them comes to within 1dB SPL of each
other at the centre of the hemisphere. The speakers are in phase by
virtue of being connected by way of a Studer Desk which takes MADI
input and outputs on 32 XLR outs.

I'm not entirely sure I follow the speaker coordinates question. The
way I've done it is by putting the radius, azimuth and elevation for each
speaker in a CSV file for the ADT, which then calculates the right decoder
configuration for that particular array. I trust that the ADT and ambix
decoder are doing the right things, e.g. that "speaker 1 is at radius
4.5 m, 0 degrees azimuth and -22.5 degrees elevation" is actually
receiving the right mix of signals.

But this is useful, thank you. I wouldn't be surprised to find that
I've overlooked something, so any more things to check are welcome!

- martin

On 5 July 2017 at 23:48, Steven Boardman  wrote:
> Ok.
>
> Just to rule things out, only use the Ambeo plugin for the Ambeo. If it's the
> TetraMic, then use its own B-format conversion.
>
> Does the routing in Reaper match that of the ADT and that of the actual
> speaker connections? Are all speakers connected in phase?
> Do your speaker co-ordinates follow the ambisonics convention, or the graphics convention?
>
> Steve
>
>
> On 5 Jul 2017 23:28, "Martin Dupras"  wrote:
>
> They're converted to Ambix. The A-format recordings are in a 4-channel
> track which has the Sennheiser Ambeo plug-in which converts from
> A-Format to Ambix B-format. (It's capable of FuMa and Ambix; I have it
on the latter.) I've also verified that the capsule order is correct.
> The Ambeo plugin expects the capsules to be FLU, FRD, BLD and BRU,
> which I believe to be the same on both the TetraMic and the Sennheiser
> Ambeo.
>
> - martin
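For the FLU/FRD/BLD/BRU capsule order mentioned above, the idealized A-format to (FuMa) B-format conversion is just a fixed sum/difference matrix. This sketch deliberately ignores the per-capsule calibration and diffuse-field filters that VVTetraVST or the Ambeo plugin apply, so it is only useful for checking signs and channel order:

```python
def a_to_b_fuma(flu, frd, bld, bru):
    """Idealized A->B for capsules Front-Left-Up, Front-Right-Down,
    Back-Left-Down, Back-Right-Up (no calibration filters)."""
    w = 0.5 * (flu + frd + bld + bru)   # omni
    x = 0.5 * (flu + frd - bld - bru)   # front-back figure-of-eight
    y = 0.5 * (flu - frd + bld - bru)   # left-right
    z = 0.5 * (flu - frd - bld + bru)   # up-down
    return w, x, y, z
```

A signal hitting only the FLU capsule should produce equal positive W, X, Y and Z, which is a quick way to confirm the capsule labelling.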


Re: [Sursound] Help: what am I doing wrong?

2017-07-05 Thread Steven Boardman
Ok.

Just to rule things out, only use the Ambeo plugin for the Ambeo. If it's the
TetraMic, then use its own B-format conversion.

Does the routing in Reaper match that of the ADT and that of the actual
speaker connections? Are all speakers connected in phase?
Do your speaker co-ordinates follow the ambisonics convention, or the graphics convention?

Steve


On 5 Jul 2017 23:28, "Martin Dupras"  wrote:

They're converted to Ambix. The A-format recordings are in a 4-channel
track which has the Sennheiser Ambeo plug-in which converts from
A-Format to Ambix B-format. (It's capable of FuMa and Ambix; I have it
on the latter.) I've also verified that the capsule order is correct.
The Ambeo plugin expects the capsules to be FLU, FRD, BLD and BRU,
which I believe to be the same on both the TetraMic and the Sennheiser
Ambeo.

- martin


Re: [Sursound] Help: what am I doing wrong?

2017-07-05 Thread Steven Boardman
Oops, just reread, sorry!
You are already converting from FuMa to Ambix.

Could it just be that it's 1st order?
I wouldn't class within 45 degrees as good for 3rd order, though...

Steve

On 5 Jul 2017 23:11, "Martin Dupras"  wrote:

> I've deployed a 21-speaker near spherical array a few days ago, which
> I think is working ok, but I'm having difficulty with playing back
> some first order A-format recordings on it. They sound really very
> diffuse and not very localised at all. I figured that some of you good
> people on here might have some idea of where I might be going wrong or
> what is not right.
>
> At the moment I'm using Reaper, and for decoding I'm using Matthias
> Kronlachner's Ambix decoder plug-in, with a configuration that I've
> calculated with Aaron Heller's Ambisonics Decoder Toolbox. I think the
> decoder configuration is right. I've calculated it with ambix ordering
> and scaling, and third order in H and V.  The speaker array has six
> speakers at floor level (-22 degrees elevation), eight at ear level at
> 1m70 (0 degrees elevation), six at 45 degrees elevation and one at the
> apex.
>
> Now: if I pan monophonic sources using a panner (e.g. o3a panner, 3rd
> order), the localisation is pretty good. I've tested that with several
> people by panning to random places and asking to blindly point out to
> where they hear the source. Generally, they're in about the right
> place (say within 45 degrees on average.)
>
> On the other hand, if I play 1st order A-format recordings (mostly
> that I've made using our Core TetraMic), the localisation of sources
> is pretty poor. I also tried with the "xyz.wav" example file from Core
> (https://www.vvaudio.com/downloads) with the same results. To convert
> from A-format to B-format, I've tried using Core's VVtetraVST plugin
> with the calibration files for the mic (followed by the o3a FuMa to
> Ambix converter), and the Sennheiser Ambeo plugin (which does the
> same job, but in Ambix form already.)
>
> So what am I doing wrong? I've spent the last couple of days checking
> everything thoroughly. I've calibrated all the speakers to within 1dB
> SPL for the same signal received with an omni mic at the centre of the
> sphere. I've triple-checked that the encoder is in the right channel
> numbering:
>
> //--- decoder information ---
> // decoder file =
> ../decoders/BSU_Array_6861_RAE1_3h3v_allrad_5200_rE_max_2_band.config
> // speaker array name = BSU_Array_6861_RAE1
> // horizontal order   = 3
> // vertical order = 3
> // coefficient order  = acn
> // coefficient scale  = SN3D
> // input scale = SN3D
> // mixed-order scheme = HV
> // input channel order: W Y Z X V T R S U Q O M K L N P
> // output speaker order: S01 S02 S03 S04 S05 S06 S07 S08 S09 S10 S11
> S12 S13 S14 S15 S16 S17 S18 S19 S20 S21
>
> I'll welcome any suggestion or advice!
>
> Thanks,
>
> - martin


Re: [Sursound] Help: what am I doing wrong?

2017-07-05 Thread Martin Dupras
They're converted to Ambix. The A-format recordings are in a 4-channel
track which has the Sennheiser Ambeo plug-in which converts from
A-Format to Ambix B-format. (It's capable of FuMa and Ambix; I have it
on the latter.) I've also verified that the capsule order is correct.
The Ambeo plugin expects the capsules to be FLU, FRD, BLD and BRU,
which I believe to be the same on both the TetraMic and the Sennheiser
Ambeo.

- martin


On 5 July 2017 at 23:24, Steven Boardman  wrote:
> When you are converting from A to B-format are you going to FuMa or Ambix
> convention?
>
> Generally most 1st order content is FuMa, so it will need to be converted
> to Ambix before your decode.
>
> Steve
>
> On 5 Jul 2017 23:11, "Martin Dupras"  wrote:
>
>> I've deployed a 21-speaker near spherical array a few days ago, which
>> I think is working ok, but I'm having difficulty with playing back
>> some first order A-format recordings on it. They sound really very
>> diffuse and not very localised at all. I figured that some of you good
>> people on here might have some idea of where I might be going wrong or
>> what is not right.
>>
>> At the moment I'm using Reaper, and for decoding I'm using Matthias
>> Kronlachner's Ambix decoder plug-in, with a configuration that I've
>> calculated with Aaron Heller's Ambisonics Decoder Toolbox. I think the
>> decoder configuration is right. I've calculated it with ambix ordering
>> and scaling, and third order in H and V.  The speaker array has six
>> speakers at floor level (-22 degrees elevation), eight at ear level at
>> 1m70 (0 degrees elevation), six at 45 degrees elevation and one at the
>> apex.
>>
>> Now: if I pan monophonic sources using a panner (e.g. o3a panner, 3rd
>> order), the localisation is pretty good. I've tested that with several
>> people by panning to random places and asking to blindly point out to
>> where they hear the source. Generally, they're in about the right
>> place (say within 45 degrees on average.)
>>
>> On the other hand, if I play 1st order A-format recordings (mostly
>> that I've made using our Core TetraMic), the localisation of sources
>> is pretty poor. I also tried with the "xyz.wav" example file from Core
>> (https://www.vvaudio.com/downloads) with the same results. To convert
>> from A-format to B-format, I've tried using Core's VVtetraVST plugin
>> with the calibration files for the mic (followed by the o3a FuMa to
>> Ambix converter), and the Sennheiser Ambeo plugin (which does the
>> same job, but in Ambix form already.)
>>
>> So what am I doing wrong? I've spent the last couple of days checking
>> everything thoroughly. I've calibrated all the speakers to within 1dB
>> SPL for the same signal received with an omni mic at the centre of the
>> sphere. I've triple-checked that the encoder is in the right channel
>> numbering:
>>
>> //--- decoder information ---
>> // decoder file =
>> ../decoders/BSU_Array_6861_RAE1_3h3v_allrad_5200_rE_max_2_band.config
>> // speaker array name = BSU_Array_6861_RAE1
>> // horizontal order   = 3
>> // vertical order = 3
>> // coefficient order  = acn
>> // coefficient scale  = SN3D
>> // input scale = SN3D
>> // mixed-order scheme = HV
>> // input channel order: W Y Z X V T R S U Q O M K L N P
>> // output speaker order: S01 S02 S03 S04 S05 S06 S07 S08 S09 S10 S11
>> S12 S13 S14 S15 S16 S17 S18 S19 S20 S21
>>
>> I'll welcome any suggestion or advice!
>>
>> Thanks,
>>
>> - martin


Re: [Sursound] Help: what am I doing wrong?

2017-07-05 Thread Steven Boardman
When you are converting from A to B-format are you going to FuMa or Ambix
convention?

Generally most 1st order content is FuMa, so it will need to be converted
to Ambix before your decode.

Steve

On 5 Jul 2017 23:11, "Martin Dupras"  wrote:

> I've deployed a 21-speaker near spherical array a few days ago, which
> I think is working ok, but I'm having difficulty with playing back
> some first order A-format recordings on it. They sound really very
> diffuse and not very localised at all. I figured that some of you good
> people on here might have some idea of where I might be going wrong or
> what is not right.
>
> At the moment I'm using Reaper, and for decoding I'm using Matthias
> Kronlachner's Ambix decoder plug-in, with a configuration that I've
> calculated with Aaron Heller's Ambisonics Decoder Toolbox. I think the
> decoder configuration is right. I've calculated it with ambix ordering
> and scaling, and third order in H and V.  The speaker array has six
> speakers at floor level (-22 degrees elevation), eight at ear level at
> 1m70 (0 degrees elevation), six at 45 degrees elevation and one at the
> apex.
>
> Now: if I pan monophonic sources using a panner (e.g. o3a panner, 3rd
> order), the localisation is pretty good. I've tested that with several
> people by panning to random places and asking to blindly point out to
> where they hear the source. Generally, they're in about the right
> place (say within 45 degrees on average.)
>
> On the other hand, if I play 1st order A-format recordings (mostly
> that I've made using our Core TetraMic), the localisation of sources
> is pretty poor. I also tried with the "xyz.wav" example file from Core
> (https://www.vvaudio.com/downloads) with the same results. To convert
> from A-format to B-format, I've tried using Core's VVtetraVST plugin
> with the calibration files for the mic (followed by the o3a FuMa to
> Ambix converter), and the Sennheiser Ambeo plugin (which does the
> same job, but in Ambix form already.)
>
> So what am I doing wrong? I've spent the last couple of days checking
> everything thoroughly. I've calibrated all the speakers to within 1dB
> SPL for the same signal received with an omni mic at the centre of the
> sphere. I've triple-checked that the encoder is in the right channel
> numbering:
>
> //--- decoder information ---
> // decoder file =
> ../decoders/BSU_Array_6861_RAE1_3h3v_allrad_5200_rE_max_2_band.config
> // speaker array name = BSU_Array_6861_RAE1
> // horizontal order   = 3
> // vertical order = 3
> // coefficient order  = acn
> // coefficient scale  = SN3D
> // input scale = SN3D
> // mixed-order scheme = HV
> // input channel order: W Y Z X V T R S U Q O M K L N P
> // output speaker order: S01 S02 S03 S04 S05 S06 S07 S08 S09 S10 S11
> S12 S13 S14 S15 S16 S17 S18 S19 S20 S21
>
> I'll welcome any suggestion or advice!
>
> Thanks,
>
> - martin


[Sursound] Help: what am I doing wrong?

2017-07-05 Thread Martin Dupras
I deployed a 21-speaker near-spherical array a few days ago, which
I think is working OK, but I'm having difficulty playing back
some first-order A-format recordings on it. They sound very
diffuse and not well localised at all. I figured that some of you good
people on here might have some idea of where I might be going wrong or
what is not right.

At the moment I'm using Reaper, and for decoding I'm using Matthias
Kronlachner's Ambix decoder plug-in, with a configuration that I've
calculated with Aaron Heller's Ambisonics Decoder Toolbox. I think the
decoder configuration is right. I've calculated it with ambix ordering
and scaling, and third order in H and V.  The speaker array has six
speakers at floor level (-22 degrees elevation), eight at ear level at
1.70 m (0 degrees elevation), six at 45 degrees elevation and one at the
apex.

Now: if I pan monophonic sources using a panner (e.g. o3a panner, 3rd
order), the localisation is pretty good. I've tested that with several
people by panning to random places and asking to blindly point out to
where they hear the source. Generally, they're in about the right
place (say within 45 degrees on average.)
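One way to put a number on the pointing test above is the energy vector rE of the decoded field: for a given source direction, |rE| closer to 1 predicts tighter localisation. A sketch (the gains and speaker directions here are illustrative; in practice they would come from the decoder matrix):

```python
import math

def energy_vector(gains, directions):
    """rE = sum(g_i^2 * u_i) / sum(g_i^2); |rE| near 1 suggests tight imaging."""
    e = sum(g * g for g in gains)
    rx = sum(g * g * u[0] for g, u in zip(gains, directions)) / e
    ry = sum(g * g * u[1] for g, u in zip(gains, directions)) / e
    rz = sum(g * g * u[2] for g, u in zip(gains, directions)) / e
    return (rx, ry, rz), math.hypot(rx, ry, rz)  # 3-arg hypot needs Python 3.8+

# Toy case: equal gains on two speakers at +/-45 degrees azimuth.
c, s = math.cos(math.radians(45.0)), math.sin(math.radians(45.0))
vec, mag = energy_vector([1.0, 1.0], [(c, s, 0.0), (c, -s, 0.0)])
```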

On the other hand, if I play 1st order A-format recordings (mostly
that I've made using our Core TetraMic), the localisation of sources
is pretty poor. I also tried with the "xyz.wav" example file from Core
(https://www.vvaudio.com/downloads) with the same results. To convert
from A-format to B-format, I've tried using Core's VVtetraVST plugin
with the calibration files for the mic (followed by the o3a FuMa to
Ambix converter), and the Senneheiser Ambeo plugin (which does the
same job, but in Ambix form already.)

So what am I doing wrong? I've spent the last couple of days checking
everything thoroughly. I've calibrated all the speakers to within 1dB
SPL for the same signal received with an omni mic at the centre of the
sphere. I've triple-checked that the encoder is in the right channel
numbering:

//--- decoder information ---
// decoder file =
../decoders/BSU_Array_6861_RAE1_3h3v_allrad_5200_rE_max_2_band.config
// speaker array name = BSU_Array_6861_RAE1
// horizontal order   = 3
// vertical order     = 3
// coefficient order  = acn
// coefficient scale  = SN3D
// input scale        = SN3D
// mixed-order scheme = HV
// input channel order: W Y Z X V T R S U Q O M K L N P
// output speaker order: S01 S02 S03 S04 S05 S06 S07 S08 S09 S10 S11
S12 S13 S14 S15 S16 S17 S18 S19 S20 S21
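The input channel order above is plain ACN, where the channel index for degree l and index m is l*l + l + m. A quick check that the FuMa letter sequence in the config matches ACN up to third order:

```python
# Standard FuMa letters keyed by (degree l, index m), up to 3rd order.
FUMA = {(0, 0): 'W',
        (1, -1): 'Y', (1, 0): 'Z', (1, 1): 'X',
        (2, -2): 'V', (2, -1): 'T', (2, 0): 'R', (2, 1): 'S', (2, 2): 'U',
        (3, -3): 'Q', (3, -2): 'O', (3, -1): 'M', (3, 0): 'K',
        (3, 1): 'L', (3, 2): 'N', (3, 3): 'P'}

def acn(l, m):
    return l * l + l + m  # ACN channel index

# Iterating l ascending and m from -l to l visits channels in ACN order.
order = [FUMA[(l, m)] for l in range(4) for m in range(-l, l + 1)]
print(' '.join(order))  # should read: W Y Z X V T R S U Q O M K L N P
```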

I'll welcome any suggestion or advice!

Thanks,

- martin