[Sursound] Mosca v0.3 released

2020-05-21 Thread Iain Mott

Dear list,

I'd like to announce the release of a new and greatly improved version 
of the 'Mosca' SuperCollider quark for sound spatialisation which is 
available via the SuperCollider "Quarks.gui" installer. The release 
notes are as follows:


"Revised SuperCollider quark for three-dimensional sound spatialisation, 
with recent developments featuring the work of Thibaud Keller from 
SCRIME, University of Bordeaux. v0.3 includes multiple spatialisation 
libraries assignable on a per-source basis, banks of RIRs for both 
'close' and 'distant' convolution reverberation (again on a per-source 
basis), support for higher-order ambisonic signals and sound files, 
improved GUI, OSC interface for the intermedia sequencer OSSIA/score and 
many other improvements and optimisations."


Once installed, documentation can be found in the help file for the Mosca 
quark. On the help page, follow the link to the "Guide to Mosca" for full 
documentation.


Source code with a basic readme file is available here:

https://github.com/escuta/mosca

Also available is the paper "Three-dimensional sound design with Mosca" 
that Thibaud and I presented at a conference last year:


https://www.researchgate.net/publication/336983923

Please let us know if you use Mosca in any projects!

All the best,

Iain Mott



Iain Mott
https://escuta.org


___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


[Sursound] Mosca: GUI assisted ambisonics quark v0.2 for SuperCollider

2018-02-12 Thread Iain Mott

Hello list,

For users of SuperCollider:

A new and much improved "Mosca" quark for GUI-assisted ambisonics is 
available in the quark repository. It has been tested with ATK 4.0.1 
and SC 3.9.0.


Version 0.2 includes optional head-tracking, streaming from disk, 
non-GUI operation, a code interface and many other changes and bug fixes.


The home page for the project is here: http://escuta.org/mosca

A video tutorial is available here: http://escuta.org/moscavideo

The git source is here: https://github.com/escuta/mosca

All the best!

Iain Mott

--
_
Iain Mott
http://escuta.org



[Sursound] Mosca: Ambisonic GUI for Supercollider

2016-10-31 Thread Iain Mott

Hello all,

Forwarding an edited message that I sent to the SuperCollider list on 
the weekend:


Recently I've been working on a class/quark for GUI assisted authoring 
of ambisonic sound fields. The class is called Mosca and I think it may 
be of general interest. It uses Joseph Anderson's Ambisonic Toolkit 
among other components and allows for quick/easy spatialisation of 
sounds from a variety of sources (file, hardware-inputs, SC synths) as 
well as various options for reverberation, recording of trajectories and 
control data, A-format inserts for user defined filters and basic sync 
with a DAW via MMC - among other features.


It has only been tested on Linux, not on Mac or Windows, so I'd 
like to invite people on the list to please try it out. (I believe there 
will be file-saving issues on Windows in relation to the Automation 
quark - something I'll try to fix once I get my hands on a Windows 
machine.)


The home page for the project is here: http://escuta.org/mosca

The Github page is here: https://github.com/escuta/mosca

It requires the Ambisonic Toolkit (ATK) and the ATK kernels to be 
installed. See http://www.ambisonictoolkit.net/download/supercollider/


It also requires the latest version of the Automation Quark and 
additionally: Mathlib, CTK and XML (which should all install 
automatically with Mosca).


Once installed, please run help on "Mosca" for detailed information and 
source code examples.


You'll also need to set up a project directory with subdirectories 
"auto" and "rir". See the zip file here http://escuta.org/mosca for an 
example B-format RIR file (48kHz) and B-format recordings, some of which 
are included with kind permission from John Leonard.


I hope it's of use to people on the list. Many thanks to Joseph Anderson 
for his assistance and suggestions, as well as to Neels Hofmeyr and to 
various people on the list for answering my recent barrage of questions!


Please see also my B-format field recordings here: 
http://escuta.org/en/projects/research/cerrado/audio-map.html
and other ambisonic resources here: 
http://escuta.org/en/projects/research/ambiresources.html


Please let me know if you have any suggestions or discover any problems.

All the best,

Iain

--
_
Iain Mott
http://escuta.org



Re: [Sursound] Multichannel amps

2015-12-27 Thread Iain Mott
I can't remember the price, but the Crown CT8150 is very good. It has balanced
inputs and does 125W into 8 ohms. Only eight channels, but it is very slim
and rack-mountable, so 4 units won't take up much space:

http://www.crownaudio.com/en-US/products/ct-8150

Iain


On Sun, 2015-12-27 at 00:55, Augustine Leudar wrote:
> Frank can I ask what speakers you used with this ?
> cheers,
> Augustine
> 
> On 29 October 2015 at 13:27, Frank Ekeberg  wrote:
> 
> >
> > I have a couple of Dayton MA1240a that I've used for multichannel sound
> > installations, that I can recommend. The MA1240a is a 12 channel amplifier
> > with quite decent sound quality. Solidly built, too. I paid just below $500
> > per amp at one of the online discount electronics stores in the US.
> >
> > http://www.daytonaudio.com/index.php/ma1240-multi-zone-12-channel-amplifier.html
> >
> > -- frank
> >
> > On 29.10.2015 10:29, Augustine Leudar wrote:
> >
> >> please bear in mind that I can buy 22 active speakers for around 1000
> >> pounds for this project - so no suggestions of £4000 amplifiers please !
> >>
> >> On 29 October 2015 at 09:07, Augustine Leudar 
> >> wrote:
> >>
> >> Dear all,
> >>> I am looking to build a budget passive multichannel system - can anyone
> >>> recommend a good value amplifier - maybe 22 channels up to 32 channels ?
> >>> Any speakers that would do well with it (preferable waterproof !). Is
> >>> there
> >>> a wireless system yet ?
> >>>
> >>> --
> >>> www.augustineleudar.com
> >>>
> >>>
> >>
> >>




Re: [Sursound] Acousmatic

2015-12-04 Thread Iain Mott
John Dack made an excellent English translation of the guide. 
It's probably available online somewhere, but if not, you could email him:


http://www.mdx.ac.uk/about-us/our-people/staff-directory/dack-john


On 21-11-2015 13:11, Marc Lavallée wrote:

Thanks Eero for the reference. Michel Chion's contribution is major
for understanding a world where a large part of the sounds we hear come
from loudspeakers. His acousmatic music is also excellent.

A bilingual glossary is available from his web site:
http://michelchion.com/texts
Also a free eBook edition of his "Guide des objets sonores" (in French
only), which is an ambitious study of the "Traité des objets
musicaux" of Pierre Schaeffer.

--
Marc

On Sat, 21 Nov 2015 12:32:55 +0200, Eero Aro wrote:


Dave Malham wrote:

Not quite sure how we got from defining acousmatic music to film
sound

I mentioned that the word is used with cinema sound. It's not just
music that can be acousmatic, it's sound as such.

Michel Chion has developed a number of concepts that were needed
to be able to discuss cinema sound. Such words didn't exist.

Acousmatic is one of them. There are others: acousmêtre, audio-vision,
synchresis, etc. These have nothing to do with technical things.

https://en.wikipedia.org/wiki/Michel_Chion

Eero


Re: [Sursound] binaural theatre excerpts

2014-11-12 Thread Iain Mott
Thanks Bearcat. For the most part the voices are positioned close to the
listener - the female singing in the 3rd recording is positioned further
away, although it is given quite a bit of gain because I'm concerned about
dynamic range issues in the final application - it should have a lower
level, as the king is meant to be listening to her voice through an open
window. The 3rd voice that appears in the second recording (close to
2min) should be positioned to the rear at your left - and the second
voice (also male) should wander a bit to the back. For me, binaural
recordings tend to get squashed a bit at the front and at the rear
leaving lobes extending from the sides. If you give the sound a
trajectory, where it leads off in a particular direction, I think that
can help extend the front/rear image. Better quality headphones help
too. Sorry the files didn't stream properly - you can download the files
directly with these links:

http://audiocena.com.br/rei/intro.mp3
http://audiocena.com.br/rei/trono.mp3
http://audiocena.com.br/rei/euteamo.mp3
http://audiocena.com.br/rei/relogio.mp3 

Cheers,

Iain




On Tue, 2014-11-11 at 14:39 -0700, Bearcat M. Şándor wrote:
> I'm impressed. You put these together very well. Only the first one loaded
> completely but i was able to hear samples of all 4 streams. I haven't had
> much experience with binaural recordings. To me it sounded everything was
> in a band that was tight around my forehead but extended to my shoulders.
> The far left effect was on my left shoulder and the far right effect was on
> my right shoulder. When something moved across the stage in front of me, it
> sounded like it slid along my forehead but was never "out in front".
> 
> Is this what most people should experience?
> 
> On Tue, Nov 11, 2014 at 9:58 AM, Iain Mott  wrote:
> 
> > hi list,
> >
> > Sending a link with some binaural theatre mixes I'm making with the
> > Soundscape Renderer together with Pd, jconvolver, Ardour 3 and lots of
> > patching in jack. The content of the recordings is all in Portuguese.
> > There are some details on the page - but to elaborate a little, Pd
> > serves as a go-between for SSR and Ardour, converting XML messages to
> > and from MIDI to allow Ardour to record and play SSR control data
> > jointly with the associated raw (unspatialised) audio. It's possible to
> > control 4 moving sources at once - but perhaps more. Pd also does some
> > mapping/attenuation of audio sent to various instances of jconvolver to
> > implement John Chowning's idea of 'global' and 'local' reverberation
> > (where, as a source becomes distant, its reverberant properties become
> > more pronounced but also more directional - conversely as the sound
> > approaches, an encompassing "global" reverberation takes precedence,
> > coming from all directions surrounding the listener). There's a Doppler
> > effect implemented too in Pd but it's disabled as it sounds pretty silly
> > with voice.
> >
> > The story used is "A King Listens" by Italo Calvino and it's written
> > entirely in the 2nd person (ie. "you"). So the voices in the binaural
> > mix surround the listener like the King's counsellors or wayward
> > thoughts.
> >
> > Hope you enjoy the excerpts:
> >
> > http://audiocena.com.br/en/rei
> >
> > Iain
> >




[Sursound] binaural theatre excerpts

2014-11-11 Thread Iain Mott
hi list,

Sending a link with some binaural theatre mixes I'm making with the
Soundscape Renderer together with Pd, jconvolver, Ardour 3 and lots of
patching in jack. The content of the recordings is all in Portuguese.
There are some details on the page - but to elaborate a little, Pd
serves as a go-between for SSR and Ardour, converting XML messages to
and from MIDI to allow Ardour to record and play SSR control data
jointly with the associated raw (unspatialised) audio. It's possible to
control 4 moving sources at once - but perhaps more. Pd also does some
mapping/attenuation of audio sent to various instances of jconvolver to
implement John Chowning's idea of 'global' and 'local' reverberation
(where, as a source becomes distant, its reverberant properties become
more pronounced but also more directional - conversely as the sound
approaches, an encompassing "global" reverberation takes precedence,
coming from all directions surrounding the listener). There's a Doppler
effect implemented too in Pd but it's disabled as it sounds pretty silly
with voice.
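The global/local mapping described above can be sketched in a few lines. This is a hypothetical illustration only: the equal-power crossfade law, the `max_distance` parameter and the function name are my assumptions, not the actual Pd patch.

```python
import math

def reverb_sends(distance, max_distance=10.0):
    """Split a source's reverb send into 'global' (enveloping) and
    'local' (directional) parts as a function of distance, after
    Chowning's idea described above. Hypothetical sketch only."""
    # Normalise distance to 0..1 (max_distance is an arbitrary choice).
    d = min(max(distance / max_distance, 0.0), 1.0)
    # Equal-power crossfade: a near source gets mostly 'global',
    # enveloping reverb; a distant one mostly 'local', directional reverb.
    global_gain = math.cos(d * math.pi / 2)
    local_gain = math.sin(d * math.pi / 2)
    # (Overall wet/dry level versus distance is a separate mapping,
    # omitted here.)
    return global_gain, local_gain

# A source at the listener gets purely global reverb:
print(reverb_sends(0.0))   # (1.0, 0.0)
# A source at max_distance gets purely local, directional reverb:
print(reverb_sends(10.0))  # (~0.0, 1.0)
```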

The story used is "A King Listens" by Italo Calvino and it's written
entirely in the 2nd person (ie. "you"). So the voices in the binaural
mix surround the listener like the King's counsellors or wayward
thoughts.

Hope you enjoy the excerpts:

http://audiocena.com.br/en/rei

Iain 



Re: [Sursound] Sursound Digest, Vol 68, Issue 17

2014-03-19 Thread Iain Mott
On Wed, 2014-03-19 at 20:21, Fons Adriaensen wrote:
> Another approach would be to send W only (at reference level)
> to the decoder, and then measure each individual speaker (by
> soloing it, ambdec provides the function) and adjusting for
> reference SPL - 10 * log(number_of_speakers). This would be
> less accurate as it doesn't allow for the partial correlation
> between speaker signals (which will depend on frequency if
> you use dual band decoding). 
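As a quick sketch of the arithmetic in the quoted formula (the function name is mine; the 86 dB reference SPL and the 14-channel array are the figures discussed elsewhere in this thread):

```python
import math

def per_speaker_target_spl(reference_spl, n_speakers):
    """Per-speaker calibration target when soloing each speaker:
    reference SPL - 10 * log10(number_of_speakers), as quoted above."""
    return reference_spl - 10 * math.log10(n_speakers)

# With the 86 dB reference SPL from earlier in this thread and a
# 14-channel array, each soloed speaker would be set to about 74.5 dB:
print(round(per_speaker_target_spl(86.0, 14), 1))  # 74.5
```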

Great - yes, I was hoping this would work. The SPL meter arrived today
by post - but I've still only two channels of amplification. Will test
as soon as the equipment is organised.

Thanks a lot for your help.

Iain



Re: [Sursound] calibrating ambisonic speakers using the k-system?

2014-03-18 Thread Iain Mott
On Tue, 2014-03-18 at 19:52, Fons Adriaensen wrote:
> On Tue, Mar 18, 2014 at 03:32:55PM -0300, Iain Mott wrote:
> 
> > Thanks a lot Fons. When I pan pink noise with W at -20 dBFS RMS, the
> > individual X and Y channels peak at about 3dB higher. Is that why you
> > said to meter at 86dB and not 83?
> 
> No. In a stereo system, with the levels as 0dB on the K-20 meter, each
> speaker produces 83 dB SPL. Assuming the signals are mostly decorrelated,
> the total level will be 86 dB. So the 'reference SPL' is 86 dB.
>  

OK - I see what you intended.
 
> > For my current purposes, I'd like to reproduce, as well as possible,
> > ambient B-format recordings over an array of speakers - and preferably
> > try to match SPL measurements taken at each recording location. Do you
> > think the formula above would be correct to match levels in this way?
> > ie. if I make a recording at a site where the SPL is 70dB, during
> > playback I meter this material (the W channel) at -13dB RMS on a k-20
> > meter, and in the case of a 14 channel system, calibrate each speaker
> > channel at 71.5dB SPL (x = 83 - 10log14).
> 
> Your only chance to get this right is to calibrate *via the decoder*.
> If you follow the procedure I explained, then 0 dB on the K-20 meter
> for W will corresponds to 86 dB SPL, no matter how the sound is
> distributed over the speakers. That's assuming you don't pan two
> or more strongly correlated signals to different directions (if you
> do that the result is no longer really Ambisonic).

I now understand that W in the metering has a direct relationship to the
total audio output of the array - no matter what the configuration - but
sorry, I'm still in doubt as to how to go about adjusting the speaker
output levels. I initially assumed that during the panning of the signal
and the output adjustment, the speaker that is most in focus (at the
peak level) would be soloed - but this wouldn't work because it wouldn't
factor in the additional output from the other channels. Are you
suggesting that all channels should be left open and the system tuned in
a number of passes? Dare I say it: might the "-10 log (N)" level be a
good starting point for each channel?

Thanks



Re: [Sursound] calibrating ambisonic speakers using the k-system?

2014-03-18 Thread Iain Mott
On Tue, 2014-03-18 at 15:32 -0300, Iain Mott wrote:
> For my current purposes, I'd like to reproduce, as well as possible,
> ambient B-format recordings over an array of speakers - and
> preferably
> try to match SPL measurements taken at each recording location. Do you
> think the formula above would be correct to match levels in this way?
> ie. if I make a recording at a site where the SPL is 70dB, during
> playback I meter this material (the W channel) at -13dB RMS on a k-20
> meter, and in the case of a 14 channel system, calibrate each speaker
> channel at 71.5dB SPL (x = 83 - 10log14). 


I forgot about the 3dB adjustment - and I just read on Wikipedia that
the W channel is adjusted by 3dB - so to correct my guess above, I
should calibrate each speaker at 71.5 + 3 = 74.5dB.



Re: [Sursound] calibrating ambisonic speakers using the k-system?

2014-03-18 Thread Iain Mott
Thanks a lot Fons. When I pan pink noise with W at -20 dBFS RMS, the
individual X and Y channels peak at about 3dB higher. Is that why you
said to meter at 86dB and not 83?

It's curious, this thing about the summing of channels. When I first read
about k-system monitor calibration, I understood, incorrectly, that to
arrive at this cinema standard of 83dB, one would have to lower the SPL
meter reading of each channel to a different value depending on the
number of speakers.

ie. 83dB = x + 10log(number of speakers) 

where x = the necessary reading in the meter for each speaker to achieve
an overall level of 83dB.

eg. for a 5.1 system x is approx. 76dB SPL
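That initial, per-channel reading works out as follows - a minimal sketch of the (as it turned out, mistaken) assumption, with the function name my own:

```python
import math

def per_channel_spl(total_spl, n_speakers):
    """Solve 83dB = x + 10log(N) for x: the per-speaker SPL reading
    that would make N decorrelated speakers sum to total_spl."""
    return total_spl - 10 * math.log10(n_speakers)

# The 5.1 example: five full-range satellites summing to 83 dB
# need roughly 76 dB SPL each.
print(round(per_channel_spl(83.0, 5), 1))  # 76.0
```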

I wrote to a forum on Bob Katz's site and he responded saying no, all
should be set to 83dB SPL..."Don't worry about how it adds up. This
increases the headroom for the mix engineer, who doesn't have to push
each channel as hot to get the same loudness and therefore the
enjoyment of multiple channels. It also partly explains why stereo is so
crippled a medium."

I guess also with cinematic sound - it's rarer to have all channels
on full all at once (or maybe that's just the films I watch!).

For my current purposes, I'd like to reproduce, as well as possible,
ambient B-format recordings over an array of speakers - and preferably
try to match SPL measurements taken at each recording location. Do you
think the formula above would be correct to match levels in this way?
ie. if I make a recording at a site where the SPL is 70dB, during
playback I meter this material (the W channel) at -13dB RMS on a k-20
meter, and in the case of a 14-channel system, calibrate each speaker
channel at 71.5dB SPL (x = 83 - 10log14).

Cheers,

Iain






On Tue, 2014-03-18 at 17:18, Fons Adriaensen wrote:
> On Tue, Mar 18, 2014 at 05:09:21PM +, Fons Adriaensen wrote:
>  
> > A stereo system calibrated as you describe will output 86 dB SPL
> > if both channels are -20 dB on the K-meter.
> > 
> > To calibrate the AMB system, send the pink noise through an
> > AMB panner, set the level for -20 dB RMS in W, and pan the sound
> > in a direction corresponding to a speaker. Adjust decoder/amp
> > gain for 86 dB SPL. Check this remains more or less constant in
> > all directions.
> 
> Just to avoid all confusion, -20 dB RMS here means an indication
> of 0 dB on the K-20 meter !
> 
> Ciao,
> 




[Sursound] calibrating ambisonic speakers using the k-system?

2014-03-18 Thread Iain Mott
Hello, 

I've been reading about how to calibrate monitors with Bob Katz's
k-system. Using the SPL meter method, I understand that each channel can
be fed a -20dBFS band-limited pink noise signal - and each loudspeaker
output adjusted to read 83dB SPL in the meter (or some other desired
level). This applies to individual speakers of a stereo system or to the
individual satellites of a 5.1 or 7.1 system.  Ardour 3.5 now has k-20
metering (and many other types) as does Fons' jkmeter. When k-20 meters
are reading 0dB RMS, the sound mixer knows that the corresponding
calibrated loudspeakers are putting out aprox. 83dB SPL. A pretty handy
thing I think.

OK: 

What does one do though if one is metering a B-format or other ambisonic
signal? Unlike stereo or 5.1 etc. the relationship between a particular
channel on the meter and a given loudspeaker channel isn't so direct
because of the decoding. jkmeter offers a 4 channel "ambisonic" k-system
meter, so someone (Fons at least) must have thought about this problem
before!

Sorry if it's obvious what to do - but please send suggestions on how to
go about calibrating loudspeaker levels for use with k-20 metering of
ambi-encoded signals.

Cheers and thanks,



Re: [Sursound] 2 questions about IRs

2013-11-01 Thread Iain Mott

> > one. The above got me thinking though, if it is a simple sum, wouldn't
> > it be possible to simulate the variable orientation of a speaker (human)
> > by doing an equal power crossfade between adjacent angular IRs as this
> > virtual human speaker turns?
> 
> That is mathematically not correct, but I'm pretty sure it would
> work in practice, in particular if you process the direct sound
> separately (e.g.  by filtering in function of the direction the
> speaker is facing). 
> 

Thanks Fons. Do you mean a roll-off of high frequencies in the direct signal
as the head is turned?

Iain



Re: [Sursound] 2 questions about IRs

2013-11-01 Thread Iain Mott
Re. the second question - sorry, I realise what would be needed would be
a series of IRs for various angles, produced in advance, perhaps by an
equal-power summing. Using all these IRs would be a more complex
proposition.

Iain



On Fri, 2013-11-01 at 11:14 -0200, Iain Mott wrote:
> hello list,
> 
> I've been reading a paper recently "B-Format Acoustic Impulse Response
> Measurement and Analysis in the Forest at Koli National Park, Finland"
> by Simon Shelley, Damian Murphy and Andrew Chadwick, where a
> unidirectional loudspeaker is used to collect IRs via a
> sine-sweep/de-convolution method. To simulate an omnidirectional source,
> four recordings are made of the sweep with the loudspeaker directed at
> the mic and at 3 other positions 45 degrees apart.
> 
> The IRs for the simulated omnidirectional source are created by: 
> 
> "summing the resulting impulse responses measured for each angle of the
> loudspeaker. The effect of the summation is to emulate a loudspeaker
> array of four loudspeakers at the same point in space all pointing in
> different directions."
> 
> So, my first question is, is this "summing" really a simple summation of
> the 4 sets (a b-format mic is used) of IRs with some attenuation of
> each, or would there be something more complex involved?
> 
> I'm interested in spatialising the human voice with IRs - and a
> directional source is probably more appropriate than an omnidirectional
> one. The above got me thinking though, if it is a simple sum, wouldn't
> it be possible to simulate the variable orientation of a speaker (human)
> by doing an equal power crossfade between adjacent angular IRs as this
> virtual human speaker turns?
> 
> Am I oversimplifying?
> 
> Thanks,
> 
> Iain
> 
> 
> 
> 




[Sursound] 2 questions about IRs

2013-11-01 Thread Iain Mott
hello list,

I've been reading a paper recently "B-Format Acoustic Impulse Response
Measurement and Analysis in the Forest at Koli National Park, Finland"
by Simon Shelley, Damian Murphy and Andrew Chadwick, where a
unidirectional loudspeaker is used to collect IRs via a
sine-sweep/de-convolution method. To simulate an omnidirectional source,
four recordings are made of the sweep with the loudspeaker directed at
the mic and at 3 other positions 45 degrees apart.

The IRs for the simulated omnidirectional source are created by: 

"summing the resulting impulse responses measured for each angle of the
loudspeaker. The effect of the summation is to emulate a loudspeaker
array of four loudspeakers at the same point in space all pointing in
different directions."

So, my first question is, is this "summing" really a simple summation of
the 4 sets (a b-format mic is used) of IRs with some attenuation of
each, or would there be something more complex involved?

I'm interested in spatialising the human voice with IRs - and a
directional source is probably more appropriate than an omnidirectional
one. The above got me thinking though, if it is a simple sum, wouldn't
it be possible to simulate the variable orientation of a speaker (human)
by doing an equal power crossfade between adjacent angular IRs as this
virtual human speaker turns?

Am I oversimplifying?
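The proposed equal-power crossfade between the two IRs adjacent to the virtual speaker's orientation might look like this - a hypothetical sketch of the idea, not tested practice; the names and the two-IR interface are assumptions.

```python
import math

def equal_power_weights(angle_deg, lower_deg, upper_deg):
    """Equal-power crossfade between the IRs measured at the two
    loudspeaker angles bracketing the virtual speaker's orientation."""
    frac = (angle_deg - lower_deg) / (upper_deg - lower_deg)  # 0..1
    return math.cos(frac * math.pi / 2), math.sin(frac * math.pi / 2)

# Facing exactly at a measured angle uses that IR alone:
print(equal_power_weights(0, 0, 45))  # (1.0, 0.0)
# Halfway between two measured angles the powers still sum to one:
w1, w2 = equal_power_weights(22.5, 0, 45)
print(round(w1 ** 2 + w2 ** 2, 6))  # 1.0
```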

Thanks,

Iain




-- 

Iain Mott
http://reverberant.com
http://audiocena.com.br 



Re: [Sursound] theatrical ambisonics

2013-05-12 Thread Iain Mott
These are excellent references, thank you! Curious to know why
ambisonics and UHJ encoding ceased to be used in the 90s? I know nothing
about digital radio - but is Dolby Surround or some other surround
format being used presently in Europe or elsewhere? What is the present
state of play in surround broadcasting?

Iain


On Sun, 2013-05-12 at 18:23 +0300, Eero Aro wrote:
> Iain Mott wrote:
> > I wonder if people on the list have
> > other references or links on ambisonics applied in theatrical
> > productions, either traditional theatre or theatrical installation?
> 
> In the Ancient Days (1990's), Jussi Lappalainen used Ambisonics in
> the Oulu City Theatre in Finland. He had a Cepiar Ambi 8 decoder and
> an Audio Dimensions four channel joystick panner. The acoustic shape
> of the theatre hall is oval and it caused a lot of misleading directional
> cues to different parts of the audience. However, "background" type
> sounds did work well.
> 
> Mark Decker from BBC Pebble Mill in Birmingham, UK, used UHJ in
> almost all of the radio drama, music and other programmes that he
> recorded. That was hundreds of programs.
> 
> I myself used UHJ in about 25 radio plays in the 1990s. All of that
> was in Finnish.
> 
> You can find the lists of the above mentioned radio plays in the 
> Discography:
> http://members.tripod.com/martin_leese/Ambisonic/uhjhtm.txt
> 
> Look for "Radio Broadcasts", more than halfway down the page.
> 
> Eero

-- 

Iain Mott
www.audiocena.com.br
www.reverberant.com



[Sursound] theatrical ambisonics

2013-05-12 Thread Iain Mott
The Earfilms link is interesting. I wonder if people on the list have
other references or links on ambisonics applied in theatrical
productions, either traditional theatre or theatrical installation? Your
own productions or the work of others.

I'd also be interested to know if people have examples of binaural
radio-drama.

Thanks,

Iain




-- 

Iain Mott
www.audiocena.com.br
www.reverberant.com



Re: [Sursound] idea

2013-05-04 Thread Iain Mott
On Sat, 2013-05-04 at 17:46 -0400, Matthew Palmer wrote:
> http://vimeo.com/65229978#at=5
> 
> imagine using the Oculus & a Kinect to be able to assign 3D directional
> information to sounds to make music, virtual speakers corresponding to real
> ones, the hand is a brush




Re: [Sursound] impulse responses and distance

2013-05-04 Thread Iain Mott
> There is one more refinement you could use to limit the amount of
> CPU use. Of course the IRs for different source directions will be
> different. But the *signficant* differences are only in the first 100
> ms or so. In many cases the 'tail' of the reverb will sound just the
> same even if the IR is 100% decorrelated with one for another direction.
> That means you need to do the direction-dependent convolution only for
> the initial 100 ms or of the IR. For the rest you can share a single IR
> for all sources. 

Thanks Fons, that makes a lot of sense. When preparing the tail IR, is
this a matter of taking the file into an editor and silencing the first
100 ms of the channels, perhaps using a ramp function to slope up to
the 100 ms point? Also, for the early-reflection IRs, deleting
everything after 100 ms?
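Fons' early/tail split could be prepared roughly as below. The short complementary ramp at the split point, so that the two parts sum back to the original IR (and hence, by linearity, the two convolutions sum to the full reverb), is my assumption; the names and parameter values are mine.

```python
def split_ir(ir, sr=48000, split_ms=100, ramp_ms=5):
    """Split one IR channel into an 'early' (direction-dependent) part
    and a shared 'tail', with a short complementary linear ramp at the
    split so that early + tail reproduces the original sample-for-sample."""
    split = int(sr * split_ms / 1000)
    ramp = int(sr * ramp_ms / 1000)
    early, tail = [], []
    for i, x in enumerate(ir):
        if i < split:
            w = 1.0
        elif i < split + ramp:
            w = 1.0 - (i - split + 1) / ramp  # fade the early part out
        else:
            w = 0.0
        early.append(x * w)
        tail.append(x * (1.0 - w))  # complementary fade-in for the tail
    return early, tail

# Tiny example: a constant "IR" split at 5 ms with a 2 ms ramp.
ir = [1.0] * 10
early, tail = split_ir(ir, sr=1000, split_ms=5, ramp_ms=2)
print([e + t for e, t in zip(early, tail)])  # the original ten 1.0s
```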

> You could also experiment with changing
> the relative delay of the direct sound and the reverb sends (no standard
> mixer will allow you to do that easily, but if you use Pd or a similar
> tool it is entirely possible. 
> 
Do you mean some sort of variable delay? Not sure what you mean.


> Since you mention jconvolver, there are two preset files in the source
> distribution that are designed to do this: sala-concerti-cdm and 
> santa-elisabetta. Both use ambisonic IRs which you can download from
> my website. For the concert hall there are IRs corresponding to various
> on-stage source position, and for santa-elisabetta (a small church) there
> are 8 early reflection IRs spaced 45 degrees apart.

Thanks!

Iain



Re: [Sursound] impulse responses and distance

2013-05-03 Thread Iain Mott
Thanks Sampo and thanks Tim for the link, will investigate.

The project I'm trying to get up and running will involve taking impulse
responses with a soundfield mic in a variety of outdoor environments.
While it was never absolutely necessary to simulate moving sound sources
with convolution/decoding (and I plan to take impulse responses at
several locations for each chosen environment and mic position to allow
the discrete projection of a number of sources), I was curious to know
if it might be possible to simulate moving sources following a convolution
approach.

Dave mentioned modelling of IRs as an alternative to interpolation - but
this, he said, would be based not only on the impulse responses
taken, but on the architecture. These are wilderness environments - so I
guess modelling won't be possible and I imagine the number of IRs needed
would be high anyway.

If I do implement moving virtual sources, what I think I'd do would be
to make 4 nearfield IRs in north, south, east and west locations, for
example, and an additional 4 IRs at a greater distance. Spatialisation of
virtual sources would be done via ambisonic encoding with something like
"iem_ambi" in puredata. In addition, a single IR would be chosen on the
basis of its general proximity to the movement of the object, convolved
with the sound (eg. using jconvolver) and mixed with the ambi-encoded
signal to give it some ambience. Without ruling out moving sources
however, and if the interpolation and the modelling of IRs are out of
the question, perhaps the nearest IRs can be selected as the virtual
source moves from one region to another, and the convolved signal,
cross-faded to provide a smooth transition.

Perhaps this is the most practical approach?
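The nearest-IR selection with cross-fading could look roughly like this (the IR positions, and the equal-power cross-fade law, are my assumptions, not a tested recipe): pick the two measured IR positions closest to the virtual source and weight their convolver outputs accordingly.

```python
import math

# Hypothetical IR measurement positions (metres) around the mic, e.g.
# the four near-field north/south/east/west points described above.
IR_POSITIONS = {"N": (0.0, 2.0), "S": (0.0, -2.0),
                "E": (2.0, 0.0), "W": (-2.0, 0.0)}

def crossfade_weights(source_xy, positions=IR_POSITIONS):
    """Return per-IR gains: an equal-power crossfade between the two
    IRs nearest to the moving virtual source; all others muted."""
    dists = sorted((math.dist(source_xy, p), name)
                   for name, p in positions.items())
    (d1, n1), (d2, n2) = dists[0], dists[1]
    # Fraction of the way from the nearest IR towards the second one.
    f = 0.0 if d1 + d2 == 0 else d1 / (d1 + d2)
    gains = {name: 0.0 for name in positions}
    gains[n1] = math.cos(f * math.pi / 2)  # equal-power law
    gains[n2] = math.sin(f * math.pi / 2)
    return gains
```

In practice the gains would be smoothed over a ramp time before being applied to the parallel convolver outputs, to avoid zipper noise as the source crosses region boundaries.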

Cheers,

Iain

 


Em Sex, 2013-05-03 às 19:23 +0300, Sampo Syreeni escreveu:
> On 2013-05-03, Iain Mott wrote:
> 
> > Theoretically then, for a given listening position (for which we 
> > position a mic in order to make impulse responses), if we make impulse 
> > responses for every possible location in the space, it would be 
> > possible to spatialise a sound with both angular and distance cues, 
> > through a process of convolution with the various impulse responses 
> > and subsequent ambisonic decoding?
> 
> Yes. The main difference to binaural work is that the responses are 
> considerably longer and capture all of the room acoustical cues as well. 
> That means they are even less safe to sum to each other than HRTFs, and 
> harder to decompose, so interpolating between them is likely out of the 
> question. Also, at progressively higher orders you start to capture some 
> spatial detail as well, which would eventually lead to proper auditory 
> parallax when the channel count goes into the hundreds -- a potential 
> further cue. Unfortunately we're nowhere near anything like that at the 
> moment. Sources inside the rig are also a bit problematic because their 
> nearfield has considerable amounts of higher order energy, which reject 
> with variable (and often unquantified) success.




Re: [Sursound] impulse responses and distance

2013-05-03 Thread Iain Mott
Thanks Dave. Theoretically then, for a given listening position (for
which we position a mic in order to make impulse responses), if we make
impulse responses for every possible location in the space, it would be
possible to spatialise a sound with both angular and distance cues,
through a process of convolution with the various impulse responses and
subsequent ambisonic decoding?

Iain



Em Sex, 2013-05-03 às 12:10 +0100, Dave Malham escreveu:
> Hi there,
> The two main mechanisms (which are certainly not the only ones!)
> for distance perception are the ratio of direct to reverberant sound
> and the pattern of early reflections. Both of these are captured with
> B format impulse responses but of course only for the given
> source/listener/room configuration You can edit the impulse response
> to some extent to change these or record multiple IR's and interpolate
> between them - but frankly it might be better to build models based on
> the recorded IR's and/or room plans and then then synthesise the IR's.
> 
> 
>  Dave
> 
> On 3 May 2013 11:00, Iain Mott  wrote:
> Hi list, I wonder if someone could clear up some doubts I
> have:
> 
> Does an ambisonic impulse response recorded in a space, with
> microphone
> and impulse source at specific locations, reproduce any
> distance cues
> when convolved with an anechoic mono source and decoded
> ambisonically
> over a speaker array, or just angular cues?
> 
> I know that HRTF filters are recorded anechoically, so distance
> of the
> impulse wouldn't matter, as i understand it. But what if
> impulses were
> recorded at various angles and a particular distance in a live
> room? How
> would one set of angular responses at a given distance compare
> with
> another set made with the same angles but at a different
> distance?
> 
> Thanks,
> 
> Iain
> 
> 
> 
> 
> 
> -- 
> As of 1st October 2012, I have retired from the University, so this
> disclaimer is redundant
> 
> 
> These are my own views and may or may not be shared by my employer
> 
> Dave Malham  
> Ex-Music Research Centre
> Department of Music   
> The University of York 
> Heslington
> York YO10 5DD
> UK
> 
> 'Ambisonics - Component Imaging for Audio'
> 
> 
> 




Re: [Sursound] impulse responses and distance

2013-05-03 Thread Iain Mott
That's right, ambisonic. I was thinking though: since in the case of
binaural spatialisation an anechoic source can be spatialised (in an
angular sense) through convolution with impulse responses taken at
various angles (with interpolation, I guess, for angles between those
measured), could a source be spatialised in an ambisonic array by
convolution (and later ambisonic decoding) with a series of impulse
responses taken at various angles? Further, if these angular
measurements were taken at various depths (distances between impulse
source and mic), would it be possible to impart the perception of
various depths in the rendered sound source?

Hope this makes sense.




Em Sex, 2013-05-03 às 13:02 +0200, Augustine Leudar escreveu:
> You're only interested in ambisonics, right? Because generally the
> further away something is, the less high-frequency content it has, due
> to air absorption of sound at shorter wavelengths. Also the
> reverberation-to-source ratio will be higher for more distant objects.
> These are psychoacoustic effects though, so excuse me if my answer is
> not relevant.
> 
> On 3 May 2013 12:00, Iain Mott  wrote:
> Hi list, I wonder if someone could clear up some doubts I
> have:
> 
> Does an ambisonic impulse response recorded in a space, with
> microphone
> and impulse source at specific locations, reproduce any
> distance cues
> when convolved with an anechoic mono source and decoded
> ambisonically
> over a speaker array, or just angular cues?
> 
> I know that HRTF filters are recorded anechoically, so distance
> of the
> impulse wouldn't matter, as i understand it. But what if
> impulses were
> recorded at various angles and a particular distance in a live
> room? How
> would one set of angular responses at a given distance compare
> with
> another set made with the same angles but at a different
> distance?
> 
> Thanks,
> 
> Iain
> 
> 
> 
> 
> -- 
> 07580951119
> 
> 
> augustine.leudar.com




[Sursound] impulse responses and distance

2013-05-03 Thread Iain Mott
Hi list, I wonder if someone could clear up some doubts I have:

Does an ambisonic impulse response recorded in a space, with microphone
and impulse source at specific locations, reproduce any distance cues
when convolved with an anechoic mono source and decoded ambisonically
over a speaker array, or just angular cues?

I know that HRTF filters are recorded anechoically, so distance of the
impulse wouldn't matter, as I understand it. But what if impulses were
recorded at various angles and a particular distance in a live room? How
would one set of angular responses at a given distance compare with
another set made with the same angles but at a different distance?

Thanks,

Iain



Re: [Sursound] Ambisonic hydrophone?

2013-02-19 Thread Iain Mott
The worm-mic reminds me of this installation in Brazil, "Som da
terra" (sound of the earth), at the garden/gallery Inhotim - it's by
the American artist Doug Aitken - where 5 mics are lowered 202 m down
into the earth. Here's a video showing the installation and an EV RE20
protected in a plastic water bottle. Someone describes the sound as
being violent and noisy at times and at other times more peaceful.

http://www.youtube.com/watch?v=ITr5NzDSqlw





Em Ter, 2013-02-19 às 21:46 +, Dave Malham escreveu:
> The trouble with the sort of materials used in condoms is that they
> are inherently stretchy. Under any sort of pressure (more than quite a
> small depth of water) the material presses through any holes and
> either (a) rips or (b) comes into contact with the diaphragm, which
> is potentially almost as big a disaster. That's why the first DIY
> hydrophone I linked to uses an electret capsule immersed in oil in a
> canister. I have used both condoms and cling film to waterproof
> microphones, but only really for splash proofing. For those situations
> you can measure the mic before wrapping and after, so that compensation
> can be made for the inevitable resonances. That's probably not possible
> for underwater systems without the same problems of needing a
> calibrated source and a lot of underwater space which, given the fact
> that it probably won't be possible to use the assembly at any sort of
> depth, is not going to be easy.
> 
>Dave
> 
> PS The wackiest thing I ever sealed a microphone for (with cling film)
> was to listen to worms under the ground for a biologist who was trying
> to find a way to assess the number of worms in a given volume of soil
> without crushing them up with the soil and extracting the (now dead)
> biological material.
> 
> On 19 February 2013 17:48, Martin Leese  
> wrote:
> > Fons Adriaensen wrote:
> >
> >> Don't know what Len will think of it, but putting a Tetramic
> >> (or any such mic) in a plastic bag isn't likely to produce
> >> anything usable. Basic problem is that the acoustic impedance
> >> of water is around 3400 times higher than that of air, so the
> >> water/air interface will reflect almost all energy. You need
> >> a transducer that is more or less matched to the acoustic
> >> impedance.
> >
> > I have read that the standard trick is to use a
> > condom.  However, I puzzle whether this would
> > work with a Tetramic.
> >
> > Regards,
> > Martin
> > --
> > Martin J Leese
> > E-mail: martin.leese  stanfordalumni.org
> > Web: http://members.tripod.com/martin_leese/
> 
> 
> 




Re: [Sursound] how not to advertise binaural

2013-02-01 Thread Iain Mott
Here's my binaural haircut from 1999/2000:

http://reverberant.com/cl/video.htm

Iain


Em Sex, 2013-02-01 às 11:34 +, Peter Lennox escreveu:
> 1983 was the first binaural haircut I heard. It was billed as "Holophonics", 
> I think, but really it was binaural - I think (glad to be corrected if anyone 
> knows)
> 
> Dr. Peter Lennox
> 
> School of Technology,
> Faculty of Arts, Design and Technology
> University of Derby, UK
> e: p.len...@derby.ac.uk 
> t: 01332 593155
> 
> -Original Message-
> From: sursound-boun...@music.vt.edu [mailto:sursound-boun...@music.vt.edu] On 
> Behalf Of etienne deleflie
> Sent: 01 February 2013 00:40
> To: Surround Sound discussion group
> Subject: Re: [Sursound] how not to advertise binaural
> 
> What's interesting is that the demo is actually totally cheating. It relies 
> on cognitive cues, perhaps even more than on presenting realistic stimuli.
> It does this in two ways:
> 
> Firstly, it extensively uses symbolism, through language, to create 
> expectations of spatial experience... "now over here on the left ... now on 
> the right", and "these scissors are very close to your head ...". etc.
> 
> Secondly, it relies on experience-based referential cues. The successful 
> perception of distance, in the sound of the scissors, can be at least partly 
> (if not mostly) attributable to the fact that we can only hear scissors if 
> they are close to our ears. When you hear scissors, you always get an 
> impression of proximity.
> 
> Begault (2000) makes this point in his text "3D sound for virtual reality and 
> multimedia" ... and funnily  enough, he speaks specifically of 3D demos where 
> there is "the sound of scissors cutting hair, as if very near your ear." !!! 
> (Page 29) ... so, as far as binaural demos goes, I'm going to call the sound 
> of scissors "the oldest trick in the book" (its been around at least 12 
> years!)
> 
> The other examples he gives are the sound of lighting a cigarette and 
> drinking a glass of water. It is also for this reason that any demonstration 
> that includes whispering, to demonstrate ability to create cues of proximity, 
> should also be treated as somewhat bogus.
> 
> Alternatively, for the spatial music composer, if the composer would like to 
> create a sense of proximity in space they dont need to encode sounds using 
> any particular spatialisation technology, they just need to use the sounds 
> that we only hear in proximity ... such as whispering, scissors, matches and 
> drinking a glass of water!
> 
> Actually, to my mind, this very point is one of the big issues with the 
> strategy of 'mimicking reality' to create realistic perceptions of space.
> The cognitive dimension is largely ignored. And so really ... the 'oldest 
> trick in the book' is perhaps more of a rather sensible strategy. Although 
> once you try to encode a sound that is not typically heard near the ears, 
> then you are stuffed.
> 
> Etienne
> 
> 
> 
> On Fri, Feb 1, 2013 at 4:20 AM, Dave Malham  wrote:
> 
> > For a truly cringe-making demo of binaural, check out the "Virtual 
> > Barber Shop" video at
> >
> > http://www2.electronicproducts.com/Surround_sound_vs_3D_sound-article-
> > fand_sound_feb2013-html.aspx
> > .
> > Can't say it works much better (if at all) than any other I've heard 
> > in 4 decades in the business. It would also be interesting to know 
> > what people think of the demo further down the page of the crosstalk 
> > cancelled stuff that's supposed to work on laptops - it's barely 
> > perceivable as stereo on my MacBook Pro.
> >
> >  Dave
> >
> > --
> > As of 1st October 2012, I have retired from the University, so this 
> > disclaimer is redundant
> >
> >
> > These are my own views and may or may not be shared by my employer
> >
> > Dave Malham
> > Ex-Music Research Centre
> > Department of Music
> > The University of York
> > Heslington
> > York YO10 5DD
> > UK
> >
> > 'Ambisonics - Component Imaging for Audio'
> > ___
> > Sursound mailing list
> > Sursound@music.vt.edu
> > https://mail.music.vt.edu/mailman/listinfo/sursound
> >
> 
> 
> 
> --
> http://etiennedeleflie.net
> 

[Sursound] ambisonic installation - video

2012-11-09 Thread Iain Mott
Dear list,

I'm sending a link for some video documentation of my installation with
Simone Reis as well as Nelson Maravalhas, Alexandre Rangel and others:
"O Espelho" (the mirror), which is now on show in the Federal District
of Brazil.

Please see the first video on this page:

http://reverberant.com/oe/videos.htm

The video shows a "Pepper's Ghost" illusion. The sound is a stereo mix
of what is played over a hybrid ambisonic and audio-spotlight system in
the installation. The audio spotlight is used to produce an "in-head"
sound, particularly for the lip-synced voices of characters viewed in a
faux-mirror. The ambisonic system reproduces "exterior" sounds. 

I'd like to thank Fons Adriaensen in particular for his Linux ambisonic
software and others on this list for their advice on getting the system
working.

For more information on the project please see:
http://reverberant.com/oe 

Hope you like the video.

Cheers and thanks,

Iain







Re: [Sursound] matching speaker gain and delay compensation

2012-02-17 Thread Iain Mott

> do i understand correctly that the listener will be sitting on this very 
> off-center chair for most or all of the performance? i'm asking because 
> ambisonics won't perform at its best so far out from the sweet spot.
> you seem to have adjusted the delays already to this position, right?

Yes that's correct, the listener sits in the chair the entire time and
the sweet-spot is adjusted for that position. This offset position is
necessary to accommodate various visual illusions.

> if possible, move the speakers to the correct angles relative to the 
> chair as well. they won't have to leave the walls, only the side 
> speakers should go forward a bit, and the rear pair spread out more. you 
> might get into trouble with corner reflections and might need to apply 
> some eq, but it should be possible to make it sound ok.

What are the correct angles? Could you please clarify? The two speakers
to the front of the dressing table can't be moved any closer to one
another because they are placed either side of an oval mirror. The
lateral speakers can be moved. The bottom-left speaker can be moved in
either direction however the bottom-right can't be moved further to the
right as there is a doorway. At a pinch, it could be moved to the
right-hand wall - but the speakers are to be flush-mounted and i think
the angle of the speaker would be skewed too far away from the listener.

>if your listener is staying in the chair, second order should be fine. 
> tweaking the angles should help.

that's a relief

> if you end up having phasiness problems due to the dead room, maybe a 
> height speaker or two could fix that...

would be good - but i think we're limited to the 6. 

> ps: where can i listen to this?

Brasília, in a few months - there's some info here which needs updating:
www.reverberant.com/oe 
> 
many thanks,

Iain




Re: [Sursound] matching speaker gain and delay compensation

2012-02-17 Thread Iain Mott

> 
> Quite a challenge I'd say, in particular if those voices have to
> come near to the listener. With most the speakers at larger distance
> you'll need a room that is quite 'dead' by itself, and then synthesize
> the interaction of a virtual room and the voice.
> 
> Would there be any way to move the side and back speakers closer ?

The room has the dimensions: 3.73 x 5m. There will be carpet on the
floor and we can also put carpet on the ceiling - there will also be a
bed with coverings and objects on the walls. I don't think we can bring
the rear and side speakers any closer. 

At the moment the tail reverb is set at a fixed level so that you only
become conscious of it when the voice is distant (at the doorway for
instance). The early-reflection simulation too only becomes prominent at
a distance. So this does add to the sense of proximity/depth. There will
rarely, probably never, be a voice placed in front of the listener -
most of the talking will be done behind and to the sides.

Here is an extract from the ambdec config file showing distances and
angles:

/speakers/{
add_spkr 1  1.297   54.0  0.0  system:playback_1
add_spkr 2  2.721  111.0  0.0  system:playback_2
add_spkr 3  3.290  156.0  0.0  system:playback_3
add_spkr 4  3.290 -156.0  0.0  system:playback_4
add_spkr 5  2.721 -111.0  0.0  system:playback_5
add_spkr 6  1.297  -54.0  0.0  system:playback_6
/}

> As order goes up AMB panning will concentrate more energy in the
> nearest speakers and the two systems converge, the main difference
> being that AMB will never use just one speaker, and this results in
> smoother movement. The difference between the two systems becomes
> smaller, and the advantage moves towards AMB.

VBAP does have a spread value, which I have set to 10 (out of 100) at
present. It does help smooth the movement - but yes, I need to test with
the full system (it's not been easy to arrange this).
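For context, basic 2-D pairwise VBAP gains (without the spread/MDAP extension mentioned above) can be computed as below. This is a sketch of the general technique, not Ville Pulkki's reference implementation: solve for the gains of the speaker pair enclosing the source direction, then normalise for constant power.

```python
import math

def vbap2d_pair(theta_deg, spk1_deg, spk2_deg):
    """Basic 2-D pairwise VBAP: solve g1*s1 + g2*s2 = p for the source
    unit vector p and the enclosing speaker pair (unit vectors s1, s2),
    then normalise the gains for constant power."""
    t = math.radians(theta_deg)
    a1, a2 = math.radians(spk1_deg), math.radians(spk2_deg)
    # Invert the 2x2 matrix of speaker unit vectors by hand.
    det = math.cos(a1) * math.sin(a2) - math.cos(a2) * math.sin(a1)
    g1 = (math.sin(a2) * math.cos(t) - math.cos(a2) * math.sin(t)) / det
    g2 = (-math.sin(a1) * math.cos(t) + math.cos(a1) * math.sin(t)) / det
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm
```

A source on a speaker gets that speaker alone; a source midway between a pair gets equal gains of about 0.707 each, which is what causes the timbre/width shift during movement that Jörn warns about.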

Cheers,

Iain




Re: [Sursound] matching speaker gain and delay compensation

2012-02-16 Thread Iain Mott
Fons and Jörn,

If i can stretch your time a little further:

The material to be projected in the installation with some degree of
"precision" is voice. Will also be projecting more atmospheric sounds
that don't require accurate placement. I've uploaded a plan of the
installation (which also functions as a user interface in Pd) here:

http://reverberant.com/tmp/plan.jpg 

The red cubes are the speaker-positions. The blue cube is the listener
seated at a dressing table. The listener will view in a false-mirror to
his/her front, various characters entering and walking through the room
- speaking as they go. It's these voices that need to positioned in a
reasonably convincing way behind and to the sides of the listener.

Fons - when you say a "corner case", are you saying that 6 speakers aren't
enough? Just how many do you think are necessary? (I've been trying to
keep costs down - the other two channels of this 8-channel sound card
are being used for other things.)

Cheers and thanks for your attention,

iain


On Thu, 2012-02-16 at 22:14 +0100, Jörn Nettingsmeier wrote:
> On 02/16/2012 09:59 PM, Iain Mott wrote:
> > On Thu, 2012-02-16 at 20:34 +, Fons Adriaensen wrote:
> >> On Thu, Feb 16, 2012 at 06:22:15PM -0200, Iain Mott wrote:
> >>
> >>> To clarify, for a 6-speaker 2d array - ambdec can decode at maximum, the
> >>> following inputs (3rd order horizontal, 0 order height): W X Y U V P Q ?
> >>
> >> Yes, but it doesn't make sense doing so. With six speakers you can't
> >> reproduce 3rd order correctly - 2nd is the limit.
> >>
> >> Ciao,
> >
> > that's a shame - i've only got two speakers to monitor with presently -
> > but strange, 3rd order seemed to be a sharper image than with the 2nd
> > order - wishful thinking perhaps.
> 
> well, you can use 3rd order hypercardioids to drive six speakers, and 
> indeed channel separation will improve, which results in less phasiness 
> and more pleasant diffuse-field sound.
> but the soundfield will no longer be homogeneous - a phantom source 
> between two speakers will be softer than one on a speaker. the sound 
> field will be pretty garbled.
> 
> > Would you recommend then for 6 speakers, 2nd order ambisonic for the
> > simulated room reflections (and jconvolver tail reverb) and
> > vector-panning (vbap) for the direct?
> 
> you mentioned moving sources - for those, i would advise against vbap, 
> because the timbre and apparent source width will shift a lot as the 
> source moves. otoh, if you have static sources, vbap for direct sound 
> could outperform 2nd order ambi.
> personally, i wouldn't bother with such a hybrid system, but there may 
> be good reasons to do it.
> 
> one i could think of is when you have signal sets which are sensitive to 
> crosstalk, say an a/b stereo set. it will always sound nicer when routed 
> discretely to a pair of speakers than as two second-order ambisonic 
> phantom sources panned 60° apart
> 
> best,
> 
> 
> jörn
> 
> 




Re: [Sursound] matching speaker gain and delay compensation

2012-02-16 Thread Iain Mott
On Thu, 2012-02-16 at 20:34 +, Fons Adriaensen wrote:
> On Thu, Feb 16, 2012 at 06:22:15PM -0200, Iain Mott wrote:
> 
> > To clarify, for a 6-speaker 2d array - ambdec can decode at maximum, the
> > following inputs (3rd order horizontal, 0 order height): W X Y U V P Q ?
> 
> Yes, but it doesn't make sense doing so. With six speakers you can't 
> reproduce 3rd order correctly - 2nd is the limit.
> 
> Ciao,

that's a shame - i've only got two speakers to monitor with presently -
but strange, 3rd order seemed to be a sharper image than with the 2nd
order - wishful thinking perhaps. 

Would you recommend then for 6 speakers, 2nd order ambisonic for the
simulated room reflections (and jconvolver tail reverb) and
vector-panning (vbap) for the direct?

thanks for the advice,

iain





Re: [Sursound] matching speaker gain and delay compensation

2012-02-16 Thread Iain Mott

> > When ambi_encode is set to 4th and 5th order, it has 9 and 11 output
> > channels respectively in 2D. I'm not sure how these should be named.
> > Can ambdec decode higher than 3rd order on the horizontal plane? When I
> > try a "new configuration" in ambdec it seems to offer only as high as
> > 3rd order.
> 
> 
> Yes, Ambdec handles up to 3rd order, full 3-D. Looks like you have
> an earlier version - the latest doesn't have the config editor.
> 
> Above 3rd order there are no more single character names, you have
> to use either the l,m (degree,order) notation or ACN (Ambisonic
> Channel Number).

Just upgraded to 0.5.1. 

To clarify, for a 6-speaker 2d array - ambdec can decode at maximum, the
following inputs (3rd order horizontal, 0 order height): W X Y U V P Q ?

cheers,

i



Re: [Sursound] matching speaker gain and delay compensation

2012-02-16 Thread Iain Mott

> 
> One more thing: if you can go up to 3rd order or higher you may
> prefer that to VBAP, in particular for moving sources.
> 
Thanks for the suggestion - actually I was only using 2nd-order encoding
because I'd been unable to get the higher orders to work (using
ambi_encode from iem-ambi). Looking again I see what I was doing wrong.
I have 3rd order running now and yes, it sounds better. In 2D this
gives the 7 outputs which I understand are named: W X Y U V P Q

In jack, these outputs can be patched through to their corresponding
ambdec inputs, which are named: W X Y Z R S T U V P Q
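For reference, horizontal-only encoding that produces that first channel set can be sketched as follows. The plain circular-harmonic gains are an assumption on my part - any channel weighting (e.g. the FuMa convention of scaling W by 0.707) is left out:

```python
import math

def encode_2d(theta_deg, order=3):
    """Horizontal-only ambisonic encoding of a unit-gain source at
    azimuth theta: one cos/sin pair per order. Channel naming follows
    the Furse-Malham letters (W X Y U V P Q for 3rd order); gains are
    plain circular harmonics, with no FuMa/N3D weighting applied."""
    t = math.radians(theta_deg)
    chans = {"W": 1.0}
    names = [("X", "Y"), ("U", "V"), ("P", "Q")]
    for m in range(1, order + 1):
        c, s = names[m - 1]
        chans[c] = math.cos(m * t)
        chans[s] = math.sin(m * t)
    return chans
```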

When ambi_encode is set to 4th and 5th order, it has 9 and 11 output
channels respectively in 2D. I'm not sure how these should be named.
Can ambdec decode higher than 3rd order on the horizontal plane? When I
try a "new configuration" in ambdec it seems to offer only as high as
3rd order.

Thanks,

Iain





Re: [Sursound] matching speaker gain and delay compensation

2012-02-16 Thread Iain Mott
Thanks a lot Fons. Re. near-field compensation, I'm guessing it wouldn't
hurt to keep this enabled in ambdec when combining with vector-panning.
Is this correct?
Cheers,
Iain

On Thu, 2012-02-16 at 10:09 +, Fons Adriaensen wrote:
> On Thu, Feb 16, 2012 at 07:35:16AM -0200, Iain Mott wrote:
>  
> > Does anyone know what equations I need to base these delays and gain
> > attenuations on to match the compensation of ambdec?
> 
> This is quite simple: linear gain and delay are both proportional
> to the distance from the speaker to the 'sweet spot'. If D_max is the
> distance of the furthest speaker, then for a speaker
> at distance D 
> 
> gain   =  D / D_max
> delay  = (D_max - D) / 340.
> 
> Distance in meters, delay in seconds.
> 
> 
> Near-field compensation is another matter but you can't do that
> with VBAP anyway.
> 
> Ciao, 
> 
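In code, the quoted rule amounts to the following sketch (distances in metres, 340 m/s assumed for the speed of sound):

```python
def compensation(distances, speed_of_sound=340.0):
    """Per-speaker linear gain and delay (seconds) that align an
    irregular array at the sweet spot, per the formulas quoted above:
    gain = D / D_max, delay = (D_max - D) / c."""
    d_max = max(distances)
    return [(d / d_max, (d_max - d) / speed_of_sound) for d in distances]
```

For the distances in the ambdec config quoted elsewhere in this archive (1.297 m front speakers, 3.290 m rear), this gives the nearest pair a gain of about 0.39 and a delay of about 5.9 ms, which should mirror what ambdec itself applies.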




[Sursound] matching speaker gain and delay compensation

2012-02-16 Thread Iain Mott
Dear list,

I'd like some advice on implementing gain and delay compensation for an
irregular speaker array with a listener in a fixed position. This
probably is a question for Fons Adriaensen if he is here, as it concerns
a hybrid ambisonic/vector-panning system using ambdec: 

I am combining in this irregular array, ambisonic sound positioning of
simulated early reflections with vector panning of the direct signal for
better focus of the moving sound image. This is being done in Pd with
various IEM externals performing the reflection simulation, ambisonic
encoding and GUI interface as well as ambdec/jack for the decoding and
Ville Pulkki's vbap for the vector panning. It also uses jconvolver/jack
to provide an ambisonic tail reverb.

Ambdec offers gain and delay compensation for irregular speaker arrays
and I would like to implement this in Pd for the outputs of vbap. So the
question finally is: 

Does anyone know what equations I need to base these delays and gain
attenuations on to match the compensation of ambdec?

Cheers,

Iain



[Sursound] Wanted: ambisonic impulse resp. of farmhouse

2011-10-06 Thread Iain Mott
Dear list,

I need to simulate the acoustics of a small farmhouse. I'm looking for a
B-format impulse response. Can anyone please suggest a public-domain
source for this? Unfortunately I don't have access to a b-format /
a-format mic to make my own.

The type of room i'd like to simulate has a wooden floor, plastered
stone walls and an unsealed terracotta ceiling. If you can imagine the
interior of this building:

http://www.djibnet.com/photo/4458985265-casa-de-fazenda.jpg 

At the moment I'm convolving the direct signal (voice mainly) with a
shortened tail impulse from  a small chapel (an Angelo Farina response i
think) in combination with vector-panning of the direct signal and
ambisonically rendered early reflections. I'd like the tail to sound
more authentic. Can anyone suggest a close fit?

Thanks,

Iain



Re: [Sursound] B format mic using omnis?

2011-06-20 Thread Iain Mott
There's also his article on generating ambisonic impulse responses with
a single omni mic in 7 locations - might be relevant?:
http://pcfarina.eng.unipr.it/Public/Papers/113-ICA98.PDF
There is a fortran utility written to convert the 7 mono impulses into
B-format.
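As a very rough illustration of the finite-difference idea behind that paper - not Farina's actual processing, which works in the frequency domain and equalises the gradient response - W can be taken from a centre capsule and each velocity component estimated from an opposed pair of omnis:

```python
def b_format_estimate(p_c, p_xp, p_xm, p_yp, p_ym, p_zp, p_zm):
    """Time-domain sketch of a 7-omni B-format estimate: W from the
    centre capsule; X, Y, Z from finite differences of opposed pairs
    on each axis. The 6 dB/oct slope of the pressure gradient, which
    Farina's method equalises, is deliberately omitted here."""
    W = list(p_c)
    X = [(a - b) / 2.0 for a, b in zip(p_xp, p_xm)]
    Y = [(a - b) / 2.0 for a, b in zip(p_yp, p_ym)]
    Z = [(a - b) / 2.0 for a, b in zip(p_zp, p_zm)]
    return W, X, Y, Z
```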
cheers,
iain





On Mon, 2011-06-20 at 11:54 +0100, Dave Malham wrote:
> May seem a strange question, but anyone ever had any experience of 
> building/using a soundfield type 
> mic using omni's? I have been asked by one of the artists featured on The 
> Morning Line if there's 
> anything he could do with his collection of 4 DPA's (4060-bm's). Not 
> something I'd ever really 
> thought about before, but as Angelo's B-format hydrophone uses omnis ... 
> (http://www.angelofarina.it/Public/UAM-2011/)
> 
>  Dave
> 




Re: [Sursound] ambisonic spatialisation with reverberation

2011-06-04 Thread Iain Mott
Thanks for the message Fons, yes that was me - I must have accidentally
let that earlier email fly from the wrong account. 

I think my problems with spat3d were or are in relation to the
room-configuration - have since followed Jorn's suggestion and combined
the early reflection modelling of spat3d with a convolution reverb tail
(one of your IRs with jconvolver) - which at the moment I am sending a
low-level unmodulated feed from the source. Have toned down the
reflectivity of the modelled surfaces for the early reflections (there
was a comb-filter effect before with the small room size that i'm using)
and it's sounding much nicer.

Thanks for the tip with cross-fading between ER IRs - I'll try this.
Need to create some IRs too...
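If I follow the suggested scheme correctly (a 3rd-order horizontal reverb send, decoded to an octagon whose outputs feed eight convolver inputs), the crossfade gains would come out something like this. This is my own sketch: the speaker layout (multiples of 45 degrees) and the basic sampling decoder are assumptions, and it says nothing about jconvolver specifics:

```python
# Sketch of the panned-reverb-send idea: encode the send at 3rd order
# (horizontal-only) and decode to an octagon with a basic sampling
# decoder; the eight decoder gains then act as crossfade weights for
# eight convolver inputs. Negative side-lobe gains are normal here.
import numpy as np

def octagon_send_gains(azimuth_deg, order=3, n_spk=8):
    theta = np.radians(azimuth_deg)
    phi = np.radians(np.arange(n_spk) * 360.0 / n_spk)  # octagon positions
    # basic 2D decode of an order-3 encoded point source:
    # g_k = (1 + 2 * sum_m cos(m * (theta - phi_k))) / n_spk
    m = np.arange(1, order + 1)[:, None]
    g = (1.0 + 2.0 * np.cos(m * (theta - phi)).sum(axis=0)) / n_spk
    return g
```

As the source azimuth moves, the gains crossfade smoothly and always sum to one.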

Re. the voice - yes, distance is important. At times the voice will be
heard from several metres away to the rear-left; at other times from
the two sides of the seated listener and from close behind - and
everything in between. The listener will have a visual reference for
the source, and the sound of the voice needs to match as well as
possible.

A question about creating IR tails: is this just a matter of trimming
some milliseconds off the beginning of a complete IR with an appropriate
envelope? How many msecs? Just the rising part of the impulse?
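To make the question concrete, here's roughly what I imagine - a hypothetical sketch only, with the split point and fade length pure guesses to be tuned by ear:

```python
# Hypothetical tail extraction: cut the IR at t_split (somewhere after
# the direct sound and early reflections) and apply a short raised-
# cosine fade-in so the truncation doesn't click. Both parameters are
# assumptions, not recommendations.
import numpy as np

def extract_tail(ir, fs, t_split_ms=80.0, fade_ms=10.0):
    start = int(fs * t_split_ms / 1000.0)
    fade = int(fs * fade_ms / 1000.0)
    tail = ir[start:].copy()
    n = min(fade, len(tail))
    # first half of a Hann window: ramps 0 -> 1 over the fade region
    ramp = 0.5 - 0.5 * np.cos(np.pi * np.arange(n) / max(n, 1))
    tail[:n] *= ramp
    return tail
```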

Good luck finding a new home!

Cheers,

Iain


On Sat, 2011-06-04 at 22:02 +, Fons Adriaensen wrote:
> On Fri, Jun 03, 2011 at 09:11:35AM -0300, Iain Mott wrote:
> 
> > I would like to know what jack compatible systems are available on Linux
> > to spatialise a moving mono source in realtime with ambisonics
> > (including distance attenuation) that also provide some reverberation
> > cues. 
> > 
> > I've tried spat3d in csound with a GUI - and while this spatialises the
> > source very well, I've not been so happy with the sound of simulated
> > room reflections - perhaps it's the way I am setting it up. I'm also
> > looking at the SoundScape Renderer (SSR) which is very nice - but
> > doesn't seem to have an implementation of reverberation for moving
> > sources.
> > 
> > Is convolution reverb a possibility for a moving source - eg. is it
> > possible to cross-fade between the inputs of ambisonic IRs set up in
> > jconvolver recorded at various angular locations? This would be ideal
> > for my application which involves the spatialisation of an actress'
> > voice within a room - so that she appears to be walking (and talking)
> > through the space in relation to a seated listener.
> 
> I assume you are AKA 'acousmetre' ?
> 
> Sorry for the delay, I've been quite busy the last week - having
> to move home at very short notice (within 2 weeks actually, and
> I haven't found a new one yet ...)
> 
> Using a set of different ER with the same reverb tail works well
> with static sources, but I've never used it with moving ones.
> I've no idea if crossfading ERs for a moving source will work. 
> If you want to try it, the way to do it is use e.g. 3rd order
> AMB panning for the reverb send, decode it to e.g. an octagon
> and use the decoder outputs to drive the inputs of the convolver.
> 
> OTOH I'm surprised about your results with spat3d, it's supposed
> to be one of the most advanced tools available. 
> 
> For your project, how important is the _distance_ of the virtual
> source  w.r.t. the listener ? Is the actress supposed to move
> very close ?
> 
> Ciao,
> 




Re: [Sursound] ambisonic spatialisation with reverberation

2011-06-03 Thread Iain Mott
Thanks Jorn, that seems a great idea - and I'm pretty sure spat3d
allows for simulating early room reflections without the tail. I'll try
this now in combination with jconvolver. Are there other programs you
could suggest for early-reflection synthesis?

cheers,

Iain


On Fri, 2011-06-03 at 17:47 +0200, Jörn Nettingsmeier wrote:
> On 06/03/2011 02:11 PM, Iain Mott wrote:
> 
> > Is convolution reverb a possibility for a moving source - eg. is it
> > possible to cross-fade between the inputs of ambisonic IRs set up in
> > jconvolver recorded at various angular locations? This would be ideal
> > for my application which involves the spatialisation of an actress'
> > voice within a room - so that she appears to be walking (and talking)
> > through the space in relation to a seated listener.
> 
> that might be too hard in practice - recording a huge number of IRs is a 
> lot of work. the binaural guys have mastered the crossfading problem 
> afaik, but you'd still need to record your own IRs.
> 
> why not go for a combination of early reflection synthesis and tail 
> reverb convolution? should give you the best of both worlds... then 
> again, i have never actually walked the walk, although i'd be very 
> interested in such a system myself.
> 
> all i've done so far was create a few artificial reflections for soloist 
> microphones (using ardour busses, panning, eq and delay, very fussy and 
> not really feasible in day-to-day work.)
> 
> 
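The suggested split (synthesised early reflections plus a convolved tail) could start from something as simple as a first-order image-source model. A toy sketch - room size, absorption, the 1/r law and the panning left out are all illustrative assumptions:

```python
# Toy first-order image-source model of a shoebox room: one image
# source per wall (six in total), each yielding a delay/gain tap.
# A real renderer would also pan each tap to its image direction.
import numpy as np

C = 343.0  # speed of sound, m/s

def first_order_reflections(src, lst, room, absorption=0.3, fs=48000):
    """src, lst: (x, y, z) in metres; room: (Lx, Ly, Lz).
    Returns six (delay_samples, gain) taps."""
    taps = []
    for axis in range(3):
        for wall in (0.0, room[axis]):            # two walls per axis
            img = list(src)
            img[axis] = 2.0 * wall - src[axis]    # mirror source across wall
            dist = np.linalg.norm(np.subtract(img, lst))
            delay = int(round(fs * dist / C))
            gain = (1.0 - absorption) / max(dist, 1.0)  # 1/r with wall loss
            taps.append((delay, gain))
    return taps
```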

-- 

Iain Mott
www.reverberant.com



[Sursound] ambisonic spatialisation with reverberation

2011-06-03 Thread Iain Mott
Hello list,

I would like to know what jack compatible systems are available on Linux
to spatialise a moving mono source in realtime with ambisonics
(including distance attenuation) that also provide some reverberation
cues. 

I've tried spat3d in csound with a GUI - and while this spatialises the
source very well, I've not been so happy with the sound of simulated
room reflections - perhaps it's the way I am setting it up. I'm also
looking at the SoundScape Renderer (SSR) which is very nice - but
doesn't seem to have an implementation of reverberation for moving
sources.

Is convolution reverb a possibility for a moving source - eg. is it
possible to cross-fade between the inputs of ambisonic IRs set up in
jconvolver recorded at various angular locations? This would be ideal
for my application which involves the spatialisation of an actress'
voice within a room - so that she appears to be walking (and talking)
through the space in relation to a seated listener.

Cheers,

Iain




-- 

Iain Mott
www.reverberant.com
