Sun, 01 May 2011 20:17:32 +0100,
Richard Dobson <[email protected]> wrote:

> On 01/05/2011 17:25, Marc Lavallée wrote:
> >
> > I have a naive question for experts: would it be possible to
> > recreate the acoustics of the Philips Pavillon using room
> > simulation techniques and ambisonics spatialization?
> >
> 
> 
> That is what they/we did for the "Virtual Electronic Poem"  Project:
> 
> http://www.edu.vrmmp.it/vep

Wow! :-)

> Sadly I never got to hear the final result. My contribution was
> strictly compositional (composing the "sound routes" in the almost
> complete absence of original data - the original 30-channel
> perforated control tape which controlled both the sound movements and
> the visual elements exists physically but is unplayable).

Sadly, electronic art is very ephemeral...
I hope you will hear the final result one day.
I also hope that the VEP will come back to North America;
I see it was shown in New York last year at The Drawing Center
during the Xenakis exhibit:
http://www.fonurgia.unito.it/wp/?tag=poeme-electronique
The same exhibit came to Montreal for the whole summer,
and I went many times, but the VEP was not part of it. :-(

> The acoustic reconstruction was handled by the Berlin team. The
> project is described in CMJ 33 Vol 2, and presented at ICMC 2005; I
> don't know offhand if the CMJ paper is downloadable externally
> anywhere.

You mean CMJ Volume 33, Issue 2:
http://www.mitpressjournals.org/toc/comj/33/2
The article is downloadable (for a fee).

> As is the way of such things, it is rare indeed to get any funding
> etc for follow-up work, so the reconstruction software is probably
> stowed away somewhere obscure, never to see the light of day again.
> You would need to contact members of the team to see if any sort of
> access is possible. We always hoped to be able to create a publicly
> usable model of the space that could be used e.g. in Csound, so
> composers could explore their music as it might sound in that space.

Once it is forgotten and all the technology supporting it is
obsolete, a reconstruction of the reconstruction will be
needed... This is the kind of work that should go into the public
domain now.

> For the acoustic modelling they created a huge amount (GB-worth) of
> HRTF impulse responses for every speaker (350 of them), for a
> particular central listener position. These were cross-faded
> according to the head-tracked motions of the listener.  The modelling
> was pretty comprehensive, even taking into account the properties of
> the interior surfaces. Resolution was 1deg horizontal and 5deg
> vertical.

Using 350 IRs is probably not that crazy compared to the original Poème.
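As I understand it, the rendering amounts to picking the stored IRs that bracket the tracked head angle and crossfading between them before convolution. A minimal sketch of that idea in Python (the function names, the linear crossfade, and the 1-degree azimuth grid are my own assumptions, not the actual VEP/SuperCollider implementation):

```python
import numpy as np

# Sketch of head-tracked binaural rendering by crossfading between
# pre-computed HRIRs on a 1-degree azimuth grid (assumed layout, not
# the real VEP data format).

def crossfade_hrirs(hrirs, azimuth_deg):
    """Linearly interpolate between the two stored HRIRs that
    bracket the head-tracked azimuth. `hrirs` maps integer degrees
    (0..359) to impulse-response arrays."""
    lo = int(np.floor(azimuth_deg)) % 360
    hi = (lo + 1) % 360
    frac = azimuth_deg - np.floor(azimuth_deg)
    return (1.0 - frac) * hrirs[lo] + frac * hrirs[hi]

def render(signal, hrirs, azimuth_deg):
    """Convolve the source signal with the interpolated HRIR for
    the current head orientation."""
    ir = crossfade_hrirs(hrirs, azimuth_deg)
    return np.convolve(signal, ir)
```

In practice one would do this per speaker feed (times 350), per ear, and update the crossfade smoothly as the head tracker reports new angles, but the basic interpolation step is the same.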

> The binaural rendering was programmed in SuperCollider,  and the
> newly published SuperCollider Book (MIT Press) includes a chapter on
> this aspect.
>
> Richard Dobson

This is very interesting. Thanks for sharing the info.
(I should read the CMJ and visit The Wire web site more often...) 
--
Marc
_______________________________________________
Sursound mailing list
[email protected]
https://mail.music.vt.edu/mailman/listinfo/sursound
