On 24 Oct 2012, at 06:56, Richard Dobson <[email protected]> wrote:

> Interesting (in its way), looks like a combo of HOA Ambisonic scene 
> description (using multiple HOA streams possibly of different orders) and 
> bandwidth compression;

A real scene description would describe the space, the sound sources, and the 
listener position independently of each other, and then render the final 
output based on the movement of both listener and sound sources within the 
described space; just as 3D graphics does (there it's light sources, camera, 
and objects, where objects and light sources have physical/optical properties 
such as reflective surfaces, dispersion patterns, opacity, etc.).
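
To make that concrete, here is a minimal sketch (Python, with names and fields 
that are purely my own illustration, not taken from any existing format) of 
what "independent" means: the space, the sources, and the listener are stated 
separately, and only the renderer combines them into directions and distances:

import math

# Hypothetical, minimal scene description: room, sources, and listener
# are described independently of each other.
scene = {
    "room":     {"size_m": (10.0, 8.0, 3.0)},
    "sources":  [{"name": "violin", "pos": (2.0, 1.0, 1.5)}],
    "listener": {"pos": (5.0, 4.0, 1.5)},
}

def direction_and_distance(source_pos, listener_pos):
    # Renderer-side step: only here do source and listener positions meet.
    dx, dy, dz = (s - l for s, l in zip(source_pos, listener_pos))
    dist = math.sqrt(dx*dx + dy*dy + dz*dz)
    azimuth = math.degrees(math.atan2(dy, dx))
    elevation = math.degrees(math.asin(dz / dist)) if dist > 0 else 0.0
    return azimuth, elevation, dist

for src in scene["sources"]:
    print(src["name"], direction_and_distance(src["pos"], scene["listener"]["pos"]))

Move either the listener or a source and the renderer simply recomputes the 
output; neither side has to know about the other in advance.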

Anything that describes audio as things swirling around the listener in one way 
or another has things as backwards as astronomers describing the orbits of the 
sun and the planets around the earth...
...and can hardly be called "sophisticated".

So as far as I'm concerned, there are two approaches: either an audio file 
format that contains a pre-determined audio stream which may be rendered 
adaptively to account for different playback systems (binaural, stereo, N+M 
horizontal layouts, or N+M periphonic layouts), or an entire scene described 
with a 3D scene description language, with the various items animated along a 
timeline and the resulting audio calculated from that. Anything between these 
two approaches is more or less a hack, and it's not even clear to me what 
problem it's supposed to solve, because in essence it just describes what's on 
the timeline of an arbitrary DAW, which is nothing a consumer would need, nor 
anything new to the mixing world.
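
The first of those two approaches is well understood; as a deliberately naive 
sketch (ignoring normalization conventions and any decoder optimization), the 
same pre-determined first-order B-format stream can be rendered to whatever 
horizontal layout happens to be available:

import math

def decode_bformat_horizontal(w, x, y, speaker_azimuths_deg):
    # Basic first-order decode: one gain-weighted sum per loudspeaker,
    # so the same stream serves stereo, quad, or any other horizontal ring.
    outs = []
    for az in speaker_azimuths_deg:
        a = math.radians(az)
        outs.append(w / math.sqrt(2.0) + x * math.cos(a) + y * math.sin(a))
    return outs

stereo = decode_bformat_horizontal(1.0, 0.5, 0.2, [30, -30])
quad   = decode_bformat_horizontal(1.0, 0.5, 0.2, [45, 135, -135, -45])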

What am I missing here?

Ronald
