Timothy Schmele wrote:
The industry is moving towards object-oriented encoding of 3D
soundtracks anyway. This is perhaps the least elegant, but the most
accurate, as every sound is stored in isolation from the others, with
exact meta information about its spatial position. Theoretically, you
could take this soundtrack and render it over any system you like, be it
Ambisonics, Higher Order Ambisonics, VBAP or Wave Field Synthesis,
among possibly others...
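(As an illustration of the quoted idea, here is a minimal sketch of rendering one "audio object" — a mono source plus azimuth metadata — onto an arbitrary 2D loudspeaker ring via pairwise amplitude panning, a simplified 2D VBAP. The speaker layout and function names are assumptions for illustration, not any standard renderer's API.)

```python
import math

def vbap_2d_gains(source_az, speaker_azs):
    """Per-speaker gains for a source at source_az (degrees),
    using constant-power panning between the two adjacent
    speakers that enclose the source direction."""
    n = len(speaker_azs)
    # indices of speakers sorted by azimuth, treated as a ring
    spk = sorted(range(n), key=lambda i: speaker_azs[i])
    for k in range(n):
        a1 = speaker_azs[spk[k]]
        a2 = speaker_azs[spk[(k + 1) % n]]
        span = (a2 - a1) % 360          # angular width of this pair
        off = (source_az - a1) % 360    # source offset within the pair
        if off <= span and span:
            # constant-power pair panning: g1^2 + g2^2 = 1
            g1 = math.cos(math.radians(off * 90 / span))
            g2 = math.sin(math.radians(off * 90 / span))
            gains = [0.0] * n
            gains[spk[k]] = g1
            gains[spk[(k + 1) % n]] = g2
            return gains
    return [0.0] * n
```

The same object (samples + position metadata) could instead be fed to an Ambisonics or WFS renderer — the layout independence lives in the metadata, not in this particular panner.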
Audio objects with spatial position work only for the direct part of
sounds, a limitation which is often ignored. Reflections and ambience
would actually have to be rendered on some kind of cinema sound
processor. Or would you prefer to mix a real 3D sound field in a studio
environment, anyway?
(The rendering process you were referring to above is just the rendering
of the direct sound parts on different cinema layouts.)
My fear is that audio objects work only if the playback system is very
well defined, say Dolby Atmos. (The speaker system has to be defined at
least more or less.) Then, maybe... But this is actually not the
convincing layout-independent solution people are looking for.
Sell something as the "most accurate" solution, and don't compare it to
anything else?
Best,
Stefan Schreiber
P.S.: The industry (which industry?) < currently thinks > that object-
oriented encoding of 3D soundtracks is the "right way".
_______________________________________________
Sursound mailing list
[email protected]
https://mail.music.vt.edu/mailman/listinfo/sursound