Re: [Sursound] Ambisonics decoder to hrtf with VR support
> On 13 Mar 2015, at 10:19 pm, Jörn Nettingsmeier wrote:
>
>> On 03/09/2015 12:12 PM, Tobix wrote:
>>
>> I've read that ambisonics is good for listener in center, right? This
>> means that if player can move the sound effect will be distorted?
>
> If you're using pre-rendered Ambisonics files, the listener will never
> move from the sweet spot; translations are impossible. What you do is
> track the rotations of the listener's head and rotate the rendering
> accordingly.
>
> If you want to do translations, you will have to render the scene in
> realtime. It's very much like 3D cinema: you can produce fixed content
> for a pre-defined viewpoint with a pair of spaced cams, but if you want
> to allow the viewer to move, you need to model the whole scene.

There are techniques with HOA that will give you some translation - that being part of what makes it worth going to higher orders.

>> The way that openal handles source positions and listener is good for
>> me, but could it be reproduced with ambisonics?
>
> Yes. Ambisonics can just as well be used as a realtime rendering format.
> But there is a tradeoff: if the number of discrete sources is small
> compared to the number of virtual speakers, direct rendering is cheaper.
>
> Consider the case of a virtual 3rd-order 3D rig; let's assume an
> icosahedron. The cost of decoding the 16ch B-format to 20 speaker feeds
> is negligible, but you will have to convolve those with 20 pairs of
> HRTFs, tracked in realtime.

You do realise that you don't have to use virtual speakers for the actual audio. If you take the impulse response of each Ambisonic channel through the whole chain, you can then convolve directly with that (thanks to linearity and time invariance). That means you have to do 16 FFTs, the multiplications for filtering, and 2 IFFTs. Not saying this will end up faster in all cases, but it's a good thing to note.

> This rendering effort will be constant, regardless of the number of
> sound sources in your scene.
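A minimal numpy sketch of that per-channel trick: because decoding and HRTF filtering are both linear and time-invariant, the decoder matrix and the 20 virtual-speaker HRTF pairs can be collapsed into one fixed stereo filter per Ambisonic channel. The decoder gains and HRTFs here are random placeholders, not real data; only the equivalence of the two signal paths is being demonstrated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_spk, ir_len = 16, 20, 128   # 3rd-order channels, virtual speakers, IR length

D = rng.standard_normal((n_spk, n_ch))          # decoder matrix (placeholder values)
hrtf = rng.standard_normal((n_spk, 2, ir_len))  # one HRTF pair per speaker (placeholder)

# Collapse decoder + HRTFs into one stereo filter per B-format channel:
# filt[c] = sum over speakers s of D[s, c] * hrtf[s]
filt = np.einsum('sc,sei->cei', D, hrtf)        # shape (16, 2, ir_len)

def binaural_direct(bformat):
    """Render via 20 virtual speaker feeds, then 20 HRTF pairs (40 convolutions)."""
    feeds = D @ bformat
    out = np.zeros((2, bformat.shape[1] + ir_len - 1))
    for s in range(n_spk):
        for e in range(2):
            out[e] += np.convolve(feeds[s], hrtf[s, e])
    return out

def binaural_collapsed(bformat):
    """Render with the 16 precomputed per-channel filters (32 convolutions)."""
    out = np.zeros((2, bformat.shape[1] + ir_len - 1))
    for c in range(n_ch):
        for e in range(2):
            out[e] += np.convolve(bformat[c], filt[c, e])
    return out

sig = rng.standard_normal((n_ch, 512))
assert np.allclose(binaural_direct(sig), binaural_collapsed(sig))
```

In a real renderer the per-channel filters would of course be applied by FFT block convolution rather than `np.convolve`; the point is only that both paths produce the same output.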
> So if it's just a few, it's easier to just convolve each source with the
> two HRTFs. At 20 sources, you break even; above that, 3rd-order Ambi is
> cheaper.
>
> The situation changes a bit if you consider the diffuse field for
> reverb/ambience: it can be mixed into the Ambi signal at no extra cost,
> but if modeled with individual sources, it's expensive, because you need
> quite a few.
>
> Best,
>
> Jörn

Regards,
Alexis

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit account or options, view archives and so on.
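Jörn's break-even point can be sketched with a back-of-envelope cost model: count one "unit" per realtime convolution, assume per-source B-format encoding is just gains (negligible), and compare. The figures are illustrative assumptions, not a benchmark.

```python
def convs_direct(n_sources):
    # Direct rendering: each source is convolved with its own HRTF pair.
    return 2 * n_sources

def convs_ambisonic(n_sources, n_speakers=20):
    # Ambisonic rendering: encoding is cheap (gains only); the binaural
    # decode costs a fixed 2 convolutions per virtual speaker, regardless
    # of how many sources are in the scene.
    return 2 * n_speakers

for n in (5, 20, 40):
    print(n, convs_direct(n), convs_ambisonic(n))
```

With a 20-speaker virtual rig this crosses over at exactly 20 sources, matching the figure in the mail; the diffuse-field point then tilts the comparison further toward Ambisonics, since reverb sources add to `convs_direct` but not to `convs_ambisonic`.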
[Sursound] BBC Radio 3 surround broadcast
May be of interest to some on here - I don't know if it will be UK only though.

http://rdmedia.bbc.co.uk/radio3/
http://www.bbc.co.uk/programmes/b05202hw
Re: [Sursound] Ambisonics decoder to hrtf with VR support
On 03/09/2015 12:12 PM, Tobix wrote:
> I've read that ambisonics is good for listener in center, right? This
> means that if player can move the sound effect will be distorted?

If you're using pre-rendered Ambisonics files, the listener will never move from the sweet spot; translations are impossible. What you do is track the rotations of the listener's head and rotate the rendering accordingly.

If you want to do translations, you will have to render the scene in realtime. It's very much like 3D cinema: you can produce fixed content for a pre-defined viewpoint with a pair of spaced cams, but if you want to allow the viewer to move, you need to model the whole scene.

> The way that openal handles source positions and listener is good for
> me, but could it be reproduced with ambisonics?

Yes. Ambisonics can just as well be used as a realtime rendering format. But there is a tradeoff: if the number of discrete sources is small compared to the number of virtual speakers, direct rendering is cheaper.

Consider the case of a virtual 3rd-order 3D rig; let's assume an icosahedron. The cost of decoding the 16ch B-format to 20 speaker feeds is negligible, but you will have to convolve those with 20 pairs of HRTFs, tracked in realtime. This rendering effort will be constant, regardless of the number of sound sources in your scene. So if it's just a few, it's easier to just convolve each source with the two HRTFs. At 20 sources, you break even; above that, 3rd-order Ambi is cheaper.

The situation changes a bit if you consider the diffuse field for reverb/ambience: it can be mixed into the Ambi signal at no extra cost, but if modeled with individual sources, it's expensive, because you need quite a few.

Best,

Jörn
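The head-tracked rotation step is cheap to illustrate for first order: under a yaw (a horizontal head turn), W and Z are unchanged and X/Y rotate like a 2-D vector. The function name, channel convention (W, X, Y, Z) and rotation sign are assumptions for this sketch; higher orders need full spherical-harmonic rotation matrices, not just this 2x2 block.

```python
import numpy as np

def rotate_yaw(w, x, y, z, yaw_rad):
    """Rotate a first-order B-format frame about the vertical axis.

    W (omni) and Z (vertical) are invariant under yaw; X and Y
    transform as an ordinary 2-D rotation. To compensate head
    tracking, feed in the head's yaw angle so the scene
    counter-rotates and stays fixed in the world.
    """
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return w, c * x + s * y, -s * x + c * y, z
```

For example, a source encoded straight ahead (on +X) ends up on the listener's right after a 90-degree left head turn, which is what a world-stable scene requires.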
Re: [Sursound] AmbiExplorer for Android version 2.1 out
On 03/09/2015 06:21 AM, Hector Centeno wrote:
> ... and by the way, I just released version 2.2 of AmbiExplorer with the
> LISTEN library added back. There are now both ATK/CIPIC and LISTEN HRTFs
> available.

Great news - I had found one among the LISTEN sets that I liked quite well. Thanks for this wonderful app - I've been using it to show people what's possible, and geeking off about smartphones is a wonderful conversation starter that can then be steered gently to Ambisonics ;)

All best,

Jörn

--
Jörn Nettingsmeier
Lortzingstr. 11, 45128 Essen, Tel. +49 177 7937487
Meister für Veranstaltungstechnik (Bühne/Studio)
Tonmeister VDT
http://stackingdwarves.net