Hi Dave,

Getting to your questions:

> By “current products” I meant hardware audio mixers and DSP processors with 
> multiple inputs and outputs that are readily available, rather than what can 
> be built using things like the Sharc DSP chips or cards. This is certainly 
> beyond my capabilities, but seems to be what you and others are doing.

Yes, I did understand your meaning, and MOTU products were the first of the 
“current products” we used. We are now moving on to others.

Regarding the miniDSP speaker implementation, you are correct: the 
volume/delay matrix was purpose-built for the Sharc chip inside the PoE AVB 
speakers.

Your description of spatial audio processors matches my understanding. The 
Dolby Atmos spatial algorithm is an interesting one: from what I recall, they 
use a 'dual balance' or even 'triple balance' panner for their object-based 
channels, which works particularly well in movie theatres.

> I presume that each of your speakers receives the 32 output channels from the 
> matrix

A 32 x 2 matrix sits inside each speaker, not in the computer. There are two 
output channels because each speaker has an amplifier for a further ancillary 
speaker. Filtering happens after the matrix. We send the same 32 channels to 
every speaker, and the matrix crosspoint values for volume and delay are 
controlled via AVB control messages from the computer. We do have to smooth 
the delay changes. As I mentioned, this allows the system's processing 
capability to grow incrementally as speakers are added.
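
To make the crosspoint idea concrete, here is a minimal Python sketch of a 
32-in/2-out volume/delay matrix whose gains and delays slew toward their 
targets instead of jumping. It is purely illustrative: the real processing 
runs on the Sharc DSP inside the speaker, and the names, sample rate and 
slew constants below are my own assumptions, not the actual firmware API.

import numpy as np

N_IN, N_OUT = 32, 2           # matrix size used inside each speaker
FS = 48_000                   # sample rate (an assumption)
MAX_DELAY = FS // 2           # 500 ms of delay memory per input

class CrosspointMatrix:
    """Volume/delay crosspoint matrix with slewed ('smoothed') updates."""

    def __init__(self, gain_slew_ms=20.0, delay_slew=0.05):
        self.buf = np.zeros((N_IN, MAX_DELAY))   # one ring buffer per input
        self.wpos = 0
        self.gain = np.zeros((N_IN, N_OUT))      # current crosspoint gains
        self.gain_t = np.zeros((N_IN, N_OUT))    # target gains
        self.dly = np.zeros((N_IN, N_OUT))       # current delays (samples)
        self.dly_t = np.zeros((N_IN, N_OUT))     # target delays
        self.g_step = 1.0 / (gain_slew_ms * 1e-3 * FS)  # max gain change/sample
        self.d_step = delay_slew                 # max delay change per sample

    def set_crosspoint(self, i, o, gain, delay_seconds):
        """Apply one control update (e.g. from an AVB control message)."""
        self.gain_t[i, o] = gain
        self.dly_t[i, o] = delay_seconds * FS

    def process_sample(self, x):
        """x: 32 input samples in -> 2 output samples out."""
        self.buf[:, self.wpos] = x
        # Slew gains/delays toward their targets; slewing the delay trades a
        # brief, slight pitch shift for click-free movement.
        self.gain += np.clip(self.gain_t - self.gain, -self.g_step, self.g_step)
        self.dly += np.clip(self.dly_t - self.dly, -self.d_step, self.d_step)
        out = np.zeros(N_OUT)
        rows = np.arange(N_IN)
        for o in range(N_OUT):
            # Linear-interpolated read at a fractional delay per crosspoint.
            rpos = (self.wpos - self.dly[:, o]) % MAX_DELAY
            i0 = rpos.astype(int)
            frac = rpos - i0
            i1 = (i0 + 1) % MAX_DELAY
            d = (1.0 - frac) * self.buf[rows, i0] + frac * self.buf[rows, i1]
            out[o] = np.dot(self.gain[:, o], d)
        self.wpos = (self.wpos + 1) % MAX_DELAY
        return out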

> I’ve looked again at the available MOTU products, which are excellent audio 
> interfaces, but they alone do not seem to provide what is needed for an SAE.

The MOTU devices, like many mixers, have multiple aux buses, each of which 
can be assigned to an analog output for speaker connection. Each input channel 
can be assigned to an analog/USB/ADAT/... input. Each aux bus can carry a 
controllable mix of input channels, so you effectively have a matrix mixer 
inside the device. Control is via OSC commands over Ethernet. Updates are 
driven by quarter-frame (MIDI timecode) messages derived from a DAW, or by our 
own multichannel WAV player, depending on where you want your sound sources to 
come from. As I mentioned, our goal was to move a substantial part of the 
spatial processing onto currently available devices.
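
As an illustration of that control path, here is a small sketch using the 
python-osc package to set one input-to-aux send level, i.e. one matrix 
crosspoint. The IP address, port and OSC address pattern are placeholders of 
my own; the real namespace should come from the device's OSC implementation 
chart.

from pythonosc.udp_client import SimpleUDPClient

# Address and port of the MOTU interface on the network (assumed values).
client = SimpleUDPClient("192.168.1.50", 9000)

def set_send(channel: int, aux: int, gain: float) -> None:
    """Set one input-channel -> aux-bus send level (one crosspoint).

    The address pattern below is hypothetical; substitute the pattern
    documented for your device.
    """
    client.send_message(f"/mix/chan/{channel}/matrix/aux/{aux}/send", gain)

# Move a source across four aux-fed speakers by updating four crosspoints;
# a real panner would ramp these values over time to avoid zipper noise.
for aux, g in enumerate([0.7, 0.7, 0.1, 0.0]):
    set_send(0, aux, g)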

I hope that answers your questions.

Regards,

Richard.


---
Richard Foss (PhD)
Software engineer/director

ImmersiveDSP
46 Silver Ranch Estate
Keurbooms River Road
Plettenberg Bay 6600
South Africa

Email: rich...@immersivedsp.com
Web:  www.immersivedsp.com
Cell:   +27832889354

> On 17 Nov 2020, at 14:39, Dave Hunt <davehuntau...@btinternet.com> wrote:
> 
> Hi Richard,
> 
> In the interests of avoiding too much repetition, I’ve edited the previous 
> conversations.
> 
> By “current products” I meant hardware audio mixers and DSP processors with 
> multiple inputs and outputs that are readily available, rather than what can 
> be built using things like the Sharc DSP chips or cards. This is certainly 
> beyond my capabilities, but seems to be what you and others are doing.
> 
> Having looked at your website, I’ve been trying to understand your approach 
> to this, and how it may agree with or differ from mine and others'. 
> 
> A large number of inputs needs to be mixed to a large number of outputs in a 
> controlled way. A 32 channel in/out matrix mixer with independent amplitude, 
> delay (and possibly other processing) at the crosspoints for each input to 
> each output is indeed desirable. The inputs are mixed to the outputs in a way 
> that depends on the spatial algorithm (ambisonics, Dolby Atmos, VBAP, DBAP, 
> WFS etc.). This requires computer control as the number of instructions is 
> large. If an input sound source needs to move spatially, the instructions 
> need to be smoothed, to avoid it jumping between positions, with a ramp or 
> curve driven by time.
> 
> All this could be built into a digital mixer, but incorporating a good user 
> interface is far from trivial, and this is unlikely to happen as general 
> demand for it is low, and it will be expensive.
> 
> So, we end up with a separate DSP spatial audio engine (SAE for short) that 
> sits between a large mixer or DAW sending many channels, and a large number 
> of amplifiers and speakers. The connections are best done with a digital 
> audio network (AVB, Dante, MADI or other). The mixer and DAW are used 
> relatively normally, though mixes are to several “stems”, which are then 
> "spatialised" , rather than to a stereo or 5.1 output. This avoids adding 
> extra processing loads to the mixer or DAW. This SAE could be software in 
> another, or even the same computer, though processing load and latency would 
> be problematic. Less so when using a DAW (even with video) than with realtime 
> events.
> 
> I presume that each of your speakers receives the 32 output channels from the 
> matrix and the channel it is using can be remotely selected. The DSP in the 
> speaker is used to modify the response of the speaker (EQ, dynamic 
> processing, delay, etc.), and that this too can be remotely controlled.
> 
> The challenge then moves to linking the spatial audio engine to the DAW or to 
> controls on the mixer. In the case of a DAW, this can be done using plug-ins 
> that send messages (OSC or something similar) to the SAE, or the timeline to 
> recall (with smoothing) memories of states in the SAE. In a mixer it would 
> involve reallocation of controls for this purpose, or adding extra controls 
> (e.g. a joystick or several). 
> 
> There seems to be a general consensus on this approach.
> 
> I’ve looked again at the available MOTU products, which are excellent audio 
> interfaces, but they alone do not seem to provide what is needed for an SAE.
> 
> Unfortunately Covid has put a huge damper on progress in this direction, as 
> large scale public events are untenable, and the world economy is being 
> severely damaged. These are indeed “interesting times”.
> 
> I wish you good luck with your products, though at this stage of my life I am 
> unlikely ever to be able to use them.
> 
> Ciao,
> 
> Dave Hunt
> 
> 
>> 
>> From: Richard Foss <rich...@immersivedsp.com>
>> Subject: Re: [Sursound] DBADP
>> Date: 15 November 2020 at 20:03:04 GMT
>> To: Surround Sound discussion group <sursound@music.vt.edu>
>> 
>>> Current products do not allow progress to true Delta Stereophony (DBADP)
>> 
>> 
>> Well conceptually it should be possible if, beyond aux mixes, you have a 
>> further layer of mixes that can comprise aux bus sends (with controllable 
>> delays/filtering/volumes) as well as input channels. A possible problem is 
>> not having sufficiently small delay increments, and not having smoothing 
>> within the device. Anyway, it's worth doing some experimentation! 
>> Implementing DBAP or VBAP is fine.
>> 
>>> DSP chips are now capable of providing it
>> 
>> 
>> Yes, there is a Sharc DSP in the miniDSP speakers we use, and a controllable 
>> 32x2 matrix with delays/attenuation at the cross points.
>> 
>> As you say, running Spat and a DAW is processor intensive. This was one of 
>> the reasons we have turned to using the processors in current devices to do 
>> the post-render mixing/delays. Having this capability in a speaker is great, 
>> because your processing capability grows with each speaker. Having it in an 
>> audio interface/mixing desk means that all the inputs - analog/usb/ADAT/… 
>> can have spatialisation applied to them.
>> 

_______________________________________________
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.
