On 2013-04-20, Eric Carmichel wrote:

But here's what I don't understand about the quaud (quaud.io) mic: They say the four omnidirectional mics lie on the corners of a tetrahedron--essentially the same arrangement as a SoundField, but with omni mics positioned on the corners of a tetrahedron.

For a near-coincident array this is not such a problem, because if you think about the modes which might fit within it, they happen at such high frequencies (above 10kHz) that we can largely neglect them. In this very special first-order-only case, you can approximate three pressure gradients (and by filtering them, velocities X, Y, Z) and an average pressure (W) using four monopoles as well. In this case the mic itself is the singularity, and the physics doing the heavy lifting is the fact that soundfields do have four independent degrees of freedom even pointwise; the first four ambisonic components are present even at a point, and even if you approximate the directional ones by differences of monopoles, you can get the job done. In the theoretical sense the price you pay is reduced sensitivity at low frequencies, which leads to noise amplification when you do the differencing. Practically, the mic assembly itself is a physical barrier which gives you some leverage -- and in this case they're actually talking about mounting the whole thing on the surface of a PCB too. The trouble with inward and outward propagation goes away because there is no inside in a coincident array.
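To make the sum/difference idea concrete, here's a small numeric sketch of approximating W and the three gradient signals from four omnis at tetrahedron vertices. The geometry, spacing, and signal names are my own toy choices, not the quaud design:

```python
import numpy as np

# Illustrative only: four omni capsules at the vertices of a regular
# tetrahedron; W is the mean pressure, X/Y/Z are difference signals that
# approximate pressure gradients at low frequency. A real converter would
# also equalize the 6 dB/octave LF roll-off of the differences (which is
# exactly where the noise amplification comes from).
verts = np.array([[ 1,  1,  1],
                  [ 1, -1, -1],
                  [-1,  1, -1],
                  [-1, -1,  1]], dtype=float)

def omni_to_first_order(capsules):
    """capsules: (4, nsamples) array of omni signals.
    Returns W (mean pressure) and raw X, Y, Z difference signals."""
    W = capsules.mean(axis=0)
    XYZ = verts.T @ capsules / 4.0  # project capsules onto the axes
    return W, XYZ

# Toy check: a 1 kHz plane wave traveling along the x axis, so each
# capsule sees a delay proportional to its x coordinate.
fs, f, c, r = 48000.0, 1000.0, 343.0, 0.005  # 5 mm offsets, illustrative
t = np.arange(1024) / fs
caps = np.stack([np.sin(2*np.pi*f*(t + v[0]*r/c)) for v in verts])
W, XYZ = omni_to_first_order(caps)
# X picks up the gradient along x; Y and Z cancel for this wave.
```

Note how small the X signal is relative to W at this spacing and frequency; that ratio shrinks further as frequency drops, which is the LF sensitivity loss mentioned above.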

So, the mic itself is nothing new, just yet another realization of a soundfield mic, quite possibly cheaper but also less sensitive at LF due to size. The real contribution appears to be in the source separation algorithm.

For that they do an intensity analysis in the Fourier domain, much like DirAC does. Then they apply a sizable bank of beamforming filters in all directions under a plane-wave assumption, which is one way to do infinite-order decoding (in older forms called steering, or a nonlinear, dynamic matrix). Finally, instead of picking one beamformer per source, they do a principal component reduction and pick the leading eigenterms. The combination of the last two steps is essentially equivalent to just doing nonorthogonal factor analysis on the instantaneous directions of arrival we got from the real (propagating, nonreactive) part of the first step, except that the second step helps us avoid a number of basis selection problems (or permutation problems, as the authors call them), so that coherent sources stay together.
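The intensity-analysis first step can be sketched in a few lines. This assumes B-format-convention spectra (a source at unit direction n encodes as [X,Y,Z] proportional to W*n) and is my own illustration of the general DirAC-style idea, not their code:

```python
import numpy as np

# Per-bin active (propagating) intensity from first-order spectra:
# the real part of conj(W) * [X, Y, Z] gives, under the B-format sign
# convention assumed here, a vector pointing toward the source per bin.
def doa_from_intensity(W, X, Y, Z):
    """W, X, Y, Z: complex spectra of one frame, shape (nbins,).
    Returns unit DOA vectors, shape (3, nbins)."""
    I = np.real(np.conj(W)[None, :] * np.stack([X, Y, Z]))
    return I / (np.linalg.norm(I, axis=0) + 1e-12)

# Toy: a single plane wave from direction n, arbitrary spectrum.
n = np.array([0.6, 0.8, 0.0])
W = np.exp(1j * np.linspace(0.0, 1.0, 8))
X, Y, Z = n[:, None] * W          # B-format-convention encoding
doa = doa_from_intensity(W, X, Y, Z)
# Every bin's DOA recovers n; with several sources the per-bin DOAs
# scatter, and that scatter is what the later factor analysis mines.
```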

That sort of stuff always works as long as the problem is at most complete, which is why with four mics they never try to go beyond three or four sources. Any extra sources -- from reflections and the like -- will end up being distributed into the derived signals based on the eventual virtual mic patterns. In this kind of system those patterns are then pretty haphazard: while each yields maximum response towards one of the first four estimated sources (arbitrary directionality) and gets a null in the direction of the other three, in between those directions there is zero control over sidelobe direction (the maxima lie somewhere along the reciprocal, overcomplete "basis", i.e. highly unpredictable if the sources deviate from tetrahedral placement) or amplification. In general that sort of thing probably shouldn't be analysed in the spherical framework in the first place, but simply as a MIMO beamforming problem with rigid spacing of nulls. Such things yield optimal separation of direct sound, but in a busy space they can royally mess up reverb, and especially any attempts at recombination of the derived signals (there's frequency selectivity and maybe phase stuff going on here as well, so in that sense it's similar to SRS's to-5/7.1 upconversion stuff, which I wouldn't easily use in a studio environment but at most as an active matrix).
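The "unit gain on one source, nulls on the rest, chaos everywhere else" point is easy to demonstrate as a narrowband MIMO toy. This is my own sketch of the generic null-steering picture, not the quaud algorithm:

```python
import numpy as np

# Four mics, four sources: the problem is exactly complete, so inverting
# the steering matrix gives each output beam unit gain toward "its"
# source and exact nulls toward the other three. Random steering vectors
# stand in for real geometry; this is purely illustrative.
rng = np.random.default_rng(0)
M, S = 4, 4
A = rng.standard_normal((M, S)) + 1j * rng.standard_normal((M, S))

Wsep = np.linalg.pinv(A)   # rows are the separating beamformers
G = Wsep @ A               # gain of each beam toward each source

# G is (numerically) the identity: perfect separation of direct sound.
# But the response toward any OTHER direction a_test is Wsep @ a_test,
# which nothing here constrains -- that's where reverb and reflections
# get smeared unpredictably into the derived signals.
```

With a fifth source the steering matrix becomes overcomplete, the pseudoinverse no longer yields an identity, and the leakage pattern depends entirely on where that extra source sits -- which is the "at most complete" limit above.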

This is clearly evident when the sphere is large enough to be a human head. So I'm not always clear as to whether it's the mics' virtual orientation in space, or the physical boundary of a spherical surface, that *shapes* the sound and creates the requisite time and pressure differentials.

Omnis obviously don't have any directionality. Cardioids (and most fig-8's) derive their directionality (i.e. the mixing stuff) from their physical design, which has a boundary, with its boundary conditions, somewhere. E.g. the capsules used in a SoundField carry their own "wall" with them. Microflowns and the like are the exception, because they touch velocity directly.
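The standard way to see that whole family of patterns is as a mix of a pressure (omni) term and a pressure-gradient (fig-8) term, p(theta) = a + (1-a)cos(theta). A quick numeric check, my own illustration:

```python
import numpy as np

# First-order directivity as a pressure/gradient mix:
#   p(theta) = a + (1 - a) * cos(theta)
# a = 1 is a pure omni, a = 0.5 a cardioid, a = 0 a pure figure-8.
def pattern(a, theta):
    return a + (1.0 - a) * np.cos(theta)

theta = np.array([0.0, np.pi / 2, np.pi])   # front, side, rear
omni     = pattern(1.0, theta)  # flat response, no directionality
cardioid = pattern(0.5, theta)  # unity at front, null at the rear
fig8     = pattern(0.0, theta)  # velocity-like: +1 front, -1 rear
```

The gradient term is exactly what needs a boundary (or a Microflown-style direct velocity sensor) to realize physically; the omni term is free.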
--
Sampo Syreeni, aka decoy - [email protected], http://decoy.iki.fi/front
+358-50-5756111, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
_______________________________________________
Sursound mailing list
[email protected]
https://mail.music.vt.edu/mailman/listinfo/sursound