On 2013-06-01, David Pickett wrote:

What I take this to mean is that if one is using WXY (derived from A-format) for horizontal only playback, W will contain unwanted vertical information that should be discarded.

Correct for W, but also for X and Y. And that's not the end of the story: you can't cleanly and linearly subtract even the purest up-down information from the signal set. If you try to do it by subtracting Z from W, it works for signals coming from above, but signals coming from below are suddenly doubled. All you end up doing is putting a cardioid weighting on the signal set, and you can't have the cardioid pointing more than one way at the same time. The same holds for the notional cardioid pointing in other directions, which shows that X and Y are affected too.
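To make the cardioid point concrete, here's a tiny numeric sketch (assuming SN3D-style first-order encoding with W unscaled; `encode` is just an illustrative helper of my own, not anybody's API):

```python
import math

def encode(az_deg, el_deg, s=1.0):
    """First-order B-format encode of signal s (SN3D, W unscaled).
    Hypothetical helper for illustration only."""
    az, el = math.radians(az_deg), math.radians(el_deg)
    w = s
    x = s * math.cos(az) * math.cos(el)
    y = s * math.sin(az) * math.cos(el)
    z = s * math.sin(el)
    return w, x, y, z

# Source straight overhead: W - Z cancels it completely...
w, x, y, z = encode(0, 90)
above = w - z   # 0.0

# ...but the very same weighting doubles a source from straight below.
w, x, y, z = encode(0, -90)
below = w - z   # 2.0
```

That W - Z combination is just the cardioid 1 - sin(elevation): a null upward, double gain downward, and no way to have it both ways at once.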

That kind of reasoning leads you to believe WXYZ is one integral unit which cannot be torn apart. It's necessarily and integrally 3D. If there's any content at all apart from the horizontal plane, it will be reproduced wrong unless you're doing full periphony. And there's *always* stuff happening away from the horizontal plane, if only because the XYZ basis functions/directivity patterns blur stuff out in all directions. (At higher order the problem seems to diminish, because the directional spreading is so much less.) The theory works beautifully if you record a full 3D sound field with a proper SoundField mic and then play it back on a periphonic rig.
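(A quick illustration of that last parenthetical, using a simple axisymmetric pattern of my own choosing, ((1 + cos t)/2)^n, as a stand-in for order-n directivity: at 90 degrees off-axis, first order still picks up half the signal, while third order is already down to an eighth.)

```python
import math

def gain(order, t_deg):
    """Gain of a simple axisymmetric order-n pattern ((1+cos t)/2)^n
    at t degrees off-axis. Illustrative only."""
    t = math.radians(t_deg)
    return ((1 + math.cos(t)) / 2) ** order

first = gain(1, 90)   # 0.5: first order blurs a lot sideways
third = gain(3, 90)   # 0.125: higher order spreads far less
```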

But if you then record even the ideal, infinitely distant point source in the horizontal plane using the same setup, and try to reproduce it over a pantophonic/2D rig, you get the same problem WFS has. The directionality is right, but the attenuation over distance is wrong, because energy escapes from the horizontal plane into the Z direction as well. The extra speakers away from the XY plane which you have in periphony would help focus that recreated wavefront into something closer to a plane wave, which does not attenuate over distance. Without them, even the basic horizontal point source attenuates wrong over distance.
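The spreading laws behind that are easy to sanity-check numerically: a 3D point source decays as 1/r, i.e. -6 dB per distance doubling, while the quasi-cylindrical wavefront a horizontal-only rig tends to recreate decays as 1/sqrt(r), i.e. only -3 dB per doubling. A back-of-the-envelope check:

```python
import math

# Spherical (3D point source) spreading: pressure ~ 1/r.
db_spherical = 20 * math.log10(1 / 2)              # ~ -6.02 dB per doubling

# Cylindrical (horizontal-only) spreading: pressure ~ 1/sqrt(r).
db_cylindrical = 20 * math.log10(1 / math.sqrt(2)) # ~ -3.01 dB per doubling
```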

This doesn't much matter if you're aiming at recreation at a single point. There you could compensate for the effect much like the WFS folks do. All it takes is a bank of simple filters, and if I'm not mistaken, jointly optimizing the distance compensation filters with a BHJ decoder already does the job. In fact, maybe it's in those kinds of equations from the beginning? Can anybody see whether it is?
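(For concreteness, the WFS-style correction I have in mind is, to first approximation, a sqrt(j*omega) pre-emphasis: +3 dB per octave in magnitude with a constant 45-degree phase lead. A quick check of the magnitude slope, purely illustrative:)

```python
import math

def correction_gain_db(f):
    """Magnitude of a sqrt(j*omega) pre-emphasis at frequency f (Hz),
    in dB re unity. Sketch only; absolute level is arbitrary here."""
    return 20 * math.log10(math.sqrt(2 * math.pi * f))

# Gain difference across one octave: ~ +3.01 dB.
octave = correction_gain_db(2000) - correction_gain_db(1000)
```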

Still, there's probably a ton of math behind this, much of which is useless in practice and some of which seems genuinely interesting. This basic "critique" of ambisonics by Christof Faller is clearly true as far as pantophony and wavefront synthesis go. I, on the other hand, choose to interpret the result as saying that you just shouldn't ever do anything below full periphony, or if you have to, e.g. because your source came as BHJ, then prime your decoder to at least mind this sort of argument.

That's also why I mentioned a couple of math gurus in my last post: when you put Faller's argument this way, it ought to be pretty obvious, so how do we then take the next step in decoder design, minding the implications? There are clearly some things we can just neglect, especially in the XY plane; there might be some extra analysis to be done for higher orders; and maybe this sort of reasoning could help build better (active/adaptive/nonlinear?) decoders. Dunno, and this is where my math-fu drops off: I certainly can't work through the math at higher orders. But maybe somebody can, and maybe it leads to something better. We'll see. :)
--
Sampo Syreeni, aka decoy - [email protected], http://decoy.iki.fi/front
+358-50-5756111, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
_______________________________________________
Sursound mailing list
[email protected]
https://mail.music.vt.edu/mailman/listinfo/sursound