Re: [Sursound] A comparison of fifteen ambisonic microphones

2023-10-19 Thread Sampo Syreeni

On 2023-10-17, jack reynolds wrote:

It's quite a large project 
<https://drive.google.com/drive/folders/1w0UZt9XCPCQOkxoZBRRiVfwnkohUhnd-?usp=share_link> 
due to the multichannel files


Quite the job, so I added the link into the Motherlode as the first 
outbound link: 
http://decoy.iki.fi/dsound/ambisonic/motherlode/index.en.html . If 
you're willing to ship me the actual files, and to have them shown 
publicly, I *think* I can host a gigabyte or two. Terabytes I can't.


(I'm woefully late on this project of mine/ours, but at least the base
material stays online.)
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


[Sursound] is it *necessary* to mix orders together?

2023-06-11 Thread Sampo Syreeni

On 2023-06-12, Sampo Syreeni wrote:

But my emphasis is on the question, if a decode of 3rd *and* 7th 
order information - yielding in one encoded file - would be 
mathematically correct if it comes to the decoding of the higher 
order content. Would there be missing something (maybe an overall 
lower amplitude of the third order content)?


If you do it *wrong*, you'll get spatial aliasing. This is a big part of 
getting the original first order decoding equation right. It won't sound 
right even in the quadraphonic LTI Makita framework at which the 
founders of ambisonics were aiming.
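For reference, the first order decode being alluded to can be sketched in a few lines; this is a minimal horizontal "velocity" decode to a square rig, with normalisation conventions (SN3D vs FuMa, overall gain) deliberately glossed over as assumptions of the sketch:

```python
import numpy as np

# Speaker azimuths for a horizontal square rig (radians).
azimuths = np.radians([45, 135, 225, 315])

def decode_foa_square(W, X, Y):
    """Basic first-order 'velocity' decode of horizontal B-format to a
    square rig.  Normalisation conventions (SN3D vs FuMa, overall gain)
    vary between systems; this sketch just uses the textbook form
    s_i = W + X cos(az_i) + Y sin(az_i), scaled by 1/N."""
    az = azimuths[:, None]                       # (4, 1) for broadcasting
    return (W + X * np.cos(az) + Y * np.sin(az)) / len(azimuths)

# A source panned to 45 degrees should feed mostly the 45-degree speaker.
sig = np.ones(8)                                 # dummy mono signal
W, X, Y = sig, sig * np.cos(np.radians(45)), sig * np.sin(np.radians(45))
feeds = decode_foa_square(W, X, Y)               # shape (4, 8): one row per speaker
```

Note the opposite speaker gets (ideally) zero feed, which is exactly what breaks down once higher order content is superimposed without order-aware weighting.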


It's even more difficult to do active decoding from there.

As for the higher order content, let's talk about it in private first, 
and then in public, because I already know a bit about this. Note: it's 
probably about directional or locational interference.



Re: [Sursound] Is it possible to mix ambisonic encoded information of different order?

2023-06-11 Thread Sampo Syreeni


So in reverberation, why not try out a SoundField, for a measurement? 
The original Ambisonic mic? Because it's actually calibrated to measure 
not only the pointwise pressure, as its W, but also velocity in XYZ. 
The latter are where you get the reverberant, echoing, reactive field 
measurements from.


But my emphasis is on the question, if a decode of 3rd *and* 7th order 
information - yielding in one encoded file - would be mathematically 
correct if it comes to the decoding of the higher order content. Would 
there be missing something (maybe an overall lower amplitude of the 
third order content)?


As said, it will not be. As your order goes higher, the higher order 
decoder gives you better and better decodes at the higher frequencies, 
just as Fons Adriaensen said above. Done right, you will necessarily 
start to approach the far field diffraction limit of your array, both 
low and high.


However, at the same time, your decode for the lower order will not be 
psychoacoustically optimal, and won't approach it by these principles. 
If you mix in lower order content, it won't decode optimally without 
severe extra work. At something like a 3rd-versus-7th-order split you 
probably won't hear the difference, but if you mix together even first 
and third order, you definitely will; an optimal third order decoder 
does not work nearly as well with a superimposed first order signal as a 
specialised first order rig (esp. four speakers in pantophony or six in 
periphony) would.


The higher order stuff will mix in, though, when done right. It will 
spread out onto a lower order rig, even if the solution is rather 
difficult to find. E.g. on-list we've talked about many numerical 
solutions to these problems, such as Wiggins's Tabu search. But if you 
try to apply the higher order optimization problem to the lower orders, 
it doesn't pan out.


My long term problem is how to blindly tell arbitrary order 
decompositions/additions apart from each other, at least in part. I'm 
not there yet. :/



[Sursound] two ideas, with questions of active decoding

2023-05-10 Thread Sampo Syreeni
For as long as ambisonics has been around, there have also been ideas of 
active decoding, even "infinite order decoding", actively. There's been 
Harpex for two specular sources, derived from the theory leading up to 
Dolby Pro Logic, and then there's been DirAC, which additionally tries 
to separate the reverberant field from the specular sources, which it 
then renders at "infinite order" onto a rig via VBAP. It's all nice and 
might work, but I'd say this work is not too principled or 
generalizable.
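For readers who haven't met DirAC, its core analysis step can be sketched as follows: estimate the time-averaged acoustic intensity vector from first-order B-format and read a single arrival azimuth off it. This is a minimal sketch; the sign and scaling conventions are illustrative, not a reference DirAC implementation:

```python
import numpy as np

def dirac_direction(W, X, Y):
    """Estimate one arrival azimuth from horizontal B-format,
    DirAC-style: take the time-averaged intensity vector (up to
    constants, I ~ <p*v>) and read the angle off it.  Sign and
    scaling conventions here are illustrative only."""
    Ix = np.mean(W * X)            # time-averaged intensity, x component
    Iy = np.mean(W * Y)            # ... y component
    return float(np.degrees(np.arctan2(Iy, Ix)))

# A broadband plane wave encoded from 30 degrees analyses back to ~30.
rng = np.random.default_rng(0)
s = rng.standard_normal(1000)
est = dirac_direction(s, s * np.cos(np.radians(30)), s * np.sin(np.radians(30)))
```

The single-direction assumption is exactly why this doesn't generalize to arbitrary source distributions, which is the complaint below.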


I mean, none of this theory or technique actually seems to take the 
ambisonic framework seriously. For example, none of it generalizes to 
arbitrary order, or works with the spherical harmonic functions as HOA 
proper does. For some reason this kind of analysis leaves me...lacking.


So tell me, researchers, professors on-list, and your post-docs: might 
you delve into a couple of my amateur ideas and questions?


1) Express any active decoding solution as an infinite series of 
spherical harmonics, utilizing only them. Because this kind of solution 
would necessarily solve the active decoding problem for any distribution 
of sources, instead of just one, two, or in periphony even, four.


2) The mixed order problem. Years back I thought it'd be optimal to just 
sum first and higher order ambisonic signal sets together, and decode 
them en masse, using a single decoder.


I was wrong: the optimal decoding for first and second order sets is 
different. So, since people obviously mix first and higher order signal 
sets, the optimal decoder needs to separate the contribution of each 
successive order from the others in order to decode them optimally. All 
of the orders, overlapping to a degree.
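The point that each order wants its own decoder weights can be illustrated with a toy per-order taper. The cosine taper below is a stand-in assumption (a max-rE-style form), not a derived optimum; the takeaway is only that the weights for a given order depend on the maximum order of the stream it belongs to:

```python
import numpy as np

def per_order_gains(max_order):
    """Illustrative per-order gain weighting: each ambisonic order n gets
    its own scalar weight, here a simple max-rE-style taper
    g_n = cos(n * pi / (2*max_order + 2)).  The exact taper is an
    assumption for the sketch; the point is that the taper depends on
    the stream's maximum order, so a 1st-order stream and a 3rd-order
    stream must not share one set of weights."""
    n = np.arange(max_order + 1)
    return np.cos(n * np.pi / (2 * max_order + 2))

g1 = per_order_gains(1)      # weights for a 1st-order stream
g3 = per_order_gains(3)      # weights for a 3rd-order stream
```

Notice that the order-1 weight differs between the two streams, which is why a naive sum of the two signal sets through one decoder cannot be optimal for both.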


This all can be systematically implemented via some variant of 
non-negative matrix factorization, or a couple of related methods. I 
believe.
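As a sketch of the kind of factorization meant, here is plain multiplicative-update NMF (Lee-Seung). Whether this particular variant actually separates per-order contributions is exactly the open question above, so treat it as a starting point, not a method:

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Plain multiplicative-update non-negative matrix factorization
    (Lee & Seung): V ~ W @ H with W, H >= 0.  Offered only as one
    candidate tool for pulling per-order contributions out of a
    non-negative (e.g. magnitude) representation of a mixed signal
    set; whether it helps there is unproven."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + 1e-9
    H = rng.random((rank, V.shape[1])) + 1e-9
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # multiplicative updates keep
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # everything non-negative
    return W, H

# Factor an exactly rank-2 non-negative matrix back into two parts.
rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 30))
W2, H2 = nmf(V, 2)
err = np.linalg.norm(V - W2 @ H2) / np.linalg.norm(V)
```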


Please try it out, or have someone try it out.


Re: [Sursound] [off-topic] Spirals

2023-03-12 Thread Sampo Syreeni
[...] Maybe even use a slight (1') one-sided flap on the left, in order 
to add lift on that side and even more drag; then the rudder might even 
have to be turned, because the plane would now be tending against the 
wind, and flying half sideways.


Yet I think it would be in controlled flight, and since we didn't work 
the flaps or the spoilers too hard, they would be usable even now. 
(After a death spiral recovery, with the gain in airspeed, it's never a 
given that any control surface is in condition anymore, and it might go 
into a high-speed stall if used. My theory of recovery tries to 
distribute the load of the recovery over as many surfaces as can be 
used. I don't think my theory is too far from what the NTSB, and many 
others, have recommended.)


3. As the wings return to level, the excessive speed will put the 
aircraft into a steep climb.


Only if you're in steady flight, then. You might not be, and the airfoil 
might not steady itself. After such a manoeuvre, which often takes the 
craft into slowly dying oscillation about its three separate rotational 
axes, coupling those into its translational axes, and finally into its 
forward and downward momenta, it often goes into pilot induced 
oscillation ( https://en.wikipedia.org/wiki/Pilot-induced_oscillation ). 
Which is very difficult to correct after a graveyard spiral, since the 
pilot still has to recover, actively controlling the aircraft. Se cannot 
let go of the craft for fear of the craft doing a slower nose-down, but 
se'll now have to deal with induced oscillations from the airframe, say 
phugoid oscillations, which can feed back into hir decision-making. ( 
https://en.wikipedia.org/wiki/Aircraft_dynamic_modes ).


That steep climb, if not corrected, leads to just this, and after the 
drop in altitude from a spiral, even if you let it be. ( 
https://www.youtube.com/watch?v=rFWfrmjAQxY )


Let it happen but keep the pitch angle under control. You will regain 
some of the lost altitude, and airspeed will decrease.


As I said, not necessarily. In a graveyard spiral you might already have 
reached the maximum design speed of your airframe, and you might be, 
beside your intentions, closer to ground than you thought you were. 
Recovering from such a situation — if even possible in those 176 seconds 
— doesn't quite follow your normal pilot guidelines.


4. As you approach normal airspeed, bring back power and level off.

I'd rather say: level off, of course, but proceed at something 
approaching minimum speed, at a low level, because you don't know how 
badly you just damaged your airframe in the recovery. You would have 
pulled multiple g's, and nothing apart from fighter jets is rated for 
anything like that, especially laterally.


If you then fly a potentially damaged airframe, it's better to do so at 
a lower altitude and at a lower speed, because 1) there's less 
turbulence to shake things loose, 2) in thicker air you'll get more lift 
per mile, so that less fuel keeps you up longer in case of a recurring 
fault, 3) it's safer to do a crash or a "water landing" from a lower 
altitude, with the lower glide slopes and lower airspeeds, and 4) the 
radio coverage makes more sense when flying low, at least over 
high-turnover airfields such as we'd have in the busiest parts of the 
US; we'd want to have it local here/there.



Re: [Sursound] [ot] off-radio

2023-03-11 Thread Sampo Syreeni

On 2023-03-10, Eero Aro wrote:

The hum can be heard as a horizontal noise somewhere in the 
distance.


Now this is rather interesting. "The hum?" Tell me more, what is this 
"hum" all about?


I didn't mean "The Hum". I meant the noise that comes from the 
engines, motors and tyres of the vechicles and from the ventilation 
and air conditioning fans of the buildings.


That's really, *truly* difficult to deal with, to filter out, because we 
don't have an overarching mathematical theory of such mixed noises.


It's also a mixed bag. We do have an extended theory of how to deal with 
it in case it happens to mix into something like Gaussian noise, or its 
extension towards stochastic processes of the same kind.


But then it often does not. Impulse noise is a particular problem, 
especially in the radio bands I'm trying to learn about, year after 
year. The statistics of that sort of thing, both in time and over 
frequency, in any known transform domain, are just horrible. Nobody 
really has a coherent theory to work from, there.



Let's not go to the Hum. please.


Let's not. Just saying, the whole idea of "The Global Hum" really just 
comes from our lack of understanding of wave mechanics. More in radio 
than audio, but still in both. (And then fuck the dynamics of plasma in 
the near-earth conditions. Or stellar winds. Solar coronal heating after 
a sunspot. That shit is just *wild*, and *wildly* under-/misunderstood. 
Just, no, I don't get what it does to my audio setup, and nobody else 
does either.)




Sounds pretty much like what our tyres here in Finland do, in noise.


Sampo, I am in Finland, not far from you.


Yet do you understand how tires fold into the discussion? Because they 
do. They do *all* sorts of nonlinear, resonant things. And so does the 
road. Especially our winter tires do, at the higher frequencies.


Did you know that nowadays they build the studs in our winter tires, and 
their toroidally varying tension, *by design* into a pattern which 
evokes multiple nonharmonically related, complex vibration modes over 
the tire? In order to drive off water, and regain traction.


This is actually a thing. Your tires are manufactured to do internal 
acoustics.


Well, the distant traffic noise comes from the tyres pressing against 
asphalt, partly or totally covered with ice. On snow, that noise is 
pretty quiet.


It actually is not, because snow crystals being crushed down at speed is 
kind of loud. Part of the reason we salt the road is also about taking 
down the loudness.


Obviously, there's a lot more to be reckoned with here. Say, skidding on 
an icy road. You might think it's just a one-time thing. But on a busy 
highway this too happens at about ten second intervals per kilometre, 
the sound power put out is extremely high, and averaged over a usually 
rather straight stretch of highway, the scaling law of how it all 
radiates outwards flattens a bit. It slowly tends towards coherence, so 
that it radiates further, more efficiently.



Re: [Sursound] [off-topic] Spirals

2023-03-09 Thread Sampo Syreeni

On 2023-03-08, Steven Boardman wrote:


I would love to make a sound do a death spiral motion .


As would I. So let's make one.

The thing is that it would have to be slowly elevating, stereophonic. 
Very slowly, because you don't really *understand* a graveyard spiral 
before you die. Any music of this ilk would have to reflect...it. This 
spiral.



With an LFO,


Definitely not to be heard.


on each axis,


Yes, but do you even know how it's heard? I obviously do.

but a load of Doppler, distance attenuation and filtering, I think I 
could kill the thread quite quickly


Over SSB demodulation you probably could not hear any of that. Not 
unless you were talking with an orbital module crashing down. Doppler 
modulation isn't much of a problem unless you do wideband digital 
thingies at about fighter jet speeds: transonic to something like Mach 
2, changing all the time, fast, and in angle, so that the antenna 
doesn't track well either.
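To put rough numbers on that: the first-order Doppler shift is f·v/c, which is negligible at traffic speeds but tens of hertz on an HF carrier at fighter-jet closing speeds. The 10 MHz carrier and the roughly Mach-2 closing speed (~686 m/s) below are illustrative assumptions:

```python
def radio_doppler_hz(f_carrier_hz, v_radial_ms, c=299_792_458.0):
    """First-order Doppler shift f * v/c for a radio link with closing
    radial speed v (m/s).  Non-relativistic approximation, fine for
    aircraft speeds; the example numbers are illustrative only."""
    return f_carrier_hz * v_radial_ms / c

# A 10 MHz HF carrier against a ~Mach-2 closing speed.
shift = radio_doppler_hz(10e6, 686.0)
```

A shift of a couple of tens of hertz is already audible as a pitch offset in SSB, and at microwave carriers it grows proportionally, which is where the wideband digital tracking problems start.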



[Sursound] [ot] off-radio

2023-03-09 Thread Sampo Syreeni

On 2023-03-08, Eero Aro wrote:


Chris Woolf wrote:

Anyone any ideas how one could provide an audio horizon that could be a 
mimic of the gyro artificial horizon?


A vague thought, that applies only to a small amount of surround sound 
recordings. [...]


I also maybe should say, as a vague thought, that many of the early and 
still extant aeronautical radio thingies do just this. The 
nondirectional beacons spell out their identities in slow Morse code. 
Typical instrument landing systems, even if they're decoded digitally by 
now, were originally designed to be listened to by ear, both laterally 
and vertically; if you listen to them by ear, you actually *can* still 
fly by them.
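As a toy illustration of how an NDB identifier is keyed: the helper below (hypothetical, not any real avionics code) expands letters into on/off segments in dot units, which is enough to gate a beacon tone by ear-speed Morse timing (dash = 3 dots, 1-dot gap within a letter, 3-dot gap between letters):

```python
# Hypothetical helper: turn an NDB identifier into the dot/dash keying
# pattern you'd hear.  Only a subset of the Morse alphabet, for the sketch.
MORSE = {"A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
         "H": "....", "K": "-.-", "L": ".-..", "S": "..."}

def keying(ident, dot=1):
    """Return a list of (key_down, duration) segments for an identifier,
    in dot units: dash = 3 dots, 1-dot gap within a letter, 3-dot gap
    between letters.  Gate a sine tone with this and you have the
    audible beacon."""
    out = []
    for i, ch in enumerate(ident.upper()):
        if i:
            out.append((False, 3 * dot))            # inter-letter gap
        for j, sym in enumerate(MORSE[ch]):
            if j:
                out.append((False, dot))            # intra-letter gap
            out.append((True, dot if sym == "." else 3 * dot))
    return out
```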


I do mostly nature recordings and record also in urban areas, where 
the distant traffic hum is always present.


Ever heard what happens when you get an inversion layer, so that there's 
suddenly a vertical component to the soundfield? Because it does happen, 
from time to time.


Around here in Finland, it often happens in the summer over our many 
lakes. The inversion layer over the lake beams sound for 
miles/kilometres on end, and since it does so, you'll have a stark 
Z-component upon reception, even if you sent it as a purely cylindrical 
wave. (Rarely do you do that even, since pantophonic isn't right 
cylindrical to begin with. And then, no antenna or microphone array 
really does either right pantophony or periphony.)



The hum can be heard as a horizontal noise somewhere in the distance.


Now this is rather interesting. "The hum?" Tell me more, what is this 
"hum" all about?


Here in the north the distant traffic noise is also different in the 
winter and in the summer. We use studded tyres in the cars and they 
cause more high frequencies in the noise than unstudded tyres.


Sounds pretty much like what our tyres here in Finland do, in noise. 
Though, is it really the same?


Another thing that changes the sound scene in the winter is snow, it 
makes the general acoustics more dry and then it is easier to detect 
the direction of single sound sources.


I'd argue snow attenuates single sound sources, and so makes it easier 
to multilaterate them from a distance. But if you try to measure the 
sound source which is a tyre pressing on snow, it's an almost 
im*possibly* nonlinear source.


It really is. Just do a high order Volterra series minded correlation 
between what a driven tyre does and how it sounds. Especially on snow, 
ice, and gravelled snow. It's almost stupidly nonlinear, and how it 
skids can take almost half a minute to die down. The analysis of such a 
thing is, even now, almost impossible to do optimally.


The problem is that a constant wide spectrum noise (the traffic hum) 
is more difficult to localize than signals that have transient 
content.


True, if you want to localize the stuff. But then at the same time, you 
can localize a steady noise better than any single impulse, by averaging 
it in time. Here we even have information theoretic optimality theorems 
to bring to bear.
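That time-averaging argument is easy to demonstrate: the cross-correlation peak between two microphone signals recovers the inter-mic delay of a steady noise reliably, because the whole record contributes to the estimate. A minimal sketch:

```python
import numpy as np

def tdoa_samples(a, b):
    """Estimate the time difference of arrival (in samples) between two
    microphone signals from the peak of their cross-correlation.  For a
    long noise record the correlation averages over time, which is why a
    steady noise can localise *better* than a single click."""
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)   # lag of a relative to b

# 'a' is 'b' delayed by 7 samples; the averaged correlation recovers it.
rng = np.random.default_rng(2)
n = rng.standard_normal(4096)
d = 7
a = np.concatenate([np.zeros(d), n])
b = np.concatenate([n, np.zeros(d)])
lag = tdoa_samples(a, b)
```

With a single short impulse in noise, the same estimator degrades badly; with 4096 samples of steady noise, the peak stands far above the correlation sidelobes.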


Having said that, we _do_ localize an above flying jetplane, although 
it produces a noise type sound. We know from experience, that an 
aeroplane almost always is flying above us.


Yes. We have our neural circuits, and many of the deep learning circuits 
of today do the precise same thing.


However, the way we (and then the deep learning circuits) do it is not 
just about a "noise type sound". What happens is that our hearing does 
wideband, even Doppler, analysis. We just hear, through all of that 
noise, how things move, and even parts of what they *are*, as *initial 
emitters of sound*. We're just that good, as the best of mammals.


But are we actively aware of the fact, that distant traffic hum 
appears as a zone above the horizon.


Are we, though? I tend to be aware of such things, but most of the time 
I still can't attend to anything horizontal. And then, when I *can*, 
suddenly it's the vertical axis I can't attend to.


Also, it would be somewhat strange to put artificially some kind of 
signal "beacons" at the horizon level around the listener, because 
they aren't part of the actual recording.


In the radio amateur circuit this has been done, in so many ways. In 
fact with actual "beacons", as in WSPR and such. There *is* the idea of 
QRP radio direction finding as a sport, and who knows what else.



Re: [Sursound] [off-topic] Spirals

2023-03-09 Thread Sampo Syreeni

On 2023-03-08, Marc Lavallée wrote:

The article is freely available here: 
https://ntrs.nasa.gov/citations/20080042307


Fuck. What the actual fuck, right here. You just returned to auditory 
stuff, and showed how a blind pilot could return from a graveyard 
spiral, by ear.


I *certainly* didn't know something like this would be possible. Jesu 
Kristu, this is just...wrong.



Re: [Sursound] about aviation [ot]

2023-03-08 Thread Sampo Syreeni

On 2023-03-01, Fons Adriaensen wrote:


What you do there is:



1) you push down on the yoke to recover airspeed,


Do that if you want to crash even sooner.


You also do that in order to avert an impending stall.


2) you use ailerons to level off,


You need to use ailerons, but certainly not to level off.


To level off from the spiral or, as the case may be, from an excessive 
roll. The fall is another matter, to be dealt with separately.



3) apply lift and drag via flaps


Your flaps may be ripped off if you do that.


That's why you don't apply them in full, or willy-nilly.

This is all *so completely wrong* that I don't know where to start to 
correct it.


Is it, though? I kind of think I know what you are talking about in 
pilot training. However what I'm talking about is the optimized control 
law of a modern fly-by-wire fighter jet, or say something like an 
Airbus jumbo. Especially the former *will* know it's aerodynamically 
"unstable" (so as to say it's more "general" in its solution), so that 
in general the optimum path back from a death spiral will almost 
necessarily go through things like intermittent stalls on all flight 
control surfaces.


Sure, human pilots typically haven't been trained to do anything of the 
like. Commercial pilots probably shouldn't, at all. But if you think 
about how to control an aircraft in full, how to control its full state 
space while observing the same, control theory wise, you'll fast see 
that the easy and safe manoeuvre taught to pilots is *not* the optimum 
one. And in fact it's not what highly automated fighter jets such as the 
F-22 or the F-35 really do; nor is it what any of the modern Airbus jets 
does. They in fact employ spoilers and sometimes even flaps, by 
automation; they pull down even in a heavy spin in order to preserve 
planform stresses, and so on.


Fons, surely it shows I've not flown an aeroplane in my life. But at the 
same time, it surely also shows you've never written a line of code 
which would automatically and optimally take an airplane out of a death 
spiral. (Neither have I. But at least I've thought about it all, rather 
systematically and seriously. I even see immediate solutions which the 
pilot academy does not; say, spoiling your energy from the death spiral 
by putting your craft into the maximum structurally permitted overall 
stall. You *can* do that and recover from it, you know.)



I posted the essentials of getting out of a spiral a week ago.


Please then post a link to the archives. Apparently I could benefit from 
them.



[Sursound] [ot] Death Spiral

2023-03-08 Thread Sampo Syreeni

On 2023-02-22, Sigmund Gudvangen wrote:


What has this aviation stuff to do with surround sound?


Perhaps nothing at all. That's why you're supposed to put [ot] 
(off-topic) in the subject line.



Re: [Sursound] [off-topic] Spirals

2023-03-07 Thread Sampo Syreeni
Your instrumentation is probably fucked up already; believe you me, no 
inertial thingy ever survives the kind of vibration an aircraft induces 
on itself when put into a multiple-g acceleration, combined with a wide 
stall.



Let it happen but keep the pitch angle under control.


Exactly so. "Let it happen." Many of the worst accidents on record have 
happened because pilots fought their planes, instead of "going with the 
flow", which a plane designed to be statically stable from the start 
would have done by itself. For example (y'all, prolly not Fons), take a 
look at: https://en.wikipedia.org/wiki/Pilot-induced_oscillation .


You will regain some of the lost altitude, and airspeed will 
decrease.


Recovery from a near-miss death spiral is still more involved, because 
you might have to operate the aircraft at structural load, and do a 
recovery from a prolonged stall over all of the airframe. You might 
actually have to "fly" your airframe for over a minute in a full stall 
over every part of it, and then try to regain aerodynamic control. 
"After sinking, flying, and shaking like a rock from a cannon."


It can be done. But nobody teaches you how to do this, and in fact, I 
don't know of *one* algorithm which has flown this route.



4. As you approach normal airspeed, bring back power and level off.


That should be obvious, then. It's that third stage before "Profit" 
which always escapes the eye. ;)



Ciao,


Moro.


[Sursound] about aviation [ot]

2023-02-28 Thread Sampo Syreeni
[...] Think about the lift your wings give you: going to 90 degrees of 
bank, you suddenly have no vertical lift at all. You will sink. You 
might not lose control right yet, but you will be in a situation which 
needs fast correction manoeuvres if you don't want to run into terrain. 
And you typically don't know that you have this problem, because you 
feel like you're in level flight, even if you're factually falling out 
of the sky.


I've never flown a plane. But I'm nerd enough to try to fly one. Here, 
I've taken heed of instrument flying rules, the synthetic horizon, and 
all of the accidents which have happened over the years. I know how 
airplanes behave, and just, *just*, in theory, I might even be able to 
land one, or at least control one. Because I *do* know what these things 
have, their basic controls, airbrakes, thrust reversers, autopilots (two 
of them on Boeings, better yet on an Airbus), all of the usual stuff.


Then I'd still have to tell you what the death spiral really is. Because 
it's about bank, and losing lift by it. You'll never feel it as a pilot, 
because the airframe will keep you at 1 g downwards. What happens is 
that you'll bank, and you'll progressively start to lose lift, making 
your way sideways. You'll lose altitude, without even knowing you're 
dropping as fuck, continuously. Your situational awareness is bunk, and 
you only have nigh on a minute now to recover; if you go into this, you 
can crash straight down in a minute, without knowing you'd done it, 
especially coming down through a cloud cover. You might find yourself 
coming straight down at over Mach 1, without ever having thought to do 
so.


( https://en.wikipedia.org/wiki/Spatial_disorientation I think there is 
another name for this as well. )



 A 'spiral', very different from any spin, was mentioned,
 and Sampo seemed to think that recovery from that would
 require regaining speed.


Take rapid succeeding accelerations against your XY and then XZ planes. 
That will feel much worse than either one alone. Done harshly, it might 
lead to unconsciousness, or even permanent physical injury. Because of 
the "spiral".



Re: [Sursound] So long CIPIC HRTF?

2023-02-15 Thread Sampo Syreeni

On 2022-12-31, Chris Woolf wrote:

It has always struck me that we can indeed adapt remarkably quickly to 
local changes in our personal HRTF, and that therefore this needs to be 
considered as a dynamic affair, rather than a purely static one.


By the way, there are even more remarkable examples of that adaptability 
in psychophysics. Perhaps the most dramatic I know of is the one of 
inverting goggles. Apparently, if you consistently wear a headset which 
flips your vision upside down, in about two to three weeks your circuits 
adjust to compensate, and then back again once you stop the experiment. 
That happens even if you're an adult, so that this is not an example of 
early childhood, low level plasticity and the irreversibility that comes 
with it. (Pace kittens only shown vertical stripes and that sort of 
thing.)


So how much precision is really needed for an HRTF? And how inaccurate 
can it be for our normal correction ability to deal with it?


Perhaps even more to the point, what precisely are the mechanisms which 
enable us to compensate like that? Because if we really understood what 
they are, maybe we could take conscious advantage of them, to rapidly 
train people to work with a generalized HRTF set, instead of going the 
hard way of measuring or modelling individualized head, torso and pinna 
responses.


One obvious answer is feedback. I'd argue the main reason head tracking 
works so well is that we're tuned to correlate how we move with the 
sensory input provoked by the movement. That's for instance how children 
appear to learn first occlusion and then by extension object constancy. 
In audition, I've had the pleasure of trying out a research system in 
which different kinds of head tracked binaural auralization methods were 
available for side by side comparison. The system worked surprisingly 
well even with no HRTF's applied, but just amplitude and delay variation 
against an idealized pair of point omni receivers. I also adapted to it 
*really* fast, like in ten minutes or so.
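That amplitude-plus-delay rendering can be sketched in a few lines. All constants below (head radius, the roughly 3 dB far-ear attenuation, the simplified ITD formula) are illustrative assumptions, not the research system's actual parameters:

```python
import numpy as np

def simple_binaural(mono, azimuth_deg, fs=48000, head_radius=0.09, c=343.0):
    """No-HRTF binaural sketch like the research system described: just
    an interaural time difference (here r*sin(az)/c, a crude stand-in
    for a proper head model) and a broadband level difference of up to
    ~3 dB on the far ear.  All constants are illustrative assumptions."""
    az = np.radians(azimuth_deg)
    delay = int(round(abs(head_radius * np.sin(az) / c) * fs))   # ITD in samples
    far_gain = 10 ** (-3.0 * abs(np.sin(az)) / 20)               # far-ear level drop
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * far_gain
    # Positive azimuth = source to the left, so the left ear is the near ear.
    return (near, far) if azimuth_deg >= 0 else (far, near)

L, R = simple_binaural(np.ones(1000), 90.0)   # source hard left
```

Pair this with head tracking (re-rendering as the azimuth changes) and you have the skeleton of the system described; the surprise in the anecdote is how little more than this is needed once feedback is in the loop.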


But is there more? Head tracking, especially in a directionally solid 
and low latency form, isn't exactly an over the counter solution yet. So 
could you perhaps at least partially substitute the learning from 
feedback with something like synchronized visual or tactile cues, in a 
training session? Because if you could, you'd suddenly gain a lower cost 
yet at least somewhat effective version of binaural rendering; there 
would be money to be made.



Re: [Sursound] So long CIPIC HRTF?

2023-02-13 Thread Sampo Syreeni

On 2023-02-13, Fons Adriaensen wrote:

People listening to surround sound usually are not in a graveyard 
spiral, nor experiencing turbulence or using sex toys.


Not usually. But if they're in a hard-hitting, oscillating theatre, 
with some 120 dB bass, some of which is conducted via the seat... Well, 
yes, they will start to experience that same somato-sensory dysphoria 
we're talking about?


As to Peter Hornfeldt, his YT channel is one of the very few serious 
ones on aviation-related issues. I'm pretty sure he never used the term 
'synthetic horizon'. The instrument is called 'attitude indicator'.


Granted. But who cares, really? We all know what we're talking about. 
Especially when you have to use the instrument in order to deliver 
yourself and your passengers from a death spiral.


Seriously, I'm no flier. I've never once piloted *anything*. But if what 
I call a synthetic horizon went sideways, I'd put in counter-aileron, 
maybe some rudder, and often push the nose down to recover airspeed. 
Simply because of theory. "Trust your meters."



Re: [Sursound] So long CIPIC HRTF?

2023-02-13 Thread Sampo Syreeni

On 2023-02-12, Fons Adriaensen wrote:


What can happen to pilots is something very different.


https://en.wikipedia.org/wiki/Sensory_illusions_in_aviation

Most of it has to do with the vestibular system, yes. But not all of it. 
Some of it also has to do with audition as well.


I've been watching through the Mentour Pilot channel on YouTube. 
https://www.youtube.com/@MentourPilot That Swedish pilot minds pretty 
much everything, while running down the Final Reports of accidents. If 
you take a look at it, audition is often mentioned.


He's no sursound fiend. But he does mind audio. As do NTSB and the like.

The vestibular system (in the inner ear, sensing rotation and 
acceleration) can generate very strong illusions which can lead to 
complete spatial disorientation even when clear visual cues are 
available.


Obviously. https://en.wikipedia.org/wiki/Spatial_disorientation

Even if the ears are involved, this has nothing at all to do with 
audio.


It actually does. Because when you go into a graveyard spiral or one of 
the other nasty things here, like a climbing stall, you'll experience 
turbulence. That is, a strong resonant bass note, or a strong, 
narrow-band hum. That'll fuck up your low frequency hearing as well, and 
since the vestibular tract and the inner ear *are* neurally coupled, 
strong low frequency sounds *can* fuck up your vestibular system.


I've actually taken a test towards that. I put my (our) Magic Wand at 
maximum contact and force to my right lower skull, below the ear. I 
dropped out. Then I re-did the experiment simply by exciting the 
earlobe. I didn't drop out, but I still experienced hard disorientation. 
Didn't know what was up or down, left or right. Nausea, nystagmus, all 
of it.



Re: [Sursound] So long CIPIC HRTF?

2023-02-11 Thread Sampo Syreeni

On 2023-01-01, Bo-Erik Sandholm wrote:

The problem for us with ambisonics is in most cases we do not have any 
visual reference to confirm or adjust the acoustic cues to any 
reference.


This is actually a good point. I've been watching airline crash videos 
lately, and in many of those, what went wrong was that the pilot didn't 
mind hir synthetic horizon.


So, it comes to mind: why don't we deliver a synthetic horizon with 
audio systems? Something like that could well help keep the listener 
oriented, in hir synesthesia. Many fewer channels and speakers might do 
the deed, if helped out by some kind of visual aid.


There exist papers showing that we humans lock in to visual cues 
and our experience, and allow vision to win.


Just ask them pilots. Yes.


Re: [Sursound] So long CIPIC HRTF?

2023-02-11 Thread Sampo Syreeni

On 2022-12-30, Marc Lavallée wrote:

Because individualized HRTF measurements are made using in-ear 
microphones, using in-ear monitors for binaural reproduction seems an 
easy strategy to avoid effects of headphone cavities, [...]


I'd also argue most in-ear monitors have been done wrong, because the 
nice, painless way of doing these measurements interrupts the acoustic 
canal to the ear from the outside: the instrumentation tends to obstruct 
the canal, and so perturbs the very measurement it is taking.


The proper way to do it would be to inject a slight and sensitive probe 
from behind, via tissue, right in front of the tympanic membrane.


This could now be done, but none of the studies I'm aware of has done it 
so. Quite probably because something like this would be painful, and if 
analgesia were tried that close to the inner or middle ear, it'd then 
also foul up your hearing.



Re: [Sursound] So long CIPIC HRTF? (Joseph Anderson)

2023-02-11 Thread Sampo Syreeni

On 2022-12-30, glardner wrote:

They had four loudspeakers in each side of the headphones and fancy 
software to control them.


Just stupid: a confined headphone cover is going to be a resonant space. 
No array of speakers is going to be directional in there, and adding 
more speakers is just going to mess things up.


Salesspeak, fuck that shit. It ain't sound acoustics.


Re: [Sursound] So long CIPIC HRTF?

2023-02-11 Thread Sampo Syreeni
, and then towards extended 
area holophony. Because this sort of thing utilizes the normal 
directional hearing of a human being to its max. It doesn't try to 
trick it the way imperfect binaural does, but just tries to reproduce a 
progressively better soundfield, to be listened to by your extant ears, 
shoulders, big head, and whatever girth you have.


The main issue in my experience has been the acoustics within any type 
of headphone cavity which make creating directional wavefronts almost 
impossible. For example, the prototype of Greff used an open grid with 
speakers, providing interesting results, but once enclosed for better 
frequency bandwidth and a commercial device the same results were not 
achieved.


Also take my amateurish analysis. You see, even if you somehow can 
reproduce a soundfield inside the headphones, how the fuck did it 
interact with the pinnae and the torso? As the very abbreviation HRTF 
already said it should: "head-related transfer function".


Well it didn't. So it sounds sterile. And if you even tried to do *that* 
your earphones would have to be rather huge, multiply active, ... No.



Re: [Sursound] So long CIPIC HRTF? (Joseph Anderson)

2022-12-30 Thread Sampo Syreeni

On 2022-12-30, umashankar manthravadi wrote:

a few years ago i tried to solve these problems by building an eight 
loudspeaker (1" speakers) cage to rest on my shoulders. The array 
moves with the body, but not with the head. The aim was to get rid of 
HRTF and head tracking.


Somehow I gather it didn't work too well in the end. Maybe a more 
compact and more intelligent setup would be warranted instead?



Re: [Sursound] So long CIPIC HRTF? (Joseph Anderson)

2022-12-29 Thread Sampo Syreeni

On 2022-12-26, Bo-Erik Sandholm wrote:

As an amateur I had an idea that I could use a CIPIC HRTF, but I did 
not find an easy way to select one that had any chance of being a good 
fit for me.


This has always been a problem with in-ear measurements and HRTF/HRIR 
processing reliant on them. While in theory that's the ultimate way to 
deliver binaural sound to our two ears, fitting the transfer functions 
has always been a pain, and rarely doable right without going with 
personalised ear molds and the like.



In my naivety I hoped for at least skull diameter and some pictures of 
ear shape.


Not going to happen, because skull, upper torso shape, and e.g. 
subcutaneous fat content in the face and upper torso areas influence the 
near field reaching the ears quite a lot, especially at the lower 
frequencies. Even the uneven cartilage development of the pinnae, and 
the hairstyle worn, appear to heavily influence the field impinging on 
your ear canals. So does clothing and apparel, as does instantaneous 
posture.


So, at least to my amateur's eye (ear), it seems almost impossible to 
average over all of those separate and temporally variable locational 
cues so that we could somehow find a way to calibrate binaural in-ear 
phones to work truly well. When head-tracked, they sort of work, but 
even then I know from a couple of tests that they aren't perfect. The 
two friends I have who've actually had their ear canals molded and have 
tried a KEMAR-like test set on themselves aren't too impressed by the 
results.


So how about going about it a different way for a change? Would it be 
possible to design a set of headphones which actually locally reproduced 
a high order soundfield, for any set of pinnae to utilize? As they 
naturally do? Kind of like do very high order ambisonics or WFS, but now 
right besides the ear, and headtracked? I mean that ought to take the 
HRTF modelling aspect fully out of the picture: the pinna would do what 
it does for each, and then the upper torso reflections would also be 
much easier to simulate numerically, since they are of lower order and 
at lower frequency.



Re: [Sursound] Reynolds mics

2022-12-25 Thread Sampo Syreeni
 frequency and distance.


So what happens is that while the pressure field is fully symmetric 
in the horizontal plane, there still has to be a Z component in order 
to recreate the field to full first order.


Not for it to be correct for a listener in the same horizontal plane
as the speakers.


But yes. Faller's analysis was that you miss a 1/r component in 
intensity by distance, and that analysis I think holds as well. So 
pantophony doesn't cut it: even if you want to recreate a well-recorded 
pantophonic field, you actually need to recreate it using periphony, in 
order to avoid that 1/r error in intensity.
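My reading of Faller's argument, in sketch form (symbols mine): a real point source and the effective vertical line source of a purely pantophonic rig decay differently with distance r,

```latex
\begin{align*}
  p_{\mathrm{point}}(r) &\propto \frac{1}{r}
    & I_{\mathrm{point}}(r) &\propto \frac{1}{r^{2}} \\
  p_{\mathrm{line}}(r) &\propto \frac{1}{\sqrt{r}}
    & I_{\mathrm{line}}(r) &\propto \frac{1}{r}
\end{align*}
```

so the horizontal-only reconstruction is off by exactly that 1/r factor in intensity.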


Of course if the listener moves up or down the sound field he/she 
senses will be incorrect. What do you expect?


Well that'd just be stupid. What do you expect.

No, no, I'm talking about a fully distinct phenomenon. About something 
far more interesting and intricate.


Or if I'm perchance poking holes in my head, at least I'm doing so in 
good company. Analysing stuff rationally, instead of just shouting 
into the wind. So do poke me. Let's see where this gets us. :)


...and that's precisely why pantophony is an idea born dead. We don't 
have infinite vertical line sources, nor microphone arrays which 
mimic their directional patterns. The only thing we really have is 3D 
mic arrays and 3D rigs.


Indeed. But we also have situations in which most sources are in the 
horizontal plane or close to it, and as listeners we tend to stay on 
the ground and not fly around.


Then I think the most interesting thing is to adapt our mic arrays *to* 
this situation. While we also adapt our mathematical machinery to it as 
well. Our reconstruction machinery.


The HOA machinery, WFS too, is pretty good at analysing what then 
happens. Only, they haven't much been used to deal with such uneven, 
anisotropic kinds of problems.


Maybe we/someone ought to take the theory towards those kinds of 
problems as well?



Re: [Sursound] So long CIPIC HRTF? (Joseph Anderson)

2022-12-25 Thread Sampo Syreeni

On 2022-07-14, Braxton Boren wrote:

Also, a reminder that the CIPIC HRTFs are all available (in SOFA 
format) on the SOFA Conventions website:


https://www.sofaconventions.org/mediawiki/index.php/Files


For once something beyond the age old KEMAR thingies. Thank you. 
Profusely!



Re: [Sursound] basic questions

2022-12-25 Thread Sampo Syreeni

On 2022-05-31, Carôt, Alexander - Prof. Dr. wrote:

(Before I answer or even post more questions on-list, I'll have to admit 
I'm a long term, severe alcoholic. Trying to go into recovery, but much 
in decline as of now. Take what I say with precisely that grain of salt 
in mind. Shoulda admitted to it much earlier, true... :/ )


1.) I have a Sennheiser Ambeo Mic which gives me 4 channel Ambisonics 
A-Format. In order to achieve 2-channel binaural Sound via Headphones 
I have to use a respective binaural decoder (as part of e.g. 
https://github.com/videolabs/libspatialaudio), however, is it correct 
that I first have to convert it from A to B-Format ?


In general B-format is the interchange format between tools which is the 
most convenient, and which different tools most understand. It's the 
easiest to handle since it's so regular, mathematically speaking.  So in 
general, yes, go via B-format.


But since you mention boom mics, which are of much higher order and 
more irregular than the first-order POA from a soundfield mic, and even 
binaural rendering as the eventual D-format, there is a certain caveat 
here. You might be able to work directly from A to D while retaining 
more fidelity on the way. You'd lose isotropy, but in frontal work you 
might also gain many more orders of directional accuracy, locally, 
beyond what going via B (or C) admits.


That ain't then ambisonic. The ambisonic framework might help you 
analyze what you're doing, but if you go from A directly to D, your 
transmission chain is something else besides.


2.) If 1.) applies what is the best way to convert it ? I found this: 
http://www.matthiaskronlachner.com/wp-content/uploads/2013/01/2012-09-27-Kronlachner-homepage.pdf


and it states quite straight-forward equations to do it:

W = 0.5 * (A + B + C + D);
X = (A + C) - (B + D);
Y = (A + B) - (C + D);
Z = (A + D) - (B + C);

Is this correct and is B-Format what is typically called the Ambisonics Sound 
field ?


The general *idea* is about right, but the execution isn't perfect. 
First, the first-order sound field mic is built for Gaussian quadrature 
over four points, in 3D, using cardioids A, B, C, D. They are not in a 
plane, so the amplification coefficient over the terms is not as simple 
as you'd have it. You can scale the signals how you want, but the basic 
scaling would have the 0.5 factor in all of the signals, W, X, Y and Z.


If you then want to end up with an average energy scaling, W will end up 
being scaled down less than X, Y and Z. These sorts of reasons are why 
even B-format is ill-defined to date: what are you trying to scale *for*?
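As a sketch, here is the uniformly scaled version, with the 0.5 factor on all four outputs. The capsule order (front-left-up, front-right-down, back-left-down, back-right-up) is an assumption of mine; check it against your mic's documentation, since it differs from the labelling in the question:

```python
import numpy as np

def a_to_b(flu, frd, bld, bru):
    """Naive first-order A-to-B conversion for a tetrahedral mic.

    The 0.5 factor is applied uniformly to W, X, Y and Z, treating
    the four capsules as a plain quadrature.  Real mics additionally
    need frequency-dependent correction filters on top of this.
    """
    a = np.stack([flu, frd, bld, bru])
    m = 0.5 * np.array([[1,  1,  1,  1],    # W: omni sum
                        [1,  1, -1, -1],    # X: front minus back
                        [1, -1,  1, -1],    # Y: left minus right
                        [1, -1, -1,  1]])   # Z: up minus down
    return m @ a   # rows are W, X, Y, Z
```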


Finally, the really nasty thing here is the possibility of spatial 
aliasing, particularly towards the higher frequencies. Because even with 
the best classical SoundField mics, somewhere in the vicinity of 
10-11 kHz the mic no longer physically works as a directionally coherent 
soundfield mic: the wavelength of the sound being captured starts to 
undercut the distance between the mic capsules.
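That ballpark follows from the half-wavelength criterion; with a capsule spacing d of around 1.5-2 cm (a figure I'm assuming, not quoting from any datasheet):

```latex
f_{\max} \approx \frac{c}{2d}
        = \frac{343\ \mathrm{m/s}}{2 \times 0.017\ \mathrm{m}}
        \approx 10\ \mathrm{kHz}
```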


That means that a truly excellent mic has to apodize, in direction, what 
it measures against rising frequency. It has to do so both by physical 
design and by the signal processing which leads from its A-format to its 
B-format output. It cannot be a perfect B-format sensor; instead it has 
to compromise, degrading gracefully at the higher frequencies. And in 
order to do that in a controlled fashion, it cannot just apply a real 
matrix from A to B; it needs a MIMO LTI matrix of filters on the way.


That filtering can be done in the B-domain to a degree, as the early 
SoundFields did it. But really, nowadays, it'd be better to do it via 
DSP. To formulate the whole thing as an inverse MIMO problem.
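A minimal sketch of that MIMO LTI idea, with numpy only: every coefficient of the real A-to-B matrix becomes a short FIR filter, so the directional components can be rolled off against frequency. The taps here are placeholders, not a calibrated design:

```python
import numpy as np

def a_to_b_mimo(a_signals, firs):
    """MIMO LTI A-to-B: out[i] = sum_j (firs[i][j] convolved with a[j]).

    a_signals: sequence of 4 equal-length capsule signals.
    firs: 4x4 nested sequence of FIR tap arrays.  With single-tap
    filters equal to the matrix coefficients this reduces exactly
    to the plain real matrix; longer filters let the design degrade
    gracefully at high frequencies.
    """
    n = len(a_signals[0])
    out = np.zeros((4, n))
    for i in range(4):
        for j in range(4):
            out[i] += np.convolve(a_signals[j], firs[i][j])[:n]
    return out
```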



Re: [Sursound] Dave Malham

2022-12-19 Thread Sampo Syreeni

On 2022-12-10, Chris Woolf wrote:


Does anyone have a current email address for Dave Malham, please?

I have someone who wants to contact him about an old article in 
Microphone Data.


I'd be interested in both the address and the article, too.


Re: [Sursound] 3 point XY - Anyone ever heard about this?

2022-12-13 Thread Sampo Syreeni

On 2022-12-05, Fons Adriaensen wrote:


Does anyone ever heard of a system described as "3 point XY"?


Sounds like hogwash and snakeoil.


It certainly is not.

If the three mics are coincident (in the horizontal plane),
you can combine their signals to obtain

- An omni response
- A front/back fig-8
- A left/right fig-8


Okay, crap. XY stereo apparently is not well-defined. Your version is 
about XY as frontally oriented cardioid responses. There you can indeed 
recover full pantophony from any three such mics which are linearly 
independent. And in fact I already went with this definition when I 
wrote up the OggPCM channel map definition; I even defined XY like you 
do, there.
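Your construction, sketched numerically: three coincident horizontal cardioids at linearly independent azimuths form a 3x3 system whose inverse recovers the omni and the two fig-8s. The gain convention (W = 1 for a unit plane wave, no 0.707 factor) is my assumption:

```python
import numpy as np

def wxy_decoder(azimuths_deg):
    """3x3 matrix turning three coincident cardioid signals into W, X, Y.

    A cardioid aimed at azimuth phi picks up 0.5*(W + cos(phi)*X
    + sin(phi)*Y) from a horizontal soundfield; stacking three such
    rows and inverting gives the decoder.  Any three distinct,
    linearly independent azimuths work, e.g. 0/120/240 degrees.
    """
    phi = np.radians(np.asarray(azimuths_deg, dtype=float))
    m = 0.5 * np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
    return np.linalg.inv(m)
```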


However, the basic version of XY I was originally taught utilizes two 
pure velocity mics in quadrature. I believe that comes from Blumlein's 
early work. If you go by that definition, no combination of such 
coincident "XY"'s can ever recover W.


Whereupon my outburst. Might we agree to call it a glitch in 
nomenclature? Shoulda known better, but reverted to an earlier time?



Re: [Sursound] 3 point XY - Anyone ever heard about this?

2022-12-09 Thread Sampo Syreeni

On 2022-12-05, Fons Adriaensen wrote:


On Mon, Dec 05, 2022 at 07:07:08AM +0200, Sampo Syreeni wrote:


On 2022-12-04, Thorsten Michels wrote:


Does anyone ever heard of a system described as "3 point XY"?


Sounds like hogwash and snakeoil.


It certainly is not.

If the three mics are coincident (in the horizontal plane),
you can combine their signals to obtain

- An omni response
- A front/back fig-8
- A left/right fig-8


Fuck me. XY isn't well defined. Even I believed, when writing up the 
OggPCM specification (draft, still), that it's all cardioids 45 degrees 
apart from center front.


Reducing that to the ambisonic framework, well, obviously you can *then* 
recover WXY as soon as you have a linearly independent combination of 
any two such pairs, or any three cardioids (which should not be called 
XY then, because they are not paired).


What I was referring to above was the older crossed pair, perchance 
from the Blumlein era: "crossed eights". So, two dipoles, not cardioids. 
Per standard theory, no number of those in a coincident mic can ever 
recover W, or anything above first order.


Maybe the XY terminology led me astray? Because whether you take it in 
its established sense of, yes, cardioids, or the older one of crossed 
dipoles, even three such linearly independent pairs (6 channels) lead 
to an overcomplete problem wrt POA, and no real help towards HOA.



Re: [Sursound] 3 point XY - Anyone ever heard about this?

2022-12-04 Thread Sampo Syreeni

On 2022-12-04, Fons Adriaensen wrote:

All such schemes with three coincident mics (e.g. three cardioids, an 
omni and two fig-8s, etc.) are equivalent in the sense that there is a 
simple linear transform from each of them to any other.


Though since two already suffice, the linear transform back to the 
instantaneous soundfield is overcomplete. As such you can perchance use 
three microphones to cut back on directional errors, or maybe average 
out some noise. Presuming your hypercardioid has some higher order 
directional components instead of being of just first order, maybe you 
can utilize the now-threefold symmetry to bring some extra 
directionality into play. (Cut into TriField, frontally.)


But most likely not. Most likely you'll end up with a compact mic array, 
with little if any capability above the classical SoundField.


So unless you exclude such trivial processing, and insist on a system 
with a one-to-one correspondence of mics and speakers, it really 
doesn't matter which one you use.


Even that one-to-one correspondence wouldn't help if the microphones are 
cardioids, hypercardioids, or whatever. Because if their directional 
patterns do not possess characteristics above first order, they can be 
reduced back to a first order mic by some linear transformation. They 
are all equivalent to each other; and where they are not, at first 
order, they fail the isotropy POA ambisonics tries to achieve.



Re: [Sursound] 3 point XY - Anyone ever heard about this?

2022-12-04 Thread Sampo Syreeni

On 2022-12-04, Thorsten Michels wrote:


Does anyone ever heard of a system described as "3 point XY"?


Sounds like hogwash and snakeoil.

If you do XY, you only need two orthogonal mics. It's *possible* that 
you could utilize extra mics in order to do directional, amplitude and 
spectral stabilisation. But it's unlikely to be useful, because the 
physical quadrature over increasing numbers of directional microphones 
in the horizontal plane is apt to become more unstable at the margin 
(leading to more noise), and it's very difficult to cancel XY-style 
pantophonic components unless you truly go to a periphonic, and as such 
much higher order, mic.



Re: [Sursound] contact with Umashankar -

2022-12-04 Thread Sampo Syreeni

On 2022-11-25, Shashank Aswath wrote:


I was able to reach Umashankar at umasha...@hotmail.com last month.


Just saying, we have a bit of a community here. OMG, a good one. 8)


Re: [Sursound] about principled rendering of ambisonic to binaural

2022-12-04 Thread Sampo Syreeni

On 2022-09-17, Ralph Jones wrote:

I’m a composer, not a mathematician, so while I try, I don’t get very 
far at understanding discussions like this.


It's not such a long road from being a composer to being a math geek, 
you know. Especially to becoming a techno fiend, of the surround sound 
kind... ;)


My music, in my twenties or so, sounds like this: 
http://decoy.iki.fi/decoy/download/drone_1.mp3 . It's wholly by the 
numbers, so that quite literally it might have a DC offset. Certainly 
most of it's about high order nonlinearity, because I applied straight 
up aliasing on purpose.


Also, http://decoy.iki.fi/decoy/download/smokin.mp3 .


Re: [Sursound] HOA mic comparison

2022-12-04 Thread Sampo Syreeni

On 2022-10-01, Jack Reynolds wrote:


Myself and several colleagues have recently tested most of the ambisonic mics 
available at the moment.


Please also test them by popping balloons at a distance, at random 
angles, against the theoretical incoming Bessel solution. If you will, 
also against the outgoing Hankel solution after dispersion from the mic.



Re: [Sursound] about principled rendering of ambisonic to binaural

2022-10-16 Thread Sampo Syreeni

On 2022-09-17, Ralph Jones wrote:

But the subject is of real concern for me, because I am currently 
working in 5.1.4 surround format (channel-based, not Atmos) and I 
would dearly love to find a mac-compatible VST plugin that would 
convincingly render my work in binaural.


Hmm. Sursound has been a bit of a mathematicians' list for some time. 
Ambisonics, and later on WFS; averse to the more usual 
x.y.z sound systems.


What if we now finally did some usable code or examples? Us fiends?

So, is there a plugin that does what Fons describes here? (i.e., given 
azimuth and elevation for each channel, render the signals to binaural 
convincingly, including an impression of elevation for height 
channels.)


You gave a channel arrangement. Or a speaker arrangement. That 5.1.4 
arrangement sort of tells us what you have, or where, but not 
*precisely*. It doesn't tell at which precise angles or at which 
distances.


As such, it doesn't tell us how those various channels *sound* to a 
listener. Not precisely. So it's impossible to even start to render them 
into binaural. Also, you'd need to specify a model of your ears, which 
you didn't give. (The KEMAR model is a model of ears, so I'd probably 
start with those. But they are not *your* ears, just a make-do, 
average-sounding set. Plus the set is symmetrized, unlike anybody's 
real ears.)



Re: [Sursound] about principled rendering of ambisonic to binaural

2022-10-16 Thread Sampo Syreeni
call "immersive VR".


Sure, there have been many compatibility formats on the way, to ease 
"The Transition". But quite surely the whole of e.g. Gerzon's vision has 
been for all of us to go into something like full holophony.


Simple fact is that if you want really good results you need very high 
order,


Yes, though you can do better in limited circumstances. There's lots of 
stuff in this vein in the early pantophonic ambisonic work, and beyond. 
TriField I think was one, or Gerzon's work in compatible frontal stereo.


1. such content isn't available from direct recordings (we don't have 
even 10th order microphones), so it has to be synthetic,


True. Which is probably why we have things like Dolby Atmos.

But at the same time, all of these formats nowadays include at least 
first order ambisonics, and often right up to third order. Because it's 
very difficult to make synthetic sound from a recording of real, live 
sound, the only real, systematic way of recording full 3D sound, even 
now, is via ambisonic principles.


2. rendering it from an Ambisonic format would be very inefficient. 
For example for order 20 you'd need 441 convolutions if you assume L/R 
head symmetry, twice that number if you don't.


I am not too certain that is true. If you want to implement that one 
efficiently, you would want to do a reduction to 3rd order, or at most 
4-5th order, isotropic. Because 3rd order sounds pretty damn exact 
already. (I've had the distinct privilege of going into an anechoic room 
to listen to that, thanks to Ville Pulkki and Archontis Politis, at 
Aalto University Lab of Acoustics and Signal Processing. With their 
tweaks, no less.)


I'm thoroughly sure you can, even in the LTI regime, reduce those 
convolutions to a very small fraction of the original while retaining 
full perceptual quality. And then, as I once surmised, you can also, in 
the ambisonic framework, exchange the order of the convolutions and the 
matrix, for less work at lower directional order. (And for gaming and 
whatever VR, for zero latency in processing, at a cost.)
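The exchange I mean can be sketched like so: fold the (speakers x channels) decoder matrix into the per-speaker HRIRs once, offline, so that playback needs just one convolution per ambisonic channel per ear, however many virtual speakers the decode used. Names and array shapes here are illustrative only:

```python
import numpy as np

def fold_decoder_into_hrirs(decode_matrix, hrirs):
    """Absorb an ambisonic decoder into a virtual-speaker HRIR set.

    decode_matrix: (n_speakers, n_channels) real decoder.
    hrirs: (n_speakers, 2, taps) impulse responses, one per virtual
    speaker and ear.  Returns (n_channels, 2, taps) filters; at
    playback, out[ear] = sum over channels n of
    convolve(B[n], folded[n, ear]).  The convolution count is now
    fixed by the ambisonic order, not by the speaker count.
    """
    # Sum over speakers k of decode[k, n] * hrir[k, ear, tap].
    return np.einsum('kn,ket->net', decode_matrix, hrirs)
```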


The fun thing here is that I come at this as a signal processing 
fiend who has never actually implemented a single algorithm, but who 
knows it all. Starting from things like the Gardner convolution 
algorithm, which is how we'd implement zero-delay convolution even in 
multiple channels/dimensions. Or the Karhunen-Loève transform, as the 
optimum energy-compacting one, ahead even of the second-best Discrete 
Cosine Transform, which we'd probably use here, over channels and time.
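To make the partitioning idea concrete, here is a minimal numpy sketch, 
assuming nothing beyond stock numpy. It only shows why a partitioned 
filter sums back to the exact convolution; Gardner's actual scheme grows 
the partition sizes non-uniformly, running the first partition in the 
time domain, which is what buys the zero latency.

```python
import numpy as np

def partitioned_convolve(x, h, block=64):
    """Split the filter h into fixed-size partitions and sum the
    delayed partial convolutions.  Gardner's scheme refines this by
    growing the partition sizes, so the first (time-domain) partition
    yields zero input/output latency."""
    y = np.zeros(len(x) + len(h) - 1)
    for i in range(0, len(h), block):
        part = np.convolve(x, h[i:i + block])  # per-partition FFT in practice
        y[i:i + len(part)] += part             # delayed by the partition offset
    return y
```

The result is sample-exact against direct convolution; only the 
scheduling changes.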


I have in fact been thinking, for the longest time, about how to pack 
ambisonic signals optimally. None of the current systems work, because 
they are aimed at intensity stereo; the precise thing ambisonics 
originally stood against.


Compare this to rendering from object encoded content (i.e. mono 
signals plus directional metadata). You need only two convolutions per 
object.


Atmos, yeah. Here we go again. :D

If you encode those in a principled fashion, as a part of a soundfield, 
1) you don't have to encode their direction any more accurately than 
your hearing requires, and 2) their statistical similarity, especially 
if they are close to each other, will lead to denser coding in toto. 
Both in analog, and especially in digital, encoding. You can fit more 
directional hearing into lesser bandwidth, whichever way.



Starting from a sufficiently dense HRIR set, you can easily generate a
new set on a regular grid with a few thousand points, and interpolate
them (VBAP style) in real time.


That's the point. "Sufficiently dense." What if it isn't dense in some 
parts of the sphere, such as below? How do you interpolate for your 
integration *there*? I mean, very few HRIR/HRTF sets, even the KEMAR 
set, include adequate coverage towards right-down.


That makes it very difficult to extrapolate towards a whole-sphere 
solution. It makes the whole-sphere solution irregular, and often 
downright ill-posed.


This can give you the same resolution as e.g. order 40 Ambisonics at 
fraction of the complexity.


No, it can't. It seemingly can, but if you go through the theory which 
led to the idea of perfect quadrature, *actually* in the Fourier domain 
you will be introducing quite a number of aliasing artifacts, in 
space/direction. They will also be difficult to control or prove, even 
if we know they will be minor in themselves. For instance their interference 
products might sometimes be arbitrarily large.


Ciao,

Moro, ystäväni ("hey, my friend"). :)
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


Re: [Sursound] about principled rendering of ambisonic to binaural

2022-10-16 Thread Sampo Syreeni

On 2022-09-11, Picinali, Lorenzo wrote:


https://acta-acustica.edpsciences.org/articles/aacus/abs/2022/01/aacus210029/aacus210029.html


Thank you, will read.


[Sursound] about principled rendering of ambisonic to binaural

2022-09-11 Thread Sampo Syreeni
Hi y'all, for the longest time now... There's been a lot of discussion 
about rendering ambisonic soundfields down to binaural of late, and in 
the past couple of years. I don't really think this is a problem that 
has been solved in any principled fashion as of yet, so I'd like to 
invite some discussion. Especially since I once was about to try my hand 
at the problem, but found my skills woefully lacking.


AFAICS, the thing here is to have a set of HRTF measurements -- the 
well-known and open KEMAR set, but also any other -- and then to derive 
from it an LTI coupling from a representation of the soundfield to two 
ears perceiving the field. The representation ought to be isotropic, as 
per basic ambisonic principles, and it ought to be matched to the order 
of the ambisonic field. If you had a neat set of measurements, over the 
whole sphere of directions, which was designed to be in perfect 
quadrature, this would be easy as cheese.


The trouble is that no set of measurements really behaves this way. 
They're not in quadrature at all, and almost *always* you'll have a 
sparsity, or even a full gap, towards the direction straight down. If 
the directional sampling was statistically uniform over the whole 
sphere of directions, and in addition the sample of directions probed 
was to be in quadrature, it would be an easy exercise in discrete 
summation to gain the transform matrix we need. But now it very much 
isn't.
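As a sketch of that "easy as cheese" case: assuming a first-order real 
spherical-harmonic basis (one common normalization, chosen here purely 
for illustration) and a direction set that really is in quadrature, the 
ambisonic-to-binaural filter matrix is one weighted sum. The function 
names are hypothetical, not from any real HRTF toolkit.

```python
import numpy as np

def sh_first_order(az, el):
    """Real first-order spherical harmonics (normalized to be
    orthonormal under averaging over the sphere): W, Y, Z, X."""
    return np.stack([np.ones_like(az),
                     np.sqrt(3) * np.cos(el) * np.sin(az),
                     np.sqrt(3) * np.sin(el),
                     np.sqrt(3) * np.cos(el) * np.cos(az)], axis=-1)

def ambi_to_binaural_filters(az, el, weights, hrirs):
    """Given P directions in quadrature (weights summing to 1), the
    filter for each ambisonic channel is the weighted, SH-projected
    sum of the measured HRIRs: one (4, taps) matrix per ear."""
    Y = sh_first_order(az, el)               # (P, 4)
    return (weights[:, None] * Y).T @ hrirs  # hrirs: (P, taps) -> (4, taps)
```

With an exact quadrature grid (for first order, even the eight cube 
vertices will do) this recovers a synthetic order-1 field exactly; with 
a real, gappy point cloud it is precisely this step that breaks down.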


It truly isn't so when you have those gaps of coverage in the HRTF 
data, above and especially below. It leads to divergent, numerically 
touchy problems in *very* high dimension: if even one of your points 
in the KEMAR set happens to be out of perfect quadrature, you're led 
to an infinite-order contribution from that one data point.


It also doesn't help that, directionally speaking, our known HRTF/HRIR 
sets don't really come in quadrature, so that they actually contribute 
to directional aliasing, *statistically*. That tends to cancel out 
their individual error contributions, to a degree. But then, again, I 
know of *no* global, stochastic error metric out there, nor any 
optimization strategy, proven to be optimal for this sort of task.


So the best framework I could think of, years past, was to try and 
interpolate the incoming directional point cloud from the KEMAR and 
other sets, to the whole sphere, and then integrate. Using a priori 
knowledge for the edge, singular cases, where a number of the empirical 
observations prove to be co-planar, and as such singular in inversion. I 
tried stuff such as the information-theoretical Kullback-Leibler 
divergence, and the Vapnik-Chervonenkis dimension, in order to pare down the stuff. The 
thing I settled on was a kind of mutual recursion between the 
directional mutual information between empirical point gained/removed 
and Mahalanobis distance to each spherical harmonic added/removed. It 
ought to have worked.


But it didn't. My heuristic, even utilizing exhaustive search at points, 
didn't cut it even close. It didn't even approach what Gerzon did 
analytically in 4.0 or 5.1.


So, any better ideas on how to interpolate and integrate, using ex ante 
knowledge? In order to go from arbitrary point clouds to regularized, 
isotropic, optimized, ambisonic -> binaural mappings?
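For what it's worth, the standard numerical fallback for such ill-posed 
fits is Tikhonov (ridge) regularization of the least-squares 
projection; not what I tried above, just the textbook tool. A generic 
numpy sketch, with Y standing for whatever spherical-harmonic design 
matrix you evaluate at the measured directions:

```python
import numpy as np

def regularized_sh_fit(Y, hrirs, lam=1e-6):
    """Solve (Y^T Y + lam*I) C = Y^T hrirs.  The lam*I term damps the
    blow-up from near-coplanar or gappy direction sets, at the price
    of a small bias in the recovered coefficients."""
    n = Y.shape[1]
    return np.linalg.solve(Y.T @ Y + lam * np.eye(n), Y.T @ hrirs)
```

Choosing lam is the whole art: too small and the gaps blow up, too 
large and the directional resolution washes out.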



Re: [Sursound] On 3D audio rendering for signals with the low sampling frequency

2022-08-12 Thread Sampo Syreeni

On 2022-08-12, Fons Adriaensen wrote:


On Fri, Aug 12, 2022 at 08:57:33AM -0400, Marc Lavallée wrote:


You can't transport a grand piano on a bicycle.

It's possible with an extra wheel:
https://www.rimskipiano.org/rimski-s-bicycle-piano


:-) Would like to see that going downhill...


As I remember, one of the modern classical composers, maybe even Cage, 
once dropped a rock/boulder on a grand piano, as a performance piece. 
Call it the penultimate prepared piano, second only to the explosive 
one.


There's *always* a slope downhill...

Many years ago (1970s) I was at a concert in which an upright piano 
was suspended on a steel cable going from the top of the auditorium 
down to the stage. The (short) performance ended when the instrument 
smashed into the rear wall.


But did it do so purposely? Did it kill anyone?

An there was the French street theater group 'Royal de Luxe' who used 
to launch pianos with a ballista. But they cheated, taking out the 
metal frame first.


Right now I'm watching Queen of the South on Netflix. Reportedly it has 
been largely inspired by narcocorrido.


Forget ballista, forget metal, anything at all. Narcocorrido is the 
worst sort of music you can ever find. It's all polka, with a tuba 
bassline and a happy melody from men and accordions/harmonicas, all 
telling about real life where men and women are killed, and their 
children raped. Having their eyes put out, to a polka tune.



None of them had anything I'd call artistic about them... :-(


I know the type. Being a libertarian and a well-versed amateur economist 
myself.


Most people who talk economics, do not in fact understand economics. My 
kind included.



Re: [Sursound] On 3D audio rendering for signals with the low sampling frequency

2022-08-12 Thread Sampo Syreeni

On 2022-08-12, Marc Lavallée wrote:


You can't transport a grand piano on a bicycle.



It's possible with an extra wheel:
https://www.rimskipiano.org/rimski-s-bicycle-piano


More generally, it's called a lever, or leverage. And as we all know, 
there is no theoretical limit to it; it tends towards infinity, as the 
Halligan lengthens.



Re: [Sursound] basic questions

2022-06-30 Thread Sampo Syreeni

On 2022-06-06, Thorsten Michels wrote:

I take your word for it. It is just what I have been told. Due to the 
fact that I don't have an Ambeo VR mic myself, I can't prove it.


You can test and prove pretty much any mic or mic array, using readily 
available, standardised, impulsive sources, recorded from afar. A 
hundred shots all-round from a starter pistol can't go too much off. 
Even a couple of dozen popped air balloons of similar size from medium 
range can't. Or clapping your hands (the clapper traditionally used to 
synch audio and video), if you get your mechanics right and systematic.
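A minimal sketch of the "right and systematic" mechanics, assuming the 
takes are already roughly trimmed to the same window: align repeated 
impulse takes by cross-correlation and average them, so uncorrelated 
noise drops by about the square root of the number of takes. The helper 
name is made up.

```python
import numpy as np

def average_aligned(takes):
    """Align each take to the first by its cross-correlation peak,
    then average; N takes cut uncorrelated noise by about sqrt(N)."""
    ref = np.asarray(takes[0], dtype=float)
    acc = np.zeros_like(ref)
    for x in takes:
        x = np.asarray(x, dtype=float)
        lag = np.argmax(np.correlate(x, ref, mode='full')) - (len(ref) - 1)
        acc += np.roll(x, -lag)   # circular shift; fine for well-padded takes
    return acc / len(takes)
```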


For trying something like that, Sennheiser is not the best option. Even 
their headphones are dull, despite the impressive curves they put out 
on paper.


Of course I then went with Grado, because "they sounded better and 
livelier". I'm not too sure whether that was because Grado has it more 
accurate, or because they fuck up the differential phase more.


Quantifying headphones' nearfields is *significantly* more difficult 
than quantifying mics and mic arrays, because in the latter case you 
can employ free-field conditions and far fields. Everything else then 
follows.



Re: [Sursound] auditory coronagraphy

2022-06-27 Thread Sampo Syreeni

On 2022-06-27, Fons Adriaensen wrote:


1. Scale the optical instrument by the ratio of wavelengths and
  then consider if the size is practical.


But it really is. An occulter the size of a metre or so is right in the 
range, and doable.



2. The required optical processors may not exist for sound waves.
  And if they existed, they probably wouldn't work over a frequency
  range of several decades.


In the audio range we already have metamaterials and nonlinear 
ultrasonic processing which is proven to work. In fact we even have 
continuously varying refractive auditory lenses, coming from certain 
whales.



3. Pressure waves do not work in the same way as EM waves. For
  example, they are not transversal, so there is no concept of
  polarisation.


Sure, but much of the optical work with wavefield sensing doesn't rely 
on such concepts either. They are a plus, but not needed. For instance 
the idea of "closure phase" in optical interferometry is purely an 
amplitude one, trying to get around from phase to amplitude. If we work 
with pure longitudinal pressure waves, as we do in audio, there's 
nothing to be got around, only what is there.


And yet the technique works. Either the Fourier-optics or the 
ray-optics, depending on the acoustical-optical regime and the 
apparatus.


Interestingly, I just bumped into this one, from Las Vegas, on one of my 
engineering channels. https://www.youtube.com/watch?v=ydOn8qwLJzA


Towards the end, it's noted that the sphere will beamform content 
towards individual audience members. If there's any veracity to that 
claim, it will have to come from something like an order-1000 HOA 
system. Though likelier, it does something like a few nonlinear 
acoustics pointbeams from an ultrasonic active array, blending AESA with 
https://en.wikipedia.org/wiki/Sound_from_ultrasound , leading to a 
smaller emitter, yet an active one so that you can "shoot" even sound.


The point is there are plenty of active acoustical techniques which can 
mimic or even overthrow (within their bandwidth) optical techniques. 
It's the same wave equation we're working with, after all. So I wonder, 
why aren't the optical techniques borrowed more often onto the 
acoustic side? All of them could be; the same equation governs all, and 
on the acoustic side, we even have nonlinearity to help with further 
continuous-time processing, before the DSP machinery. Why?


I'd guess nobody really needed something like acoustic coronagraphy. We 
don't surveil the surrounding acoustic field like we do the more 
photogenic one. Also, the normal acoustic field around us is pretty 
uniform and governed by low level reverberation plus noise; as them 
astronomers put it, "it has a high level background". And in an acoustic 
field, you typically don't have any high interest or exceedingly 
high intensity point sources, which you'd want to follow, or take a 
detailed spectrum of. So no, there's not a lot of interest, for a 
reason, for doing this sort of thing on the auditory, receiving end.


At the same time, there apparently *is* real interest in solving the 
reciprocal problem: how to project sound to a single person from a 
limited array.


That one then is at least a second or maybe even third order nonlinear 
optimization problem, of the partial differential kind. Highly 
nontrivial, provably non-convex, and definitely of class NP even when 
discretised.



[Sursound] auditory coronagraphy

2022-06-26 Thread Sampo Syreeni
I've been rather active in following the commissioning of the James 
Webb Space Telescope, at its optical, near-infrared and mid-infrared 
wavelengths.


Why don't we do everything alike in audio? In particular, why don't we 
do audio range coronagraphy, in order to separate the Loud from the 
Feeble? Also, why don't we do physical, spatial modulation even on the 
projection side, backwards? Because the huge contrast *does* go both 
ways.



Re: [Sursound] format convention of Ambisonic Sound Library Files

2022-05-29 Thread Sampo Syreeni
...extensibility machinery of the IFF-type TLV 
format, and even the standard says the reader ought to disregard it 
if it doesn't recognize the tag. So sayeth IFF/RIFF/CAF/BMFF, all of 
them.


[1] Except for the channel description chunk which is the usual 
mishmash of everything the authors could imagine, and still not 
capable of describing arbitrary channel uses or just saying 
'undefined'. Luckily it is not required for the Ambix format.


In that OggPCM work of mine and Leese's, I actually thought of doing 
the mapping so-called "right". As in it being fully general for 
ambisonic and even WFS work.


The trouble is that it would have been almost unimplementable. Just 
think about it: first you want to do compatibility coding from L/R to 
M/S and back. That's because you want to hold your original signal 
set as-is for preservation purposes. Then you want to do the same for 
4.0, 5.1, whatever intensity panned sources you might have. Then you 
want to do a static POA folddown to stereo, or maybe a slant octagon. 
You want to do all of that adaptivity using current, minimum audio 
hardware, which knows pretty much nothing at all about the ambisonic 
framework, while being forwards compatible. You want to support *all* of 
the extant intensity panning frameworks, as the reigning paradigm, while 
being at least somewhat compatible with ambisonics, esp. pre-encoded 
C-formats like BHJ and G, plus mixed order, so that you can in general 
do pantophony as well, cheaply.


Pretty much the only way to get even into the ballpark is to have the 
kind of sparse decoding matrix I described, and to leave it at real, fixed 
point, 16 bits. That's the only format you can universally work with, 
given current CPUs, DSPs, slicers, network controllers, and the rest of 
the hardware/software. If you do something else, all of the I/O 
combinations aren't (as) workable.
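As a sketch of what "real, fixed point, 16 bits" means in practice, 
assuming the usual Q15 convention (16-bit integers with an implied 
scale of 2^15) and a 32-bit accumulator; the helper names are made up:

```python
import numpy as np

def to_q15(m):
    """Quantize a (unit-range) decode matrix to Q15: 16-bit ints
    with an implied scale of 2**15."""
    return np.clip(np.round(np.asarray(m) * 32768.0),
                   -32768, 32767).astype(np.int16)

def decode_q15(q, frame):
    """Apply the quantized matrix to one int16 sample frame with a
    32-bit accumulator, then shift back down to 16 bits."""
    acc = q.astype(np.int32) @ frame.astype(np.int32)
    return np.clip(acc >> 15, -32768, 32767).astype(np.int16)
```

The point is that exactly this pipeline (int16 multiply, int32 
accumulate, arithmetic shift) runs on essentially every CPU, DSP, and 
network controller in existence.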


Obviously what you'd *really* want in the metadata is a fully time- and 
frequency agile matrix, which incorporates both the central soundfield, 
*and* the whole of the surroundings. I certainly know how to encode such 
a thing: just take your favourite ambisonic central decomposition, 
sparse it out and put out the metadata. Do your favourite sparsing out 
as you wish, sample per sample frame, feeding out metadata for 
deconstruction as you go.


Then model your environs the same way. Just as you model the incoming 
waves on the Bessel side, model the outgoing ones via Hankel functions. 
To whatever accuracy you want them to be. Because then your decoder will 
have to compute what is coming back from your environs, by reflection 
and diffraction, from your now-arbitrarily scaled environs. Say, like 
what happens when you go in an RPG from a room to an arena; and say, 
what happens when the arena has a big, flat wall, with a specular 
reflection to the right. At worst, falling down back to you because of 
the mortar shot you just fired.  So that the overall system isn't LTI, 
but just multichannel LI, while you're moving against the shockwave you 
just put out, against the reflector.


All of this is easily describable. In an audioformat. It just needs lots 
of coefficients. The trouble is that it needs *lots* of coefficients, 
and they need to *stream* in order to properly describe where you are in 
the game landscape. Quite probably you need more coefficients for your 
filters than you actually have audio data, even in high order 
ambisonics.


That's why you need to find useful, common, a priori bases, in which to 
express your soundfields, and not just the most general spherical 
harmonical decomposition. You need to be able to do both, like Dolby 
Atmos and MPEG-4 do. In both time and space. You probably need to have 
spaces in your overall acoustic model, which encode directional 
reverberation, even, statistically, without modelling the precise wave 
propagation via spherical Bessel or Hankel functions. You need to be 
able to simplify, and *not* pass on all of those various coefficients. 
Even to decode your signals properly.


Finally, how would I encode the true, lossless matrix into the OggPCM 
stream? Well...


As a multidimensional wavelet tree, zero-tree encoded, using a 
sufficiently high order Daubechies mother-wavelet.
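To unpack that last sentence a little: a numpy-only sketch of one 
analysis/synthesis level of a periodized orthogonal Daubechies 
transform, using db2 for brevity. A real coder would use a higher-order 
wavelet, recurse on the approximation band, and zero-tree code the 
resulting detail trees; this only shows the perfect-reconstruction 
core.

```python
import numpy as np

# Daubechies-2 (D4) analysis low-pass h; g is the QMF high-pass.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))
g = np.array([(-1) ** n * h[::-1][n] for n in range(4)])

def dwt_periodic(x):
    """One level of the periodized transform: inner products with
    even translates of h and g."""
    N = len(x)
    idx = (np.arange(4)[None, :] + 2 * np.arange(N // 2)[:, None]) % N
    return x[idx] @ h, x[idx] @ g          # approximation, detail

def idwt_periodic(a, d):
    """Inverse: sum the same translates, weighted by the coefficients
    (exact, since the translates form an orthonormal basis)."""
    N = 2 * len(a)
    x = np.zeros(N)
    for k in range(len(a)):
        pos = (np.arange(4) + 2 * k) % N
        x[pos] += a[k] * h + d[k] * g
    return x
```

Because the basis is orthonormal, the transform also preserves energy, 
which is what makes thresholding the detail bands a well-behaved 
compression step.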



Re: [Sursound] format convention of Ambisonic Sound Library

2022-05-29 Thread Sampo Syreeni

On 2022-05-27, Fons Adriaensen wrote:


Hello Richard,


Having been caught up in beach bum matters for some years,


You are like...just so cool. Never have I ever... 8)


Re: [Sursound] format convention of Ambisonic Sound Library Files

2022-05-24 Thread Sampo Syreeni

On 2022-05-24, Alan Kan wrote:

Ah… it seems one has to actually click on the file to see that detail. 
I was downloading from the list page.


Why not embed a standardised format descriptor into the file itself? 
Most formats permit that. In fact that's what we've been doing all 
along, e.g. in RIFF WAVE.



Re: [Sursound] Recording in a studio of a Jazz-Piano and singer

2022-04-28 Thread Sampo Syreeni

On 2022-04-26, Martin Leese wrote:

In three weeks I have the opportunity to record a Jazz Duo with a 
concert grand piano (Steinway) and a singer in a recording studio, 
with controled acoustic.

...

The job is to record the duo in the classical way (Stereo) and now comes
the fun part: in Surround and / or Ambisonic, too.


I have never recorded anything, so please ignore what I say.


Me neither, but...

Perhaps I have misunderstood something, but I assume the studio will 
be acoustically dead (no early reflections or reverberation).


It is never so. A proper studio is quiet, to be sure, but not fully 
dead. Instead it's controlledly quiet and dispersive at the same time, 
in order to mimic, at a low level, a reverberant concert hall. Also, 
some of the studios, in their early echo rooms ( 
https://en.wikipedia.org/wiki/Echo_chamber ) actually multiply miced 
their output, first from a mono speaker to one mono pickup, and later 
from multiple mono speakers within the room to multiple mono pickups, 
leading to full 4x4 decorrelation of the Lexicon kind, in DSP reverbs.


When micing a Steinway grand piano, the technique is even more involved. 
Because it's done for effect, divestiture and ambience, using very 
little machinery at the same time. Some of the machinery being 
substandard as well. What you do is employ contact mics on the frame 
for the low frequencies, point two or three directional mics towards 
the opened case of the grand piano from behind and from above, and then 
set an ambience mic (preferably a SoundField) above the audience, 
slightly beyond the radiance bound. Time-align all of the signals, 
empirically.



I am not sure what will be the point of reproducing this in surround.


It can be exquisitely nice in surround, if you know what you're doing. 
Especially if you mic your jazz cymbals right.


Also, close miced fields can be rather forceful if miced right. I don't 
think Björk knew what she was doing on her eponymous disc, but just 
listen to the result: what the fuck is this, even: 
https://www.youtube.com/watch?v=PF_k6mIw9w0 ("There's more to life than 
this (recorded live at the milk bar toilets)").


It actually might be about SoundField and BHJ encoding.

You really need an acoustically live studio so that there is some 
surround to record.


In fact a live acoustic is even needed in order to average over spatial 
kinks, both in the mic setup and in the coarser features of the room. 
Hence the dispersive treatment overall.



Re: [Sursound] Call: Europe's Fifth Student 3D Audio Production Competition, Nov. 2021, Virtual Event(s)

2022-04-23 Thread Sampo Syreeni

On 2022-04-21, Matthias Frank wrote:

Call: Students who are interested in 3D-spatialized sound and work on 
spatial music, spatial audio productions or recordings are invited to 
participate by submitting short works (4min/11min) to the Sixth 
Student 3D Audio Production Competition.


Are autodidacts welcome to participate?


Re: [Sursound] Reynolds mics

2022-04-06 Thread Sampo Syreeni

On 2022-04-06, Jack Reynolds wrote:


I am here if there’s anything you want to know.


Pricing, for starters, of course. :)


My marketing approach so far has been word of mouth really.


Appears to be working if you reach the list, and even someone like me.

I have been developing the designs with much field testing help from 
Axel and intended the mics to be as quiet and clean sounding as 
possible and also to be very difficult to break while camped out in 
the jungle. Waterproof Lemo connectors and 3D printed nylon makes for 
a much more robust product than the soundfields.


Have you been able to maintain the constant directional patterns, the 
high spatial aliasing limit, and the low noise floor of, say at best, 
SoundField V? What would you say the tradeoffs are for ruggedness, if 
any?


They are based on the classic tetrahedron of cardioids and there’s not 
much more going on that.


No sense in reinventing the wheel. But say again, which capsules do you 
use, and how do you support them, e.g. to insulate them from rattle and 
undue offsets? I mean, SoundFields are notorious for being fickle 
beasts. Bang them once against a corner of a table, and they need to be 
sent back to factory for a recalibration. You already said you take 
precautions, but how robust are they recording-wise? Have you measured?



Re: [Sursound] Reynolds mics

2022-04-06 Thread Sampo Syreeni
...folding down the 3D soundfield into a 2D 
compatibility signal set.


Try it out if you don't believe me. Follow the math in the NFC-HOA 
papers to first order, with a horizontal near source. What you'll get is 
a W signal with its directional velocities and pressure slightly out of 
phase (remember, this is not the acoustical approximation anymore, but 
the full soundfield, with four fully independent components per point; 
the stuff Angelo Farina's reactive field work incited way back). What it 
does is counteract from above and below that 1/r term in amplitude 
Faller naïvely thought would topple the whole pantophonic framework.


The same applies to decoder design. Unless your speakers are 
infinitely long vertical line sources and radiate only in the 
horizontal plane, your system is a 3D one.


...and that's precisely why pantophony is an idea born dead. We don't 
have infinite vertical line sources, nor microphone arrays which mimic 
their directional patterns. The only thing we really have is 3D mic 
arrays and 3D rigs.


Let's not pretend there is some 2D thingy anywhere there. Because while 
a noble aspiration, it's also a topological impossibility. Separate 
dimensions just do not and cannot mesh like that. I believe it'd be 
better to center ambisonic work around how to do with lesser vertical 
resolution -- which is topologically speaking somewhat workable -- than 
to think pantophonic formats would ever actually live up to Gerzon-like 
rigor.



Re: [Sursound] Antw: Re: [opus] Antw: [EXT] Opus merging streams

2022-04-06 Thread Sampo Syreeni

On 2022-04-06, Ulrich Windl wrote:


I cannot for the life of me understand why Atmos exists. Except for


I don't know the theory behind, but I guess they want a format open to 
future sound designs (i.e.: how many and where to place speakers).


Third order ambisonic is already nigh correct for a central, static 
listener. Especially when augmented with dynamic decoding for specular 
sources, akin to the higher order extensions of DirAC. It really does 
not *need* hundreds of separate parametric sources. It's just a good 
enough description of the central soundfield, taken from any angle.


If parallax was involved, such as in games where you don't stand still, 
I could understand the point. But as far as I understand it, neither 
Atmos nor AC-4 support anything of the kind. So where's the point? The 
selling point is adaptation to different speaker layouts, but high 
enough order ambisonic can do that already, 3rd order is well enough for 
regular speaker layouts, and no irregular layout is going to work even 
for re-rendered specular sources. That's just basic math: irregular 
enough meshes don't admit stable quadrature.


Plus in the meanwhile, Atmos is about synthetic, specular sources. You 
can't *mic* an Atmos field, but have to *compose* it. Third order 
ambisonics on the other hand *is* a technology you can so-and-so capture 
from the real world. Plug-and-play with spatial sound, to a degree.


To me it sounds just stupid to bring in all of the "spatial objects" 
nonsense, when all it does is to add complexity and weight to the 
abstraction.


Similar why DCPs (Digital Cinema Packages) encode color in CIE XYZ 
(you can encode invisible "colors").


Don't even get me started... XYZ is there because it's derived from the 
tristimulus theory at the physical level. Its weighting functions take 
on negative values only because the optimum transform to "color" calls 
for them, and because we want the XYZ space to stay positive. Human 
vision then isn't quite like this, nonlinear as it is. So when charted 
out in the linear XYZ space, vision takes on a wonky, though convex, 
shape.


As such, the "invisible" colors are a mathematical artifact. They are 
not an intrinsic part of the color space; rather, the side conditions 
on which XYZ values you are allowed to use, in order not to encode 
imaginary colors, are how we model the truth about how human vision 
works. If you went out of gamut, it's not that "an imaginary color has 
suddenly been discovered", but that you failed to respect the boundary 
conditions engendered by the -- rather well-defined -- restrictions of 
the gamut.
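To make the gamut point concrete, a sketch using the standard 
linear-sRGB to XYZ matrix (D65 white). A device gamut is just the image 
of the nonnegative RGB cube under such a matrix, a positive sub-volume 
of XYZ; an XYZ triple outside it is one whose inverse mapping would 
demand a negative primary weight. The helper names are made up.

```python
import numpy as np

# Standard linear-sRGB -> CIE XYZ matrix (D65 white point).
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def srgb_linear_to_xyz(rgb):
    return M @ np.asarray(rgb, dtype=float)

def in_srgb_gamut(xyz, tol=1e-9):
    """An XYZ triple is sRGB-representable iff the inverse mapping
    yields nonnegative linear RGB."""
    rgb = np.linalg.solve(M, np.asarray(xyz, dtype=float))
    return bool(np.all(rgb >= -tol))
```

Note the middle row: Y for the (1, 1, 1) white sums exactly to 1, by 
construction of the matrix.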



I guess the idea also was not to restrict the color space to the 
limits of any existing device.


Indeed it was not. The idea was to start with linear tristimulus theory 
which was already known to be true, and then to model the newer (still 
rather old) opponent process theory of color within it. CIE's various 
standards rather successfully do just that.


Or maybe they just want to fight pirated copies by filling up the 
disks faster ;-)


I think this is the most likely explanation. Because, you know, Dolby is 
Dolby. It's always gone the Intellectual Property route, throughout its 
existence. Hell, at one time they refused to let anybody licence their 
noise reduction algorithms except as implemented by their analogue 
chips. Fuck, I don't think they even now permit anybody to implement SR, 
A, B, C, S, god forbid Prologic-II(x) in software. And they just keep on 
"inventing" -- not unlike how one of our Finnish national prides, ABLOY 
(lit. "Aktie Bolaget Lukko Osakeyhtiö", roughly "plc. Lock Plc."), keeps 
churning out newer and newer designs of keys in order to keep them under 
trademark and design rights.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


Re: [Sursound] Reynolds mics

2022-04-05 Thread Sampo Syreeni

On 2022-03-26, Chris Woolf wrote:

Jack has been conversing with me since then off-list, and I've clearly 
managed a bit of unintentional promo on his behalf!


A bit of promo is not bad in these circles: quite obviously ambisonic 
(and other principled high order) technologies need a bit of a 
commercial and social boost nowadays.


But why don't you then bring Jack back into the fold, too? It'd be a 
*hoot* to discuss his technical and marketing choices in here. Maybe 
have a bit of to-and-fro about them. :) <3

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Reynolds mics

2022-04-05 Thread Sampo Syreeni

On 2022-03-25, Drew Kirkland wrote:

I have one, I can send some recordings. It is a flatter response than 
the sennheiser Ambo. It's very light and the capsules are reasonably 
well matched.


How does it compare to full-blown, classical designs such as the Mark IV 
or V Soundfields?


I ask because theory-wise pantophonic and periphonic soundfields 
shouldn't be captured the same way, nor do they represent even the same 
encoding system. The mics used are thoroughly different, and especially 
with pantophony (at worst some derivative of just XY), people seem to do 
whatever, and don't even quantify their secret sauce.


What are the equations here? Also, how does it sound, and *why*?
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Ambisonic UHJ Stereo decoder to speaker feeds

2022-03-01 Thread Sampo Syreeni
such, but it's just linear algebra; 
an easy piece compared to the psychoacoustics which would be necessary 
in order to construct the optimal open loop controller for something 
like this.


Actually you *do* need Z. That's the point where I alluded to 
Christoph Faller above: if you cut out the third dimension, your 
reconstructed field will show a 1/r extra attenuation term from the 
rig inwards, because you're bleeding off energy to the third 
dimension.


You need the third dimension for realism. Not for correct decoding.


In fact you do. At least for extended area reconstruction.

Think about it. Suppose you have a far source, which we can think of as 
exciting a plane wave at the origin. If you detect it and/or reconstruct 
it using point sources over a circle, you'll miss the fact that it has a 
component in the third direction. Even if it's just that simplest 
plane wave, the sensing and the reconstruction will 1) miss that 1/r 
attenuation Faller pointed out to me at the time, 1a) which has had to 
be compensated in the planar WFS literature already, 2) there's 
hell to pay in sparse and not-too-regular/quadrature/point-like arrays 
because of how Huygens's principle works with them, yielding secondary 
point radiators which interfere (unlike the continuous arrays of the 
theory, especially in full 3D), and 3) the most interesting thingy: it's 
*not* theoretically sufficient even to have a full 3D rig in order to 
solve the propagating wave equation.


What you actually need in order to solve the system, is to have, as part 
of your continuous rig, pointwise control of not just the pressure 
field, but its normal derivative. That for a purely inwardly propagating 
field from afar. If you also consider outwards propagation, you actually 
need a "speaker" capable of doing everything a SoundField mic does at 
the center of the array, at every point in 3D space around some closed 
surface around the center point. Nothing else will do, if you want your 
solution to the wave equation to converge over the whole area, even 
given outwardly radiative, near-field solutions to the wave/acoustic 
equation.


Which then means even in the reduced analysis that you must explicitly 
tell *why* you only analyse the sound pressure field, and not the 
vectorial velocity field too. You might not *want* to, but it's there, 
and it's hugely relevant. Especially in regards to resonances, standing 
waves, convergence of solutions of the acoustical equation to the very 
edge of any given bounded, convex set, outwardly bound energy (even 
given the Sommerfeld radiation condition), and the lot.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Ambisonic UHJ Stereo decoder to speaker feeds

2022-02-28 Thread Sampo Syreeni
e system function to set a variable time-frequency 
tradeoff. Wiener filtering theory might come in handy, when dealing with 
the noise-signal tradeoff.


If you set it out this way, an arbitrary field with arbitrarily many, 
not perhaps even resolved sources, can be handled just as well as a 
plane wave from a single source. The amount of sharpening for a source 
would depend continuously on its coherence properties, and if more than 
one source was present, their possible mutual coherence would naturally 
be taken into account. DirAC processing would also take heed of 
anisotropic reverberation, such as when recording close to a wall or 
close to an orifice to a wider space. And the funkiest thing would be 
that the math stays of finite order: every operation necessarily would 
be of at most square the order of the original spherical harmonic 
decomposition. While describing any and all distributions of point 
sources and ambience over the unit circle/sphere (pantophony/periphony).
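The core of that kind of directional analysis can be sketched in a few lines. A broadband toy, not Pulkki's actual DirAC, which works per time-frequency tile; it assumes W carries the same plane-wave gain as the velocity channels (an SN3D-style normalization; FuMa W would need a sqrt(2) correction):

```python
import numpy as np

def dirac_parameters(w, x, y, z):
    """Broadband direction-of-arrival and diffuseness from B-format."""
    # time-averaged active intensity components, up to constant factors
    ix, iy, iz = np.mean(w * x), np.mean(w * y), np.mean(w * z)
    # time-averaged energy density, with the same constants dropped
    energy = 0.5 * (np.mean(w ** 2) + np.mean(x ** 2 + y ** 2 + z ** 2))
    azimuth = np.arctan2(iy, ix)
    elevation = np.arctan2(iz, np.hypot(ix, iy))
    # 0 for a single plane wave, approaching 1 for a fully diffuse field
    diffuseness = 1.0 - np.hypot(np.hypot(ix, iy), iz) / energy
    return azimuth, elevation, diffuseness

# a plane wave from 0.5 rad azimuth: azimuth ~0.5, diffuseness ~0
rng = np.random.default_rng(1)
s = rng.standard_normal(50_000)
az, el, psi = dirac_parameters(s, np.cos(0.5) * s, np.sin(0.5) * s,
                               np.zeros_like(s))
```

Mutually incoherent channels drive the diffuseness estimate toward one, which is exactly the cue an active decoder uses to split a tile between point-like panning and decorrelated ambience.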


I have quite a lot of Ambisonic UHJ CDs and I'd rather listen to them 
as decoded into a surround setup than listening to them in stereo with 
two speakers.


Actually, even if you can't invert UHJ to B-format, really, it's even 
more difficult to go from B-format to something like 5.1. The optimum 
decoding equations are a full nonlinear mess, even in the basic Makita 
sense, and suffer from multiple local optima. (I believe Bruce Wiggins's 
thesis which used tabu search to find viable non-symmetric decoders was 
an attempt to deal with the problem.)
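For contrast, the *regular*-rig case really is just linear algebra: a mode-matching ("velocity") decode is one pseudoinverse. A sketch with a hypothetical square rig and SN3D-style horizontal components; it is precisely the irregular 5.1 geometry that breaks this and forces the kind of nonlinear search Wiggins used:

```python
import numpy as np

def encoding_matrix(azimuths):
    """First-order horizontal (W, X, Y) plane-wave encoding, one column
    per direction; unit W gain is an SN3D-style assumption."""
    a = np.atleast_1d(np.asarray(azimuths, dtype=float))
    return np.vstack([np.ones_like(a), np.cos(a), np.sin(a)])

def basic_decoder(speaker_azimuths):
    """Mode-matching decode: for a regular rig the pseudoinverse
    reproduces W, X, Y exactly on re-encoding."""
    return np.linalg.pinv(encoding_matrix(speaker_azimuths))

# hypothetical square rig
rig = np.radians([45.0, 135.0, 225.0, 315.0])
D = basic_decoder(rig)                       # 4 speaker gains from 3 signals
b = encoding_matrix(np.radians(30.0))[:, 0]  # plane wave from 30 degrees
gains = D @ b                                # re-encoding these returns b
```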


DirAC is stupendously good at this, at least perceptually speaking. It 
adapts to pretty much any speaker array, and from five unevenly spaced 
speakers onwards sounds like there is no rig at all.


Yet I'm perfectly certain, based on the above, I could derive a physical 
signal which the system would decode badly. Pretty much every current 
system would. Say, a narrow band signal coming from around a corner, so 
that it has a large out-of-phase component, spreading sharply in space.


And by the way, the majority of UHJ encoded music releases _was_ 
recorded with a Soundfield type microphone, because the largest number 
of them were made by Nimbus Records.


Also thankfully so: the SoundField series is an unusually robust piece 
of work. Solid theory, high engineering, unbelievably high adherence to 
acoustical theory which wasn't really even understood at the time the 
mics were designed.


Mk4 and Mk5 have been used as *measurement* mics, in acoustical 
research. I don't think any other mic, in any other audio discipline, 
really has.


Nimbus didn't use the Soundfield-made microphone, they used their own 
setup made of two fig of eights and an omni.


Yes. Them idjits. 'Cause there is going to be some high frequency 
phasing there. It might sound good, pace the ORTF crowd, but it isn't 
*real* or *accurate*.


They did that mainly because the Soundfield was too noisy and they 
didn't need the Z signal, as it couldn't be encoded into UHJ and 
carved onto vinyl anyway.


Actually you *do* need Z. That's the point where I alluded to Christoph 
Faller above: if you cut out the third dimension, your reconstructed 
field will show a 1/r extra attenuation term from the rig inwards, 
because you're bleeding off energy to the third dimension. This is not 
much of a problem when you have a fully propagating field, but when 
you're attempting to reproduce standing waves, the problem grows much 
wilder. Then you really, *really* need at least some modicum of 
periphonic control, in order to keep the central pressure field to what 
it was supposed to be.
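The size of that effect can be eyeballed from the free-field Green's functions: confining propagation to two dimensions turns the 3D 1/r spreading into a cylindrical ~1/sqrt(r) law, and the mismatch between the two laws is the extra attenuation term in question. A quick numerical check, with an arbitrary wavenumber:

```python
import numpy as np
from scipy.special import hankel1

k = 100.0                  # wavenumber, arbitrary but large enough for
r = np.array([1.0, 4.0])   # the asymptotic regime to hold

# 2-D free-field Green's function magnitude: |H0(kr)| ~ sqrt(2/(pi k r))
g2d = np.abs(hankel1(0, k * r))
# 3-D free-field Green's function magnitude: ~ 1/r
g3d = 1.0 / r

print(g2d[1] / g2d[0])  # ~0.5, i.e. 1/sqrt(4)
print(g3d[1] / g3d[0])  # 0.25, i.e. 1/4
```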


So, all Reaper users out there, please tell Mark how to do the routing 
in Reaper. David already was in the business.


Does anybody want to sketch out that library I talked about? I'm a 
theoretician, so not much of a coder. Yet I could guide a seven-year-old 
through the process of writing such a thing, in plain C.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Ambisonic UHJ Stereo decoder to speaker feeds

2022-02-25 Thread Sampo Syreeni
If you want to invert such thingies even in an ambiguous, arbitrary 
way, you cannot really do so using the typical shift-invariant linear 
machinery. That is prohibited by topological, symmetry considerations. 
What you need to do is go into nonlinear variational calculus. Using 
nonlinear perceptual measures, rate-distortion theory, and the like.
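For reference, the forward direction is plain enough; it's the wideband 90-degree phase shifts (the "j" terms) on top of the three-to-two channel collapse that make the inverse ill-posed. A sketch using the classic two-channel UHJ encoding coefficients; realizing the shift via the analytic signal, and its sign convention, are assumptions here:

```python
import numpy as np
from scipy.signal import hilbert

def uhj_encode(w, x, y):
    """Two-channel UHJ from horizontal B-format, per the classic
    equations:
        Sigma = 0.9397*W + 0.1856*X
        Delta = j*(-0.3420*W + 0.5099*X) + 0.6555*Y
        Left  = (Sigma + Delta)/2,  Right = (Sigma - Delta)/2
    where 'j' is a wideband 90-degree phase shift."""
    sigma = 0.9397 * w + 0.1856 * x
    # the quadrature ('j') component via the analytic signal; the sign
    # convention of the shift is an implementation assumption
    shifted = np.imag(hilbert(-0.3420 * w + 0.5099 * x))
    delta = shifted + 0.6555 * y
    return (sigma + delta) / 2.0, (sigma - delta) / 2.0
```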

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] the facebook group

2022-01-08 Thread Sampo Syreeni

On 2022-01-08, Jack Reynolds wrote:


I still haven’t had much luck finding it


Truth be told, since FB deleted my previous account, I've been 
experiencing difficulty as well.


I think it's this one https://www.facebook.com/groups/355566654536126 . 
Or so I hope. :/

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Ambisonics with AirPods head tracking

2022-01-07 Thread Sampo Syreeni

On 2022-01-04, Hugh Pyle wrote:

Is there a simple process for listening to FOA Ambisonic recordings on 
AirPods, with head tracking?


More interestingly, is there some low latency API which lets you 1) get 
the current orientation of the phones and 2) in real time feed them 
something better than Apple does?

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] the facebook group

2022-01-07 Thread Sampo Syreeni

On 2022-01-02, Sigmund Gudvangen wrote:

Bad idea! Not everybody has a Facebook account. I wouldn't want one 
even if I got paid for it.


The idea always was to have the FB group as a secondary to the list. 
Which it is, and how it remains.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] the facebook group

2022-01-07 Thread Sampo Syreeni

On 2022-01-02, Ralf R Radermacher wrote:

Hi. How about if you all also join the Facebook-group, corresponding 
to this one? Because it'd be nice as always... ;)


Don't!

I've already seen the Zoom Forum and another one on the Zoom H3 dragged
to Fakebook for no good reason.


It seems Surround Sound on Facebook is still under my control. As such, 
if somebody fucks around there, just drop a message. Them idjits will 
vanish (for reason).



I'd hate to see this one go as well.


This one hasn't been going anywhere for something like decades. No 
reason to think it'd change course any time soon.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Environmental Context Podcast Listening Pilot Survey - Invitation to participate

2022-01-07 Thread Sampo Syreeni

On 2021-12-25, Jay Harrison wrote:

I am a PhD student based in the AudioLab at the University of York and 
working in collaboration with BBC R&D. My research focuses on the 
listener perception of context aware personalised audio experiences.


I believe an operationalized definition of the "listening context" 
might help us contextualize your proposition a bit...


As Buffy said, "But what *is* *it*?" ;)
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Into Sound - Headphone Localization Hearing Test

2022-01-07 Thread Sampo Syreeni

On 2021-12-19, Stefan Schreiber wrote:


But you can't have it both ways:

Either these small changes are noticeable, but then you also will get 
some spectral = tonal errors. (You don't get tonal errors in the case 
of normal head movements, according to Dr. Theile and others. Even if 
the spectral perception changes because of  head movement...)


OR the changes are too small to get noticed, or to do anything 
meaningful in the first place...


You might be dealing with the uncertainty principle, here. Not sure, but 
it sounds like you might.


It is impossible to localize sharply in time and frequency at the same 
time. That's just basic math. If you do a Fourier analysis of things, 
the two sides of the transform obey the Heisenberg principle.
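The bound in question is the Heisenberg-Gabor limit: the RMS widths of a signal's power in time and in frequency satisfy dt*df >= 1/(4*pi), with equality only for a Gaussian envelope. A quick numerical check of the Gaussian case:

```python
import numpy as np

def tf_spread(signal, dt):
    """RMS time and frequency widths of a sampled signal's power."""
    t = np.arange(len(signal)) * dt
    p = np.abs(signal) ** 2
    p /= p.sum()
    t0 = (t * p).sum()
    dt_rms = np.sqrt(((t - t0) ** 2 * p).sum())

    spec = np.fft.fft(signal)
    f = np.fft.fftfreq(len(signal), dt)
    q = np.abs(spec) ** 2
    q /= q.sum()
    f0 = (f * q).sum()
    df_rms = np.sqrt(((f - f0) ** 2 * q).sum())
    return dt_rms, df_rms

# Gaussian pulse: meets the Heisenberg-Gabor bound dt*df = 1/(4*pi)
dt = 1e-4
t = np.arange(8192) * dt
g = np.exp(-0.5 * ((t - t.mean()) / 0.02) ** 2)
w_t, w_f = tf_spread(g, dt)
print(w_t * w_f)  # ~0.0796 = 1/(4*pi)
```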


If you want to go into a more Surround Sound or DSP minded discussion, 
I'm more than willing and here. :)

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] A 7th-order array with 16 microphones

2022-01-07 Thread Sampo Syreeni

On 2021-12-16, Fons Adriaensen wrote:

Absolutely, I’m happy to make that recording available. Give me some 
time for that, I’ll need to adapt the implementation so that it 
outputs the ambisonics signals in a useable format.


The original mic signals would be interesting as well...


I can freely host a few GiB at a time, and a few GiB permanently, if 
something proves interesting. Just ask.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Nimbus promo video

2022-01-07 Thread Sampo Syreeni

On 2021-12-13, Eero Aro wrote:


I uploaded a Nimbus Records Ambisonics promotion video into YouTube:

https://www.youtube.com/watch?v=DrGJxlrv08M

All audio in the video is UHJ encoded Ambisonics. The video includes 
presentation about Ambisonics and several video/audio clips, for 
example from a concert, Zoo, steam railway and from the Farnborough 
Air Show.


Eero was also kind enough, again, to send me a copy. As such, the videos 
can now be found under the Motherlode too, at:


http://decoy.iki.fi/dsound/ambisonic/motherlode/videos/

Do download them, and copy. In this preservation business, it's always 
the more the merrier.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


[Sursound] the facebook group

2022-01-01 Thread Sampo Syreeni
Hi. How about if you all also join the Facebook-group, corresponding to 
this one? Because it'd be nice as always... ;)

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] about Nimbus promo video

2021-12-13 Thread Sampo Syreeni

On 2021-12-14, Sampo Syreeni wrote:

Could we make it so that the video, in its original form, becomes a part of 
the Ambisonic Motherlode? Without any extra compression on the way?


(Now, Finn to Finn...)


And back, from Finn to Brit, that certainly was a Harrier float and 
another flyby. A feat of British engineering, of the Thatcher and Reagan 
era. It ought to be memorialized not just because of how Nimbus (having 
bought the technology from the NRDC) tried to commercialize the sound 
stuff, but also because it willingly participated in the geopolitics of 
acoustical technology.


Please, give me the original version of *any* such video. Uncompressed. 
I'll make sure they will stay around for the time to come. Just as I've 
done with the rest of the Motherlode. (Apologies for not having got 
around to organizing the lot...)

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


[Sursound] about Nimbus promo video

2021-12-13 Thread Sampo Syreeni

On 2021-12-13, Eero Aro wrote:


I uploaded a Nimbus Records Ambisonics promotion video into YouTube:

https://www.youtube.com/watch?v=DrGJxlrv08M


Could we make it so that the video, in its original form, becomes a part 
of the Ambisonic Motherlode? Without any extra compression on the way?


(Now, Finn to Finn...) Eero, really, rare material like this has to be 
preserved in its original form, and I'm still the one who takes the 
risk: if somebody complains about copyright, "I'd be willing to take the 
bullet".


Please, deliver the original, uncompressed, or at least only lightly 
compressed, reel. One way or another. It would go into the Motherlode's 
video section. 8)

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] A 7th-order array with 16 microphones

2021-12-07 Thread Sampo Syreeni

On 2021-12-02, Jens Ahrens wrote:


It’s hard to tell how exactly the high orders contribute.


No, it is not. You can calculate, via normal linear field theory, how 
exactly anything contributes: from the field to your ostensibly linear 
sensor, over an ostensibly rigid sphere upon which your sensors have 
been embedded.


That's just math. Complicated field math, to be sure, but eminently 
doable, and deterministic to boot.


One aspect is the interaural coherence that needs to be appropriate. 
The other main aspect is what I typically term the equalization: Below 
the aliasing frequency, things are fine anyway.


So why not give us the geometry of your ball-and-mic-array? We don't 
need any derivative measurement, because given the primary measurement, 
we can calculate yours on our own.


Above the aliasing frequency, the spectral balance of the binaural 
signals tends to be more even the higher the orders are that are 
present. The deviations from the ideal spectral balance also tend to 
be less strongly dependent on the incidence angle of the sound if 
higher orders are present.


This is already well-known from the WFS work, of them French and German 
friends/fiends of ours. That WFS lot. Only they mostly talk about things 
in rectangular coordinates, whereas us ambisonic fiends do the spherical 
kind.


Going between those two coordinate systems isn't easy. The 
transformation spreads any excitation or normal wave *terribly* badly 
and unintuitively, over the modes of the other representation.


Much of the angle dependent deviations of the spectral balance can be 
mitigated, for example, by MagLS [...]


What is "MagLS"?

[...] so that the perceptual difference between, say, 7th order and 
infinite order is small.


That has been done via 3rd order periphonic, with active decoding, 
already. It certainly needs fewer channels than straight 7th order 
pantophonic. So what are you doing here, really?


I can’t tell if it gets any smaller with higher orders. My (informal) 
feeling is that somewhere between 5th and 10th order is where the 
perceptual difference to the ground truth saturates, both in terms of 
equalization and the coherence.


My hearing is that it in fact seems to cohere at about 3rd, or 4th, 
order, periphonically. That's about 16 independent channels over the 
whole sphere. Maybe with active, nonlinear, dynamic matrix processing, 
as in the case of DirAC.


In the case of 7th order pantophonic processing, the independent 
channels would have to be 15. So rather close in DSP power. And yet at 
the same time, they couldn't come close to isotropy, as in the case of 
3rd degree ambisonics. They couldn't come close to the kind of work 
needed for full 3D VR work, vis-a-vis holding a Ferris wheel or roller 
coaster ride perceptually constant, over the whole ride.
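The channel counts behind that comparison follow from the standard ambisonic component formulas; a trivial sketch:

```python
def pantophonic_channels(order):
    """Horizontal-only ambisonics: W plus a cos/sin pair per order."""
    return 2 * order + 1

def periphonic_channels(order):
    """Full-sphere ambisonics: all spherical harmonics up to the order."""
    return (order + 1) ** 2

print(periphonic_channels(3))   # 16 -- the full-sphere count above
print(pantophonic_channels(7))  # 15 -- 7th order, horizontal only
```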


This system would alias, noticeably, unlike full, isotropic ambisonic.
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] A 7th-order array with 16 microphones

2021-12-07 Thread Sampo Syreeni

On 2021-12-07, Hannes Helmholz wrote:


(Also: SMA here refers to spherical microphone array)


Thank you for the clarification.

It's not self-evident that it is spherical, though, since it's really 
just circular, by said symmetry.


As a wannabe-mathematician, I kinda worry about the precise topology and 
symmetry. Especially since it does go to my argument about how oblique 
modes in the acoustic field excite a discrete array of microphones.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] A 7th-order array with 16 microphones

2021-12-07 Thread Sampo Syreeni

On 2021-12-02, eric benjamin wrote:

I believe that Nando may have been thinking about reproduction with 
loudspeaker arrays. He has a system with eight loudspeakers on the 
horizontal plane, as do I. So good up to third order.


What is interesting here, to me, is that sampling on the recording side, 
and reconstruction on the playback side by discrete speakers -- also an 
instance of sampling in space -- are not the same, and they deteriorate 
the reconstruction of the soundfield separately. Sampling in recording 
array and sampling in reconstruction array...I've never really seen them 
analyzed at the same time, in the same framework. It's always been so 
that we go to an intermediate domain, which is continuous, with a little 
bit of wobble angularly, in noise or gain figures, and then back the 
same way.


It's all well and good if you can assume independence in all of the 
errors on the way. But then, you can't: the above Swedish case which 
I've been arguing *certainly* doesn't admit such symmetry or 
independence assumptions.


So, the statistical assumptions which underlie e.g. Makita theory, and 
thereby Gerzon's, don't go through. In particular, since we're dealing 
with wave phenomena, there is interference to be contended with. That 
doesn't come through at *all* in statistical analysis, across 2D and 3D 
analyses; 3D coupling to a 2D sensor is *wildly* uneven, and if you have 
a box around the sensor, it can be shown that the sensor, coupled with 
its idealized surroundings, can exhibit resonant modes which run off to 
an infinite degree, within an infinitely small range of angle. It will 
*always* be nasty, at the edge.



But I actually have 24 full-range loudspeakers available. Would it be 
advantageous to expand our systems to higher order?


When you have those, the next thing is, you need an anechoic chamber, 
and well-calibrated microphones. I mean, you have the machinery to 
launch physical signals, in 3D. Now you need measurement machinery to 
catch what you launched, and a silent space between which doesn't 
perturb your signals. Is that not so? ;)

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] A 7th-order array with 16 microphones

2021-12-07 Thread Sampo Syreeni

On 2021-12-02, Fons Adriaensen wrote:

If I’m not misreading, then the 7th order is available somewhere 
between 2 kHz and 3 kHz and higher. Aliasing kicks in at around 4 
kHz-ish.


So the question is if this small range (less than one octave) actually
contributes anything useful.


1-2 (at most 3) kHz is the so-called phoneme range. In there, both 
spectral contour and synchronized neural firing of the auditory neurons 
(via subharmonics, and en masse, because the firing rate of no single 
neuron goes above a kilohertz) help us to hear what kind of an implement 
or person a sound comes from. That particular range actually serves a 
known and useful function, even if it doesn't constitute the whole of it.



My guess is that it is not more or less sensitive than SMAs.


Again, what is an SMA?
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] A 7th-order array with 16 microphones

2021-12-07 Thread Sampo Syreeni

On 2021-12-01, Fernando Lopez-Lezcano wrote:

Cool. The correctly recovered harmonics for 7th order span about 1 
octave of useful range, if I understand correctly.


I'd argue in order to have proper field reconstruction, you at least 
need to have aliasing artifacts below the noise floor of hearing, or if 
you don't expect full reconstruction, then the noise needs to be 
well-matched to the expected noise floor, and its joint coding. It needs 
to follow something like rate-distortion theory.


Since that kind of theory comes from information theory, it expects to 
know all of the possible sources of information, from all round. So, if 
you know of some 3D information, it will have to be incorporated. In 
this case, it to my mind hasn't been.



Is it perceptually significant to have 7th order components?


I've heard upto third order, in a research setting, in an anechoic room, 
using dozens of speakers. So, full periphony. I've also been presented 
with pantophony in various configurations. (Ville Pulkki is the 
professor of acoustics and signal processing here; Eero Aro the 
hard-hitting practitioner, and avid Ambisonic amateur, on the 
broadcasting side of things.)


That 7th order try at pantophonic ambisonics probably is nice, because 
even the third order is good. Even the third order leads to very good 
localisation over the circle of horizontal directions. Though at the 
same time, what you're doing here is seventh order analysis, 
oversampling, while not doing seventh order transmission: that'd, even 
periphonically, lead to a lot more mics than you have. So somehow you're 
downsampling from what you have. And because you only sample spatially 
on the equator, that will lead to lots of mis-sampling of oblique 
wavefronts; say, reverb modes which go up and down. Even of those 
wavefronts which hit the near field of the mic slightly transversely, 
and excite ringing modes around the sphere transducer.


Those cannot be controlled without transducers over the poles. Not even 
theoretically. Which is why ambisonics traditionally leads to Gaussian 
quadrature over the entire sphere: there, *anything* at all can be 
computationally controlled. At least in theory.
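What such a quadrature buys can be checked directly: a layout "controls" harmonics up to a given order when its discrete averages reproduce the continuous sphere averages of products of those harmonics. A toy check at first order, using cube vertices as the layout (the sphere average of x_i*x_j is delta_ij/3):

```python
import numpy as np

# Vertices of a cube, normalized onto the unit sphere: an 8-point
# layout that integrates products of first-order (dipole) harmonics
# exactly.
verts = np.array([[sx, sy, sz] for sx in (-1, 1)
                               for sy in (-1, 1)
                               for sz in (-1, 1)]) / np.sqrt(3)

# Discrete average of x_i * x_j over the layout; a good quadrature
# matches the continuous sphere average, delta_ij / 3.
gram = verts.T @ verts / len(verts)
print(np.allclose(gram, np.eye(3) / 3))  # True
```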


Or, in other words, as you add spherical harmonics to your encoding 
process, how does the spatial perception change?


Exactly. And how does it work if the field exciting your mic contains, 
physically, components which aren't equatorially symmetric? They *are* 
going to be there, after all.


Or from the other end, if you start with a 7th order recording and you 
start truncating the order to lower and lower values in the decoding 
process, how does the perception of the recording change? Is there a 
decrease in order for which you can say, "well, that one did not add 
much, did it?"


Actually this reminds me of how Gerzon (perhaps Craven as well) 
optimized POA for 5.1 linear decoding. Maybe that's what they do at 
seventh order now, because Gerzon did it at fifth already.


That leads to rather an unsymmetrical decoding solution. Which would fit 
with how badly the above matched symmetrical field behaved -- maybe they 
just don't understand how to do a dual decode, over all of the field, 
and over the frequencies?



Re: [Sursound] A 7th-order array with 16 microphones

2021-12-07 Thread Sampo Syreeni

On 2021-12-01, Jens Ahrens wrote:

For this type of array, the spatial aliasing frequency f_a is 
dependent on order N and radius R of the array in the exact same 
manner like with spherical microphone arrays (SMAs): N = (2 pi f_a / 
c) R


But it is also dependent on the angle of incidence above the equatorial. 
In wideband theory, if a plane wave hits a ring of discrete sensors just 
right, obliquely, from the third dimension, there is hell to pay in 
aliasing.
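For scale, the quoted rule of thumb N = (2 pi f_a / c) R can be solved 
for the aliasing limit f_a; a minimal sketch, where the 5 cm radius is 
purely an assumed example value:

```python
import math

def alias_frequency(order, radius_m, c=343.0):
    # f_a solved from N = (2*pi*f_a/c)*R; this holds for incidence in
    # the array plane -- oblique incidence, as argued above, is worse.
    return order * c / (2.0 * math.pi * radius_m)

print(round(alias_frequency(7, 0.05)))  # 7th order, 5 cm radius: ~7.6 kHz
```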


And of course there's the near, reactive field to consider, with your 
sort of hard core sensor. Monopoles on top of a rigid sphere, right? The 
fields near a hard ball, and their equivalent far fields in free space, 
under Sommerfeld, are highly nontrivial, and they couple lateral to 
vertical field components. Such near fields can of course be symmetric 
over the equator, but only as long as the overall acoustic field is 
symmetric that way, too.


In practice it never is. No source or ambient reflector, like a room, 
ever is. No source really lies on the equatorial plane. And also, if 
I'm not thoroughly mistaken, the sampling over the sphere, and the 
sphere-induced near field, amplify the problems.



  0th and 1st order are available for all frequencies.
  2nd order approx. above 200 Hz
  3rd order approx. above 500 Hz
  etc.


You mean the cutoff, right? Do you quantify the bands in rise above the 
equator, too?


I cannot comment on calibration requirements because we did calibrate 
the array…


Against which precise standard? Over the whole of the sphere of 
directions?



(Nor did we measure how well it was calibrated out-of-the-box.).



Which you should. :)


I don’t actually think that there are any special requirements.


I think there are. And you know, I think you came to the right place: we 
might even be able to tell you where you're wrong, where you're right, 
and help you measure and quantify what your product is really about.


Sursounders really like products of your kind to hit the market. They're 
just the *thingy*, in our beloved technology. It's just that we like to 
know what they're about, and how to make them the best they can be. 8)


As before, much of the physical limitations are qualitatively (and 
also quantitively) similar to SMAs.


Pray tell, what is an SMA?


Re: [Sursound] A 7th-order array with 16 microphones

2021-12-07 Thread Sampo Syreeni
An oblique mode in the 3D room, once excited, will spatially alias onto 
the equatorial array, sometimes leading to nasty troughs or hills in 
the amplitude response.


Which in fact happened when you talked onto the mic, close range: the 
proximity effect from above wasn't as it's supposed to sound, even if it 
was pretty close in the ecliptic. Above the equatorial, the sound was 
indistinct in direction, and I'm perfectly confident that with a wider 
band test signal, it would have sounded even more amorphous, because of 
spatial aliasing.



Re: [Sursound] binaural to FOA?

2021-06-23 Thread Sampo Syreeni

On 2021-03-05, Ralph Glasgal wrote:

Checkout the AES papers at www.ambiophonics.org.  But basically if you 
have a front pair that was recorded with accurate values of ITD and 
ILD and a rear pair isolated from the front that also has accurate ITD 
and ILD then you can have a full circles of very realistic sound in 
the horizontal plane using RACE, BACCH or similar crosstalk 
cancellation apps with just four speakers.


I've been down and under for quite the number of months now, or even a 
couple of years, thanks to COVID-19. Sorry. As such, let me return into 
the fray for a while?


Binaural to FOA is an underdetermined problem, mathematically speaking. 
It's two nasty channels to four straight. That means that if you want to 
somehow "solve" the inverse problem, you will have to bring in hard a 
priori knowledge.


Statically you cannot have that knowledge. If the ears stay put, it 
remains a linear underdetermined problem.
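A toy numeric illustration of that underdetermination, with the 2-by-4 
mixing matrix a made-up, frequency-independent stand-in for a real 
(frequency-dependent, complex-valued) HRTF:

```python
def min_norm_solve(A, b):
    """Minimum-norm x with A x = b for a full-row-rank 2x4 real A,
    via x = A^T (A A^T)^-1 b with an explicit 2x2 inverse."""
    # Gram matrix G = A A^T
    G = [[sum(A[i][k] * A[j][k] for k in range(4)) for j in range(2)]
         for i in range(2)]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    Ginv = [[ G[1][1] / det, -G[0][1] / det],
            [-G[1][0] / det,  G[0][0] / det]]
    y = [sum(Ginv[i][j] * b[j] for j in range(2)) for i in range(2)]
    return [sum(A[i][k] * y[i] for i in range(2)) for k in range(4)]

# Invented left/right pickup coefficients for W, X, Y, Z (not a real HRTF):
A = [[0.7, 0.5,  0.4, 0.1],
     [0.7, 0.5, -0.4, 0.1]]
b = [1.0, 0.2]                      # the two observed ear signals
x = min_norm_solve(A, b)
residual = [sum(A[i][k] * x[k] for k in range(4)) - b[i] for i in range(2)]
# residual is ~zero, but it stays ~zero for x plus anything in the
# two-dimensional null space of A: the ears alone cannot decide.
```

Head movement effectively stacks many such two-row systems with 
different mixing matrices, which is what makes the superresolution 
reading of the problem possible.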


However if the ears move, you can posit the problem as a superresolution 
one. In space/orientation. Any steady sound source can be taken as a 
probe of the binaural soundscape, and the HRTF. You *can* in fact do 
statistical inverse MIMO analysis of it, as long as you have some a 
priori knowledge of the signal's true statistics.


That means you *do* tend to be privy to some a priori information, 
towards the solution of the binaural-to-FOA (HOA?) problem. It's just 
that you'd need to know or infer which way the ear-pair was oriented, from 
auditory analysis alone, given a binaural reference, found out by 
automation.


It *can* be done. It's only that the inverse problem is a highly 
dynamical one, and one which hasn't been solved, nay, even attempted 
yet.



Re: [Sursound] binaural to FOA?

2021-06-08 Thread Sampo Syreeni

On 2021-03-05, Ralph Glasgal wrote:

Well, I think 2.0, 4.0, 5.1, etc. sources can be spatialized using 
Ambiophonics which is not just for audiophiles but for video sound 
tracks and any form of industrial surround localization.


Certainly, and I'm known to like the technique.

I'm also known to like physical acoustics, as represented by HOA and 
WFS. Ambiophonics is definitely something else: cheaper, maybe better, 
but nowhere near the physical acoustical ideal.


For example, only the physioacoustical reproduction methods can ever 
achieve full auditory parallax. That is, in full, the possibility of 
walking around and jumping over a sound source, while fully perceiving 
it as being "there".



Re: [Sursound] binaural to FOA?

2021-06-08 Thread Sampo Syreeni

On 2021-03-05, Augustine Leudar wrote:


You can try Spats Transaural thing. [...]


Theoretically you cannot gather any more information from a static 
binaural recording than the first two spherical harmonic components, 
W and X.


Dynamically, and using something like onset-offset-thinking, and 
phasing, you can gather a whole lot more. Presuming the microphone 
moves and the pinnae around the capsules stay even semi-constant.



Re: [Sursound] binaural to FOA?

2021-06-08 Thread Sampo Syreeni

On 2021-03-04, Marc Lavallée wrote:

Mike, I already know how to use the ATK kernels in custom applications 
(a bit of convolution and the job is done).


It's not quite that easy. Going from FOA (especially HOA) to binaural is 
rather easy, since you only have to average over the spherical harmonics 
and integrate over your HRTFs.


The other way, it's far more complicated, because the equations are 
under-determined instead of over-. You have to be able to bring in 
some a priori knowledge to fill the gap, if you want to do it right, 
and the prerequisite knowledge just isn't there yet.


In *theory* you can filter out the higher spatial harmonics brought in 
by the pinna, and in *theory* you can at least in the lower spectrum 
work out something right out of the separated microphones (at pinnae). 
But in practice, that will be high science, and doing it right calls for 
assumptions which we cannot readily formalise. They wouldn't be within 
the ambit of linear signal processing, either, but within highly dynamic 
psychoacoustics. If done optimally.



Re: [Sursound] binaural to FOA?

2021-03-16 Thread Sampo Syreeni

On 2021-03-04, Augustine Leudar wrote:


So I wondered; is there a method to "convert" binaural to
horizontal-only FOA?


Jumping in, after the fact.

If you know the transfer function which led to the binaural rendition, 
especially in movement, you essentially have in your hands an "infinite 
order" system of equations, to be solved. Theoretically, you can solve 
the sparse system represented in time for pretty much any and all 
ambisonic systems. As long as it moves; if it doesn't, it'll yield just 
two degrees of freedom, and those will be well mixed against even the 
first order WXYZ-degrees.



Re: [Sursound] analog planner

2020-02-20 Thread Sampo Syreeni

On 2020-02-20, Charlie Richmond wrote:

3) Or that Siri’s AI is advanced enough to pun by auto-correcting 
panner to planner - as these weren’t made for 3-D...


I've been panning Siri's AI ever since it was released.  And other 
auto-correcting apps too.


I *so* saw that coming. :D


[Sursound] the dominance transformation

2019-03-21 Thread Sampo Syreeni
As Gerzon describes it, the dominance transformation is a Lorentz boost 
in a certain direction, or a four-dimensional hyperbolic rotation, 
between the first four ambisonic channels WXYZ. As far as I know, that 
sort of thing only works neatly in plain 4D Minkowski space, with the 
possible extensions becoming approximate at higher finite orders of 
surface spherical harmonic decomposition. Since I'm no relativistic 
quantum physicist, I'm not too sure about how to even begin to calculate 
with something like a relativistic atomic orbital above the first two S 
and P states -- corresponding to a rigid boost of a higher order 
decomposition, which we'd probably be after if we wanted to formalize 
dominance for higher orders, and which is also beyond a strictly local 
analysis in both the acoustic and EM fields.
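For reference, the first order boost itself is easy to write down; a 
sketch assuming FuMa-style normalisation (W carrying a 1/sqrt(2) 
factor), with lam = 2 as an arbitrary example value:

```python
import math

def dominance_foa(w, x, y, z, lam):
    # Gerzon's forward (+X) dominance; lam = exp(mu) for rapidity mu,
    # so the coefficients below are cosh(mu) and sinh(mu).
    a = 0.5 * (lam + 1.0 / lam)
    b = 0.5 * (lam - 1.0 / lam)
    w2 = a * w + b / math.sqrt(2.0) * x
    x2 = b * math.sqrt(2.0) * w + a * x
    return w2, x2, y, z

# A unit plane wave from 90 degrees (hard side), boosted forward:
theta = math.radians(90.0)
w, x, y, z = 1.0 / math.sqrt(2.0), math.cos(theta), math.sin(theta), 0.0
w2, x2, y2, z2 = dominance_foa(w, x, y, z, lam=2.0)
apparent = math.degrees(math.atan2(y2, x2))  # pulled in to ~53 degrees
```

With lam = 1 the transform is the identity; with lam > 1, forward 
sources gain by lam, rear ones lose by 1/lam, and off-axis images are 
pulled toward +X, just as a Lorentz boost aberrates directions.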


What I *do* know though, is that in the 1D setting of pulse radar time 
series, especially in connection with chirped pulses, hyperbolic phase 
rotations used to be approximated piecewise using physical allpass 
filters with various constant coefficients. Also, it's possible to piece 
back together a perfect LTI response from a heterodyne system which 
implements each band as a simpler non-LTI, SSB-modemed system.


Thus, long story short, is there a theory out there of heterodyning and 
bandshifting akin to the SSB-radio one for spherical harmonics? It 
certainly couldn't work as nicely as it does with normal frequencies, 
since, e.g., there just isn't enough room for the information if you 
downshift from second order to first; whereas there *is* when going down 
linearly in normal frequency from (x,2x) to (0,x). But since the 
spherical harmonic progression too is neatly *quadratic* instead of 
linear, *maybe* there is somewhere a theory which lets us 
*quadratically* transpose spatial harmonics, and so modulate willy 
nilly?


Anybody privy to anything like that? Because if there is such a theory, 
formally transposing the dominance operator one minimum level up could 
clear the way to actually characterising and even implementing higher 
order dominance for good.



[Sursound] about extant ARA materials

2018-08-03 Thread Sampo Syreeni
All of the references to Acoustic Renaissance for Audio (ARA) seem to 
have gone dead. To the list's knowledge, is their work/site archived 
anywhere in full? I do have the papers, somewhere, but I'd like to see 
the whole of their presentation, so that I could show it off as an 
example of rational audio system design.


I believe they used to be housed under Meridian's site, at least at some 
point.



Re: [Sursound] Questions on Ambisonics and space / VR Radio Theater

2017-09-21 Thread Sampo Syreeni

On 2017-09-21, Martin . wrote:

What processes do sound designers here use to design realistic 
ambisonic spaces?


Unfortunately I'm no sound designer, or even practitioner, but I'll 
still butt in with a bit of basic theory. I hope you don't mind... ;)



Do you record it all with tetrahedral microphones?


I'm reasonably certain that is the exception. Obviously, because 
ambisonic is precisely the only end-to-end system in existence which 
enables systematic capture of soundscapes, people who do such acoustic 
work will be over-represented amongst the system's aficionados. But the 
fact remains that true soundfield mics are expensive, as are the 
musicians playing to them. As such the easiest, least expensive and 
so the most common way to exercise the machinery remains doing it 
in-studio.


Are there good reverb solutions to achieve this with mono recordings - 
or to enhance ambisonic recordings? Do you use conventional DAWs or 
also game engines like UE4 and Unity?


It depends: do you just want to author audio, or do you want it to track 
within a game? Ambisonic uniquely goes for both, equally, so again game 
people will be over-represented among the doers. But quite certainly if 
you just want to produce a static soundscape, using a game engine would 
be overkill.


As for reverb in production, especially including distance from a source 
and its movement over time, that side of the picture hasn't really been 
set in stone anywhere.


Early work was content to just pan the direct sound, and maybe put in 
some Schroeder kinda artificial reverb into the W channel. As for 
distance, panning between W and XYZ was used as a quick and dirty fix.


What you really want to do instead is much more complicated. First, 
you'll want to maximally decorrelate the reverb over the channels, so 
that you'll end up with proper envelopment. For a synthetically panned 
source that calls for four mutually decorrelated reverb lines, running 
from the source to WXYZ (and more, if going to a higher order). You'd 
want to add early slap echoes from walls and other obstacles, just as 
you do in stereo reverb modelling, only now panned to come from all 
around. If your sound source is moving, you'd want to model Doppler 
effects with a delay line, and perhaps even go as far as to model 
Doppler in the strongest early echoes as well. Then if you *really* 
wanted to go hyperrealistic, you'd want to model near field effects at 
least for any close sources and/or echoes.
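The Doppler delay line part of that can be sketched in a few lines 
(mono in, mono out; the panning, decorrelation and near-field stages 
listed above are deliberately left out, and the receding-source 
trajectory below is just an example):

```python
import math

def doppler_render(src, fs, dist_fn, c=343.0):
    """Read a mono source through a time-varying propagation delay of
    dist_fn(t)/c seconds, with linear-interpolated fractional delay."""
    out = []
    for n in range(len(src)):
        d = dist_fn(n / fs) / c * fs        # delay in samples
        pos = n - d
        i = math.floor(pos)
        frac = pos - i
        s0 = src[i] if 0 <= i < len(src) else 0.0
        s1 = src[i + 1] if 0 <= i + 1 < len(src) else 0.0
        out.append((1.0 - frac) * s0 + frac * s1)
    return out

fs = 8000
tone = [math.sin(2.0 * math.pi * 100.0 * n / fs) for n in range(fs)]
# Source receding at 10 m/s: the 100 Hz tone comes out shifted down,
# to roughly 97 Hz.
shifted = doppler_render(tone, fs, lambda t: 5.0 + 10.0 * t)
```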


No current software does all of that, be it DAW or game like. As is 
usually the case, we bump into the fact that even plain old ambisonic, 
despite its age, remains hitech which is under development. Its full 
potential has yet to be unleashed, so that if you aim at the limit, 
you'll still often have to roll your own.



Re: [Sursound] NYT: "Surround Sound? You Ain’t Heard Nothing Yet"

2017-07-15 Thread Sampo Syreeni

On 2017-07-15, Bearcat Şándor wrote:

Does anyone know anything more about the ambisonic speaker matrix used 
(speaker brand/model) and what the mic she used was?


Perchance even more to the point, why isn't she on-list already? I'm 
pretty sure each and every person doing this sort of work already should 
be here.



Re: [Sursound] Help: what am I doing wrong?

2017-07-07 Thread Sampo Syreeni

On 2017-07-06, Aaron Heller wrote:

The decoders produced by my toolbox in FAUST (the ".dsp" files) have 
distance, level, and near-field compensation up to 5th-order (and more 
soon). Those can be compiled to a large number of plugin types, 
including VST, AU, MaxMSP, ...


...and we like it. ;)


Re: [Sursound] Help: what am I doing wrong?

2017-07-06 Thread Sampo Syreeni

On 2017-07-05, Martin Dupras wrote:

I've deployed a 21-speaker near spherical array a few days ago, which 
I think is working ok, but I'm having difficulty [...]


Oh, and by the way, *please* compensate each speaker for 1) its 
propagation delay to the central sweet spot, and also 2) its frequency 
and distance dependent proximity effect. Both compensations can be done 
analytically, with the second one being par for the course for close 
range, domestic POA setups of the old kind. In that circuit the first 
one is more or less subsumed or at least approximated by the second one 
already. However if you *do* happen to use speakers at widely varying 
distances from the sweet spot, and you *do* happen to be able to do 
modern digital correction, *do* correct for absolute delay as well. It 
*will* make a difference, especially at the lowest orders. After all, 
you just said you're working with a "near-spherical array"; pretty much 
by definition that means not all of the speakers experience equal 
propagation delay towards the sweet spot...
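The delay-and-level part of that compensation is simple arithmetic; a 
sketch, with the three radii invented purely for illustration (the 
frequency-dependent proximity correction would be a separate filter):

```python
def speaker_compensation(distances_m, fs=48000, c=343.0):
    """Delay (samples) and gain per speaker: hold the near speakers
    back so all arrivals line up with the farthest one at the sweet
    spot, and attenuate them by the 1/r spreading difference."""
    d_max = max(distances_m)
    return [((d_max - d) / c * fs, d / d_max) for d in distances_m]

for delay, gain in speaker_compensation([1.8, 2.0, 2.2]):
    print(round(delay, 1), round(gain, 3))
```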



Re: [Sursound] Help: what am I doing wrong?

2017-07-06 Thread Sampo Syreeni
But that's not really true when we work so far away from the limit 
proper, as we do with even a dozen or two dozen speaker arrays; there we 
easily perceive multipathing, and the degradation which comes with it. 
As it also happens, it seems that the lower order multipathing, to most 
realistic degrees, somehow gets compensated by our hearing.


We don't have a nice theory of how precisely that happens, but we do 
seem to have plenty of evidence in both anechoic and more realistic 
conditions that something like that must be happening. For instnace, 
it's already more or less an established fact that a four speaker *most* 
basic first order POA system sounds better than a regular hexagon, and 
over a wider area; the difference isn't too subtle either: under blind 
listening conditions even I, with my pronounced hearing deficit, could 
*instantly* pick up on it.


That is then perhaps the best reason to go with higher order systems if 
we at all can: even if they can't approach the holophonic bound in any 
practicable way, they do isolate crosstalk so that it leads to less 
combing with a given number of speakers, so that multipathing doesn't 
lead to such prominent spectral lobing. And even if it does lead to time 
domain anomalies, they too will be closer to something our extant 
temporal (pre)masking machinery can handle.


Finally, once again, that's just my synthesis of a bunch of vague 
memories and my own thinking. Various people on-list more knowledgeable 
and more up to date might disagree. But in any case, these questions 
*have* been raised before, and on various occasions been discussed at 
length. Hopefully my ideas above can at least serve as pointers to what 
is already in the list archive. :)



[Sursound] [off-topic] Multichannel players for permanent installations

2017-07-01 Thread Sampo Syreeni

On 2017-06-30, Augustine Leudar wrote:

its not the heat Dave - its the humidity (100 percent) that destroys 
speakers , that and the ants . Who funded that ?


Also what mostly throws off your naïve sound speed calculations. 
Temperature and the humidity which typically goes along with it jointly 
behave somewhat counter-intuitively when you input them into the full 
wave equation, dealing with multiple mixed gases at the same time.


BTW, that brings to mind a totally unrelated thingy. It's a standard pop 
physics question why a boiling pot seems to go from higher to lower 
sounding noise. Most plausible accounts of the phenomenon I've seen 
speak high-falutin stuff about average sizes of bubbles and whatnot.


Then a theoretical physics friend of mine once offered a (seemingly) 
quite different, unexpected and in hindsight obviously correct answer: 
the viscosity of water drops just as that of pretty much any fluid we 
consider normal while its temperature rises. When you input the two 
numbers representing that into the relevant equations, your first order 
approximation just drops out. If you want to do something second order 
or further, you'll have to start speaking about bubble sizes and 
whatnot, but they too scale with temperature and viscosity, at first 
linearly around the phase transition from whence they come.


Goes to show, you're exactly right: a change of viewpoint from 
temperature to humidity, from energy level to how water behaves in it, 
often explains more and more simply than temperature by itself. 8)



Re: [Sursound] Multichannel players for permanent installations

2017-07-01 Thread Sampo Syreeni
 mounted to a purpose built monitor room's front wall. 
They perhaps just took a Watt of power to drive, and were nigh perfect 
for their time (could be even better now in their particular 
application, too), but then considering the extra baggage in excess of 
the driving Ethernet cable... ;)


Still, there are lessons to be learnt from the earlier times, even with 
regard to the design of more reasonable PoE driven amps and speakers.


Perhaps for very small speakers with a low power digital amplifier, 
but anything decent would require mains (or DC batteries) to the 
speaker.


I'm not too sure it would... ;)


The available current may power the DAC though.


Any competent mixed domain engineer of today can engineer the DAC into 
the amplifier, or vice versa. They are no longer separate things at the 
low end of the spectrum, and vice versa, at the high end of the 
switching spectrum, we can also do a lot of things at the same time 
which don't really seem like either power or signal conversion.



Re: [Sursound] Multichannel players for permanent installations

2017-07-01 Thread Sampo Syreeni

On 2017-06-28, Marc Lavallée wrote:

But a 24-channel RPI based solution would be much more difficult to 
integrate and would still cost over $400, which is about the same 
price as the uTrack24 [...]


I wasn't aware of that one before now. So I have to fold: you can't beat 
something like that if it does *half* what it's supposed to. Especially 
since they promise you can gang them up to four units at a time.


I'd advise a modicum of caution, however. I mean, they never say you can 
beam all those channels off the thing in real time, they don't talk a 
whole lot about doing simultaneous A/D and D/A at the same time, and 
even the fact that you can only get the full 24 channels at a reduced 
sample rate of 48kHz (only 8 at the full 96kHz one) took a bit of 
digging. Their customer service, while well-meaning, also doesn't seem 
to be the most technically minded or most connected to the resident 
engineer. I couldn't get a clear synch guarantee for the hardware of 
the kind we'd have to have for HOA and/or WFS work. So, there might yet 
be a 
devil or two in the details.


But still, you are right. Both the price point and the specs of that 
thing are *insane*. If they do even *half* of what they claim, there's 
just no possible way you can beat it by building your own. Even a 
well-regulated power rail for your converters could eat up half of an 
amateur's budget. Jesus... :D



Re: [Sursound] Multichannel players for permanent installations

2017-06-27 Thread Sampo Syreeni

On 2017-06-27, Augustine Leudar wrote:

I know I've asked this before but maybe there's some new developments. 
HAs anyone any suggestions for anything up to a permanent 22 channel 
installation (could be two devices started at the same time and set to 
loop). [...]


Why not just have a couple of commodity PC's running the highest channel 
count external D/A converters you can muster within your budget? Given 
the existence of 7.1 home cinema, lots of implicitly synchronized 
converters already exist, at reasonable cost. Given the existence of 
newer USB, Firewire, HDMI and even Ethernet based transports, reliably 
feeding such multibank converters shouldn't be a problem.


The only problem is how to keep up synchronization between the 
converters. But then that ought to be a problem which is soluble by 
feedback and software.


First, even low-range modern converters keep pretty good time all by 
themselves. They don't drift too much, compared to what we can hear in 
spatial reproduction; compared to accidentally moving the listeners' 
heads ten centimetres or so in aggregate, even an average free running 
commercial converter will keep adequate time over minutes or tens of 
minutes by comparison.


And second, if you really want to make sure your separate converters run 
in time with each other, it's possible to insert an inaudible timing 
reference into the signal set which lets you do continuous feedback 
correction. If this sounds alarming, it should: lots of papers and 
trials exist which purported to make inaudible changes to the program 
material, yet proved to degrade playback. Especially on the digital 
watermarking side the results have been dismal.


But then, here the application would be rather different. First, unlike 
in watermarking, there would be no requirement for the added signal to 
be highly resilient. Instead it could be optimized to be highly 
inaudible. That means that if it was only added to aid in 
synchronization, it would need only a very narrow bandwidth, a low 
amplitude, and the resulting extremely low data rate, so that it could 
be buried *well* below even the perceptual noise floor of an existing 
recording.


And secondly, it would only be present upon playback. It wouldn't need 
to be buried into the original program material. It could be optimized 
further, e.g. on psychoacoustical grounds, over just the one playback 
system, or switched off at will if it ever somehow annoyed any listener. 
It could also be different for each playback system, and each 
reproduction instance, so that the human hearing system's notorious 
capability of learning to recognize even low level noise signals, if 
repeated often enough verbatim, could still be subverted.


As I said, I don't think such solutions exist as of now. But I also 
think the theory behind them is well developed enough to make their 
implementation for your use almost trivial. All we'd need is 1) an 
inaudible reference signal to provide us with a relative delay reference 
(easily doable via MLS sequences or the like), 2) a self-acquiring servo 
loop to drive a set of relative delay estimates to zero (a simple 
exercise in first order control theory, with the driving measurement 
being derived from an FFT-implemented autocorrelation measure), and 3) a 
high grade delay resampler, capable of continuous variation, to be 
driven by such a measurement-correction loop (implementable utilizing 
part of said FFT machinery, or separately if e.g. oversampling, minimum 
phase characteristics or something such is required).
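The probe signal and the measurement it feeds can be sketched together: 
generate an MLS reference and estimate the relative delay from the 
correlation peak. The degree-7 register and the 17-sample test delay 
below are arbitrary example choices; a real system would run much 
longer sequences, at a level well under the noise floor:

```python
def mls(m=7, taps=(7, 6)):
    """A +/-1 maximum-length sequence of period 2**m - 1 from a
    Fibonacci LFSR; taps (7, 6) correspond to a primitive polynomial."""
    state = [1] * m
    seq = []
    for _ in range(2 ** m - 1):
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        seq.append(1.0 if state[m - 1] else -1.0)
        state = [fb] + state[:-1]
    return seq

def estimate_lag(ref, observed):
    """Circular cross-correlation peak: the relative delay estimate
    which the servo loop would drive to zero."""
    n = len(ref)
    scores = [sum(ref[i] * observed[(i + lag) % n] for i in range(n))
              for lag in range(n)]
    return scores.index(max(scores))

probe = mls()
delayed = probe[-17:] + probe[:-17]     # simulate a 17-sample delay
lag = estimate_lag(probe, delayed)      # recovers 17
```

The MLS autocorrelation is a single sharp peak over a flat -1 floor, 
which is exactly why the peak search is unambiguous even in noise.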


I'll attach Olli Niemitalo here, because he might be even better versed 
than I am in this sort of thing, and certainly is the more 
adventuresome+productive of us two, in the signal processing department.



Must be bomb/cleaner/child/adult proof,


Tape one gigabit ethernet cable into the floor, leading to your 
converter bank in the next combustion compartment. Software gang it up 
with a wifi route leading to the same destination.


The only way it's more proof against the end of the world, or your 
kids, is a setup where you can't actually route the signal back to be 
heard in the first place. ;)



Re: [Sursound] Re. Re: Re Re: Ambisonic Mic Comparison

2017-06-27 Thread Sampo Syreeni
re in excess of 30dB. The end result just fell silent, as it 
should have been from the start. I'm also rather certain the same result 
could have been achieved at negligible cost had the PCB been designed 
right from the start; even going with Cirrus's reference design and the 
ample commentary which came with it, the board could have been as first 
rate as the first iteration was. Combining those two separate teams' 
efforts fully, I'm pretty sure they could have exceeded my eventual 
achieved performance by at least a couple of decibels in inherent noise 
and power supply rejection -- my modifications having costed something 
in the vicinity of 15€ in current money, while being total overkill, and 
achievable by sane, careful circuit design at a tenth or even a hundreth 
of the total cost.


Long story short, openness and the sane engineering discipline which 
goes along with it can achieve a lot. Even in the lowest realms of 
engineering. In this case, with one half-clueless teenager rectifying an 
ex post *obvious* engineering fault of a well-performing, already mostly 
well-engineered mass-market product.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Re. Re: Re Re: Ambisonic Mic Comparison

2017-06-27 Thread Sampo Syreeni

On 2017-06-26, David Pickett wrote:

This whole business of low noise microphones and preamps is in my 
experience a non-issue in the vast majority of cases. Very few 
environments are quiet enough to be softer than the noise level of 
most microphones.


Agreed, and thanks for pointing that aspect out aloud. Quite a number of 
people -- myself in particular because I have very little on-field 
experience -- tend to be swayed by minute theoretical disagreements 
which have absolutely nothing to do with our two shared goals: the best 
all-round practical signal chain possible, and the best sounding records 
(in their many forms) achievable within its bounds.


Sampo is right about the ease with which a high quality mic amp can be 
put together, taking reasonable (and obvious) precaution with 
screening, rf suppression and PSU smoothing. [...]


Now if only all of the bucks being thrown at the problem went to 
mitigating such real and well known culprits. If only we did high 
fidelity and the high tech which goes along with it at the *true* 
fidelity margin, at all price points. I mean if we ever got close to 
something like *that* principle, you could even today have a well 
rounded, effective setup for less than a week's pay. If you decided to 
invest more and had the wherewithal to go there, you could do even 
better -- but now all-round, so that what you saved in your mic and 
preamp, you could invest into acoustic treatment, professional help and 
whatnot, which currently tend to be underappreciated with respect to the 
difference they make.


I have a dream... :D
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Re. Re: Re Re: Ambisonic Mic Comparison

2017-06-26 Thread Sampo Syreeni

On 2017-06-26, John Leonard wrote:

Well, of course, the ULN-8 isn’t just a pre-amp: it happens to have 
eight very nice low-noise high-gain mic pre-amps as part of the 
package, that’s all.


Nowadays even the very top tier of operational amplifier technology 
costs something like 40 bucks per stereo pair. So, with the minimum of 
high grade passive components included, and adding a
competitive markup, plus upholstery, it's unlikely you should have to 
pay much in excess of 150 euros for your mic preamp. At that price 
point, it already ought to look good as well as sounding nice.


What I mean to say here is that far too many people price themselves out 
of the game. Analogue preamp engineering is not exactly the kind of 
rocket surgery which calls for thousands of bucks at a pop. Done right, 
using current top of the line components, it certainly makes its demands 
known to your wallet. But when done right, it won't blow the bank.


Compared to the much more stringent and esoteric demands placed on a 
well-designed spatial pickup feeding your preamp, the latter ought to be 
a mere financial byline.


Empirically, based on my own experiences, the two low-cost 
(sub-$1,000.00) multitrack portable recorders that I tried after 
initially purchasing the Tetramic, were noisy at high gain when 
attempting to record quieter sounds with the Tetramic.


Then, regurgitating Fons's words, something is very wrong there. Using 
current chips and just a hint of age old analog engineering knowhow e.g. 
in how to make up a proper gain structure, you can *easily* get into 
stable 18 bit territory for less than a hundred bucks, per four 
channels. Getting past 20, predictably, starts to run into diminishing 
returns, true. But then pretty much nobody both has the mic to exercise 
such extreme sensitivity, *while* at the same time running such a 
forbidding absolute amplitude reference as to not be able to gain ride 
any residual noise below the audibility threshold. Pretty much the only 
folks with such stringent demands come from a film sound background, 
with enough bucks from the get go to render the whole point moot.
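For scale, the ideal-quantizer rule of thumb behind figures like "18 bit 
territory" (a back-of-envelope check, not a measured preamp spec):

```python
def dynamic_range_db(bits):
    """Ideal dynamic range of an N-bit quantizer: full-scale sine
    against quantization noise, 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

# "Stable 18 bit territory" is about 110 dB of dynamic range;
# 20 bits, where diminishing returns set in, about 122 dB.
print(round(dynamic_range_db(18), 1))  # 110.1
print(round(dynamic_range_db(20), 1))  # 122.2
```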


I found that this was not the case with the ULN-8 and also with the 
Sound Devices 788T. We shall see how the SD Mix-Pre 6 behaves when it 
eventually reaches these shores.


To reiterate, if you can hear any difference between such already rather 
high end devices, somebody, somewhere royally fucked up. Even if my 
understanding of analogue engineering is rather limited, still, given 
the almost unreal performance of today's opamps as a building block, 
even I am fully confident I could design a minimalistic, well-performing 
mic preamp from scratch.


As always, other people may have different experiences, but their 
needs may not be the same as mine.


Granted, that. However, I'd still like to challenge you, as a 
practitioner, a wee bit. What *are* your needs, really? How do you 
quantify them, precisely?


I mean, once we know what you and the other practitioners are *really* 
after, I'd contend that even a half-baked theoretician such as myself 
could readily give rise to some rather magical sounding things. The kind 
of bang for buck which is easily two or three decades beyond what an 
audiophile purveyor would have you believe is possible.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Re. Re: Re Re: Ambisonic Mic Comparison

2017-06-26 Thread Sampo Syreeni

On 2017-06-26, David Pickett wrote:

I am not a MOTU user, but at USD 449, the MOTU 4Pre presumably has a 
quite adequate noise specification, or the highly competent Tetramic 
engineers would not have mentioned it in the same sentence as the more 
expensive options.


Furthermore, on the digital side of things, we can buy something 
equivalent to yesterday's supercomputer for nickels and dimes (cf. 
Raspberry PI and its ilk). It doesn't seem quite right that we couldn't 
engineer high quality analogue preamp circuitry towards a similarly low 
price point, especially since the advances in digital circuitry are 
synergistic with analogue designs.


It's been something like a decade since I looked into something like 
Cirrus Logic or Analog Devices A/D silicon. Then a perfectly good four 
channel, 16-18 bit equivalent chip cost somewhere in the vicinity of 
$35. A reference PCB layout for it came free as well. Thus, the idea 
that you'd now have to expend anything beyond some 100€ in hardware in 
order to get a fully functional, top of the line preamp for your top of 
the line ambisonic mic, seems pretty much preposterous.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Re Re: Ambisonic Mic Comparison

2017-06-25 Thread Sampo Syreeni
sparency 
of various transmission channels, *always* after factor analysis/PCA 
arrive at the same results: the principal component in the spectral 
domain consists of a nigh-linear spectral tilt, after compensation for 
some near-Weber-Fechner-law. Integrated over the whole of the human 
frequency passband, sensitivity to such average tilt is just 
ridiculously high, so that it for example tends to dominate loudspeaker 
and headphone preference to something like 1.1-1.3 sigma level.


(Sorry once again, I never, ever remember where I got my info; I'm 
clinically unable to remember any references, or faces, or numbers, or 
sometimes even my own name. So, take it with a grain of salt; it 
shouldn't be too difficult to find the relevant studies, given you 
prolly have access to all of the best periodicals already.)


The only real, attested-to deviations from that idea/ideal of just the 
spectral tilt governing all, are 1) speech-formant-like 
characteristics, i.e. waveguide-like resonances excited by 
near-periodic waveforms with some nonlinearity so as to not *just* 
"light up" the resonance using a single harmonic series but having the 
excitation be a bit more spread out as it is in human speech, 2) the 
ridiculous sensitivity peak at 2-6kHz as attested to by the empirical 
ITU-R BS.468 transfer function; believe it or not, even to date it pretty 
much defies reduction to any basic psychoacoustical theory, and 3) the 
unreasonable efficiency of the human hearing system to react to 
wideband, binaural/dichotic onsets, and discern them beyond even high 
static noise backgrounds.


If you doubt me here, just read through the perceptual audio coding 
theory as a whole. All of the above has been explicitly taken advantage 
of, there. Fully? I dunno. Probably the last, time-domain thingy is at 
least a topic of contention. Especially since it has been cited as an 
explanation for why wide bandwidths in digital audio of over 25kHz (cf. 
ARA) could perhaps lead to better spatial resolution/spatiousness.


(BTW, Peter Craven seemed to provisionally buy into the argument, too. 
As one of the Ambisonic masterminds. He once put out an AES paper about 
the provisional benefits of minimum phase D/A reconstruction filters. I 
don't really buy into that theory *per se*, but just as Craven, given 
that we have extremely high sampling rates, arbitrary order digital 
filters and reasonable lossless compression algorithms readily available 
nowadays, I'd too advocate for wide bandwidths, slow rolloffs and 
perhaps even for minimum phase reconstruction filters.)


Because, what would you really lose? Nothing in time or frequency at 
least, because of the *extreme* rates and filtering accuracies we 
currently have. What might we gain? Well, unconditional freedom from 
preringing. Which really *can*, at least in theory, be translated into 
something nonlinearly hearable, even via your common speaker or 
headphone. Thus, just to be sure...



More after I've read the AES papers.


I'd really like to see your interpretation of them.

Not to mention, they really should go into the Motherlode. Somehow, 
someone pirating them at their own peril, for communal benefit. I'm not 
the one to say *you* should be the one to betray your licence with your 
relevant publisher...except that I kind of am... ;)

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Re Re: Ambisonic Mic Comparison

2017-06-24 Thread Sampo Syreeni

On 2017-06-24, Fons Adriaensen wrote:


... In that scenario, the TetraMic recording was definitely
noisier, purely due to the additional gain required.


That doesn't make much sense.


I think as much, as well.


Noise level (relative to signal) shouldn't increase with gain.


At least when noise level is well defined...

The Tetramic capsules have an acoustic noise level of 19 dB(A), and 
sensitivity is 7 mV/Pa. [...]


Enda didn't quantify what s/he meant by "noise" too well. You talk about 
A-weighted measurables, then, while Enda probably talked about program 
level overall noise and distortion, or something like that.


Could it be that you're just talking about different perceptual 
weightings? I mean, if we talk about noise, there we shouldn't ever go 
with A-weighting, or even C-weighting, but the ITU 468 curve. The one 
which peaks as fuck between 2-6kHz, and explains how things like Dolby B 
and C sliding band companders work so well; the one which also fails to 
explain the loudness of impulsive, nonstationary, nonlinear noise, yet.


I mean, the perceptual cognates of lower quality in this test appeared 
to be in precisely that frequency range.


That means the electrical noise level is -116 dBm(A). If the EIN of 
the preamp is say 6 dB or more better then most of the noise comes 
from the mic. So with an EIN of -122 dBm(A) you should be safe, and I 
wouldn't call that 'high end'.


I'd call that just "sane engineering for a sane gain structure".
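Fons's -116 dBm(A) figure indeed checks out from the quoted capsule 
specs, taking 94 dB SPL = 1 Pa and reading "dBm" loosely as dBu, as mic 
specs usually do:

```python
import math

SPL_REF_1PA = 94.0     # dB SPL corresponding to 1 pascal
sensitivity = 7e-3     # V/Pa, the TetraMic capsule figure quoted above
noise_spl = 19.0       # dB(A) acoustic self-noise, quoted above

noise_pa = 10 ** ((noise_spl - SPL_REF_1PA) / 20)   # pascals
noise_v = sensitivity * noise_pa                    # volts
noise_dbu = 20 * math.log10(noise_v / 0.7746)       # re 0.7746 V
print(round(noise_dbu, 1))   # -115.9, i.e. the -116 dBm(A) above
```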

The specs for the Motu 8m don't even mention EIN (which isn't a good 
sign). The 'dynamic range' figure of 112 dB can mean all sorts of 
things and is pretty useless.


Precisely.

Funnily enough, I'm about to buy meself four speakers right about now. 
For the first time. So that I could finally, eventually, at least do 
some pantophonics for myself before I *die*. Compose for at least a 
simple setup of four identical floor-standing speakers, and whatnot; 
the very minimum of a proper spatial reproduction rig.


It's then amazingly difficult to get a rig amenable to the job. At my 
rather low price point, it's almost impossible to get any numbers on how 
your prospective loudspeakers behave. Pretty much no speaker manufacturer 
wants to publish even such basic measures as impedance curves at 
contact, driver thermal constants/dynamic compression time constants, 
polar response plots, waterfall plots, crossover frequencies, phase 
plots, and the like.


Undoubtedly it's more complicated on the speaker side. But the 
difficulty is manifest on the mic side as well: all of the measurables 
dual to those of a speaker can in fact sometimes affect the performance 
of a mic, and then even there they don't just tell you outright what 
those measurables *are*. Then because of the unknowns, you might well 
end up paying several thousands of euros extra, for nothing at all. Even 
at the high end, which people here talk about... :/

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Re Re: Ambisonic Mic Comparison

2017-06-24 Thread Sampo Syreeni

On 2017-06-24, Enda Bates wrote:

In terms of directional accuracy, our study did find the Ambeo to be 
slightly more accurate, but with the difference in capsule spacing 
that was expected.


I glanced at the post, but don't seem to remember. Sorry about that. But 
I just wanted to make sure: was the test double blind? You talk a *lot* 
about what was to be expected, so that eliminating observer bias might 
be doubly important.


Both the Ambeo and TetraMic were recorded with a MOTU 8m, with the 
stock Ambeo cables, and the PPAc cabling for the Tetra over a very 
short cable run, and yeah, the specific calibration was for sure used 
for the Tetra.


In general, on a priori grounds, mics wouldn't be expected to be much 
affected by cable considerations. Pretty much all professional mics of 
today tend to be extremely low current devices, which means they aren't 
too sensitive to resistance, and because of that, their proper cabling 
is also rather thin, leading to low capacitance due to the low effective 
cross section between the conductors, and low inductance because of 
close lead spacing. Of course all that modulo shielding, but still.
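To put rough numbers on that, with assumed-typical values rather than 
measurements of any particular mic: a ~200 Ω source driving 10 m of 
~100 pF/m cable gives a first-order corner far above the audio band.

```python
import math

R_source = 200.0    # ohms; typical condenser-mic output impedance (assumed)
C_per_m = 100e-12   # farads/metre; typical shielded mic cable (assumed)
length_m = 10.0     # a generous studio cable run

C = C_per_m * length_m
f_corner = 1 / (2 * math.pi * R_source * C)  # first-order RC lowpass corner
print(round(f_corner / 1e3))  # ~796 kHz, orders of magnitude above audio
```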


Thus I think if the cabling needs to be mentioned, especially with high 
end, broad diaphragm capsules, someone, somewhere did something Nasty. 
It should be a given that with mics costing thousands of euros a piece 
the cabling at least can be assumed to be beyond audible reproach.


In that scenario, the TetraMic recording was definitely noisier, 
purely due to the additional gain required.


Is that because of smaller capsules, lower line levels, noise gain, or 
what, you think?


I've heard from quite a few people that given a high end preamp with 
sufficient clean gain, that's not so much of an issue.


How does that happen, precisely? I mean, in analogue circuit terms? 
If the noise signal is there, no amount of even cleaner gain is going to 
take it away. Input impedance issues might *generate* noise, true, but 
then again what *is* the precise issue, here?


Inquiring minds want to know. :)
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Matrix H lives ??

2017-04-24 Thread Sampo Syreeni

On 2017-04-13, Eero Aro wrote:

These two CDs of the Proms have been released, and it is not 
certain whether these are Matrix H at all. They do decode well with an 
Ambisonic decoder.


I'm pretty certain people on this list, perhaps combined with the folks 
on music-dsp, can utilize modern statistical signal processing 
algorithms in order to ascertain the encoding locus of just about 
anything. If need be. 8)
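As a sketch of the general idea -- not a validated Matrix H detector: 
phase-matrix systems encode surround content with ±90° inter-channel 
phase shifts, so the distribution of the instantaneous phase difference 
between the two channels, taken via the analytic signal, already 
separates amplitude-panned material (mass near 0) from phase-encoded 
material (mass near ±π/2).

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (one-sided spectrum doubling)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def phase_difference(left, right):
    """Instantaneous inter-channel phase difference, in radians."""
    return np.angle(analytic(left) * np.conj(analytic(right)))
```

On real program material one would histogram this over short frames and 
frequency bands; a persistent cluster at ±π/2 is the fingerprint of 
phase-matrix encoding.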


But beyond that, everything has to be preserved. Everything. Every 
four-track, and whatnot. That's the priority. Not really the debate over 
what the material really *is* or *should be*.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] Matrix H lives ??

2017-04-24 Thread Sampo Syreeni

On 2017-04-13, g...@btinternet.com wrote:


My question is whether it is worth digitising these recordings for posterity?


It always is. I'm one of the folks who are willing to bear part of the 
brunt.


But only part of it. Full audio/video takes a lot of room, unlike my 
beloved textual Motherlode. I'd think we'd have to establish a shared, 
distributed backup for this sort of thingy. Beyond what I or anybody 
else can singly put out, or retain.


Anybody game? Even if I'm rather poor right now, I can put down 
something like 2-4TB of cold storage right now. Mirrored. For pure 
audio, that goes a long way, I think. For video, not so much. Still, 
anybody else game?


I have memories that this system utilised phase-encoded elements so 
would the cassette tape's performance have degraded any surround info?


They do. So, the brunt of it would fall on the people holding the 
original material, who'd have to arrive at an arrangement which ensures 
that the originals truly are digitized in the best manner possible; 
without losing quality. Also, you can't lose the original even after it 
has been digitized, but you have to preserve it as well; to perchance 
have it re-digitized at a later date...



I'm reluctant to throw away rare examples of our past technologies :-)


Indeed. That'd be pretty much an example of secular sin. Please let us 
do better than that, as a crowd. 8)

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [Sursound] The BBC & Quadrophony in 1973

2017-01-11 Thread Sampo Syreeni

On 2017-01-10, Stefan Schreiber wrote:

I came basically to the same conclusions. (AAC implementation 
possible, some MP3 "hack" maybe...) But to implement this, you or I or 
anybody would need the (detailed) format specifications.


It'd take a fair amount of digging, but I think I have all of that 
stashed away somewhere, for at least the MPEG2 layers and Transport 
Streams, MPEG4, RIFF, DTS, AC3 and Core Audio. Not exactly sure about 
the newer coding tools like Parametric Stereo, though. (Of all the 
things, I collect data format specs.)

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2

