Good evening,

It’s been very informative reading this list and learning from all of you experts.
I’m an experienced audio engineer who suddenly discovered Ambisonics thanks to the whole VR/360 explosion. (Although I did make some recordings with a Calrec MK4 in the mid-nineties, we would just mix them down to stereo, not knowing what to do with those “B-format” outputs and thinking they were used by the “B”BC only… shameful, I now realize… we were young… :-)

As I’m very new to this, many questions remain unanswered even after reading this list thoroughly along with other resources, and hopefully some of you can take the time to answer them. I’ll try to put them in separate threads so we can tackle the issues one by one; if you’d prefer otherwise, let me know.

Question 1: I understand that a big variable regarding localization in Ambisonics-to-binaural decoding is picking the right HRTF. Is there a method whereby we could use test tones or pink/white noise to approximate the subject’s HRTF and then use the closest measured HRTF from, e.g., the IRCAM or CIPIC databases? For example, say we use 100 Hz, 1 kHz, and 10 kHz, and the listener has to press a button on their device when they hear each tone exactly in the middle, or exactly at -180°, or similar. Or use regular and phase-reversed tones, with the subject calibrating to the point where they sound loudest or softest?

Is this a ridiculous idea, or does it have some standing? Would it be very CPU-intensive, or just a matter of supplying a spreadsheet of the IRCAM/CIPIC measurements and comparing the subject’s answers against it? Surely it’s far from perfect, but what other solutions do we currently have to give binaural listeners the best possible outcome, apart from getting themselves measured or working through a whole list of HRTFs?

Thanks!

Albert
_______________________________________________
Sursound mailing list
[email protected]
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit account or options, view archives and so on.
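P.S. To illustrate the "spreadsheet comparison" step I had in mind: computationally it would be trivial, something like a nearest-neighbor lookup over the database entries. A minimal sketch (the subject IDs and all the numbers below are made up for illustration, not actual CIPIC/IRCAM data):

```python
import math

# Hypothetical database: for each measured subject, the azimuth (degrees)
# at which our three test tones (100 Hz, 1 kHz, 10 kHz) were perceived
# when rendered at 0 degrees through that subject's HRTF. Invented values.
database = {
    "subject_003": [2.0, -5.0, 12.0],
    "subject_008": [0.5, 1.0, -3.0],
    "subject_021": [-4.0, 8.0, 20.0],
}

def closest_hrtf(listener_responses, database):
    """Return the database subject whose measured responses are nearest
    (Euclidean distance) to what the listener reported."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(database, key=lambda subj: dist(database[subj], listener_responses))

# Listener pressed the button at these perceived azimuths for the three tones:
print(closest_hrtf([1.0, 0.0, -2.0], database))  # prints "subject_008"
```

Whether three narrowband tones carry enough information to discriminate between HRTFs is of course the real question; the lookup itself is negligible work.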
