The LISTEN set from IRCAM, the KEMAR set from MIT, and the spherical-head set by R. Duda are included in the Ambisonic Toolkit. I use them on http://ambisonic.xyz/ . The spherical set is probably a good-enough compromise for VR applications, since perfection is not required for a good experience. What seems to be missing is a practical method for providing personalized HRTFs to users.

-- Marc
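For what it's worth, here is a naive sketch of what "averaging" a multi-subject HRTF set (like the IRCAM Listen data mentioned below) could look like. This is NOT Blue Ripple's actual Amber method, just an illustration of one simple approach: average magnitude spectra per direction and ear, since averaging raw impulse responses directly tends to cancel detail when subjects' phase responses differ. Array shapes and the toy data are assumptions for illustration.

```python
import numpy as np

def average_hrtf_magnitude(hrirs):
    """Naive cross-subject HRTF average (illustrative only).

    hrirs: array of shape (subjects, directions, ears, taps).
    Returns the magnitude-averaged HRTF, shape (directions, ears, bins).
    """
    spectra = np.fft.rfft(hrirs, axis=-1)   # per-subject complex spectra
    return np.abs(spectra).mean(axis=0)     # average magnitudes across subjects

# Toy data standing in for a real measurement set (e.g. ~50 Listen subjects):
rng = np.random.default_rng(0)
hrirs = rng.standard_normal((50, 4, 2, 256))  # 50 subjects, 4 directions, 2 ears
avg = average_hrtf_magnitude(hrirs)
print(avg.shape)  # (4, 2, 129)
```

A real "sensible compromise" would surely need more than this (minimum-phase reconstruction, ITD handling, perceptual weighting), which is presumably why Stefan doubts the process is trivial.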
On Sun, 24 Jan 2016 19:31:33 +0000, Stefan Schreiber
<[email protected]> wrote:

> http://www.blueripplesound.com/hrtf-amber
>
> > The IRCAM AKG "Listen" HRTF data contains measured HRTFs from about
> > 50 different people - this must have taken a lot of effort and
> > we're very grateful to the good folk of IRCAM for doing the work
> > and making the results available to the world! What we've done is
> > analyse this data and come up with an 'average' HRTF that is a
> > sensible compromise, using some new work. As it's an average, it
> > wouldn't be perfect for any of the people actually measured, but
> > hopefully not awful for any of them either! It's certainly much
> > better than conventional "panning" techniques.
>
> (See also: http://www.blueripplesound.com/personalized-hrtfs )
>
> > We provide "generic" HRTF models (for instance, our Amber HRTF
> > <http://www.blueripplesound.com/hrtf-amber>) which work well for
> > many people, but even better results can be achieved using
> > personalized HRTF measurements.
>
> Could any people, companies or institutions on this list provide
> access to such a practical and <usable> generic HRTF model?
>
> If not: I believe that some essential theses and papers should have
> been done in the academic world, but don't seem to exist anyway.
>
> Richard Furse basically states that a "good" generic HRTF is derived
> from many HRTF measurements (data sets) via some form of averaging,
> as a "sensible compromise". I doubt that this is a trivial process,
> though...
>
> Best regards,
>
> Stefan
>
> P.S.: VR companies will currently have to look into these issues,
> and to find solutions which are practical at least <for most>
> people. If a proposed HRTF data set doesn't fit an individual
> listener, it should be pretty hard to distinguish between front/back
> sources, for example. (Even with head-tracking.)
>
> Don't tell me that I didn't present a paper to prove my point...
> Instead, give me the link to a paper which delivers some kind of
> optimized generic HRTF data set. If such a paper doesn't exist (yet),
> I don't see any reason why something like "Amber HRTF" can't be
> re-engineered.
>
> (Amber HRTF itself is derived from the IRCAM AKG "Listen" HRTF data,
> a publicly available set. And even IRCAM should be interested in
> providing a good universal HRTF based on its own and public HRTF
> research!)

_______________________________________________
Sursound mailing list
[email protected]
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here,
edit account or options, view archives and so on.
