Hi Peter,
    Like I just said - needs experiments in zero G. I wonder what the
acoustics in the ISS are like? Might be easier to organise decent
acoustics in a Vomit Comet
(http://en.wikipedia.org/wiki/Reduced_gravity_aircraft) especially as
the padding already there would help. Now, where do we apply for
funding??

    Dave

On 5 November 2012 14:18, Peter Lennox <[email protected]> wrote:
> Eric, some interesting thoughts there, thanks.
> One or two thoughts in reaction:
> 1) You say "There have been a lot of studies regarding localization in the 
> transverse (horizontal) plane" - I know it's quite common to conflate these, 
> but (as implied in your later thought experiment) it's worth pointing out 
> that "horizontal" is specified as perpendicular to gravity. When a person is 
> standing or sitting straight and the head is not tilted, the conflation is 
> permissible. But people tilt and move their heads all the time, so acuity in 
> hearing in the transverse plane is not the same as acuity in the horizontal 
> plane.
>
> 2) Your question about acuity when the body is not in that 'usual' 
> orientation: I've thought the same thing, though the other way around - I put 
> people flat on their backs, then played ambisonic material tilted through 90 
> degrees, to see if they got some different experience. So, I was interested 
> in perception in the vertical, but using that transverse plane. The 
> experience was different, but inconclusive in that it wasn't a controlled 
> experiment, of course. I found that identification of source direction was 
> less good than I'd anticipated. BUT - actually, (going back to experiences 
> whilst camping - I've lain awake in the countryside thinking about these 
> things) - listening (especially for direction) with your head so close to the 
> ground is certainly an unfamiliar experience. You've messed up a lot of the 
> pinnae effects. Interaural differences may well be affected. You've got a 
> peculiar pattern of very early reflections (from the ground next to your 
> ears). Most importantly, you're listening to sources in the sky, with no 
> reflective and occlusive bodies around them. There's no 'ground effect' of 
> the sort that a standing or sitting person will get - that is, early 
> reflected material that has interacted with the ground, including filtering 
> by surface features and clutter (material objects and detritus have a 
> tendency to be near the ground due to gravity...). So, overall, hearing in 
> that area just won't be the same.
> The above might partly account for why, in your experiment, hearing in the 
> horizontal might seem better than it ought - there are simply more cues 
> available for sources at or near the ground? However, in the camping example, 
> I did find increased instances of reversals.
>
> So I had thought there might be an interaction between gravity and spatial 
> hearing, but realised that some of it is just down to physics - the sky 
> really is different from the ground, we really are sort of "2.5 d" hearers 
> (and thinkers?). I'd also wondered whether distance (range) perception might 
> differ with direction. It does (items seem nearer), but that's more to do 
> with the physics of the matter - for sources in the sky, sometimes (not 
> always!) there is only a direct signal path. So, distance perception as the 
> product of the direct/indirect ratio doesn't seem quite the right formulation.
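[Peter's point about the direct/indirect ratio can be made concrete with the textbook diffuse-field model. A minimal Python sketch, with an arbitrary example room (the 200 m³ volume and 0.5 s RT60 are illustrative assumptions, not measurements): direct intensity falls as 1/r², the diffuse reverberant level is roughly distance-independent, and for an open sky with no reverberant field the ratio diverges - which is exactly why the formulation breaks down for overhead sources.]

```python
import math

def critical_distance(volume_m3, rt60_s):
    """Sabine-based critical distance: the range at which direct and
    reverberant energy are equal (omnidirectional source, diffuse field)."""
    return 0.057 * math.sqrt(volume_m3 / rt60_s)

def drr_db(distance_m, volume_m3, rt60_s):
    """Direct-to-reverberant ratio in dB. Direct intensity falls as 1/r^2
    while the diffuse reverberant level stays roughly constant, so the
    DRR drops ~6 dB per doubling of distance."""
    rc = critical_distance(volume_m3, rt60_s)
    return 20.0 * math.log10(rc / distance_m)

# Example room: 200 m^3, RT60 = 0.5 s.
for d in (1.0, 2.0, 4.0):
    print(f"{d} m: DRR = {drr_db(d, 200.0, 0.5):+.1f} dB")
```

[For a sky source with no reflecting bodies, the "reverberant" term is essentially absent, so the DRR is effectively infinite at any range - a cue that simply isn't available to scale distance with.]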
>
> These things need some decent experimentation, it seems to me
>
> Cheers
> ppl
>
>
> Dr. Peter Lennox
>
> School of Technology,
> Faculty of Arts, Design and Technology
> University of Derby, UK
> e: [email protected]
> t: 01332 593155
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On 
> Behalf Of Eric Carmichel
> Sent: 03 November 2012 18:54
> To: [email protected]
> Subject: [Sursound] Vestibular response, HRTF database, and more
>
> Greetings,
> Mostly through serendipity, I have had the pleasure and privilege of great 
> teachers. I studied recording arts under Andy Seagle (andyseagle.com) who 
> recorded Paul McCartney, Hall & Oates, and numerous others. My doc committee 
> included Bill Yost, who is widely known among the spatial hearing folks. And, 
> of course, I've learned a lot about Ambisonics from people on this list as 
> well as a plethora of technical articles.
>
> I recently sent an email to Bill with the following question/scenario. I 
> thought others might wish to give this thought, too, as it gets into HRTFs.
>
> There have been a lot of studies regarding localization in the transverse 
> (horizontal) plane. We also know from experiments how well (or poorly) we can 
> localize sound in the frontal and sagittal planes. By simply tilting someone 
> back 90 degrees, his/her ears shift to another plane. This is different from 
> shifting the loudspeaker arrangement to another plane because the 
> semicircular canals are now in a different orientation. If a circular speaker 
> array were set up in the coronal plane and the person was lying down, then 
> his/her ears would be oriented in such a way that the speakers now circle the 
> head in the same fashion as they would in the horizontal plane when the 
> person is seated or standing. It's a "static" vestibular change, and gravity 
> acting on the semicircular canals (and body) lets us know which way is up. 
> But do we have the same ability to localize when the body is positioned in 
> different orientations, even when the sources "follow" the orientation (as 
> is the case in the above example)? How about localization in low-g 
> environments (e.g. 
> space docking)? The question came to me while camping. I seem able to 
> pinpoint sounds quite well in the (normal) horizontal plane despite a skewed 
> HRTF while lying down (and somewhat above ground).
>
> On another (but related) topic, I have downloaded the HRTF data from the 
> Listen Project, and have been sorting the participants' morphological 
> features. I have this in an Excel spreadsheet, and am converting this to an 
> Access database. Using the data, one can pick an "appropriate" HRTF starting 
> with gross anatomical features (such as head size) and whittle it down to 
> minute features (such as concha depth or angle). I find HRTF discussions 
> interesting, but still argue that headphones and whole-body transfer 
> functions make a difference, too. Insert phones destroy canal resonance, 
> whereas an earcup with active drivers may have a large "equivalent" volume, 
> thus minimizing external meatus/earcup interaction (a mix and match of 
> resonances). Because of this, there can be no ideal HRTF, even when it 
> matches the listener.
>
> While listening to HRTF demos, the notion of auditory streaming and auditory 
> scenes came to mind. Some sounds were externalized, but other sounds of 
> varying frequencies, while emanating from the same sound source, appeared in 
> my head. The end result was that the externalized sounds provided a 
> convincing (or at least fun) illusion, but problems do persist. A stringent 
> evaluation of HRTF / binaural listening via headphones would require breaking 
> the sounds into bands and seeing if a sound's constituent components remain 
> outside of the head. When doing so, a brick-wall filter wouldn't be 
> necessary, but a filter that maintains phase coherency would be recommended. 
> The demo I refer to was that of a helicopter flying overhead. Though I 
> haven't done this (yet), it would be interesting to use FFT filtering to 
> isolate the turbine whine (a high-pitched sound) from the chopper's blades. 
> The high-pitched sound appeared to be in my head, whereas the helicopter as 
> a whole seemed externalized. Again, an individualized HRTF and different 
> phones may yield different 
> results. Side note: Be careful using FFT filtering--it can yield some 
> peculiar artifacts.
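[The phase-coherent band-splitting Eric asks for can be had without FFT filtering by running the same filter forward and then backward over the signal ("filtfilt"-style), which cancels the filter's phase shift. A minimal pure-Python sketch; the one-pole lowpass and its coefficient are arbitrary choices for illustration:]

```python
import math

def onepole_lowpass(x, alpha):
    """Simple one-pole IIR lowpass: y[n] = alpha*x[n] + (1-alpha)*y[n-1]."""
    y, state = [], 0.0
    for sample in x:
        state = alpha * sample + (1.0 - alpha) * state
        y.append(state)
    return y

def zero_phase_lowpass(x, alpha):
    """Forward-backward application of the same filter: the backward pass
    cancels the forward pass's phase shift, so each band keeps its timing.
    That matters for the proposed test - comparing which of a sound's
    constituent bands stay externalized shouldn't be confounded by
    filter-induced time shifts between bands."""
    forward = onepole_lowpass(x, alpha)
    backward = onepole_lowpass(forward[::-1], alpha)
    return backward[::-1]
```

[A single forward pass of this filter delays a low-frequency tone by roughly (1-alpha)/alpha samples; the forward-backward version leaves the tone's peaks where they were.]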
>
> I am hoping to use headtracking in conjunction with VVMic to model different 
> hearing aid and cochlear implant mics in space. This offers the advantage of 
> presenting real-world listening environments via live recordings to 
> study/demonstrate differences in mic polar patterns (at least first-order 
> patterns) and processing without the need for a surround loudspeaker system. 
> In fact, it's ideal for CI simulations because an actual CI user never gets a 
> pressure at the eardrum that then travels along the basilar membrane, 
> ultimately converted to nerve impulses. With VVMic and HRTF data, I should be 
> able to provide simulations of mics located on a listener's head and then 
> direct the output to one or both ears. This does not represent spatial 
> listening, but it does represent electric (CI) hearing in space. Putting a 
> normal-hearing listener in a surround sound environment with mock processors 
> and real mics doesn't work because you can't isolate the outside (surround) 
> sound from the intended simulation, even with EAR foam plugs and audiometric 
> insert phones. 
> VVMic and live recordings via Ambisonics are a solution to creating "electric" 
> listening in the real world. Again, I'm referring solely to CI simulations. 
> With the advent of electric-acoustic stimulation (EAS), more than one mic is 
> used per ear: One for the CI and a second for the HA. Combinations of polar 
> patterns can be created. Respective frequency responses and processing can be 
> sent to one or two ears (diotic and dichotic situations). One caveat for 
> using vocoding to mimic CIs is that the acoustic simulation (and therefore 
> stimulation) still necessitates a traveling wave along the normal-hearing 
> listener's basilar membrane. The time it takes to establish a wave peak is 
> not instantaneous (though compressional waves in the inner ear are 
> virtually instantaneous), and I believe a time-domain component to inner ear 
> (mechanical) action can't easily be excluded when using "acoustic" simulation 
> of CIs. I suppose I could look at data from BAERs and the Greenwood 
> approximation to account for the time-frequency interaction. Just some 
> thinking... and 
> ideas to share with others interested in hearing impairments.
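[The polar-pattern combinations Eric mentions all come from the standard first-order microphone equation, which is what makes the virtual-mic approach workable from B-format material. A minimal sketch (pure illustration of the maths, not VVMic's actual API):]

```python
import math

def first_order_pattern(a, theta):
    """First-order microphone response r(theta) = a + (1 - a)*cos(theta):
    a = 1 gives omni, a = 0.5 cardioid, a = 0 figure-of-eight.
    Any such pattern is a weighted sum of an omni (B-format W) and a
    figure-of-eight (B-format X) component, so CI and HA mics with
    different patterns can be derived from one Ambisonic recording."""
    return a + (1.0 - a) * math.cos(theta)

# A cardioid has full response on-axis and a null at the rear:
on_axis = first_order_pattern(0.5, 0.0)      # 1.0
rear = first_order_pattern(0.5, math.pi)     # 0.0
```

[Two virtual mics per ear - say a cardioid for the CI band and an omni for the HA band - are then just two choices of `a` applied to the same recording, with their outputs routed diotically or dichotically as needed.]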
>
>
> By the way, Teemko, if you're reading this, just wanted to let you know that 
> Bill Yost said he'd read your thesis over the weekend. I notice that Bill and 
> Larry Revit are in your references list. Larry isn't a fan of 
> Ambisonics--said to me in a phone communication that it sounds "tinny". I 
> suppose it does if one were to listen through laptop speakers or from poor 
> source material. Not sure what his source was.
> _______________________________________________
> Sursound mailing list
> [email protected]
> https://mail.music.vt.edu/mailman/listinfo/sursound
>



-- 
As of 1st October 2012, I have retired from the University, so this
disclaimer is redundant....


These are my own views and may or may not be shared by my employer

Dave Malham
Ex-Music Research Centre
Department of Music
The University of York
Heslington
York YO10 5DD
UK

'Ambisonics - Component Imaging for Audio'
