Hello Everyone,
Since my first post (Greetings from a newcomer...), I have received many kind
and informative emails. Because there may be confusion regarding my cochlear
implant (CI) research, here’s some additional background: Part of the impetus
for my work stems from an attempt to show (objectively) that two cochlear
implants provide more benefit than one. From a normal-listener perspective,
this seems almost obvious: Two ears help us localize a sound source, and this,
in turn, helps us segregate a signal from noise. CI users have a lot of
difficulty listening in noise: Even a +5 dB SNR makes speech comprehension
difficult for them. Research to date hasn’t shown significant improvement in
word or sentence comprehension ability in noise with binaural implantation.
Individuals with two CIs say that there’s a marked improvement in their sense
of “space” (and sense of well-being) over a single implant, but it has been
difficult to quantify this improvement.
Consequently, insurance companies (at least in the US) won’t pay for two
implants. The old-school method of testing in noise largely ignores surround
sound or “real-world” scenarios, so I am attempting to improve the way we test
hearing-impaired listeners in noise. Typical speech-comprehension measures
place multi-talker babble or speech-weighted noise in one loudspeaker and the
speech (or target) signal in another loudspeaker: This arrangement hardly
replicates real-world scenarios.
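For what it's worth, here is how one might scale a noise track against a fixed-level target to hit a chosen SNR (e.g., the +5 dB condition mentioned above). This is just a generic sketch, not any particular audiometric protocol; the function name is my own:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Return speech + noise, with noise scaled so the mix sits at snr_db.

    SNR is defined here as 20*log10(rms(speech) / rms(scaled noise)).
    """
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    # Gain that brings the noise RMS to (speech RMS) / 10^(SNR/20)
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20.0))
    return speech + gain * noise
```

A test protocol would, of course, also need to calibrate absolute presentation level at the listening position, which this sketch says nothing about.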
In case readers are unfamiliar with CI listening, it’s probably a lot like
listening through a 6-channel noise-band vocoder. Examples of implant
simulations can be found at www.hei.org/research/shannon/simulations.html.
Although background noise isn't included in those simulations, it too would
pass through the vocoder, so there's probably a lot of energetic masking (as
opposed to informational masking) going on when using a limited number of
channels (channels corresponding to the number of electrodes along the
implanted electrode array). If the noise and signal are spatially separated,
and if there's still a sense of "direction" at opportune moments, then two CIs
should help in noise.
Incidentally, typical CIs have 22 electrodes, but only so many electrodes are
"active" at any given time; otherwise, there would be a lot of current smearing
among the electrodes. I think a 6-channel vocoder is the most reasonable
approximation when simulating CI listening (research supports this).
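For readers who'd like to try the simulation themselves, a bare-bones noise-band vocoder of the sort described above can be sketched in a few lines: filter the speech into bands, extract each band's envelope, and use the envelopes to modulate band-limited noise. The band edges, filter orders, and 50 Hz envelope cutoff below are my own illustrative choices, not from any published CI-simulation recipe:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocoder(signal, fs, n_bands=6, f_lo=100.0, f_hi=8000.0,
                  env_cutoff=50.0):
    """Replace each analysis band's fine structure with band-limited noise."""
    rng = np.random.default_rng(0)
    # Logarithmically spaced band edges between f_lo and f_hi
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)
        # Envelope: rectify, then low-pass smooth
        envelope = np.clip(sosfiltfilt(env_sos, np.abs(band)), 0.0, None)
        # Band-limited noise carrier, normalized to unit RMS
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(signal)))
        carrier /= np.sqrt(np.mean(carrier ** 2)) + 1e-12
        out += envelope * carrier
    return out
```

With n_bands set to 6 this roughly matches the simulations linked above; dropping to 3 or 4 bands makes the intelligibility cost of a sparse electrode array very audible.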
The post that said that Ambisonics resorts to some “psychoacoustic trickery”
was very well taken, and addresses one of my preliminary concerns regarding
first-order Ambisonics. But what I hope to do at the outset is to use a
variety of representative recorded background noises (ranging from quiet
coffee shops to loud restaurants) to investigate speech comprehension in
surround noise with single, binaural, and hybrid CI patients. I could also vocode the A-
or B-formatted signals as well as the speech signals for simulated CI
listening with normal-hearing listeners. To be clear: Initial tests will use
Ambisonic recordings only to provide “real-world” background noise, not to
provide the target or speech signal. The speech signal will be recorded on an
independent (monaural) track and reproduced through its own loudspeaker.
Auralizing the speech signal may or may not add much in the way of realism
because the intensity of reflected or
reverberant sound from a nearby talker (typically well within 1 m) would be
quite small. But the background noise should be realistic in level, and its
wave field created by an Ambisonic arrangement (even first-order) should
hopefully be more realistic than the old-school method using a single
loudspeaker. Sadly, a lot of hearing aid and CI studies are done with only two
loudspeakers, one for speech and one for noise: I just don't think this reveals
more than the effects of energetic or informational masking (depending on the
noise) using two, albeit spatially separated, monaural signals.
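Since the background-noise bed would arrive as A- or B-format material, it may help to spell out what first-order B-format actually carries. A minimal encoder for a mono source, using the traditional FuMa channel equations (W attenuated by 1/sqrt(2)), might look like the sketch below; the function name and parameters are my own illustration:

```python
import numpy as np

def encode_bformat(mono, azimuth_deg, elevation_deg=0.0):
    """Encode a mono signal to first-order B-format (W, X, Y, Z), FuMa style."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono / np.sqrt(2.0)              # omnidirectional, -3 dB
    x = mono * np.cos(az) * np.cos(el)   # front-back figure-of-eight
    y = mono * np.sin(az) * np.cos(el)   # left-right figure-of-eight
    z = mono * np.sin(el)                # up-down figure-of-eight
    return np.stack([w, x, y, z])
```

Vocoding each of these four channels independently before decoding to the loudspeaker array is one possible route for the normal-hearing simulation mentioned above, though I haven't yet verified how the vocoder's nonlinearity interacts with the decode.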
I have read a number of articles on first- and higher-order ambisonics, and I
realize that I have a lot to learn. Certainly, the "best" setup for my research
would be a way of creating a sound field at the listening position that's
equivalent to a real-world situation, but this isn’t easy to achieve in many
research environments. For example, binaural recordings and headphone playback
might give "accurate" pressures at the ears, but headphones are certainly out
of the question when it comes to CIs and most HA devices. Actually, I've never
experienced a sense of “open space” when listening to binaural recordings or
simulations from HRTF IRs (including the often-cited IRs made by Gardner et al.
at MIT during the 1990s). I own ER-3A insert phones, Sennheiser HDA 200
audiometric headphones, and my work-horse AKG K240 studio 'phones--but I've yet
to hear a binaural recording that replicates live sound--practically everything
gives the usual "in-the-head"
effect or is lateralized (versus localized).
Of course, I'm also very interested in Ambisonics from a media and music
production viewpoint--this allows for creative freedom and more opportunities
to see which speaker arrangement sounds "right" to me. Again, I very much
appreciate the chance to communicate with those who have considerable
experience with Ambisonics (and Ambiophonics, too). If nothing else, I'll learn
a lot from researching Ambisonics--whether it be live sound recording
techniques, what NOT to do in the future, and a bit more about psychoacoustics.
Sincerely,
Eric (www.cochlearconcepts.com)
_______________________________________________
Sursound mailing list
[email protected]
https://mail.music.vt.edu/mailman/listinfo/sursound