This is an interesting turn to the conversation, and I just wish this wasn't my 
busiest time of year.

Eric, what you propose is interesting - and I've always been interested in 
differences in the cognitive processing of reverberant material - or, more 
broadly, indirect sound: that which is (naturally) caused by 'sources' but 
does not directly emanate from them.

It is known that various deficits in Auditory Scene Analysis are associated 
with a range of developmental cognitive 'disorders' - in schizophrenia (I've a 
reference somewhere, after corresponding with a medical researcher in Sweden), 
in autism spectrum disorders, dyslexia, bipolar disorder, some dementias and 
probably others.

Your thesis that there may be significant differences in individuals' 
capacities to cognitively cleave 'source content' from reflected sound seems 
jolly plausible. On the one hand, we have blind-from-an-early-age folk such as 
Daniel Kish, who can echolocate to a remarkable extent; on the other, we have 
many (of us!) with age-related deficits that make it much harder to sort out 
'cluttered' scenes. Hearing aids are of very little help in the latter case, 
and may often impede performance.

It also makes little sense that we try to measure the 'intelligibility' of 
spaces without a decent reckoning of the spatial character of reverb - it is 
usually measured as a mono (or rather, non-spatial) effect. Yet we know that 
echo suppression is superior in the binaural case to the monaural case. There 
must therefore be something in the spatial nature of reverberation that normal 
hearing can use to de-conflate (I made that up) the physically conflated 
direct and indirect signals.

Finally, although binaural technologies could be used to explore much of what 
you want to investigate, it has to be said that control of the perceptible 
range and depth of field in synthesised soundfields (even with personalised 
HRTFs) is still not quite adequate.

It seems to me, though, that you could get fairly usable soundfields by 
constructing higher-order ambisonic fields by synthetic means - recording the 
individual components and assembling the field. This way, by slicing it right, 
you could even have some experimental control over the relative levels of 
sources and 3-D ambience.
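Something like the following sketch (first order shown for brevity - the same 
pattern extends to higher orders with higher spherical harmonics; all the 
signal names are illustrative stand-ins):

import numpy as np

def encode_fuma(sig, azimuth_deg, elevation_deg=0.0):
    # Encode a mono signal to first-order B-format (FuMa W, X, Y, Z).
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = sig / np.sqrt(2.0)             # omnidirectional component
    x = sig * np.cos(az) * np.cos(el)  # front-back figure-of-eight
    y = sig * np.sin(az) * np.cos(el)  # left-right figure-of-eight
    z = sig * np.sin(el)               # up-down figure-of-eight
    return np.stack([w, x, y, z])

fs, n = 48000, 48000
rng = np.random.default_rng(1)
speech = rng.standard_normal(n) * 0.1          # stand-in for a dry source
ambience = rng.standard_normal((4, n)) * 0.05  # stand-in for a B-format bed

# Independent experimental control of source and ambience levels:
src_db, amb_db = 0.0, -12.0
field = (10 ** (src_db / 20) * encode_fuma(speech, azimuth_deg=30)
         + 10 ** (amb_db / 20) * ambience)

Because the source and the ambience bed are separate until the final sum, you 
can vary their relative levels (and the source direction) trial by trial.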

In fact, I've a notion that what you're doing could be used not only for 
testing purposes but for training purposes - you could build an environment in 
which people could be taught to improve their performance in reverberant 
surroundings. That would be quite exciting.
Good luck with the project, and do feel free to ask for help!
regards
ppl
Dr. Peter Lennox

School of Technology,
Faculty of Arts, Design and Technology
University of Derby, UK
e: [email protected]
t: 01332 593155
________________________________________
From: [email protected] [[email protected]] On Behalf 
Of Eric Carmichel [[email protected]]
Sent: 06 June 2012 19:37
To: [email protected]
Subject: [Sursound] Red is blue & sideways is straight ahead

Hello All,
First, many thanks for taking the time to read this. This may be one of my 
better attempts at communicating what I’m trying to do.
I very much appreciate and respect all the input regarding human perception (re 
prior posts / the sound of vision).

Professor Robert Greene wrote: *...But right now, no one can know what anyone 
else experiences except in some structural sense.* I fully agree, but we 
(experimenters, psychologists) would have to provide the same physical 
stimulus for participants to agree on what *red* is. This means that light 
reflecting off the *red* object contains the electromagnetic wavelengths 
requisite for stimulating the retinal cones (and rods too?) and eliciting a 
perception of the colour red (or the light itself could be *red* by physical 
definition). The same goes for audio stimuli.

I believe it would be interesting to study how the hearing impaired *hear* 
reverberation. Have you listened to the Scottish prayer example that is often 
used in classroom demonstrations? This so-called “ghoulies and ghosties” 
demonstration (found on the “Harvard tapes”) has become something of a 
classic. The recording is of a hammer striking a brick, followed by an old 
Scottish prayer read by Dr. Stanford Fidell. Playing the recording backwards 
focuses our attention on the echoes.

Practically no one reports hearing echoes when a transient sound is produced 
in a small (though reverberant) space. The echoes are not *heard*, even though 
the reflected sound may arrive as much as 30 to 50 ms after the direct sound. 
The Scottish prayer demonstration makes the point that these echoes do exist 
and are appreciable in level. Our hearing mechanism somehow manages to 
suppress the late-arriving reflections, and they go unnoticed (at least for 
the majority of us).
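For anyone who wants to hear the effect for themselves, here is a toy 
reconstruction in Python (the delay times and gains are illustrative, not 
taken from the original tape):

import numpy as np

fs = 48000
sig = np.zeros(int(0.2 * fs))
sig[0] = 1.0                               # the 'hammer' transient
for delay_ms, gain in [(32, 0.6), (41, 0.5), (48, 0.4)]:
    # Discrete reflections arriving 30-50 ms after the transient.
    sig[int(delay_ms * fs / 1000)] += gain

forward = sig          # precedence effect suppresses the echoes
backward = sig[::-1]   # echoes now precede the transient and become audible
# To listen, write both to disk, e.g. with the soundfile package:
# import soundfile as sf
# sf.write("forward.wav", forward, fs)
# sf.write("backward.wav", backward, fs)

Played forward, the file sounds like a single click; reversed, the 
reflections lead the transient and stand out clearly.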

There is reason to believe that hearing-impaired persons have greater 
difficulty suppressing reverberation (a central processing issue, not 
necessarily peripheral organ dysfunction?). Hearing and consciously perceiving 
these echoes could, then, have a deleterious effect on word-recognition 
ability. But without providing the same physical stimulus to the 
hearing-impaired listener, can we determine the magnitude of the effect? If 
the recording of the hammer (a transient) is perceived as the same regardless 
of whether it is played in reverse, we can make inferences regarding echo 
suppression. But if the recording used for one population (normal-hearing 
listeners) is not identical to the recording used to study a different 
population (e.g. hearing-impaired listeners), what inferences can we make 
about the latter’s perception under reverberant conditions? A recording / 
playback system that includes echoes coming from multiple directions could 
provide additional insight (and real-world validity).

All I’ve been saying is that the one variable that can be controlled is the 
physical stimulus. Stimuli that represent real-world scenarios have more 
external validity than tightly controlled sounds made up of monaural buzzes, 
clicks or tones. Similarly, it’s relatively easy to build and program a robot 
that can navigate a virtual world built around well-defined colors, blocks and 
shapes; understanding how we navigate the real (complex) world requires more 
complex stimuli (e.g. Rodney Brooks’ robots successfully navigate difficult 
terrain without a priori information about the environment). We will never 
know what these robots are *thinking* (some don’t even run on code), but we 
can still measure their performance and then find ways to improve the design. 
I wish to improve hearing aid and cochlear implant design; consequently, I 
need physical stimuli that represent the world outside the laboratory. This 
has been my impetus for exploring Ambisonics. Naturally, I’m also greatly 
enjoying the musical / artistic aspects of Ambisonics.

Kind regards,
Eric