On Sun, Apr 17, 2011 at 09:28:28AM +0800, Junfeng Li wrote:

> I am now wondering how to subjectively evaluate distance perception in
> virtual environments that might be synthesized using WFS or HOA
> (higher-order Ambisonics). In my experiments, sounds were synthesized at
> different distances and presented to listeners for distance
> discrimination. However, the listeners could not easily perceive the
> difference in distance between these sounds.

No surprise really. 

There is very little difference between the _direct_ sound of a source
at e.g. 2 m and one at 20 m. Except for *very* close sources, a static
receiver the size of a human head has almost no information from which
to detect the curvature of the wavefront, and hence the distance of the
source. In anechoic conditions it is nearly impossible to judge
distance, except again for very close sources, or by implicitly
assuming some standard loudness for the source, e.g. a human voice,
which has a strong correlation between loudness and timbre.
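To put a number on how weak the curvature cue is, here is a toy sketch
(my own geometry and numbers, not from this thread): model the ears as
two points 17.5 cm apart and compare the interaural time difference for
a point source at 45 degrees azimuth at several distances. Beyond a
metre or so the ITD has essentially converged to its plane-wave value,
so 2 m and 20 m are indistinguishable from the direct sound alone.

```python
import math

EAR_SPACING = 0.175  # m, assumed ear-to-ear distance
C = 343.0            # m/s, speed of sound

def interaural_delay(distance_m, azimuth_deg=45.0):
    """ITD (s) for a point source, from the two direct path lengths."""
    az = math.radians(azimuth_deg)
    sx, sy = distance_m * math.sin(az), distance_m * math.cos(az)
    half = EAR_SPACING / 2.0
    d_left = math.hypot(sx + half, sy)   # path to the nearer ear
    d_right = math.hypot(sx - half, sy)  # path to the farther ear
    return (d_left - d_right) / C

for r in (0.3, 2.0, 20.0):
    print(f"{r:5.1f} m : ITD = {interaural_delay(r) * 1e6:7.1f} us")
```

The 2 m and 20 m cases differ by well under a microsecond of ITD, far
below any audible threshold; only the 0.3 m case deviates noticeably.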

As others have already pointed out, distance perception in practice
depends almost entirely on the interaction of the sound source with its
environment: the level of the reverberation relative to the direct
sound, and the delays and levels of the early reflections. In a virtual
environment created by HOA or WFS you have to recreate those
artificially as well, otherwise the acoustics of the listening space
will dominate and the apparent distance of any reproduced sound will
simply be the distance to the speakers.
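The dominant cue above can be sketched in a few lines (my own numbers,
a simple diffuse-field model, not anything from this thread): the direct
level falls 6 dB per doubling of distance, while the diffuse reverberant
level stays roughly constant, so their ratio encodes distance. The
assumed "critical distance" (where direct and reverberant levels are
equal) is set to 1.4 m here.

```python
import math

CRITICAL_DISTANCE = 1.4  # m, assumed radius where direct == reverberant

def direct_to_reverb_db(r):
    """Direct-to-reverberant ratio in dB for a source at distance r.

    Direct sound follows 1/r; the reverberant field is taken as
    constant, so the ratio drops 6 dB per doubling of distance.
    """
    return 20.0 * math.log10(CRITICAL_DISTANCE / r)

for r in (0.7, 1.4, 2.8, 5.6, 11.2):
    print(f"{r:5.1f} m : D/R = {direct_to_reverb_db(r):+6.1f} dB")
```

This is the quantity a WFS or HOA rendering has to reproduce (together
with plausible early reflections) if listeners are to discriminate the
synthesized distances.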

Ciao,

-- 
FA

_______________________________________________
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound