Re: [Sursound] distance perception in virtual environments

2011-04-28 Thread Helmut Wittek
Hello Junfeng,

it's no easy task to evaluate distance perception under anechoic conditions
(where, of course, it hardly exists).
We did this during my PhD research on WFS.
Have a look at our paper:

Wittek, H., Kerber, S., Rumsey, F. and Theile, G.
Spatial perception in Wave Field Synthesis rendered sound fields: Distance of 
real and virtual nearby sources
Preprint #6000, AES 116th Convention, Berlin, 2004

or my thesis on my website:
http://www.hauptmikrofon.de/

Good luck,
best regards,
Helmut Wittek


-----Original Message-----
From: sursound-boun...@music.vt.edu [mailto:sursound-boun...@music.vt.edu]
On behalf of Junfeng Li
Sent: Sunday, 17 April 2011 03:28
To: Surround Sound discussion group
Subject: [Sursound] distance perception in virtual environments

Dear list,

I am now wondering how to subjectively evaluate distance perception in
virtual environments which might be synthesized using WFS or HOA
(high-order ambisonics). In my experiments, the sounds were synthesized at
different distances and presented to listeners for distance discrimination.
However, the listeners cannot easily perceive the difference in distance
between these sounds.

Can anyone share some ideas or experience with distance perception
experiments, or share some references on this issue?

Thank you so much.

Best regards,
Junfeng


Re: [Sursound] distance perception in virtual environments

2011-04-28 Thread Martin Leese
Helmut Oellers oell...@syntheticwave.de wrote:

 2011/4/26 Dave Malham d...@york.ac.uk

   On 24/04/2011 19:11, Helmut Oellers wrote:
...modern computers are also clever. Today nothing is incalculable if we
know the formula and all the variables.

 That's a BIG assumption - and given the essentially chaotic (in the
 mathematical sense) nature of the Universe, wrong. We are now pretty certain
 that nothing is that predictable and that that idea's basically (old)
 Science Fiction - we have moved from E. E. "Doc" Smith's Lensman universe
 (where ultimately intelligent beings could predict everything because they
 knew the complete starting conditions and laws of the Universe) to the
 Discworld universe of Terry Pratchett, where one flap of a Quantum Weather
 Butterfly's *** wings can change the course of the entire Universe (and
 confound even the Gods).

 Hello Dave,

 What you are describing I would consider to be the "Heisenberg uncertainty
 principle", which says that the closer we look at things, the less we can
 discover. Accordingly, in the quantum world randomness exists and really is
 not computable. However, in the macro world of whole air molecules, the
 conditions are describable.

No, not the Heisenberg uncertainty principle;
just, as Dave stated, chaos.  At times, the
weather system gets itself into a chaotic state.
The motion of the planets is also thought to be
chaotic.  These are macro.

This example of the weather system gave rise
to the (unsubstantiated) claim that the flap of a
butterfly’s wings in Brazil can set off a tornado
in Texas.  (The location of the butterfly and its
effects vary.)  This very nice example was then
purloined and mangled by Terry Pratchett, who
introduced a spurious reference to Quantum
Theory.

Regards,
Martin
-- 
Martin J Leese
E-mail: martin.leese  stanfordalumni.org
Web: http://members.tripod.com/martin_leese/


Re: [Sursound] distance perception in virtual environments

2011-04-28 Thread Robert Greene


Actually, the butterfly flap thing is not really good either.
In chaos, things do not cause other things. The system is
essentially noncausal.
This is a tricky point. But if a system depends unstably
on its initial state, it makes no real sense to say that it
depends on its initial state at all in any detail.

The weather has large-scale stable aspects--it is almost always warmer in
the summer than in the winter, for example. But the details of the weather
are (it is currently believed) unstable. They are not really caused by
anything in any reasonable sense.

This is in fact not completely detached from quantum uncertainty
because if a system is unstable then it can obviously be knocked about
by quantum level changes--since it can be knocked about by arbitrarily
small changes of any sort. One merges into the other.

Also, there is no reason at all why a quantum uncertainty cannot
have macro effects; cf. Schrödinger's cat and many other examples.


Time for work. More on this later (if anyone cares).

Robert



Re: [Sursound] distance perception in virtual environments

2011-04-24 Thread Helmut Oellers
   ...modern computers are also clever. Today nothing is incalculable if we
know the formula and all the variables. There is nothing mysterious about
audio: the complete sound field could be calculated; the only problem is the
huge number of variables. In principle, we are already able to calculate any
wave front of the source and any of its reflections in the recording room.
Wave Field Synthesis provides the approach for handling the problem: the
procedure can synthesize the complete spatial distribution of all wave
fronts. In principle, all reflections can also be restored correctly in
time, level and direction, at least in the horizontal plane of the
loudspeaker rows. The really disturbing component has always remained, as
with all other audio playback, the additional playback room acoustics,
which deliver unwanted reflections.
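
To make the delay-and-gain part of this concrete, here is a minimal Python
sketch (an illustration only, not a complete WFS driving function; the
array geometry and the 1/sqrt(r) amplitude taper are assumptions, and the
spectral pre-filter and tapering window of real WFS are omitted):

import numpy as np

C = 343.0  # speed of sound in m/s

def wfs_point_source_feeds(source_pos, speaker_positions, fs):
    # Per-loudspeaker delay (samples) and gain for a virtual point source
    # behind a linear array: delay = propagation time from the virtual
    # source to that loudspeaker, gain falls off with that distance.
    r = np.linalg.norm(speaker_positions - source_pos, axis=1)  # metres
    delays = np.round(r / C * fs).astype(int)                   # samples
    gains = 1.0 / np.sqrt(np.maximum(r, 0.1))                   # illustrative 1/sqrt(r) taper
    return delays, gains

# 16 loudspeakers spaced 0.2 m along the x axis; virtual source 1.5 m behind the array
speakers = np.stack([np.arange(16) * 0.2, np.zeros(16)], axis=1)
delays, gains = wfs_point_source_feeds(np.array([1.5, -1.5]), speakers, fs=48000)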

With WFS, however, we have a chance of avoiding that problem. All we need to
do is include the playback room properties in the synthesis. In this way it
becomes possible to subtract the additional detours that individual wave
fronts take in the playback room. No conventional procedure will ever be
able to do that, because the direct wave, the first reflections and the
reverberation merge inseparably in the transmission channels. Thus the
playback room unavoidably remains the disturbing component in the
transmission chain. There is no chance of true spatial audio that way, and
no chance of reproducing the source distance correctly in the traditional
way.


Regards Helmut
www.holophony.net





  I think rooms are a poor substitute, and very recent on evolutionary
 timescales, for the predictable reflections one gets in a forest. You need
 the simulated forest (sort of both uniform and random) for an accurate
 guess of the start time. Then you delay the direct sound arrival time from
 there, as well as decreasing its amplitude proportional to 1/t (where t is
 the time-of-flight from start time to arrival at the listener)... if I
 remember what I tried to do. If you live in a room then expect errors, but
 the same principle applies!
 We can't and don't determine the direction and distance of a sound with
 only two ears. We use an infinite 3D array. We just don't know the precise
 details of the ever-changing array. It is a very clever trick that evolution
 has come up with!





Re: [Sursound] distance perception in virtual environments

2011-04-20 Thread Helmut Oellers
Hi David,

you are not alone in your insights. A few single discrete reflections are
the most important factor for estimating source distance.
There is research by Helmut Wittek which showed that playing the
reverberation from four different directions is absolutely sufficient. We
cannot use the direction of the wave fronts in the reverberation tail to
determine the position of the source; in the recording room, too, the
reverberation arrives from all possible directions.

The first reflections are another matter. Their delay time and direction are
the most important factors for establishing the source position, which
includes its distance and the size impression of the recording room. Such
single reflections cause deep comb-filter effects and change the perception
considerably. For the reverberation, on the other hand, what Floyd Toole
sometimes says applies: the more reflections there are, the less disturbing
they are (as far as I remember his words correctly).

All we need for correct distance reproduction is to restore a few single
reflections from their correct starting points, and the correct relation
between the direct wave and the reverberation.
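
As a rough Python sketch of that last point (illustration only; the
reflection delays and gains below are invented values, not taken from
Wittek's experiments): a dry signal is mixed with a handful of discrete
early reflections, each restored at its own delay, while the diffuse
reverberation would be added separately at a level set by the desired
direct-to-reverberant ratio.

import numpy as np

FS = 48000
C = 343.0  # speed of sound in m/s

def add_early_reflections(direct, extra_path_m, gains):
    # Mix a dry signal with a few discrete early reflections.  Each
    # reflection travels 'extra_path_m' metres further than the direct
    # sound and is scaled by its own gain.  In a real renderer each
    # reflection would also be panned to its own direction.
    out = np.copy(direct)
    for path, gain in zip(extra_path_m, gains):
        delay = int(round(path / C * FS))
        if delay < len(direct):
            out[delay:] += gain * direct[:len(direct) - delay]
    return out

dry = np.random.randn(FS)  # 1 s of noise as a stand-in source
wet = add_early_reflections(dry, [3.0, 5.0, 8.0], [0.5, 0.4, 0.3])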

Regards Helmut
www.holophony.net



Re: [Sursound] distance perception in virtual environments

2011-04-19 Thread dw



Hi List,
Just popped in.. It's been a while!

IMO it is a combination of time-of-flight and the inverse square law,
where t=0 is a virtual point in time determined by the brain as an
intercept, by plotting a function of the intensity of (primarily)
transverse reflections against time.  Fortunately it is not necessary to
work out how the brain might do this. One needs to concentrate on
maximising the availability and accuracy of the information that would
be needed to make such a calculation possible, without making too much
muddy reverb in the process.  Mono reverb does not seem to play much,
or possibly any, part in this. It seems to be extracted in some way from
larger ITDs and ILDs, i.e. transverse discrete reflections. It took me
several years to work all this out, and nobody seems to have
independently come to the same conclusion in the last decade or so... so
it must be wrong. At least it is free and in the public domain now! My
Heli.wav on Audio and Three Dimensional Sound Links* (long gone) was a
product of precisely this method of distance synthesis.
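
For what it's worth, the delay-and-attenuate part of that is easy to state
in Python (a sketch under the assumption of a 1/r amplitude law, i.e.
inverse-square in intensity; the reflection-based estimate of t=0 itself is
not modelled here):

import numpy as np

FS = 48000
C = 343.0        # speed of sound in m/s
REF_DIST = 1.0   # distance (m) at which the gain is defined as 1.0

def direct_sound_at_distance(signal, distance_m):
    # Delay the dry signal by its time of flight and attenuate it by 1/r.
    # Reflections/reverb are added separately and provide the reference
    # against which the extra delay and attenuation are heard.
    delay = int(round(distance_m / C * FS))
    gain = REF_DIST / max(distance_m, REF_DIST)
    out = np.zeros(len(signal) + delay)
    out[delay:] = gain * signal
    return out

near = direct_sound_at_distance(np.random.randn(FS), 2.0)   # 2 m away
far = direct_sound_at_distance(np.random.randn(FS), 20.0)   # 20 m: ~58 ms later, 26 dB down re 1 m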


Regards,
David Wareing.


Re: [Sursound] distance perception in virtual environments

2011-04-18 Thread Martin Leese
Richard Lee rica...@justnet.com.au wrote:

 You must simulate at least 2 things.
...
 You have to simulate early reflections and a reverb pattern appropriate to
 source distance.  MAG has a paper on this under Distance Panners from an
 idea by Peter Craven.

MAG's paper is:
M.A. Gerzon, "The Design of Distance Panpots",
Preprint 3308, 92nd Audio Engineering Society Convention, Vienna
(1992 Mar.)
(Simulating distance effects in directional reproduction.)

A commercialisation of this was the TrueVerb
product from Waves.

Regards,
Martin
-- 
Martin J Leese
E-mail: martin.leese  stanfordalumni.org
Web: http://members.tripod.com/martin_leese/


Re: [Sursound] distance perception in virtual environments

2011-04-17 Thread Markus Noisternig
Hi, 

Gavin Kearney et al. presented their work on "Depth perception in interactive 
virtual acoustic environments using higher order ambisonic soundfields" at the 
Ambisonics 2010 symposium in Paris; the article is available 
online at http://ambisonics10.ircam.fr/drupal/?q=proceedings/o6

Best, 
Markus

On 17 Apr 2011, at 19:38, Dave Hunt wrote:

 Hi,
 
 
 Change in amplitude with distance should be perceptible fairly easily, but on 
 its own would just sound the same but quieter, or louder. High frequency 
 absorption by the air is only really perceptible when the distance is fairly 
 large, though this effect could be exaggerated for artistic purposes. The 
 lateness of arrival of sound from distant objects is not directly perceptible 
 unless there is something visible (e.g. lightning and thunder).
 
 Reverberation definitely gives perceptible distance effects. More distant 
 sources are more reverberant. The amplitude of the direct signal should 
 decrease with distance (inverse square law, or some similar law), while the 
 amplitude of the reflected and reverberant signal would remain fairly 
 constant or decrease less rapidly with distance than that of the direct 
 signal. It is the ratio of direct to reverberant sound that is important.
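
A minimal Python sketch of that ratio (loosely in the spirit of Chowning's
approach mentioned below, not his exact formulation; the 1/sqrt(r) law for
the reverberant send is an illustrative assumption):

import numpy as np

def distance_gains(distance_m, ref_dist=1.0):
    # Direct level falls as ~1/r, the reverberant send as ~1/sqrt(r),
    # so the direct-to-reverberant ratio decreases as the source recedes.
    r = max(distance_m, ref_dist)
    return ref_dist / r, ref_dist / np.sqrt(r)

for d in (1, 2, 5, 10, 20):
    g_dir, g_rev = distance_gains(d)
    print(f"{d:4d} m: direct {20*np.log10(g_dir):6.1f} dB, "
          f"reverb {20*np.log10(g_rev):6.1f} dB, "
          f"D/R {20*np.log10(g_dir/g_rev):6.1f} dB")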
 
 John Chowning's 1971 paper "The Simulation of Moving Sound Sources" is a good 
 early consideration of how to synthesise distance.
 
 Of course the reported result will depend on the listener, who may not be 
 used to analysing sound for these effects.
 
 Ciao,
 
 Dave
 


Re: [Sursound] distance perception in virtual environments

2011-04-17 Thread jim moses
That's an interesting question. The environment you're working in for
synthesis could matter quite a bit. That is, if you're working in, or
simulating, an environment with little reverberation, it is harder to judge
distance, since the direct-to-reflected energy ratio is an important cue. The
other important cue is timbre detail, especially high frequencies. But this
requires that the listener be familiar with the sound source in order to
discriminate. Try testing with spoken voice.

I can't think of any research off the top of my head (especially for
multi-channel environments). It is certainly well known that controlling
high frequencies and the direct/reflected ratio is important for distance
perception in stereo mixing, but even there it is usually a relative, or
comparative, judgment of one sound source appearing vaguely 'behind'
another, not so much the absolute judgment that you might want for a
virtual environment.

jim




-- 
Jim Moses
Technical Director/Lecturer
Brown University Music Department and M.E.M.E. (Multimedia and Electronic
Music Experiments)


Re: [Sursound] distance perception in virtual environments

2011-04-17 Thread Ralph Glasgal
For relatively nearby distance detection, such as a buzzing bee, whispering, 
or conversation (versus more distant sources such as in a concert hall), one 
needs to deliver interaural level differences on the order of 10 to 20 dB with 
the corresponding ITDs of up to 700 microseconds.  (If the sources and speakers 
are relatively centered then we can ignore the pinna distance detection 
problem.)  At the moment I believe only the Choueiri BACCH dummy head recording 
and crosstalk cancellation method can routinely deliver this magnitude of ILD 
over the full range of frequencies.  If you are synthesizing the ILD in 
your virtual signals then you don't need to use a dummy head or an Ambiophone.  
Of course, this ILD seems to apply only for distances to sources at the sides 
of the head but in practice extreme XTC and thus real binaural ITD provides for 
proximity at all frontal angles in the horizontal plane as in everyday 
hearing.    
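
As a sanity check on the ITD figure, Woodworth's spherical-head
approximation gives roughly that number at 90 degrees azimuth (a sketch;
the head radius is a typical value, and ILDs are not computed here because
they are strongly frequency dependent and need a head model or HRTF set):

import numpy as np

C = 343.0             # speed of sound in m/s
HEAD_RADIUS = 0.0875  # metres, a typical adult value

def woodworth_itd(azimuth_deg):
    # Interaural time difference for a distant source, spherical-head model:
    # ITD = a/c * (theta + sin(theta)), with theta the azimuth in radians.
    az = np.radians(azimuth_deg)
    return HEAD_RADIUS / C * (az + np.sin(az))

print(f"ITD at 90 deg: {woodworth_itd(90.0) * 1e6:.0f} microseconds")  # about 660 us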
 
RACE, if carefully implemented with directional nearfield speakers, can get up 
to about 10 dB or more ILD and you might try this since it is easier (cheaper) 
than using any of the other crosstalk cancelling or WFS or HOA methods.  There 
is no question that Ambiophonic users report enhanced depth perception when 
listening to ordinary music or the commercially available earphone type 
binaural recordings but you may want more than this for what you are doing so 
you should tweak the normal Ambiophonic methodology to optimize ILD capture and 
reproduction.
 
Ralph Glasgal
www.ambiophonics.org    
