Hello, 

Sampo mentioned that he heard about our demo at Aalto. Here are the title
and abstract of the demo, which we first showed at the AES 55th Conference
on Spatial Audio.


Demo 3: Head-mounted head-tracked audiovisual reproduction.

Olli Santala, Mikko-Ville Laitinen, Ville Pulkki and Olli Rummukainen.
Aalto University, Department of Signal Processing and Acoustics

Audiovisual scenes are reproduced with headphones and a head-mounted
display in this demonstration. The sound has been recorded with a real
A-format microphone, and it is reproduced using binaural DirAC, which
utilizes DirAC processing, virtual loudspeakers and head tracking. The
video has been recorded with the Ladybug3 camera, and it is displayed
using the Oculus Rift. The listeners are allowed to turn their head in all
directions. The auditory and the visual objects should match and be stable
in the world coordinate system. Moreover, the reverberation should be
perceived to be natural. The reproduced scenes include music, traffic, and
sports recorded both indoors and outdoors.



See a description of the audio rendering technique here:
Laitinen, M.-V., and Pulkki, V., "Binaural reproduction for directional
audio coding," IEEE Workshop on Applications of Signal Processing to
Audio and Acoustics (WASPAA'09), 2009.
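For anyone curious what the virtual-loudspeaker binauralization step looks like in outline, here is a minimal sketch (not the authors' implementation; the hrir_lookup callable and the yaw-only rotation are my simplifying assumptions -- a real system interpolates a measured HRIR set and handles full 3-D head orientation):

```python
import numpy as np

def rotate_yaw(az_deg, head_yaw_deg):
    """A source fixed in world coordinates moves opposite to the head
    in head-relative coordinates; wrap the result to (-180, 180]."""
    return (az_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

def binauralize(ls_signals, ls_azimuths_deg, head_yaw_deg, hrir_lookup):
    """Mix virtual-loudspeaker signals down to two ear signals.

    ls_signals      : (n_ls, n_samples) array, one row per loudspeaker
    ls_azimuths_deg : world-fixed azimuth of each virtual loudspeaker
    head_yaw_deg    : current head yaw from the tracker
    hrir_lookup     : hypothetical callable az_deg -> (hrir_L, hrir_R)
    """
    n_samples = ls_signals.shape[1]
    hrir_len = len(hrir_lookup(0.0)[0])
    out = np.zeros((2, n_samples + hrir_len - 1))
    for sig, az in zip(ls_signals, ls_azimuths_deg):
        rel_az = rotate_yaw(az, head_yaw_deg)       # head-relative direction
        h_l, h_r = hrir_lookup(rel_az)
        out[0] += np.convolve(sig, h_l)             # left-ear contribution
        out[1] += np.convolve(sig, h_r)             # right-ear contribution
    return out
```

Because the loudspeaker azimuths are world-fixed and only the head-relative direction is re-derived each update, the rendered scene stays stable in world coordinates as the head turns.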



To me this demo is really cool since the auditory objects are nicely
externalized, even in the field of vision. The trick could be that when
the subject perceives the space visually, they quickly adapt to the HRTFs
used in the system. We also update the head position at a rate of about
100 Hz, and correspondingly update the video and audio. This prevents
nausea, and also helps with the externalization of headphone audio.
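The update loop itself is simple in principle: each cycle (about 10 ms at 100 Hz) polls the tracker and pushes the new orientation to both renderers. A minimal sketch, where read_head_yaw, update_audio, and update_video are hypothetical callbacks standing in for the tracker and the two renderers:

```python
import time

UPDATE_RATE_HZ = 100            # tracker update rate mentioned above
PERIOD = 1.0 / UPDATE_RATE_HZ   # ~10 ms budget per cycle

def tracking_loop(read_head_yaw, update_audio, update_video, n_steps):
    """Poll head orientation and feed it to both renderers each cycle."""
    for _ in range(n_steps):
        t0 = time.monotonic()
        yaw = read_head_yaw()
        update_audio(yaw)   # e.g. rotate the virtual loudspeakers
        update_video(yaw)   # e.g. re-project the panoramic video view
        # sleep off whatever is left of the 10 ms budget
        time.sleep(max(0.0, PERIOD - (time.monotonic() - t0)))
```

Keeping audio and video on the same low-latency orientation update is what keeps the two modalities aligned; a lag in either one is what tends to cause nausea.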


-Ville



_______________________________________________
Sursound mailing list
[email protected]
https://mail.music.vt.edu/mailman/listinfo/sursound
