Don't know if any of this is useful to you, but you never
know...

A few years ago I wrote a spatializing audio mixer
for a project at NIST.  It took streaming monaural
sources for any number of objects, and spatialized
them for each observer in a distributed VR world.

The VR world was in Java, but the audio spatialization
was all implemented in an external C++ streaming audio
mixer module.

The spatialization turned out to be relatively simple
for two channels.  It had to be, because we only had
a few weeks to design and implement it.
I used a KEMAR head-related transfer
function (HRTF), and derived a table of attenuation and latency factors for
each ear, indexed by the 3D orientation of the audio source
relative to the listener.
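To give a feel for the shape of such a table, here is a hypothetical sketch in Java. Everything in it is illustrative, not the NIST code: it assumes an azimuth-only index, a crude sine-based head-shadow gain curve, and a simple interaural-time-difference delay model, where a real KEMAR-derived table would hold measured data.

```java
// Hypothetical per-ear attenuation/latency table indexed by azimuth.
// Bin count, gain curve, and delay model are made-up placeholders.
public class EarTable {
    static final int BINS = 72;                  // 5-degree azimuth bins
    static final double HEAD_RADIUS = 0.09;      // metres (assumed)
    static final double SPEED_OF_SOUND = 343.0;  // m/s
    // per bin: {leftGain, rightGain, leftDelaySec, rightDelaySec}
    static final double[][] TABLE = new double[BINS][4];
    static {
        for (int i = 0; i < BINS; i++) {
            double az = Math.toRadians(i * (360.0 / BINS)); // 0 = dead ahead
            double s = Math.sin(az);                        // +1 = hard right
            double itd = (HEAD_RADIUS / SPEED_OF_SOUND) * s; // crude ITD
            TABLE[i][0] = 0.6 - 0.4 * s;        // left gain (head shadow)
            TABLE[i][1] = 0.6 + 0.4 * s;        // right gain
            TABLE[i][2] = Math.max(0.0, itd);   // far ear lags the near one
            TABLE[i][3] = Math.max(0.0, -itd);
        }
    }
    // Look up the per-ear factors for a source at the given azimuth.
    static double[] lookup(double azimuthDeg) {
        double wrapped = ((azimuthDeg % 360.0) + 360.0) % 360.0;
        return TABLE[(int) (wrapped / (360.0 / BINS)) % BINS];
    }
}
```

At mix time each mono source sample would then be scaled by the tabulated gain and read through a per-ear delay line of the tabulated length.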

Stepping back a little, there are three primary cues for
spatialization:
        a) attenuation/volume - due to ear directionality, and audio path
being blocked by your body
        b) latency - due to differing audio travel paths
        c) frequency filtering - due to differing transmission
characteristics of your ear/head/chest and air.
For my application I decided to ignore (c), on the grounds that I couldn't
perform frequency filtering fast enough: for each of N avatars you need to
process N-1 audio streams.  In truth, implementation time was also a
deciding factor.

Attenuation by itself is a very weak audio positional cue.  Adding
latency is a big improvement.  When you combine those
two audio cues with visual cues, the effect is rock solid.
We were using immersive stereo video on Immersadesk and CAVE
hardware, so the visual cues (i.e. the avatars) were accurately placed.

One of the really nice side-effects of implementing latency is that you get
doppler shifts for rapidly travelling objects.  That's a fun thing to demo.
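A toy illustration of why that falls out for free (all numbers here are assumed, and the zero-crossing frequency estimate is just for demonstration): if the propagation delay shrinks as a source approaches, the read pointer into the recorded stream outruns the write pointer, which resamples the stream upward in pitch.

```java
// Toy demonstration that a time-varying delay line produces doppler.
// We "record" a sine, then read it back with a read pointer that
// advances (1 + slope) source samples per output sample, i.e. the
// propagation delay shrinks as the source approaches. Counting
// positive-going zero crossings recovers the perceived frequency.
public class DopplerDemo {
    static int observedFreq(double freq, double delaySlope) {
        int sr = 48000, n = sr;            // analyse one second of output
        double d0 = 200.0;                 // initial delay in samples
        int len = (int) (d0 + n * (1.0 + delaySlope)) + 2;
        double[] src = new double[len];
        for (int i = 0; i < len; i++)
            src[i] = Math.sin(2 * Math.PI * freq * i / sr);
        int crossings = 0;
        double prev = src[(int) d0];
        for (int i = 1; i < n; i++) {
            double pos = d0 + i * (1.0 + delaySlope); // moving read pointer
            int p = (int) pos;
            double frac = pos - p;
            // linear interpolation between source samples
            double sample = src[p] * (1.0 - frac) + src[p + 1] * frac;
            if (prev <= 0.0 && sample > 0.0) crossings++;
            prev = sample;
        }
        return crossings;                  // ~ perceived frequency in Hz
    }
}
```

A 2% delay slope corresponds to a source approaching at about 2% of the speed of sound, roughly 7 m/s, and shifts a 1 kHz tone up to about 1020 Hz.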

I have no experience with 8-channel spatialized audio, and I haven't
thought about how you might generalize for >2 channels.
But I'm just starting on adding spatialized audio to the Xj3D
project, so these are questions I'll be looking at in the
weeks and months ahead.

Good luck,
Guy.



At 09:45 PM 7/11/2002 +1100, James Sheridan wrote:
Hi all,

The main thing I am concerned with is spatialisation, not synthesis.
Being able to play a point sound over an 8 speaker array (diamond
layout) and have it come from the same physical location as its virtual
location.   From what I understood, Java3D only really has an interface
for this at the moment, and we are meant to use the AudioDevice and
AudioEngine classes to write our "device specific code".

Will there be a general spatialisation algorithm built into Java3D? If
so, what is the timeline, and what are the plans for this - will we be
able to simply select the number of speakers and their positioning, and
then just map the sound channel for each speaker to the different
channels on our sound card? Or will a different approach be taken?

Regards James

Hi, Sheridan.
This is a complex subject...
As far as I know, Java3D uses JavaSound as its main audio resource, and
has some classes for using sound within a 3D environment.

You may find more resources on the JavaSound email list:
search : http://archives.java.sun.com/archives/javasound-interest.html
subscribe : http://java.sun.com/products/java-media/sound/list.html

  JavaSound works with sampled (wave) and MIDI sounds. The MIDI sound
engine runs in software mode, using a high quality General MIDI sound bank,
but it can also use your own sound bank, and you can set JavaSound to use
your MIDI hardware devices. I think only a few expensive sound cards can
match the JavaSound MIDI software quality. And there is not much CPU load.
 The sampled sounds are sent to your hardware sound device. The data can be
obtained and processed as you wish, so you can make your own wave sound
synthesizer. There are some algorithms for sound synthesis using techniques
such as Frequency Modulation synthesis, Additive synthesis, Subtractive
synthesis, Granular synthesis, and trigonometric waves, among others.
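As a small example of the "process the data as you wish" idea, here is a minimal two-operator FM render. The carrier, modulator, and modulation index values are arbitrary, and the byte layout assumes 16-bit little-endian mono PCM - the kind of buffer you could then write to a javax.sound.sampled SourceDataLine opened with a matching AudioFormat.

```java
// Minimal two-operator FM synthesis sketch, rendered to 16-bit
// little-endian mono PCM. Frequencies and modulation index are
// arbitrary example values.
public class FmSketch {
    static byte[] render(double carrierHz, double modHz,
                         double index, int sampleRate, double seconds) {
        int n = (int) (sampleRate * seconds);
        byte[] out = new byte[2 * n];     // 2 bytes per 16-bit sample
        for (int i = 0; i < n; i++) {
            double t = (double) i / sampleRate;
            // FM: modulator phase-modulates the carrier
            double s = Math.sin(2.0 * Math.PI * carrierHz * t
                    + index * Math.sin(2.0 * Math.PI * modHz * t));
            int v = (int) Math.round(s * 32000.0); // leave a little headroom
            out[2 * i]     = (byte) (v & 0xff);    // low byte first
            out[2 * i + 1] = (byte) ((v >> 8) & 0xff);
        }
        return out;
    }
}
```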

there is a lot more ... ;)

Alessandro Borges

=============================

From: "James Sheridan"
Subject:  3D Sound Driver/Engines
I was wondering if anyone out there has had any experience writing sound
drivers/engines for Java 3D?   If so, would it be possible to post how
you went about it, along with any tips/hints? Or would you please
get in contact with me.
I'm wondering how you went about it - software or hardware? And if software,
what algorithms were used?
With fingers crossed: I'm more specifically trying to write an 8 channel
driver for a Hammerfall card, if anyone has one?
===========================================================================
To unsubscribe, send email to [EMAIL PROTECTED] and include in the body
of the message "signoff JAVA3D-INTEREST".  For general help, send email to
[EMAIL PROTECTED] and include in the body of the message "help".
