Re: [Sursound] Fwd: Non Mixer Spatializer Demo

2013-07-10 Thread Dave Malham
On 9 July 2013 01:03, Stefan Schreiber st...@mail.telepac.pt wrote:
> J. Liles wrote:
>
>> Still, during that demo, most of the time the crow was actually below
>> the horizon, because I automated its flight path rather carelessly by
>> clicking the mouse at random points on a Control Sequence in Non
>> Timeline (and the automation input is not bound by the top-only
>> constraint that the interface is).



Interestingly, a bird is a very bad subject for testing height and
depth, especially if the perception is that it is flying. For most
people, flying birds in real life are always above, and it is almost
impossible to shake that perceptual conviction off without other,
non-audio cues. A recording of nesting seabirds made by my ex-colleague
Tony Myatt, by sticking the Soundfield out on a pole from the top of
Bempton Cliffs (a bird sanctuary on the cliffs about 60 km from York),
could _not_ be made to sound anything but up, even though most of the
birds were down around the nesting sites.



> Two comments:
>
> - The representation of negative elevation is easily possible via
> headphones/binaural techniques.
>
> - Direct sound from below might be rare or not (but think about a walk
> in the woods wearing a prototype of the Oculus Rift and a head-mounted
> camera... :-D ), but reverberation from below is just normal.
> (Floor/ground reflections.)


It is very dependent on the type of music. For music genres that are
mostly presented on a stage or similar, there is probably no need for
down (or possibly even up!) as a panning location. But there's a whole
world of other things out there, from games to theatre and museums,
right through to electroacoustic composers, who would at one time or
another find it useful or even artistically necessary. One of the
limitations we most regretted having to accept in The Morning Line
sculpture was that we could not move sounds much lower than -30 degrees
from the horizontal. Nevertheless, we provided the composers with the
ability to pan sounds both above and below - and it was used.

  Dave




--
As of 1st October 2012, I have retired from the University.

These are my own views and may or may not be shared by the University.

Dave Malham
Honorary Fellow, Department of Music
The University of York
York YO10 5DD
UK

'Ambisonics - Component Imaging for Audio'


Re: [Sursound] Fwd: Non Mixer Spatializer Demo

2013-07-10 Thread Michael Chapman

> It is very dependent on the type of music. For music genres that are
> mostly presented on a stage or similar, there is probably no need for
> down (or possibly even up!) as a panning location.

You probably speak for the majority (about 'up'), but I like listening
to a 'tiered' orchestra in FOA/periphony... and accept that that is
probably a personal idiosyncrasy.

Michael

Michael




Re: [Sursound] Fwd: Non Mixer Spatializer Demo

2013-07-09 Thread Stefan Schreiber

Matthew Palmer wrote:


> Its state: it's just a thing in my head. I bought an Oculus and it
> still hasn't gotten here, but I was picturing using maybe the Razer
> Hydra to make a tool for placing sounds with your hands. Ultimately,
> tools could be built where people build songs; it'd be really sweet to
> see little kids making songs that way. I emailed Ico because I'm
> broke/in debt and don't have an audio set-up or any programming
> experience. I found a person here in Richmond, VA to help with
> information coming from/to Max/MSP. I'm trying to make a movie with my
> friends that's kind of idealized for Oculus viewers, so we started
> talking with people building games/3D artifacts at the school and
> making friends; so through them & Ico & whoever else is over there at
> VT, I figure we can get something done. I also thought maybe people
> who were already building similar things in open source might be
> interested in helping.



Sounds like you are a really creative person!

All the best!

Stefan


Re: [Sursound] Fwd: Non Mixer Spatializer Demo

2013-07-09 Thread Matthew Palmer
Yeah, I hope it works out - thank you.

On Tuesday, July 9, 2013, Stefan Schreiber st...@mail.telepac.pt wrote:
> Sounds like you are a really creative person!
>
> All the best!
>
> Stefan



Re: [Sursound] Fwd: Non Mixer Spatializer Demo

2013-07-08 Thread Michael Chapman


> [...] and 3) when YouTube transcodes the video it adds an annoying
> click about once per second (I believe this is due to a mismatch
> between the camera's framerate and YouTube's expectations). I would
> love to provide a Theora/Vorbis screen-capture video instead, but,
> alas, I cannot find any tools that can capture screen activity and
> record audio via JACK in sync. Anyway, the poor quality of the video
> is not due to a lack of effort.

Is putting it on your own site (HTML5 video tag: <video>...</video>)
out of the question... ?

Good luck, anyway.

Michael


Re: [Sursound] Fwd: Non Mixer Spatializer Demo

2013-07-08 Thread Dave Malham
Hi,
   This looks good - can't try it at the moment as I am away from my
Linux machine, but I do have a question: the user manual says "The
spatialization control may be visualized as moving the sound source
across the surface of a hemispherical dome enclosing the listener", but
this implies only one hemisphere (presumably the upper) is in use, as I
can't see any way of switching to the lower hemisphere.

  Dave

On 7 July 2013 23:26, J. Liles malnour...@gmail.com wrote:

> Cross-posting this here, as I don't know how many of you read LAU...
>
> -
>
> As many of you already know, Non Mixer (http://non.tuxfamily.org) has
> since its inception provided some cushy features for dealing with
> Ambisonics mixes.
>
> Lately I've been working on extending these features to the next
> level.
>
> This is a quick demonstration of the new Spatializer module and the
> associated Spatialization Console. What this does is provide synthetic
> distance cues as well as a cushy interface for placing sounds in
> virtual space.
>
> http://youtu.be/GVm5Jd1WDWw
>
> I'm hoping to have these new features ready for testing soon--but my
> free time is very limited.
>
> As always, donations are welcome and very much appreciated.
> (http://non.tuxfamily.org/wiki/Donations)
>
> A note about the video:
>
> Every time I try to make one of these screencasts, I run into the same
> problem: nothing works. Thus, I had to aim my video camera at the
> screen and record audio from my computer via line-out. This is
> problematic for a number of reasons: 1) I can only record stereo this
> way, 2) my camera only samples audio at 32 kHz, and 3) when YouTube
> transcodes the video it adds an annoying click about once per second
> (I believe this is due to a mismatch between the camera's framerate
> and YouTube's expectations). I would love to provide a Theora/Vorbis
> screen-capture video instead, but, alas, I cannot find any tools that
> can capture screen activity and record audio via JACK in sync. Anyway,
> the poor quality of the video is not due to a lack of effort.
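What "synthetic distance cues" usually means in practice is a gain that
falls off with distance, a low-pass filter standing in for air
absorption, and a propagation delay. A minimal Python sketch of that
generic recipe (illustrative only - not Non Mixer's actual DSP, and the
constants are arbitrary):

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s

    def distance_cues(signal, distance_m, sr=48000):
        # Generic synthetic distance cues for a mono signal (sketch).
        signal = np.asarray(signal, dtype=float)
        # 1) Inverse-distance gain, clamped so near sources don't blow up
        out = signal / max(distance_m, 1.0)
        # 2) Air absorption: one-pole low-pass, cutoff falls with distance
        cutoff = max(20000.0 / (1.0 + 0.05 * distance_m), 500.0)
        alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / sr)
        y = np.empty_like(out)
        acc = 0.0
        for i, x in enumerate(out):
            acc += alpha * (x - acc)
            y[i] = acc
        # 3) Propagation delay of distance_m / c seconds
        delay = int(round(distance_m / SPEED_OF_SOUND * sr))
        return np.concatenate([np.zeros(delay), y])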






Re: [Sursound] Fwd: Non Mixer Spatializer Demo

2013-07-08 Thread J. Liles
On Mon, Jul 8, 2013 at 6:26 AM, Dave Malham dave.mal...@york.ac.uk wrote:
> [...] the user manual says "The spatialization control may be
> visualized as moving the sound source across the surface of a
> hemispherical dome enclosing the listener", but this implies only one
> hemisphere (presumably the upper) is in use, as I can't see any way of
> switching to the lower hemisphere.


Hi Dave. Don't worry--you have time to get to a Linux machine as the
interface and effects in the demo have not been released yet. What's
described in the documentation has been in Non Mixer for 5 years or
so. Documenting the recent changes is still on my TODO list.

That being said, the new interface currently shares the old one's
property of representing only the top hemisphere. I've played around
with multiple views to allow manipulation of negative elevation, but I
decided that it was too confusing for the user, especially considering
A) the extremely small number of people with periphonic rigs and B)
the even smaller number of *musical* scenarios where a sound source
should emanate from below the listener. Still, during that demo, most
of the time the crow was actually below the horizon, because I
automated its flight path rather carelessly by clicking the mouse at
random points on a Control Sequence in Non Timeline (and the automation
input is not bound by the top-only constraint that the interface is).
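The parenthetical is the key point: the top-only limit lives in the 2-D
control widget, not in the B-format signal set, and a first-order
encode accepts any elevation, negative included. A minimal sketch,
assuming FuMa channel ordering and weights (not Non Mixer's actual
code):

    import numpy as np

    def foa_encode(signal, azimuth_deg, elevation_deg):
        # Encode a mono signal to first-order B-format (FuMa W, X, Y, Z).
        az = np.radians(azimuth_deg)
        el = np.radians(elevation_deg)  # may be negative: below horizon
        w = signal / np.sqrt(2.0)            # FuMa -3 dB W weighting
        x = signal * np.cos(az) * np.cos(el)
        y = signal * np.sin(az) * np.cos(el)
        z = signal * np.sin(el)              # sign of Z alone says up/down
        return w, x, y, z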

If you care to share your use-case for negative elevations--I'm ready
and willing to be convinced of their utility. I was just planning to
ignore the issue until such time as I reimplement the interface using
OpenGL--where the ability to move the camera and display more visual
cues to its orientation would make manipulating points over the entire
sphere more usable.

Although the demo video shows the source positions being automated,
that is not actually a very likely use case IMHO. It was merely done
to highlight the effect. The actual use case is positioning
instruments on a stage (such as an orchestra).
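With an encoder like the foa_encode sketch above, that stage use case
reduces to a handful of static encodes; for example (layout and stem
names hypothetical):

    import numpy as np

    # Azimuths in degrees across the front stage, elevation 0 (stage level)
    stage = {"violins": 30.0, "violas": 10.0, "celli": -15.0, "basses": -35.0}
    stems = {name: np.zeros(48000) for name in stage}  # stand-in mono stems
    mix = [np.zeros(48000) for _ in range(4)]          # W, X, Y, Z buses
    for name, az in stage.items():
        mix = [bus + ch for bus, ch in zip(mix, foa_encode(stems[name], az, 0.0))]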


Re: [Sursound] Fwd: Non Mixer Spatializer Demo

2013-07-08 Thread Stefan Schreiber

J. Liles wrote:

> That being said, the new interface currently shares the old one's
> property of representing only the top hemisphere. [...] Still, during
> that demo, most of the time the crow was actually below the horizon
> [...] (and the automation input is not bound by the top-only
> constraint that the interface is).



Two comments:

- The representation of negative elevation is easily possible via
headphones/binaural techniques.

- Direct sound from below might be rare or not (but think about a walk
in the woods wearing a prototype of the Oculus Rift and a head-mounted
camera... :-D ), but reverberation from below is just normal.
(Floor/ground reflections.)
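A single image-source bounce already puts energy below the listener. A
minimal geometry sketch (a hypothetical helper, not taken from any tool
in this thread):

    import numpy as np

    def direct_and_floor_bounce(src, ear, floor_z=0.0, absorption=0.3):
        # Distance and elevation (deg) of the direct path and floor bounce.
        src, ear = np.asarray(src, float), np.asarray(ear, float)
        image = src.copy()
        image[2] = 2.0 * floor_z - src[2]  # mirror the source in the floor
        def path(p):
            v = p - ear
            d = np.linalg.norm(v)
            return d, np.degrees(np.arcsin(v[2] / d))
        direct = path(src)                 # at or above the horizon
        bounce = path(image)               # negative elevation: from below
        return direct, bounce, 1.0 - absorption  # bounce gain before 1/r

For a source 2 m up and ears at 1.7 m, the bounce arrives from roughly
-35 to -60 degrees over horizontal offsets of a few metres - exactly
the region a top-only panner cannot represent.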



> If you care to share your use-case for negative elevations--I'm ready
> and willing to be convinced of their utility. [...]



See above!

Best,

Stefan Schreiber

P.S.: You need the Oculus camera add-on (TM) to avoid running into the
trees, at least during a VR-assisted walk in the woods. Even better if
you stayed at home...


Re: [Sursound] Fwd: Non Mixer Spatializer Demo

2013-07-08 Thread J. Liles
On Mon, Jul 8, 2013 at 5:03 PM, Stefan Schreiber st...@mail.telepac.pt wrote:

> Two comments:
>
> - The representation of negative elevation is easily possible via
> headphones/binaural techniques.


In that case, I hazard to guess that the number of people with the time
and skills to convert B-Format to an HRTF of their own head is similar
to the number of people with periphonic Ambisonics rigs. Seriously
though, can you point me to some free software for generating HRTF
output from B-Format? Because I could use some.
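For reference, the usual free-software recipe here is two steps: decode
the B-format to a set of virtual loudspeakers covering the whole
sphere, then convolve each feed with a measured HRIR pair and sum. A
minimal sketch of that generic approach, where hrirs is a hypothetical
table of impulse-response pairs (e.g. loaded from a public set such as
the MIT KEMAR measurements):

    import numpy as np
    from scipy.signal import fftconvolve

    def foa_to_binaural(w, x, y, z, speaker_dirs, hrirs):
        # Virtual-loudspeaker binaural render of first-order B-format.
        # speaker_dirs: unit vectors (sx, sy, sz), e.g. a cube layout
        #               covering BOTH hemispheres.
        # hrirs: equal-length (left_ir, right_ir) pairs, one per speaker.
        outs_l, outs_r = [], []
        for (sx, sy, sz), (ir_l, ir_r) in zip(speaker_dirs, hrirs):
            # Crude projection ("sampling") decode of FuMa-weighted FOA
            feed = (np.sqrt(2.0) * w + x * sx + y * sy + z * sz) / len(hrirs)
            outs_l.append(fftconvolve(feed, ir_l))
            outs_r.append(fftconvolve(feed, ir_r))
        return np.stack([np.sum(outs_l, axis=0), np.sum(outs_r, axis=0)])

With equal-length HRIRs every convolution comes out the same length, so
the per-speaker results can simply be summed.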


> - Direct sound from below might be rare or not (but think about a walk
> in the woods wearing a prototype of the Oculus Rift and a head-mounted
> camera... :-D ), but reverberation from below is just normal.
> (Floor/ground reflections.)


Excellent point; however, the panning of a sound source is only
incidentally related to the direction of the reflections. Nothing stops
a reverb from doing what it does, regardless of whether or not a source
can be panned below the equator.

But anyway, the purpose here is not to craft virtual walks in the woods
(I'm sure there are other tools for that; Blender's new 3D sound
objects come to mind). The purpose of this work is to produce music.


Re: [Sursound] Fwd: Non Mixer Spatializer Demo

2013-07-08 Thread Stefan Schreiber

J. Liles wrote:

> In that case, I hazard to guess that the number of people with the
> time and skills to convert B-Format to an HRTF of their own head is
> similar to the number of people with periphonic Ambisonics rigs.
> Seriously though, can you point me to some free software for
> generating HRTF output from B-Format? Because I could use some.



No, I can't (for the moment); it is also not my obligation.

I only wanted to point out that binaural - in any form - doesn't
involve any upper/lower-hemisphere restrictions.

Whether you use personal or common HRTF datasets doesn't really matter.
HT (head tracking) is also irrelevant here, because binaural is
full-sphere. (Like Ambisonics.)


(Speaking about HT: Be aware that HT for video glasses and VR devices -
this was the Oculus Rift example - is becoming more and more
mainstream. Sensor/gyroscope devices are widely available, and relative
GPS would allow movement tracking in real or virtual space. We audio
people are just a bit behind, probably because surround sound looks
esoteric, and you won't use some $100 sensors for advanced
headphones...)
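Head tracking does compose naturally with Ambisonics, which is part of
why cheap sensors matter here: the B-format scene is simply
counter-rotated against the tracked head angle before any decode, and a
yaw rotation touches only X and Y. A minimal first-order, yaw-only
sketch (pitch and roll omitted):

    import numpy as np

    def rotate_foa_yaw(w, x, y, z, head_yaw_deg):
        # Counter-rotate a first-order B-format scene against head yaw.
        a = np.radians(-head_yaw_deg)  # scene turns opposite to the head
        xr = x * np.cos(a) - y * np.sin(a)
        yr = x * np.sin(a) + y * np.cos(a)
        return w, xr, yr, z            # W and Z are invariant under yaw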


I might look over the next few days to see if I can find some (public)
B-format-to-binaural/HRTF material that might interest you. And yet I
hope that our Ambisonics specialists here will provide the information
far faster than I ever could.






 


> Excellent point; however, the panning of a sound source is only
> incidentally related to the direction of the reflections. [...]
>
> But anyway, the purpose here is not to craft virtual walks in the
> woods [...]. The purpose of this work is to produce music.

 

I am aware that musical sources usually don't come from below, but this
is maybe related to the fact that (most) musical sources actually come
from the front. (Please, no new discussions about DWMM; this is just an
observation by some stupid musician. O:-) I have been in jazz clubs,
been in flamenco caves etc. etc., and mostly... )

Beyond music: if you imagine a Formula 1 game, the car and gear noises
should clearly come from the lower hemisphere, not the upper one! ;-)
(Unless you lost control and the car turned over... I hope you are a
good driver!)



Best,

Stefan


Re: [Sursound] Fwd: Non Mixer Spatializer Demo

2013-07-08 Thread Matthew Palmer
Would you be interested in helping build tools for the Oculus Rift? I
pitched a proposal for a tool to Ico Bukvic at Virginia Tech
(http://www.icat.vt.edu/) and he was interested in helping. I can email
you more. - matt


On Mon, Jul 8, 2013 at 9:12 PM, Stefan Schreiber st...@mail.telepac.pt wrote:

> I only wanted to point out that binaural - in any form - doesn't
> involve any upper/lower-hemisphere restrictions. [...]
>
> (Speaking about HT: Be aware that HT for video glasses and VR devices
> - this was the Oculus Rift example - is becoming more and more
> mainstream. [...])
>
> Best,
>
> Stefan



Re: [Sursound] Fwd: Non Mixer Spatializer Demo

2013-07-08 Thread Matthew Palmer
Its state: it's just a thing in my head. I bought an Oculus and it
still hasn't gotten here, but I was picturing using maybe the Razer
Hydra to make a tool for placing sounds with your hands. Ultimately,
tools could be built where people build songs; it'd be really sweet to
see little kids making songs that way. I emailed Ico because I'm
broke/in debt and don't have an audio set-up or any programming
experience. I found a person here in Richmond, VA to help with
information coming from/to Max/MSP. I'm trying to make a movie with my
friends that's kind of idealized for Oculus viewers, so we started
talking with people building games/3D artifacts at the school and
making friends; so through them & Ico & whoever else is over there at
VT, I figure we can get something done. I also thought maybe people who
were already building similar things in open source might be interested
in helping.



On Mon, Jul 8, 2013 at 11:28 PM, Stefan Schreiber st...@mail.telepac.pt wrote:

> Matthew Palmer wrote:
>
>> Would you be interested in helping build tools for the Oculus Rift?
>> [...]
>
> They will have to think more about the audio output(s) of the Oculus
> Rift, at least for the CE version. (I would go for some mixed
> approach: include some earbuds, but also the necessary interfaces for
> serious headphones, surround processors etc.)
>
> What is the current state of development?
>
> Best,
>
> Stefan
>
> P.S.: And because of 1000 Hz HT, we would finally have an HT research
> device for demonstrating convincing surround reproduced and listened
> to on headphones - which I proposed some (ages) years ago. (The point
> was that the existing solutions from, say, Smyth Research or Beyer
> were way too expensive for any normal or CE market, then. Times are
> changing. Of course, we don't know yet whether the University of York
> would invest in some $300 high-tech HT device like the Oculus Rift,
> but maybe some of the students would accept that their OR is being
> used for scientific reasons as a 3D audio reproduction
> tracker/processor when they are not playing games :-D )
>
> P.S. 2: And of course, you could do the same for less than $300, but
> then you would have to design your own headphone without the (double)
> LCD display.
>
>> Initial prototypes used a Hillcrest 3DoF
>> (http://en.wikipedia.org/wiki/3DoF) head tracker that is normally
>> 120 Hz, with a special firmware that John Carmack requested which
>> makes it run at 250 Hz, tracker latency being vital due to the
>> dependency of virtual reality's realism on response time. The latest
>> version includes Oculus' new 1000 Hz Adjacent Reality Tracker that
>> will allow for much lower latency tracking than almost any other
>> tracker. It uses a combination of 3-axis gyros
>> (http://en.wikipedia.org/wiki/Gyroscope), accelerometers
>> (http://en.wikipedia.org/wiki/Accelerometer), and magnetometers
>> (http://en.wikipedia.org/wiki/Magnetometer), which make it capable
>> of absolute (relative to earth) head orientation tracking without
>> drift. [20][25]
>
> (http://en.wikipedia.org/wiki/Oculus_Rift)
>
> I could imagine how I would design my own HT headphone... (Top-secret
> NSA/GCHQ tag: Don't tell Ambisonics researchers.) :-D



