Hi, thanks for your answer (*why don't you compute all possible versions beforehand*).

That's exactly what I'm doing at the moment, since I'm using a database of 187 filters (azimuth and elevation). I would love to reduce the angular step to under 5°, but that would require producing around 1,000 files. For a 3 MB original sound file, it becomes huge. (Two numpy sketches of the fade-in/fade-out and the block-wise filtering ideas from my original message are appended below the quoted thread.)

Thanks

Arthur

2010/6/3 David Huard <[email protected]>

> Hi Arthur,
>
> I've no experience whatsoever with what you are doing, but my first thought
> was: why don't you compute all possible versions beforehand and then
> progressively switch from one version to another by interpolating between
> the different versions? If the resolution is 15 degrees, there aren't that
> many versions to compute beforehand.
>
> David
>
> On Thu, Jun 3, 2010 at 6:49 AM, arthur de conihout <[email protected]> wrote:
>
>> Hello everybody,
>>
>> I'm fighting with dynamic binaural synthesis (I can give more hints on it
>> if necessary).
>>
>> I would like to modify the sound being played according to the listener's
>> head position. I have special (binaural) filters per head position that I
>> convolve in real time with a monophonic sound. When the head moves, I want
>> to be able to play the next-position version of the generated stereo sound,
>> but starting from the playing position (sample index in the unpacked data)
>> of the previous one. My problem is to avoid audible artefacts due to the
>> transition.
>>
>> At the moment I'm only using *short audio wav* files that I play, and
>> repeat entirely if necessary, because my positioning resolution is 15°. The
>> evolution of the head angle leaves enough time for the whole process to run
>> (get position -> choose corresponding filter -> convolution -> play sound).
>>
>> *For long audio wav* files I could do a fade-in/fade-out from the
>> transition point, but I have no idea how to implement it (I'm using
>> audiolab and numpy for the convolution).
>>
>> Another solution could be dynamic filtering: when the position changes, I
>> convolve with the next position's filter from the point where playback must
>> stop for the previous one (which in practice won't stop immediately, so the
>> next convolution can operate on enough frames), in accordance with the
>> filter frame length (all the filters are impulse responses of the same
>> length, 128).
>>
>> The "drawing" below is my mental representation of what I'm trying to
>> implement; I apologise in advance for how crude it is:
>>
>> t0_________t1__t2__t3_____________________________t=len(stimulus)
>>     monophonic sound (time and sample position in the unpacked data)
>>
>> C1C1C1C1C1C1C1C1C1C1C1...
>>     running convolution with filter 1 corresponding to position 1
>>     (e.g. angle from reference = 15°)
>>
>> P1_______
>>     sound playing 1
>>
>>        ^
>>     position 2 detection (angle = 30°)
>>
>> C2C2C2C2C2C2C2C2C2C2C2...
>>     running convolution with filter 2
>>
>> P1_____x
>>     keep playing 1 so that convolution 2 can operate on enough frames (latency)
>>
>> FIFO
>>     fade in / fade out
>>
>> P2_________
>>     sound playing 2
>>
>> I don't know if I have made myself very clear.
>>
>> If anyone has suggestions or has already implemented such dynamic
>> filtering, I would be very interested.
>>
>> Cheers
>>
>> Arthur
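For the fade-in/fade-out question quoted above, here is a minimal numpy sketch of the idea: both position versions are rendered by convolving the mono signal with their respective left/right impulse responses, and at the transition sample the output is cross-faded from the old version to the new one over a short window. The function names (`binaural`, `crossfade_switch`), the linear fade window and the default fade length are my own assumptions for illustration, not an existing audiolab or numpy API.

```python
import numpy as np

def binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a left/right impulse-response pair (length 128),
    trimmed to the length of the input, and stack into a stereo array."""
    left = np.convolve(mono, hrir_left)[:len(mono)]
    right = np.convolve(mono, hrir_right)[:len(mono)]
    return np.column_stack((left, right))

def crossfade_switch(mono, filt_old, filt_new, switch_sample, fade_len=1024):
    """Render the old-position and new-position stereo versions and cross-fade
    between them over `fade_len` samples starting at `switch_sample`."""
    old = binaural(mono, *filt_old)   # filt_old = (hrir_left, hrir_right) for position 1
    new = binaural(mono, *filt_new)   # filt_new = (hrir_left, hrir_right) for position 2

    # Clip the fade so it never runs past the end of the signal.
    fade_len = min(fade_len, len(mono) - switch_sample)
    fade_out = np.linspace(1.0, 0.0, fade_len)[:, None]  # applied to the old version
    fade_in = 1.0 - fade_out                              # applied to the new version

    out = old.copy()
    s, e = switch_sample, switch_sample + fade_len
    out[s:e] = old[s:e] * fade_out + new[s:e] * fade_in   # transition region
    out[e:] = new[e:]                                      # continue with the new version
    return out
```

Usage would be something like `crossfade_switch(mono, (h15_l, h15_r), (h30_l, h30_r), switch_sample=playhead)`, where `playhead` is the sample index reached when the new head position is detected. A fade of roughly 10 to 50 ms (at 44.1 kHz, about 440 to 2200 samples) is a common choice to hide the switch; the exact value is something to tune by ear.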
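For the "dynamic filtering" alternative from the original message, one way to phrase it is block-wise overlap-add convolution where the impulse response may change from one block to the next: each block's convolution tail is carried into the following block, so a filter change only affects new output and the transition stays reasonably smooth. This is only a sketch under my own assumptions (the function name `overlap_add_dynamic`, the block size, and the `filter_for_block` callback are hypothetical); it processes one ear at a time and is not a real-time implementation.

```python
import numpy as np

def overlap_add_dynamic(mono, filter_for_block, n_fir=128, block=2048):
    """Block-wise convolution where the impulse response may change per block.

    `filter_for_block(i)` returns the length-`n_fir` impulse response to use
    for block `i` (e.g. looked up from the current head-tracker position)."""
    n_out = len(mono) + n_fir - 1
    out = np.zeros(n_out)
    tail = np.zeros(n_fir - 1)          # convolution tail carried into the next block

    for i, start in enumerate(range(0, len(mono), block)):
        x = mono[start:start + block]
        h = filter_for_block(i)         # filter chosen for the current head position
        y = np.convolve(x, h)           # length len(x) + n_fir - 1
        y[:n_fir - 1] += tail           # overlap-add the previous block's tail
        out[start:start + len(x)] = y[:len(x)]
        tail = y[len(x):]               # save the new tail for the next block

    out[len(mono):] = tail              # flush the final tail
    return out
```

In a real-time setting the loop body would run once per audio callback and once per ear, with the filter chosen from the latest head-tracker reading instead of precomputed per block; a short cross-fade between blocks rendered with the old and new filters can be added on top if the plain filter switch is still audible.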
_______________________________________________
NumPy-Discussion mailing list
[email protected]
http://mail.scipy.org/mailman/listinfo/numpy-discussion
