2010/5/26 arthur de conihout <arthurdeconih...@gmail.com>:
> I try to implement a real-time convolution module refreshed by the
> listener's head location (angle from a reference point). The result of
> the convolution by binaural filters (HRTFs) allows me to spatialize a
> monophonic wav file.
I suspect no one not closely involved with your subject can understand
this. From what you write later on, I guess binaural filters are some LTI
system?

> I got trouble with this, since my convolution doesn't seem to work
> properly:
> np.convolve() doesn't convolve the entire entry signal

Hmm,
http://docs.scipy.org/doc/numpy/reference/generated/numpy.convolve.html#numpy-convolve
claims that the convolution is complete. Can you give an example of what
you mean? Furthermore, I think the note there about
scipy.signal.fftconvolve may be of large use for you, when you are going
to convolve whole wav files?

> -> trouble with extracting numpy arrays from the audio wav filters and
> the monophonic entry
> -> trouble then with encapsulating the resulting array in a proper wav
> file ... it is not read by Audacity

Hmm, I worked one time with wavs using the wave module, which is a
standard module. I didn't deal with storing wavs. I attach the reading
module for you. It needs a module mesh2 to import, which I don't include
to save traffic. I think the code is understandable without it, and the
method .get_raw_by_frames() may already help solve your problem.

But I didn't really get what your aim is. As far as I understood, you
want to do something named "spatialise" with the audio, based on the
position of some person with respect to some reference point. What does
"spatialise" mean in this case? I guess it's not simply a delay for
creating a stereo impression? I guess it is something that also creates a
room impression, something like "small or large room"?

Friedrich
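To make the two points about np.convolve and fftconvolve concrete, here is a
minimal sketch (the signal and HRIR arrays are just random placeholders, not
Arthur's actual filters): the default mode='full' returns the complete
convolution of length len(x) + len(h) - 1, and scipy.signal.fftconvolve gives
the same result much faster for long signals, which is what the binaural case
needs once per ear.

    import numpy as np
    from scipy.signal import fftconvolve

    # np.convolve is complete: with the default mode='full' the output has
    # length len(x) + len(h) - 1, so nothing of the input is dropped.
    x = np.random.randn(1000)        # stand-in for the mono signal
    h = np.random.randn(128)         # stand-in for one HRIR
    y = np.convolve(x, h)            # mode='full' is the default
    assert len(y) == len(x) + len(h) - 1

    # For whole wav files, fftconvolve computes the same thing much faster.
    y_fft = fftconvolve(x, h)
    print(np.max(np.abs(y - y_fft)))  # only a small numerical difference

    # Binaural spatialization then amounts to convolving the mono signal
    # with the left and right HRIR for the current head angle and stacking
    # the two results into a stereo array.
    hrir_left = np.random.randn(128)   # placeholder HRIRs
    hrir_right = np.random.randn(128)
    left = fftconvolve(x, hrir_left)
    right = fftconvolve(x, hrir_right)
    stereo = np.column_stack((left, right))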
wav.py
Description: Binary data
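Since the attached wav.py is not reproduced here, a minimal standalone sketch
using only the standard wave module may still help with the Audacity problem.
The function and file names are just illustrative; the usual pitfall is
writing raw float samples, which have to be rescaled and converted back to
int16 before writing.

    import wave
    import numpy as np

    def read_mono_wav(path):
        # read a mono, 16-bit wav into a float array in [-1, 1)
        w = wave.open(path, 'rb')
        assert w.getnchannels() == 1 and w.getsampwidth() == 2
        rate = w.getframerate()
        frames = w.readframes(w.getnframes())
        w.close()
        return rate, np.frombuffer(frames, dtype=np.int16) / 32768.0

    def write_stereo_wav(path, rate, stereo):
        # normalize to avoid clipping after convolution, then back to int16
        stereo = stereo / np.max(np.abs(stereo))
        data = (stereo * 32767).astype(np.int16)
        w = wave.open(path, 'wb')
        w.setnchannels(2)
        w.setsampwidth(2)
        w.setframerate(rate)
        # a C-contiguous (n, 2) int16 array is already interleaved L R L R
        # in memory, so its raw bytes can be written directly
        w.writeframes(data.tobytes())
        w.close()

Feeding the stereo array from the fftconvolve sketch above into
write_stereo_wav should give a file that Audacity opens without complaint.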