Re: [linux-audio-dev] 3D fft analysis program
On Monday, 15 May 2006 16:50, Esben Stien wrote: [Baudline] is now released as GPL software, though not available for download, or so it seems. Really??? The website doesn't say so ... Uwe
Re: [linux-audio-dev] 3D fft analysis program
On Monday, 15 May 2006 20:31, Esben Stien wrote: Uwe Koloska [EMAIL PROTECTED] writes: The website doesn't say so ... Yes, it does, if you navigate to the download section and then follow the source link. OK, but I think GPL and purchase are a contradiction, aren't they? Uwe
Re: [linux-audio-dev] LADSPA2 name early consensus?
On Thursday, 27 April 2006 11:02, Steve Harris wrote: Discussion seems to have slowed down a bit, so I went through the list archive and pulled out all the names: apa chap clap eep fap ladspa2 pea peep peeper rap sax wasap xap sax is the Simple API for XML (despite all the other meanings Wikipedia reports) Uwe
[linux-audio-dev] fst-1.6 doesn't compile with wine 0.9
Hello, I just tried to update my installation of fst-1.6 to the newest (now beta!) wine. But there are several problems: the options to winebuild have changed, so the build command in fst/Makefile must be changed:
- the target must be renamed to $(fst_exe_MODULE).spec.o:, because winebuild creates an assembler file that can't be processed by the implicit rule .c.o: -- but the new winebuild can create the .o file directly.
- the executable must be named via the -F option -- to get rid of the winebuild error message, the option '-F' must be inserted just before $(fst_exe_MODULE).
With these modifications the build creates libfst.so successfully, but when linking fstconfig with libfst there are two missing symbols: ./libfst.so: undefined reference to `__wine_spec_exe_entry' ./libfst.so: undefined reference to `__wine_spec_init_ctor' Unfortunately the wine documentation has not been updated for this new behaviour, and I was not able to figure out what to do. When building the intermediate assembler source (just undo the extension change for the .spec.o) and looking at it, the two symbols are destination markers: the first is just an address in an internal table and the second is the target of a jump command. I can't find any suggestions about what to additionally link into libfst to satisfy these unresolved references ... Is there an updated version of libfst? Or someone with more wine knowledge who can solve these problems? By further investigation I found a workaround that builds libfst, but at least ardour gives a memory access fault (Speicherzugriffsfehler):
- unpack the distribution
- cp fst/libfst.spec.c fst/libfst.spec.c_backup
- ./configure
- make
- cp fst/libfst.spec.c_backup fst/libfst.spec.c
This uses the prebuilt libfst.spec.c, but maybe it doesn't match the current wine version ... I have built ardour with VST=1, and it only starts without a memory access fault when given the option -V Thank you Uwe Koloska
Re: [linux-audio-dev] jack_convolve-0.0.10, libconvolve-0.0.3 released
Hello, fons adriaensen wrote: (*) Recent experiments by prof. Angelo Farina (Univ. of Parma, Italy) suggest strongly that when the DA conversion is done properly, there is no audible difference between a sample rate of 48 kHz and any higher value. Do you have any pointer to this? A short scan of his homepage showed nothing in this area :-( Thank you Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] Tracktion, JUCE and Linux
Jens M Andreasen wrote: Boring? Well, this is (after all) an audio developer list, and asking questions about how interfaces work is very much on topic. There is (hopefully?) a good chance that somebody else was just there doing similar things ... Yes, please don't take these more technical aspects off-list. I am very interested in following the process, since I'm thinking about inserting jack into another project, and it may be very valuable to study this process from the ground up. Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] [OT] Linux soundapps pages updated: Back In Black edition
Dave Phillips wrote: Greetings; Just in time for Xmas: Thank you -- and with musings! Great. http://linux-sound.org (USA) http://linuxsound.atnet.net (Europe) this one must be http://linuxsound.atnet.at (Europe) Happy Christmas! Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] rosegarden, alsa_seq and usx2y rawusb
Chris Cannam wrote: On Sunday 12 Dec 2004 22:44, Uwe Koloska wrote: But when starting jack with driver 'usx2y' the midi seems not to arrive at ZynAddSubFX (or Hydrogen or fluidsynth or ...) Try changing Rosegarden's Sequencer Timer setting (Settings - Configure Rosegarden - Sequencer - Synchronisation) to system timer, or something else other than (auto). Ah -- thank you. Will try that at home. Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] rosegarden, alsa_seq and usx2y rawusb
On Monday, 13 December 2004 11:15, Chris Cannam wrote: On Sunday 12 Dec 2004 22:44, Uwe Koloska wrote: But when starting jack with driver 'usx2y' the midi seems not to arrive at ZynAddSubFX (or Hydrogen or fluidsynth or ...) Try changing Rosegarden's Sequencer Timer setting (Settings - Configure Rosegarden - Sequencer - Synchronisation) to system timer, or something else other than (auto). No, this doesn't change anything ... I have tried both, changing the setting with and without restarting rosegarden. And now I have noticed the following: if the midi doesn't reach the alsa_seq client zynaddsubfx, after some time the following error message from the synth appears: NOTES TOO MANY ( POLIPHONY) - (Part.C::NoteOn(..)) To make it clear: this message doesn't appear for the first notes, but after some time. So to me it looks like midi reaches the program but can't be processed -- even the meter in zynaddsubfx doesn't show anything. Very mysterious Uwe
[linux-audio-dev] rosegarden, alsa_seq and usx2y rawusb
Hello, I have now figured out how to drive my audio and midi with the tascam us-122 on SuSE 9.2 -- and (after disabling all tv thingies) it works very reliably and with very few xruns. The following software is involved:
- linux-2.6.8-24.3 (SuSE standard kernel -- heavily patched) patched with linux-2.6.8-24-usx2y-0.8.6.patch from rncbc (CONFIG_HPET_RTC_IRQ=n)
- realtime-lsm-0.8.5
- Rui's jack jack-0.99.21.1usx2y-17.suse92
- qjackctl-0.2.13-1
- rosegarden4-0.9.9-2
- ZynAddSubFX-1.4.3-141
With this configuration I have found a weird problem: when starting jack with driver 'alsa' (from qjackctl) I am able to play my alsa instruments (for example ZynAddSubFX) from within rosegarden:
- start ZynAddSubFX
- start rosegarden
- choose ZynAddSubFX for the active track
- play the keyboard and listen to nice ZynAddSubFX sounds ...
But when starting jack with driver 'usx2y', the midi seems not to arrive at ZynAddSubFX (or Hydrogen or fluidsynth or ...). Even ASeqView doesn't show any incoming events if chosen as the instrument in rosegarden. Restarting jack with driver 'alsa' -- and all goes well ... What is the problem? And what can I do to help with it? Are there any missing facts? Yours Uwe Koloska
Re: [linux-audio-user] Re: [linux-audio-dev] RME is no more
CK wrote: I read: for the record, i sent a mail to rme as well and got exactly the same answer (in german) which i saw before here on this list. I still don't see the point; the GPL _protects_ their IP rights. If I were the evil corporation trying to rip off rme, I could as well rip the thing apart and reverse engineer the code and the protocol -- might still be cheaper than doing the R&D work. I think their point is another one: there are few companies that have used firewire to its full potential. RME thinks they are the only ones that use all the potential in firewire. If they make an ALSA solution, their competitors have the same basis (which RME thinks is the best one) ... And since firewire is a very generic protocol, they may be right :-(( Is it true that a firewire driver for one card can be used with equal power for another card? Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] muse and /dev/rtc
Lee Revell wrote: OK this all looks good. I don't know, it sounds like a bug in Muse. There must be some incompatibility using a binary Suse Muse package with a Mandrake kernel. I don't think so -- I have SuSE 9.2, and that means a SuSE kernel with SuSE MusE ;-) Try a newer version of Muse. Maybe if you compile it, your environment will be detected correctly. Is there a newer version than 0.7.0? Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] muse and /dev/rtc
Matthias Nagorni wrote: Exactly: If you set CONFIG_HPET_RTC_IRQ=n and recompile the (SuSE 9.2-)kernel, MusE should work. And why is it set? Normally there is a reason for doing something ;-) Will this changed setting affect other settings, scripts, programs, etc.??? Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] Tascam US428 Hangup
Hello all, good news! I have succeeded in making my us-122 work! Heureka and hurray, and thanks to all who have helped! On Friday, 19 November 2004 19:42, Karsten Wiese wrote: Hope to get it installed on my SuSE 9.2 (the last time I tried to build only the alsa-drivers, it hiccups with many errors I wasn't able to resolve) You can setup a standard suse kerneltree and only copy the usx2y subdir's content from alsa1.0.7/alsa-kernel/usb/usx2y to suse9.2kernel/sound/usb/usx2y. I think;-) unfortunately not -- the code seems to use some features not present in the SuSE kernel, which is a heavily patched 2.6.9 (said to be near 2.6.10rc2). But the full installation of alsa-drivers-1.0.7 worked like a charm: ./configure make make install # or better 'checkinstall' and then the us-122 was able to input/output audio and midi -- wow! Now I am testing all the wonderful audio applications ... I have found some problems:
- qjackctl (0.2.10) cannot stop jackd after an audio graph has been established; I have to kill it manually (but not as root)
- the same is true for some jack clients I tried (aeolus, freqtweak)
to be really sure that you have an OHCI (and not UHCI) device you can look at/mail here the outputs of $ lsusb for the record: this has to be lsusb -v -- otherwise information about OHCI, EHCI or UHCI is not given. I will add my experiences to the wiki page. Have a nice time making music with linux Uwe Koloska
Re: [linux-audio-dev] Tascam US428 Hangup
[EMAIL PROTECTED] wrote: All audio apps work first time on the default kernel installation - no patches or anything else required yet. Jackd perfect no x-runs yet - Everything works and is easily configurable so after the initial install one can get working immediately and make soundz? Did you run jackd and all audio apps as root? As a normal user it produces a lot of xruns in the default configuration. After additionally installing the realtime lsm, I can be myself (a normal user) and start jackd with realtime capabilities from qjackctl. But then again some (very few) xruns when playing in ardour and some other program accesses the disk (changing the virtual screen to my pim). Maybe this is also a problem of running a bloated GUI and more than just the audio programs. Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] Tascam US428 Hangup
Rui Nuno Capela wrote: Applied the realtime-lsm-0.8.4 (http://www.joq.us/realtime/) and incidentally, a homemade snd-usb-usx2y-0.8.6. Regarding the latter, you can follow the whole original story from alsa's bts: Should we start something like a wiki page to collect all these success stories in one place? Maybe this http://www.affenbande.org/~tapas/wiki/index.php (follow the link to LowLatency on 2.6.x -- the whole link is too long) would be a good starting point. In fact there's this one linux-2.6.9-usx2y-0.8.6.patch.gz which should fit into suse's factory kernel-sources. Unfortunately, I have lost track of this one. Where can I get it? Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] Tascam US428 Hangup
Rui Nuno Capela wrote: Seems like a good idea. Feel free to do it :) OK -- I have started a page at http://www.affenbande.org/~tapas/wiki/index.php?Audio%20on%20SuSE In fact there's this one linux-2.6.9-usx2y-0.8.6.patch.gz which It's one of the (many) files uploaded under alsa bug #425. I think it is the last one. I can't access the alsa-project bugtracker for the time being :-(( Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] Tascam US428 Hangup
karsten wiese wrote: erm no. for us122 etc (USB 1.1) OHCI is in charge, as EHCI is USB 2.0. Ah -- I thought that USB 2.0 uses EHCI _and_ is downwards compatible with USB 1.1, so that while using EHCI I could run USB 1.1. (forgive me, I know next to nothing about this stuff ;-) us122 on OHCI really is a case for snd-usb-usx2y 0.8.6. Have you already tried it? Will try it. Is it part of Alsa 1.0.7? The release notes from Jaroslav Kysela just mention - usX2Y - usx2y cleanups and fixes - snd-usb-usx2y 0.7.3 - snd-usb-usx2y - crash fix for OHCI USB-HCDs Hope to get it installed on my SuSE 9.2 (the last time I tried to build only the alsa-drivers, it hiccups with many errors I wasn't able to resolve) Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] Tascam US428 Hangup
Rui Nuno Capela wrote: If you're on an OHCI-based USB system this is a known issue. Yes and no. I have both OHCI and EHCI (is this right -- anyway, the other one ;-) So I have to use the EHCI port, right? Hopefully it's not the one on the board, like for some other guys who have reported this problem ... It has been ironed out by snd-usb-usx2y 0.8.6, as you may find in the latest alsa-driver-1.0.7 (also in the 2.6.10-rc2-mm1 kernel). But then again the problem of making a new kernel module or a new kernel for my distro (now SuSE 9.2). The last time I tried to install a 2.6.9 (with all audio patches) I gave up, because many of the scripts (usb, hotplug) stopped working. What about DeMuDi 1.2.0? Is that one able to work with the tascam usb interfaces? There is so much music to make ... Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] formant analysis lib
On Wednesday, 17 November 2004 18:42, Dave Griffiths wrote: Not an area I'm that familiar with, but are there any good libraries out there that can do formant analysis of speech? The formant analysis code from snack http://www.speech.kth.se/snack/ is the code from the well-known ESPS package and is most praised in the speech community. There is a good f0 analyzer, too. Uwe Koloska
Re: [linux-audio-dev] Tascam US428 Hangup
Hello, on 13 November 2004, Spencer Russell wrote: Hullo, there, I've got a Tascam us428 audio/midi interface/control surface, and I've had a heck of a time trying to get it working. It seems to work alright if I use the device directly with xmms, but when I try to start jack, my system hangs. I've got a Tascam us122, and it's the same here. The interface initializes and gets its firmware, but if I try to send audio to it - starting jack with this device - using aplay with this device the machine freezes -- no chance to get in, either by any keyboard command (haven't tried -- what's the name? -- sysrq) or via network (ssh) from another machine ... This is the current SuSE 9.2 with kernel 2.6.8 (patched by SuSE to be something like 2.6.9rc2), with the realtime kernel module and snd-usb-usx2y 0.7.3. Thanks for your help! Uwe
Re: [linux-audio-dev] Knobs / widget design
Dave Robillard wrote: I think we've (perhaps?) finally figured out that we can't really have a standard-LAD-GUI-elements-set. It will just turn into another LADSPA-GUI war, nothing will get decided, and nothing will get done. But as far as I understand, it's not about GUI element toolkits, but about a list of GUI elements and one (or several) ways to place them. Then every host can implement its own set and/or we can provide GUI libraries for the different toolkits. Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] Knobs / widget design
Thorsten Wilms wrote: SVG vector graphics (preferred by Peter and me) http://wrstud.uni-wuppertal.de/~ka0394/forum/04-05-02_knobs_02.png 3d rendering variations http://wrstud.uni-wuppertal.de/~ka0394/forum/04-05-02_knob_3d_1-2-3.jpg very nice! I like the SVG ones most, for their cleaner look. But there comes another handling problem: some people have opted for linear movement (I too think radial movement is intuitive but mostly unusable -- normal mouse movement is linear), but then I think we need both directions: - up/down for something like gain - left/right for something like pan and this only works with an additional linear display in the right direction What do you think? Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
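To make the linear-drag idea concrete, here is a tiny sketch of the mapping (plain Python; the function name and the 200-pixels-per-full-scale figure are my own illustration, not from any toolkit under discussion):

```python
def drag_to_value(start_value, pixel_delta, pixels_per_full_scale=200.0):
    """Map a linear mouse drag onto a normalized 0..1 knob value.

    start_value: the knob's value when the drag began (0..1)
    pixel_delta: pixels moved along the chosen axis (up/down for gain,
                 left/right for pan); positive movement increases the value
    """
    value = start_value + pixel_delta / pixels_per_full_scale
    return min(1.0, max(0.0, value))  # clamp to the knob's range
```

The same function serves both directions -- only the choice of which mouse axis feeds pixel_delta differs, which is why an extra linear indicator in the matching direction would help.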
Re: [linux-audio-dev] [OT] affordance? (was: Is ladspa actually la-dsp-a? Is JACK the ultimate solution?)
Steve Harris wrote: [OT] - my canned plugin-writing experience - all generalisations and IMHO of course. Affordance, appearance and usability have as much effect on the perceived sound quality as the DSP code (positively and negatively). Some of this can be achieved without a custom UI. Sorry for my ignorance -- but can someone tell me the German term for affordance? The web dictionaries I searched don't know this word -- and the Wikipedia entry doesn't give me any clue about the German word (though I understand the concept). Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] [OT] affordance? (was: Is ladspa actually
CK wrote: It's Affordanz, a psychological term, see: http://www.wissenschaft-online.de/abo/lexikon/psycho/320 Aaah! Thank you. This also explains why this coined word isn't found in any dictionary. Another little piece in the neverending quest for knowledge ;-) Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] LADSPA proposal ...
Jack O'Quin wrote: That [Lisp] wasn't really a serious suggestion. Just found this nice and small Lisp. It is used in the festival speech synthesis system and is named SIOD (Scheme in One Defun) http://www.cs.indiana.edu/scheme-repository/imp/siod.html Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] we need some delegates for mLAN@musikmesse
On Monday, 22 March 2004 20:44, Paul Davis wrote: Unfortunately, the ZKM meeting is basically a 100% overlap with the musikmesse. Isn't the ZKM LAD meeting one month later than the Musikmesse? Musikmesse: March 31 -- April 3. ZKM LAD conference: April 29 -- May 2. Uwe
Re: [linux-audio-dev] LADSPA + GUI?
Dave Robillard wrote: If you really want to make a custom GUI for your plugin, nothing's stopping you from writing a simple jack/ladspa host that just takes input, runs it through your plugin, and outputs via jack (this is really easy BTW) and putting whatever UI you want on it. Then your plugin will still be a normal LADSPA plugin everyone can use. But you cannot automate it in ardour ... Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] swh-plugins, freqtweak, fftw3 and the planet
Jack O'Quin wrote: The f version operates on floats rather than doubles. It is not built by default. If building fftw yourself, you need to configure it with the --enable-float option. but this will build only the float variant, and the fftw3 build procedure is not able to build both variants in one pass ... does anyone have any clues about how to solve this in a packageful way? thanks Because a prog (I think brutefir) uses both variants, I have built my fftw3 rpm (with checkinstall) the way SuSE builds their package:
- first build the normal version
- then, in the install step, install the normal version and build and install the float version.
You can get the src.rpm from the SuSE server. Look into the SuSE-9.0 tree. Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] deconvolver for IR creation anyone?
Hello Denis, just to get it right: Denis Sbragion wrote: it must be near the end of the file, because of the way it works. This is because of the delay of the filter line, isn't it? Furthermore, if you use BruteFIR to perform the deconvolution, you should consider that BruteFIR truncates the output to make it the same length as the input, whereas the full convolution of a signal of length N with a filter of length M would yield a signal of length N+M-1. To avoid this truncation, simply add some seconds of silence to the recorded signal before performing the convolution (the scripts I sent you already do that). So to get the IR right, I have to cut the beginning of the BruteFIR result by the length of the filter (if the input starts at the time the signal has started). Right? How do you revise your IRs? Manually? What is the reason for BruteFIR to keep this delay in the result? Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
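The length bookkeeping above is easy to check with a few lines of numpy (a sketch of the general convolution arithmetic, not of BruteFIR itself; the signal and filter values are made up for illustration):

```python
import numpy as np

N, M = 8, 4
signal = np.zeros(N)
signal[2] = 1.0                           # an impulse at sample 2
filt = np.array([1.0, 0.5, 0.25, 0.125])  # a length-M "reverb" tail

full = np.convolve(signal, filt)          # full convolution: N + M - 1 samples
truncated = full[:N]                      # what a same-length-as-input output keeps

# Appending M-1 samples of "silence" to the input before convolving
# preserves the whole tail, as Denis suggests:
padded = np.convolve(np.concatenate([signal, np.zeros(M - 1)]), filt)
```

With the impulse at sample 2 the tail here survives the truncation, but an impulse falling in the last M-1 input samples would lose part of its tail; the padded version never does.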
Re: [linux-audio-dev] deconvolver for IR creation anyone?
On Thursday, 11 December 2003 22:24, Uwe Koloska wrote: Maybe I am too dumb, but I can't make it work ... At least for the convolution, I have managed to make it work. The problem was with the attenuation: the values from Denis were not suited to my setup. For the impulses I have around, the IR attenuation must be 24 to 32. The deconvolution works, too. But I don't understand it completely. The IR seems to start at the end of the file. Especially with the direct deconvolution of the original IR, the pulse is at the end of the convolution result ... I must study the AES paper a bit deeper ;-) Yours Uwe -- mailto:[EMAIL PROTECTED] http://www.koloro.de
Re: [linux-audio-dev] deconvolver for IR creation anyone?
On Tuesday, 9 December 2003 18:06, Denis Sbragion wrote: no, this is in the BruteFIR documentation. BruteFIR accepts a lot of input formats, including many binary formats. Oops, my fault -- I thought BruteFIR needs the coefficients of the FIR filter, calculated beforehand. I didn't realize that an impulse response or a recorded sweep can be used directly. Could you please send me these scripts? Since I am far better at UI design than at algorithmic debugging, I hope to make a nice tool from these scripts. See the attached file. There are two examples for 44.1 kHz and 96 kHz measurements (shell scripts + BF configuration). Maybe I am too dumb, but I can't make it work ... Here is what I have done:
1. create a sweep of 40s with AudioEase's Make A TestTone 2.0 (the wav files created with that prog cannot be read by sndfile-convert, but can be read with wavesurfer/snack ...)
2. play the sweep with Spark LE on an iBook through a microverb into ardour running on my linux box with NVIDIA's onboard sound
3. reverse the sweep in time and create a float32 raw file for use with BruteFIR
4. use the bfcfg44 configuration for BruteFIR
but the output is empty (all floats have an exponent of -19..21). And even the sweep fed directly into BruteFIR gives an empty file. What do you use for creating the sweeps? I used the Aurora plugins from Angelo Farina: http://www.ramsete.com/aurora/ They're not free, but they're free enough for my needs :) But I don't have any windows machine ;-) Uwe -- mailto:[EMAIL PROTECTED] http://www.koloro.de
Re: [linux-audio-dev] deconvolver for IR creation anyone?
Hello, Apostolos Dimitromanolakis wrote: I would be interested in this project too. What I'm looking for is actually an anti-reverb that will be able to cancel reverb in a listening room, always in conjunction with the listener position, of course. Then I think DRC (digital room correction) is for you: http://freshmeat.net/projects/drc/ This works with BruteFIR http://www.ludd.luth.se/~torger/brutefir.html But I must confess: I have not understood how to use the result of drc in BruteFIR. For a reverb convolution I think only the first steps of the DRC process are necessary, but I haven't understood what to do there either. And unfortunately the only DRC tutorial is for windows only :-(( And it doesn't explain the steps, it just shows them ... The other useful thing would be a phase filter to correct the phases coming out of a two- or three-way loudspeaker, to get clarity in the sound similar to high-end speakers. As far as I understand what DRC does, this is one of its postprocessing steps. I'm surprised that you consider modern consumer soundcards non-linear; after all, the sigma-delta converters used in most of today's soundcards are supposed to be perfectly linear, and that was one of the reasons for their adoption. Oh, I don't want to claim that all consumer soundcards are nonlinear. I have a very old Soundblaster AWE 32 and found that the waveform coming out when playing an MLS signal cannot be computed into an impulse response by mls2imp. (And it looks very bad.) I hope to find some time to make a website showing the waveforms and my experiences. With a more modern USB AD/DA converter (Tascam US-122) I can compute the IR both from a direct loop (which gives something very near to a dirac) and from my alesis microverb. And since the onboard sound of my ASUS motherboard with nvidia nforce2 chipset gives similar results (though not so bad, but still unusable), my only explanation (after examining the code and process) was the nonlinearity of the two soundcards. (But I don't fully understand the whole process.) Maybe, if I post the waveforms, someone can give a better explanation. Another effect appears when I feed the MLS signal directly through a reverb (ladspa, gverb): after the impulse, there is a constant noise tail ... Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
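For anyone wanting to see what a tool like mls2imp does behind the scenes, here is a rough numpy sketch of an MLS measurement of a perfectly linear system. The function names and the simulated "device" are my own illustration; a real measurement goes through the soundcard, which is exactly where the nonlinearities discussed above break the math:

```python
import numpy as np

def mls(order=7):
    """+/-1 maximum-length sequence from a Fibonacci LFSR.
    x^7 + x^6 + 1 is a primitive polynomial, so the period is 2^7 - 1 = 127."""
    taps = (7, 6)
    state = [1] * order
    out = []
    for _ in range(2 ** order - 1):
        out.append(2.0 * state[-1] - 1.0)       # map bit {0,1} to {-1,+1}
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return np.array(out)

def recover_ir(seq, response):
    """Recover the impulse response from one period of the system's response
    by circular cross-correlation with the stimulus. An MLS autocorrelates
    to (n+1)*delta - 1, so apart from a tiny DC offset the result is the
    impulse response itself -- for a LINEAR system."""
    n = len(seq)
    corr = np.real(np.fft.ifft(np.conj(np.fft.fft(seq)) * np.fft.fft(response)))
    return corr / (n + 1)

# simulate a perfectly linear "device": a direct path plus one echo
seq = mls()
true_ir = np.zeros(len(seq))
true_ir[0], true_ir[30] = 1.0, 0.4
response = np.real(np.fft.ifft(np.fft.fft(seq) * np.fft.fft(true_ir)))
estimate = recover_ir(seq, response)   # ~ true_ir, up to a small DC offset
```

Any nonlinearity in the playback chain adds distortion products that the cross-correlation smears over the whole period as noise, which would match the unusable results from the two onboard cards.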
Re: [linux-audio-dev] deconvolver for IR creation anyone?
Hello, thank you for your answer! Denis Sbragion wrote: yep, you are right. To do the convolution between the measured sweep and the inverse filter to get the impulse response you can use brutefir (with a little trickery). I use it to do my own measurements with 45s log sweeps. The inverse filter is almost 2 million taps, but brutefir eats it without hesitation, even with just 64 MB of RAM available. A truly wonderful piece of software! But how can I learn to use it in such a way??? I haven't found any clue on the web page about how to use a binary file as input data for BruteFIR. Are your scripts helpful in that? Doing it under Linux is a bit more complicated. If you want, I have some shell scripts that do all the steps needed to get the impulse response (sweep playing, recording + deconvolution). Anyway, they are just a clumsy hack that I use myself for my measurements; don't expect any fancy interface. Despite this, thanks to the brutefir floating-point accuracy and the long sweep used, the results are state of the art (90+ dB of S/N even in a not-so-quiet environment with a dirt cheap panasonic WM-60A capsule and a DIY mic preamp). Could you please send me these scripts? Since I am far better at UI design than at algorithmic debugging, I hope to make a nice tool from these scripts. What do you use for creating the sweeps? Let's make the linux convolution reverb real! Yours Uwe Koloska -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
[linux-audio-dev] deconvolver for IR creation anyone?
Hello, I am looking for a deconvolver that is able to produce impulse responses from sine sweeps (and especially the exponentially sweeping sine wave introduced by Farina). Do you have any suggestions, or at least tips for starting an implementation myself? Recently I managed to use the mls tools from nwfiir to produce an IR of my microverb. I had to learn the hard way that simple soundcards cannot be used as an MLS source because of their nonlinearities. Even a simple DA-AD loop gives a result wave that mls2imp cannot cope with. But an empty loop with a US-122 (unfortunately not with linux for now) gives something very near to a dirac impulse! The hunt for the linux convolution reverb has started ;-) Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
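Since the Farina sweep keeps coming up in this thread, here is a small numpy sketch of the idea (my own illustrative code, not the Aurora implementation): the inverse filter is the time-reversed sweep with a 6 dB/octave amplitude decay, and convolving the recorded response with it yields the impulse response near the end of the result -- which is also why the IR lands at the end of the file:

```python
import numpy as np

def exp_sweep(f1, f2, duration, sr):
    """Exponential (log) sine sweep from f1 to f2 Hz, after Farina."""
    t = np.arange(int(duration * sr)) / sr
    R = np.log(f2 / f1)
    return np.sin(2 * np.pi * f1 * duration / R * (np.exp(t * R / duration) - 1.0))

def inverse_filter(sweep, f1, f2):
    """Time-reversed sweep, amplitude-modulated so its spectrum whitens the
    sweep's pink spectrum (6 dB/octave decay over its length)."""
    n = len(sweep)
    R = np.log(f2 / f1)
    inv = sweep[::-1] * np.exp(-np.arange(n) * R / n)
    # normalize so that sweep convolved with inverse peaks at ~1
    return inv / np.max(np.abs(np.convolve(sweep, inv)))

def deconvolve(recorded, inv):
    """Convolve the recorded sweep with the inverse filter; the IR shows up
    around sample len(sweep)-1, i.e. near the end of the output."""
    return np.convolve(recorded, inv)
```

A nice property of this method, compared to MLS, is that harmonic distortion from a nonlinear playback chain ends up as separate pre-responses before the main impulse instead of as broadband noise.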
Re: [linux-audio-dev] Retro speech synthesis
Frank Neumann wrote: I was wondering for a while when some older geek will start porting/rewriting e.g. the old Amiga's narrator.device for Linux - or, even more retro, what about S.A.M. (Software Automatic Mouth) from the 8-bit Atari machines, dated around 1985? Ah, those were the times.. :-) Here's a link to S.A.M (don't know what the diskette contains): http://retrobits.net/sam.html Uwe Koloska -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] and just to finalize ...
Paul Davis wrote: what a night (paul simon on the famous AG concert in central park) 6) [ only if we really wanted hosts to have a real handle on the plugin GUI window ] the library would need to contain a way to pass in an X Window, and wrap it up as a native drawing area for each toolkit. i would prefer not to do this for now, if ever. What about reparenting the window, like the window manager does? I must confess I don't know anything about programming this, but I have heard about the concept. AFAIK, any window can be absorbed by a container -- but I don't know what this really and programmatically means. So it's just an idea ... Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] Re: Blocking rima-tde.net, was Re: *****SPAM***** [linux-audio-dev] WINNING NOTIFICATION
Joern Nettingsmeier wrote: but still: don't run your own smtp. I hope you mean: don't run your own smtp that delivers directly! It's very convenient to have my own smtp -- but I'm using my ISP's smtp as a relay. Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] LL-patch for SuSE kernel?
Hello, Takashi Iwai wrote: yes, suse kernel (since 8.1) already includes most of the necessary changes. some parts are missing, but they are on rare code paths which have not been audited very well anyway. What does this mean?
- If I need low latency, I can use a SuSE kernel without recompiling it? And don't have to bother about switching LL on and off? (There is neither the kernel config option Low Latency nor something in /proc to switch it on.)
- Or can I use the SuSE sources, but have to recompile? (with what options?)
the LL patch is an easy change. but it's pretty difficult to apply the preempt patch to a heavily modified kernel like suse's and redhat's. if you really need PE, try the 2.6 kernel. it's better maintained. What do I need the PE patch for? I am still working on an answer to KEYBOARDS and will go into detail in a later letter. For now, only some questions:
- (like said above) to get low latency on a stock SuSE 8.2, I don't have to bother with the kernel or any configuration (setting something in /proc) -- right?
- what is the reason for starting init without full capabilities? (to use jackstart, I had to rebuild the kernel with all capabilities set for init -- but this was straightforward)
- when running jackd (from the very nice qjackctl, with jackstart) together with ardour and jamin, I get a number of xruns. The same configuration under windowmaker doesn't seem to have these problems. (I am trying to move my CD mastering from MacOS X and Spark LE to Linux.)
- the first thing to do for real audio under KDE is to deactivate aRts -- right?
Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] LL-patch for SuSE kernel?
Takashi Iwai wrote: - What is the reason for starting init without full capabilities? (To use jackstart, I had to rebuild the kernel with all capabilities set for init -- but this was straightforward.) it's a question of security. in fact, the full capability set is dangerous from this perspective. some of the last security holes in the 2.4 kernel are related to this. well, in theory, it's possible to enable all capabilities but drop them in the early boot stage via /proc/sys/cap-bound. but that is unlikely to be implemented. it must be pretty hard to convince the security guys to accept the CAP_SETPCAP capability as a default. Is there another (secure) way of running jackd in realtime without making it suid root? What advice can I give to an inexperienced Linux user? sorry, i don't know jamin. A very nice JACK mastering application -- you should know it ;-) http://jamin.sourceforge.net/ - The first thing to do for real audio under KDE is to deactivate aRts -- right? yes, if it conflicts. the sb live, for example, can run jackd and artsd for the playback stream (i.e. not for full duplex). It's a matter of hardware then? Until now I have been bound to the builtin audio of my ASUS A7N8X Deluxe. But that's not so bad. For live recording I'm using an iBook with TC Works Spark LE and a Tascam US-122 as audio interface, then import the session into ardour and find out what's next ;-))) Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
[linux-audio-dev] LL-patch for SuSE kernel?
Hello, is there anyone with a version of the LL patch for SuSE kernels? (I use 8.2 with 2.4.20.) Some hunks of the patch are already in the SuSE kernel (sometimes with small changes such as different variable names) -- and this led me to think that the SuSE kernels had the LL patch included. Maybe (with the KEYBOARDS discussion in mind) it would be a nice thing if SuSE (and other distributors) had an LL kernel ready to run. Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
[linux-audio-dev] KEYBOARDS: Linux is not suited for audio applications ...
Hello, the German magazine KEYBOARDS has answered a reader's question about audio and Linux with tremendous ignorance. I think this is a good chance to bring Linux to the attention of the masses. Here is the full text of the question and answer (first in German, so anyone can correct my errors ;-). --- KEYBOARDS -- Leserbrief: Habe mir schon ein paar Mal KEYBOARDS am Kiosk geholt, weil mich gerade das Thema Recording und Computer interessiert. Einige Artikel waren für mich recht interessant. Nur vermisse ich gänzlich Vergleiche mit Linux. Ist es Absicht, dass dieses aufsteigende System nicht erwähnt wird, oder traut sich keiner ran? Seit einigen Monaten steige ich auf Linux um, nur meine Musik-Geschichte hängt hinterher. Dabei gibt es in SuSE eine Menge Musik-Software und Synthesizer, und ich habe gelesen, dass einige Programme bald zur Marktreife gelangen. Von Verkäufern höre ich, dass sie nicht am Linux interessiert seien, weil man da nix mehr verdiene. Von anderen höre ich, Linux sei kein Multimedia-System. Desinformation auf der ganzen Linie ... Rainer Hain (KEYBOARDS): Das Ganze ist ein recht kompliziertes Thema. Linus Thorvald selbst hält Linux nicht für Audio oder generell für Multimedia-Anwendungen geeignet. Low-Latency ist mit den aktuellen Kerneln schlicht nicht zu machen, schon gar nicht Multichannel. Dazu kommt dann, dass ein Setup von Linux heute zwar simpel ist, aber nur, solange man nicht von einem Standard-SuSE abweicht. Und das muss man, wenn man Audio und MIDI betreiben will. Deshalb springt kaum ein Sequenzer-Hersteller drau an, die fürchten den ungeheuren Support-Aufwand. (Man erkläre dem User mal am Telefon, dass er ein Make-File ändern muß und wie er dann die Sources neu kompiliert ...) Deshalb gibt es auch kein Package, was auch nur entfernt an Cubase oder Logik herankäme. An der Treiberunterstützung hapert es halt auch.
--- english translation Reader: I have occasionally bought KEYBOARDS at the kiosk because I'm especially interested in recording and computers. Some articles were quite interesting to me. But I completely miss any comparisons with Linux. Is it intentional that this rising system is not mentioned, or does no one feel up to it? For some months now I have been migrating to Linux -- only my music work is lagging behind. Yet SuSE ships a lot of music software and synthesizers, and I have read that some programs will soon reach market maturity. From dealers I hear that they are not interested in Linux because there is nothing to earn there any more. From others I hear that Linux is not a multimedia system. Disinformation all along the line ... Rainer Hain (KEYBOARDS): This is a rather complicated matter. Linus Thorvald [sic] himself considers Linux unsuited for audio or for multimedia applications in general. Low latency simply cannot be achieved with the current kernels, let alone multi-channel. On top of that, a Linux setup is simple today, but only as long as you don't deviate from a standard SuSE. And you must, if you want to work with audio and MIDI. That's why hardly any sequencer manufacturer jumps on it -- they fear the tremendous support effort. (Try explaining to a user on the phone that he has to change a makefile and how to recompile the sources ...) That's also why there is no package that comes even remotely close to Cubase or Logic. Driver support is also a problem. I have a good selection of popular interfaces here (audio and MIDI), but for none of them are there Linux drivers. I hope we are able to shape a convincing answer! Yours, Uwe Koloska -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
[linux-audio-dev] [german] Keyboards Leserbrief zum Thema Linux und Audio
Hallo, hat jemand von euch in der aktuellen Keyboards die Antwort von Herrn Hain auf die Leseranfrage zu Linux und Audio gelesen? Die strotzt ja nur so von Unwissen, das schier danach schreit eines besseren belehrt zu werden. Vielleicht könnten wir ja gemeinsam eine Antwort erarbeiten. Bei Interesse könnte ich heute abend auch den Text von Frage und Antwort hier posten (leider gibt's die Leserbriefe nicht auf der Keyboards Webseite). So long Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
Re: [linux-audio-dev] [german] Keyboards Leserbrief zum Thema Linux und Audio
Hi list, Frank Neumann wrote: This list is english, so please write english mails to it from now on. Sorry, I thought this was mainly a German topic that could be discussed on-list among the German members. But you are right -- in English all members can participate. So I will post the reader's letter and Mr. Hain's answer in translation and hope we can produce a really good reply together. I'll translate what Uwe said above: Thanks for that! Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
[linux-audio-dev] wavesurfer ASR extensions anyone?
Hello, I tried to download the ASR extension for wavesurfer http://www.speech.kth.se/wavesurfer/ from the respite project http://www.multitel.be/html/fr/projets/respite.htm but had no success -- the perl script seems to choke on my request :-((( Is there anyone out there who has succeeded in downloading this extension and could send it to me? Thank you Uwe Koloska -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany
[linux-audio-dev] ladspa for tcl sound toolkit snack
Hello, have you ever talked about cross-platform integration of LADSPA into the Tcl sound toolkit Snack? http://www.speech.kth.se/snack/ Where should I start to take on the challenge? And is it possible to make it cross-platform? I use Snack for speech processing, and it would be very nice to be able to use all the nice LADSPA plugins -- and maybe write my own. Uwe -- voiceINTERconnect www.voiceinterconnect.de ... smart speech applications from germany