Re: [LAD] Something like Processing for audio
[[sorry darren, this was meant for the list. i hit the wrong button..]]

On Mon, Sep 29, 2008 at 1:38 PM, AlgoMantra [EMAIL PROTECTED] wrote:
> Look, I know that everything I'm asking for exists on the Linux
> platform. The problem is, it doesn't all exist in one place, or under
> a single language.

I have exactly the same crib. I could not have said it better, and I am convinced that this is a pressing issue. Let me list the languages I set out to learn, in sequence, and where I finally ended up: Python - PureData - CSound - Chuck - Processing (after this I shifted to Ubuntu from XP and found the kind of freedom I wanted) - C/C++ (full stop).

----.- 1/f ))) --. ---... http://www.algomantra.com
I'm not your personal army. I'm MY personal army.

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
Re: [LAD] Something like Processing for audio
> I want my musical skills to be all I need to be able to make music on
> Linux.

I feel that knowing how to compute on Linux IS a musical skill these days.

----.- 1/f ))) --. ---... http://www.algomantra.com
Re: [LAD] Something like Processing for audio
Hello,
Darren Landrum wrote:
>> Frank Barknecht wrote:
>> Faust can, but: Is it really important? And why would it be important
>> (speed issues aside)?
>
> Speed is a large part of it, yes. Another reason to stick to C++ is
> for things that need the speed and low-level abilities, like on-demand
> sample streaming from disk for making a sampler.

It's a give and take. For some convenience I'll happily give away some speed. OTOH there are areas where speed matters; those areas should be optimized (and thus probably be coded in C/C++). The environments I mentioned can be extended with C in these critical areas. Anyway, speed definitely is not what made Processing popular.

The core objects in Pd are all coded in C, and generally they are quite fast. Connecting these objects adds a bit of overhead (function calls etc.), but even that is quite fast in Pd. The latest versions of Pd now work on 64-bit systems as well.

> Pd-vanilla can (and I do have it working), but Pd-extended still
> cannot.

Pd-extended is shipped with an old version of Pd.

If you skip the compilation part, Pd (or the other environments I mentioned; Pd is just one example) can do all that. Just bundle the Pd binary and a sh-script with your Pd application to distribute it, if you want. (I would prefer an unbundled download, as I already have Pd.)

> Does Pd do oversampling? I seem to recall it can.

Yes, you do that with the [block~] object in a subpatch.

> I keep bringing this up because several of the things I want to work
> on, namely lumped modeling/wave digital filters and non-linear
> processing, both require oversampling to work right, largely to avoid
> frequency warping and aliasing issues.

One thing with Pd is that it is a block-based realtime system, so some of the techniques you mentioned (i.e. those that involve feedback delay lines) can be impossible to implement in Pd using only the built-in objects (feedback delay lines always have a minimal delay time of one block in Pd).
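The one-block feedback limitation is easy to demonstrate outside Pd. The sketch below is hypothetical illustration (not taken from Pd's sources): it runs the same feedback comb, y[n] = x[n] + g*y[n-1], two ways — once per sample, and once block-wise, where the feedback value can only be read from the previous block.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Per-sample feedback comb: y[n] = x[n] + g * y[n-1].
std::vector<float> comb_per_sample(const std::vector<float>& x, float g) {
    std::vector<float> y(x.size(), 0.0f);
    float prev = 0.0f;
    for (std::size_t n = 0; n < x.size(); ++n) {
        y[n] = x[n] + g * prev;
        prev = y[n];
    }
    return y;
}

// Block-based "feedback": within a block, only the previous block's
// last output is available, so the effective feedback delay is one
// block, not one sample -- mirroring Pd's minimum of one block.
std::vector<float> comb_block_based(const std::vector<float>& x, float g,
                                    std::size_t block) {
    std::vector<float> y(x.size(), 0.0f);
    float fb = 0.0f;  // feedback value, updated once per block
    for (std::size_t start = 0; start < x.size(); start += block) {
        std::size_t end = std::min(start + block, x.size());
        for (std::size_t n = start; n < end; ++n)
            y[n] = x[n] + g * fb;      // same fb for the whole block
        fb = y[end - 1];               // feedback only updates here
    }
    return y;
}
```

With a block size of 1 the two agree exactly; with Pd's default block size of 64, the feedback arrives a whole block late, which is why genuine single-sample recursions need a C-external or Faust.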
But you can code these objects as C-externals or with Faust for Pd.

> A Processing-alike for audio can also integrate and abstract away
> things like LASH support, MIDI, and OSC. Something like LASH (session
> state saving, see the other current thread) is an important capability
> in my world.

Point taken: session management is an area where Pd isn't especially good. OTOH C++ out of the box isn't either. ;)

Ciao
--
Frank Barknecht _ __footils.org__
Re: [LAD] Something like Processing for audio
2008/9/29 Darren Landrum [EMAIL PROTECTED]:
> Sorry for starting this entire argument. I'm just tired of getting
> nowhere with all of the same tools that everyone else seems to have no
> problem with. I have a very bad habit of putting myself under a great
> deal of pressure to exceed everyone's expectations of me.
>
> Look, I know that everything I'm asking for exists on the Linux
> platform. The problem is, it doesn't all exist in one place, or under
> a single language. I'm convinced at this point that starting over from
> scratch with a solid design is preferable to trying to use several
> disparate tools and somehow glue them all together.
>
> I've already played around with code here and there to try out some
> different approaches to this problem, but nothing that I've bothered
> keeping around. Starting tonight, I'm going to draft a very detailed
> design, create a code hosting account somewhere (probably Google
> Code), and get started. I will keep the list apprised of any progress
> with regular emails.
>
> It's been pointed out to me that many people on the list seem to think
> that I'm trying to get someone else to code this for me. That is not
> and never was my intention, and I apologize for any miscommunication
> on my part. I am a very slow and not very good coder, though, and it
> might take a little while to see any progress. First things first,
> though: a solid design.
>
> -- Darren

I don't know if it is relevant to this discussion (at least in an acceptable amount of time), but I just wanted you to know about my attempt: NASPRO (http://naspro.atheme.org). I hope people here don't take this message as spamming, because it simply is not. The ideas are:

* to make different existing and not-yet-existing sound processing technologies interoperate, both general-purpose sound processing stuff (for example plugins a la LADSPA, LV2, etc.)
and special-purpose stuff (for example, see http://naspro.atheme.org/content/ndf-01-file-format-overview), in both compiled and interpreted forms;
* be technology-neutral (support for each technology implemented in external modules);
* define distinct layers, each dealing with a specific aspect of the whole problem (one for sound processing, one for GUIs, one for session handling, etc.), so that a DSP coder can work on just the DSP part and have all the rest automagically implemented and working (for example, you write a LADSPA plugin or write an NDF file and you get an automatically generated GUI without writing one more line of code);
* have back-bridges where possible, so that an application supporting one NASPRO-supported technology gets support for all the other technologies without writing a single line of code;
* build dead-easy-to-use tools on top of that, to make it easy for non-demanding applications to support DSP stuff;
* build tools on top of that to do data routing among sound processing components (in other words, chain-like and/or graph-like processing); plus, since we have those back-bridges, you could also use, for example, CLAM networks (as soon as CLAM is supported) as an alternative to these tools, with the same degree of technology support (the same goes for GStreamer, Pd, etc.);
* be cross-platform (apart from Mac/Windows, alternative desktop-oriented OSes like Haiku or Syllable are getting stronger these days and could become viable for sound processing in some near or distant future).

The result will hopefully also be to make it easier to develop new technologies, AND without breaking interoperability. Now, since I am the only one working on this, it will probably take an insane amount of time, and getting each of these abstraction layers right is astonishingly difficult already (anyone remember GMPI?)
- at the moment I'm fighting with core-level stuff, and I will be doing that for at least another year or two. If you can wait, I will probably give a talk about NASPRO by the end of October and will put together some slides trying to describe its inner workings (a lot of people complained that I wasn't clear enough on the website)... Maybe this helps :-\

Stefano
Re: [LAD] Specification issues in open systems
On Sat, Sep 27, 2008 at 6:49 PM, Chris Williams [EMAIL PROTECTED] wrote:
> I can't see this as being anything other than a specification bug.

I don't think the Rosegarden developers have implemented the spec correctly, necessarily, but the spec gave them ample room to do what they did.

So if I understand correctly, the original requirement is to have a single plugin receive a single MIDI stream and then output multiple audio channels, which the user can treat separately with different effects in the host (according to his or her whim). And Rosegarden will not do this because, although your plugin can declare that it has any number of output channels, Rosegarden will not handle more than two; it will merge them in an undocumented and externally unpredictable way. And it doesn't allow the user to route the channels to separate effects anyway, even if there are only two.

I would have said this was squarely a limitation of Rosegarden, and one that is likely to remain. The DSSI spec of course permits this; at the moment it's basically a user problem ("this plugin will not work properly in this host"). If the DSSI spec were changed so as to require the host to support the behaviour you want, then Rosegarden would simply change from being a technically complete but limited DSSI host to being an incomplete or non-conformant DSSI host; its actual behaviour would not change unless some new enthusiast with lots of spare time happened to step in and completely rewire its audio architecture. (Fons is quite right to say that the Rosegarden project has suffered from worrying about audio too much already.)

I think this limitation is quite defensible: Rosegarden just isn't a good host for any task where the words "audio" and "routing" might both appear, and that's simply the way it is. You'd surely be better off doing it in something like Ingen and just driving the resulting graph from Rosegarden if desired (unless I misunderstand the goal here).
But I guess you're right that what's not really defensible is the failure to provide any way in the protocol to negotiate this, or for the plugin to determine for itself whether its host is capable. You're probably right about the cause as well (basing the protocol on a simplistic effects protocol; although I think "simplistic" is the point rather than "effects", since many effects call for more sophisticated output classification too: try running the Bode frequency shifter LADSPA plugin on a mono track in Rosegarden some time).

My guess is that this situation, in which a plugin may be capable of something but just mysteriously fails in a given host, will get worse with LV2, given the potentially huge range of optional extensions that may or may not be available. I agree, it's not a very promising situation: who would want to write plugins that only might work? (Well, hey, DSSI is still at v0.9. Plenty of time!)

Chris
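The "undocumented and externally unpredictable" merge is the crux. Nothing above specifies Rosegarden's actual algorithm, but a sketch of two equally plausible fold-down policies (both hypothetical, neither claimed to be Rosegarden's) shows why a plugin author cannot predict the result:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// One frame of N plugin output channels, folded to stereo two ways.

// Policy A: alternate channels into left/right.
std::pair<float, float> fold_alternate(const std::vector<float>& ch) {
    float l = 0.0f, r = 0.0f;
    for (std::size_t i = 0; i < ch.size(); ++i)
        (i % 2 == 0 ? l : r) += ch[i];  // even -> left, odd -> right
    return {l, r};
}

// Policy B: sum all channels equally into both sides, scaled down.
std::pair<float, float> fold_sum(const std::vector<float>& ch) {
    float s = 0.0f;
    for (float v : ch) s += v;
    return {0.5f * s, 0.5f * s};
}
```

A four-output plugin that puts its signal only on channels 0 and 2 comes out hard-panned left under one policy and centred under the other; with no way to negotiate, the plugin can only guess which it will get.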
Re: [LAD] Specification issues in open systems
On Sun, Sep 28, 2008 at 6:18 PM, Chris Williams [EMAIL PROTECTED] wrote:
> DSSI, IMO, *attempted* to get this right. Implicit in the DSSI spec is
> an acknowledgment that a plugin spec can't be in the business of
> mandating GUI solutions on a platform with many to choose from, so
> they tried to find a way around it using a remote GUI which
> communicates with the host via OSC. I'm not sure this is entirely
> correct, either, but it's at least more right than several other ways
> of doing it (*cough* LV2), especially the central idea of trying to
> abstract the GUI away from the architecture.

The DSSI approach has two really big problems:

1. It's hard to share significant amounts of data between GUI and plugin, and to do things like synchronising the state of user presets. It can be done, for example using shared memory negotiated between GUI and plugin via configure calls, and Lars wrote a small library to help with this, but it's always going to be a pain.

2. It's different from any other plugin system, so you can't just write the code once and wrap it for each of your supported protocols. Take a look at the absolutely gross hack used by dssi-vst, for example, to see how much pain is involved in doing an incomplete and unsatisfactory job of wrapping a GUI written for another system.

Chris
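The shared-memory workaround in point 1 can be sketched with plain POSIX calls. Everything here is hypothetical illustration — the region name, the size, and the omitted configure-call handshake that would carry the name from plugin to GUI — only the shm_open/mmap mechanics are real:

```cpp
#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Hypothetical region name and size; in DSSI the name could travel
// from plugin to GUI inside a configure() value (not shown here).
const char* kRegion = "/demo_dssi_shm";
const std::size_t kSize = 4096;

// Plugin side: create and map a named shared region.
float* plugin_create_region() {
    int fd = shm_open(kRegion, O_CREAT | O_RDWR, 0600);
    if (fd < 0) return nullptr;
    if (ftruncate(fd, kSize) != 0) { close(fd); return nullptr; }
    void* p = mmap(nullptr, kSize, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd);  // the mapping keeps the region alive
    return p == MAP_FAILED ? nullptr : static_cast<float*>(p);
}

// GUI side (a separate process in DSSI): attach to the same region.
float* gui_attach_region() {
    int fd = shm_open(kRegion, O_RDWR, 0600);
    if (fd < 0) return nullptr;
    void* p = mmap(nullptr, kSize, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd);
    return p == MAP_FAILED ? nullptr : static_cast<float*>(p);
}
```

Both sides then see the same floats, so preset state or large parameter tables can be shared without pushing everything through OSC; the pain Chris describes is in negotiating and tearing all this down reliably, which the sketch glosses over.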
Re: [LAD] Something like Processing for audio
Frank Barknecht wrote:
> One thing with Pd is that it is a block-based realtime system, so some
> of the techniques you mentioned (i.e. those that involve feedback
> delay lines) can be impossible to implement in Pd using only the
> builtin objects (feedback delay lines always have a minimal delay time
> of one block in Pd). But you can code these objects as C-externals or
> with Faust for Pd.

Yes, I was wrestling with this logic back when my idea was to make a Reaktor-alike for Linux. Reaktor handles this, I think, by having a separate Core layer that works sample by sample. In other words, when you're using the Core objects, you're operating at the sample level, and when you use the objects you've made with Core, you're at the block level. That's probably not a bad way to do it.

I think I may have figured out some logic to implement something like a single-sample delay as a block object. That's one of the things I wanted to play around with. There's also the possibility of playing around with templating, where an object can process a vector's worth of data or a single sample, depending on how it's called.

I've been thinking about the front-end, i.e. the editor with the single-button build-and-run system, to make a pseudo-live-coding environment for C++. Since nothing to do with this front-end will ever make it into a JACK application and/or plug-in made with this system, I'm thinking I could implement the front-end in Python instead of C++ and save myself a lot of headache.

Right now, though, I'm agonizing over a physics class lab assignment due today that I can't figure out, and running on 3 hours of sleep as a result.

-- Darren
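The dispatch idea Darren mentions (one object that processes either a vector's worth of data or a single sample, depending on how it's called) can be approximated with plain function overloading; a hypothetical one-pole sketch:

```cpp
#include <cstddef>

// One-pole lowpass whose process() is overloaded for a single sample
// or a whole block; the block form reuses the sample form, so the
// recursion is written exactly once.
class OnePole {
public:
    explicit OnePole(float a) : a_(a), z_(0.0f) {}

    // Per-sample form: y[n] = a*x[n] + (1-a)*y[n-1].
    float process(float x) {
        z_ = a_ * x + (1.0f - a_) * z_;
        return z_;
    }

    // Block form: same state, same math, applied per sample.
    void process(const float* in, float* out, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            out[i] = process(in[i]);
    }

private:
    float a_;
    float z_;  // previous output (filter state)
};
```

A real system might template on the buffer type instead of overloading, and would let the compiler inline and vectorise the block loop, but the caller-facing idea is the same: sample-level semantics, block-level batch calls.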
Re: [LAD] Guide to Linux Sound APIs
On Sun, 28.09.08 15:36, David Cournapeau ([EMAIL PROTECTED]) wrote:
> On Sat, Sep 27, 2008 at 6:14 AM, victor [EMAIL PROTECTED] wrote:
>> There is also PulseAudio, which is quite simple to program and use in
>> simple apps.
>
> What's the percentage of Linux systems which have PulseAudio? I know I
> don't on my system, and it is a very popular one (Ubuntu).

Almost all distributions at least ship it, and all the major ones enable it by default: Fedora does, Ubuntu does, OpenSUSE does.

Lennart

--
Lennart Poettering, Red Hat, Inc.
lennart [at] poettering [dot] net
ICQ# 11060553
http://0pointer.net/lennart/ GnuPG 0x1A015CC4
Re: [LAD] Guide to Linux Sound APIs
On Sun, 28.09.08 09:38, Paul Davis ([EMAIL PROTECTED]) wrote:
>> Also, I guess it depends on how you upgrade, because my workstation
>> is 8.04, which has been upgraded every year or so for two and a half
>> years now, and I don't have PulseAudio. One of the packages I want to
>> add sound support to is for science mostly, and many people are still
>> using Ubuntu Dapper, Fedora 3, etc. So it does not look like
>> PulseAudio is that great if you want to support various Linux
>> versions and have very small audio needs.
>
> As Lennart tried to make reasonably clear, the primary goal of
> PulseAudio is NOT to act as a new API, but to act as a new
> *infrastructure* that supports existing APIs transparently. I am sure
> that he would be happy if it eventually takes over the world and
> everybody writes apps using its API, but that doesn't appear to be the
> goal right now.

The reason why I don't ask application developers to adopt the native PA API at this time is that it is a relatively complex API, since all calls are asynchronous. It's comprehensive and not redundant, but simply too complex for everyone but the most experienced.

Lennart
Re: [LAD] Guide to Linux Sound APIs
That was not my experience. I put together a PulseAudio IO module for Csound using the simple API (pulse/simple.h) in about half an hour. It seemed much simpler than any alternative, and it seemed to do everything I needed from it.

Victor

At 13:59 29/09/2008, you wrote:
> The reason why I don't ask application developers to adopt the native
> PA API at this time is that it is a relatively complex API, since all
> calls are asynchronous. It's comprehensive and not redundant, but
> simply too complex for everyone but the most experienced.
>
> Lennart

Victor Lazzarini
Music Technology Laboratory
Music Department
National University of Ireland, Maynooth
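For flavour, here is roughly what the simple-API path looks like. Only the buffer maths below is runnable here; the libpulse calls are shown in a comment since they need pulse/simple.h and a running daemon, and the client/stream names are made up, not Victor's actual Csound code:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Build one block of 16-bit mono samples, the payload format that
// pa_simple_write() consumes for a stream opened as PA_SAMPLE_S16NE.
std::vector<int16_t> sine_block(double freq, double rate,
                                std::size_t frames) {
    const double pi = 3.14159265358979323846;
    std::vector<int16_t> buf(frames);
    for (std::size_t i = 0; i < frames; ++i)
        buf[i] = static_cast<int16_t>(
            32767.0 * std::sin(2.0 * pi * freq * i / rate));
    return buf;
}

// Against libpulse, the surrounding code would look roughly like this
// (not compiled here; requires pulse/simple.h and -lpulse-simple):
//
//   pa_sample_spec ss = { PA_SAMPLE_S16NE, 44100, 1 };
//   pa_simple* s = pa_simple_new(NULL, "csound", PA_STREAM_PLAYBACK,
//                                NULL, "playback", &ss,
//                                NULL, NULL, NULL);
//   pa_simple_write(s, buf.data(),
//                   buf.size() * sizeof(int16_t), NULL);
//   pa_simple_drain(s, NULL);
//   pa_simple_free(s);
```

The whole loop really is about this small — open, write blocking, drain, free — which is presumably why the module took half an hour, versus the callback-driven asynchronous API Lennart describes above.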
Re: [LAD] Something like Processing for audio
On Sunday 28 September 2008 17:19:53 Darren Landrum wrote:
>> Fons Adriaensen wrote:
>> Why do you expect making music on computers to be easy? Normally, to
>> make music, you need not only the skills you refer to above, but also
>> those to play an instrument. For traditional instruments that takes
>> years of hard work. What makes you think it should somehow be easy or
>> automatic when using a computer?
>
> What the heck?! When I said musical skills, that is *EXACTLY* what I
> meant! By musical skills, I mean the skills of being able to play an
> instrument or two, knowing music theory, harmony, singing, and
> whatever else that really has little to do with computers.

If you want to use digital audio well (which is what computers can help you to do), you need to understand digital audio. If the computer (or maybe rather computation) becomes your instrument of choice to make music, you have to understand how it works. That takes learning too. I think you have to start seeing computation as another instrument to make music, and then you'll understand that it takes time to develop skill at it.

> The point is, back when I messed with this stuff on Windows, it was
> amazingly easy for me to bring up a DAW, load a softsynth into it, and
> start laying things down on the keys. So far, my experience with Linux
> audio has been a lot less satisfying.

I don't know... it has become fairly easy over the last few years to install an audio-targeted Linux distribution, fire up a DAW, fire up some soft synths, connect them through JACK and start playing around. How does this not work for you?

Reading your arguments in the threads of this discussion, you seem to want two opposite things, which I find confusing. I think you'd do well to try out a bunch of different programs that get close to what you are looking for, and then maybe look at whether there is a possibility to extend them to what you want to have.
Starting something from scratch will put you on a long development path, one which all these other programs have already been down... So if you really do have a radically different new approach to audio DSP programming environments, please start designing something new; but to know whether your idea is really radically new, you have to look in depth at the existing ones...

sincerely,
Marije
Re: [LAD] Something like Processing for audio
2008/9/29 nescivi [EMAIL PROTECTED]:
> If the computer (or maybe rather computation) becomes your instrument
> of choice to make music, you have to understand how it works. That
> takes learning too.

The question here is: what to learn? I would not expect a musician to learn programming in order to record audio, nor a synth-sound developer to learn DSP coding techniques. There are different layers of complexity and different approaches, and maybe Darren is right that on Linux the complexity is not hidden enough, i.e. not kept in the background of the task to accomplish, so that easier tasks can soon become confusing or overwhelming, by introducing different techniques and requiring in-depth knowledge.

For example: if I wanted to write a synth plugin, I would not expect, in the first place, to worry about things such as realtime privileges, threads, realtime memory allocation, client-server communication, double-buffering, image-rendering, etc. Unless I wanted to do something completely new, I would hope to have predefined, known-working solutions for most of these requirements, simplified as much as possible by a host application, because otherwise everyone would have to think about them again and again... instead, I'd prefer to concentrate on the actual task. Maybe LV2 etc. have not gone far enough in this sense.

My 2 €-cents,
Emanuel
Re: [LAD] Something like Processing for audio
On Mon, 2008-09-29 at 18:22 +0200, Emanuel Rumpf wrote:
> For example: if I wanted to write a synth plugin, I would not expect,
> in the first place, to worry about things such as realtime privileges,
> threads, realtime memory allocation, client-server communication,
> double-buffering, image-rendering, etc.

yet thousands of VST plugins have been written where people had to worry about precisely these issues, and a veritable industry has grown up around it. does VST address darren's desires? no, it doesn't. but keep in mind that as time keeps ticking by, what used to be entirely acceptable barriers to entry turn into problems that people want to solve.

--p
Re: [LAD] Something like Processing for audio
Hello,
Emanuel Rumpf wrote:
> The question here is: what to learn? I would not expect a musician to
> learn programming in order to record audio, nor a synth-sound
> developer to learn DSP coding techniques.

If we're talking about a "Processing for audio", then programming is exactly what the users of such a tool want to learn. They just like their programming to be less of a chore, i.e. they want Python or Lua instead of C++, and maybe blocks and patchcords instead of vi/Emacs.

From my experience with giving some workshops in this area, I believe that a growing number of artists are looking for software that allows them to build their own specific tools. They are not frightened to learn certain algorithms and how to deal with technical issues, as long as these issues are art-related. They actually like thinking about things like DSP algorithms for sound and video, but they don't want to think about device drivers and freeing memory (too much).

Ciao
--
Frank
Re: [LAD] Guide to Linux Sound APIs
On Monday 29 September 2008, Lennart Poettering wrote:
> The reason why I don't ask application developers to adopt the native
> PA API at this time is that it is a relatively complex API, since all
> calls are asynchronous. It's comprehensive and not redundant, but
> simply too complex for everyone but the most experienced.
>
> Lennart

I believe (there are no docs to confirm or deny this) that it is also hard-coded to pick the first device it finds as the default output device. I relegated the motherboard's simple chipset for use by Skype et al., then installed an Audigy2 for the real utility audio, but PA refuses to use the Audigy2. So it gets nuked, and then sound Just Works(TM).

I'd complain, but there seems to be no path to the actual developers other than Bugzilla, and my Bugzilla entries have been marked WONTFIX. If there is no working path back from the user, who finds his system crippled and has no choice but to nuke as much as he can in order to get any sound, then of course nothing is conducive to actually getting it fixed. Fix that, so there is a working dialog path from the user back to the developer, and maybe it can be made to work.
As it is, the documentation on it is non-existent, and we the users feel like we're battling with M$, a generally futile endeavor, and that is going to lead to a lot of profanity and name-calling. This is, after all, Linux, where choice is a talking point. I'm deliberately trying to be civil, but this is as civil as I can manage after the treatment I've received when I fussed about it.

--
Cheers, Gene
"There are four boxes to be used in defense of liberty: soap, ballot, jury, and ammo. Please use in that order." -Ed Howdershelt (Author)
Practical people would be more practical if they would take a little more time for dreaming. -- J. P. McEvoy