[LAD] Dumb Idea #27: LV2 host as kernel?
If one were to build a kernel for a digital audio workstation that was itself a bare-bones LV2 host, could things like audio tracks, MIDI tracks, mixer channels, and the like be built as LV2 plug-ins? I've been thinking a lot about a comment made a while back about how monolithic applications are very ill-suited to the open-source method of development. So I got to thinking about how an operating system works (at a high level; my meager coding skills are no match for people well-versed in operating systems) and began to ask some questions. This kernel would have to handle things like audio routing and message passing between processes (the LV2 plug-ins), and would jockey the audio in and out of the plug-in graph. It would need to support the GUI and event extensions, and probably a few others, at the very least. The hope might be that if such a kernel could be made, it might then be a lot easier for many people to contribute the small pieces that would make for a usable application. Please feel free to consider this mindless brainstorming if you'd like. -- Darren ___ Linux-audio-dev mailing list Linux-audio-dev@lists.linuxaudio.org http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
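A minimal sketch of the "host as kernel" idea above, with all names (`Node`, `Gain`, `run_graph`) invented for illustration; this is not any real LV2 API, just the shape of a host owning a graph of plug-in-like nodes and jockeying a buffer through them:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical sketch: the host owns a graph of plug-in-like nodes
// and runs audio buffers through them in order. Names are illustrative
// only, not part of any real LV2 API.
struct Node {
    virtual ~Node() = default;
    virtual void process(std::vector<float>& buf) = 0;
};

// A trivial stand-in "plug-in": multiplies the buffer by a gain.
struct Gain : Node {
    float g;
    explicit Gain(float g) : g(g) {}
    void process(std::vector<float>& buf) override {
        for (float& s : buf) s *= g;
    }
};

// The host's audio callback: run every node over the shared buffer,
// in graph order.
void run_graph(std::vector<std::unique_ptr<Node>>& graph,
               std::vector<float>& buf) {
    for (auto& n : graph) n->process(buf);
}
```

A real host would of course need topological sorting of the graph, multiple ports per node, and real-time-safe memory handling; this only shows the control flow.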
Re: [LAD] Dumb Idea #27: LV2 host as kernel?
Adrian Knoth wrote: A last remark: what you call a kernel could be ardour or qtractor one day. There's no use and no need in getting rid of the real OS, let the Linux kernel do the hardware handling for you, implement your LV2 host and do everything else in a LV2 plugin. I was never suggesting that this *replace* the OS kernel. I was merely using that as an analogy. Otherwise, everything you say sounds really good. I do think in the end, however, that Paul is right and that there is a good reason none of this has happened yet. -- Darren
Re: [LAD] Something like Processing for audio
Sorry I've been away. I've been catching up on homework and trying to put a preliminary design together. I should probably let this thread die, but no doubt this subject will just come up again anyway. Frank Barknecht wrote: From my experience with giving some workshops in this area I believe that a growing number of artists is looking for software that allows them to build their own specific tools. They are not frightened to learn certain algorithms and how to deal with technical issues as long as these issues are art related. They actually like thinking about things like DSP algorithms for sound and video, but they don't want to think about device drivers and freeing memory (too much). I think this gets to the heart of the matter quite nicely. The goal is a system where the programming is focused on the processing and the synthesis, but where the language actually compiles to standalone applications. Though I'm not aiming for it to be cross-platform (I don't have the tools on any other platforms), I don't see why it couldn't be made so. At this point, I want to lay out the complete design, then decide what I want to finish for the 0.1 release and get to work on it. -- Darren
Re: [LAD] Something like Processing for audio
Frank Barknecht wrote: One thing with Pd is that it is a block-based realtime system, so some of the techniques you mentioned (i.e. those that involve feedback delay lines) can be impossible to implement in Pd using only the builtin objects (feedback delay lines always have a minimal delay time of one block in Pd). But you can code these objects as C-externals or with Faust for Pd. Yes, I was wrestling with this logic back when my idea was to make a Reaktor-alike for Linux. Reaktor handles this, I think, by having a separate Core layer that works sample by sample. In other words, when you're using the Core objects, you're operating at the sample level, and then when you use the objects you've made with Core, you're at the block level. That's probably not a bad way to do it. I think I may have figured out some logic to implement something like a single-sample delay as a block object. That's one of the things I wanted to play around with. There's also the possibility of playing around with templating, where an object can process a vector's worth of data or a single sample, depending on how it's called. I've been thinking about the front-end, i.e., the editor with the single-button build-and-run system to make a pseudo-live-coding environment for C++. Since nothing to do with this front-end will ever make it into a JACK application and/or plug-in made with this system, I'm thinking I could implement the front-end in Python instead of C++ and save myself a lot of headache. Right now, though, I'm agonizing over a physics class lab assignment due today that I can't figure out, and running on 3 hours of sleep as a result. -- Darren
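The sample-versus-block idea can be sketched like this, using a toy one-pole filter as a stand-in DSP unit (all names are illustrative, not from Reaktor or Pd): one object exposes both a per-sample call, usable inside a feedback loop, and a per-block call built on top of it.

```cpp
#include <cassert>
#include <vector>

// Sketch of the two-level idea: one processor that can be driven
// per-sample (for feedback paths with less than a block of delay)
// or per-block. The one-pole smoother is just a stand-in DSP unit.
struct OnePole {
    float a = 0.5f;   // smoothing coefficient
    float z = 0.0f;   // filter state

    // Sample-level call: usable inside a single-sample feedback loop.
    float tick(float in) {
        z = z + a * (in - z);
        return z;
    }

    // Block-level call: just the sample-level call in a loop, so the
    // two entry points can never drift apart.
    void process(std::vector<float>& buf) {
        for (float& s : buf) s = tick(s);
    }
};
```

The same shape works with templates (a `process` taking either a `float` or a span of floats), which is one way to read the templating idea mentioned above.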
Re: [LAD] Specification issues in open systems
Chris Williams wrote: There's a reason that ReWire (*loosely* a JACK equivalent) slowly became deprecated in favour of VSTIs on Windows. Propellerheads won't even give you the time of day unless you're a registered for-profit corporation with a real product. Even then, they give trouble. Justin Frankel (the Reaper guy) had to argue with them, and he has a registered for-profit corporation with a real product! It will continue to be tolerated, though, as long as Reason remains a popular tool for music, and it *is* quite popular. THAT's why ReWire is dying off more than anything, assuming it is. Steinberg at least made VST an "open" standard (notice the flame-war-avoiding quotes there), allowing anyone to develop plug-ins, even if they're free. No, it's not compatible with the GPL, but that's off-topic for this conversation, I think. As to session state saving, it's not something that *personally* concerns me all that much, provided each component provides the facility for saving its own configuration. Paul's right, though; it really is a big deal on the other OSes. Users are used to saving their project in their DAW of choice and having the DAW remember it, rather than being responsible for saving each piece individually. DSSI provided some capability for this with the 'configure' function / OSC call. It gave the host some handle on how to reconfigure the instrument in question when loading a project. LV2 doesn't even do that, from what I can see. My understanding with LV2 is that all communications between the GUI (whether included with the plug-in or generated by the host) and the plug-in flow through the host, and can be captured, analyzed, and serialized by the host on the fly. Someone please correct me if I'm wrong on that. I would think that means the host definitely *can* bring up an LV2 plug-in with its state information intact. I don't have any knowledge of how difficult it is to do any of this, though. I'm only book smart on the issue. 
-- Darren
Re: [LAD] Specification issues in open systems
Paul Davis wrote: It might surprise you that I probably agree with this point even more than you do :) JACK exists primarily because there was not a suitable plugin API on Linux and because several of us felt it unlikely that there ever would be one. The biggest obstacle of all was the still-unsolved issue of GUI toolkit compatibility. It's remarkable and cool that JACK works as well as it does, and the isolation it provides between processes can be handy. But yeah, if we had had a single GUI toolkit and a decent plugin API ... no JACK would have emerged, probably. Wasn't JACK based at least loosely upon the same concepts as CoreAudio? I seem to remember something about that some time ago. Myself, I'm watching and participating quite eagerly in this conversation, because I would like to write a plug-in or two (or three) and I still don't know which API (JACK, LV2, etc.) I want to focus my energy on. Chances are, I'll be able to choose only one. -- Darren
Re: [LAD] Something like Processing for audio
Paul Davis wrote: To be honest, this is not an avenue that interests me individually. I view these tools as a lot like table saws, routers and jointers: things with immense power that make a lot of tasks way faster and simpler than they would otherwise be, but that do not remove the obligation to develop a set of tool-specific skills. I've never had much interest in designing table saws for people who don't want to know how to use table saws, and I personally don't have a lot of interest in designing software tools for people who don't want to learn how to use them. Who said anything about not wanting to learn how to use table saws? This seems like a bad analogy to me. NI's plug-ins don't make me a better musician. I still have to learn how to use them, but I don't have to know how to code them myself first. That part's already done. I'm more than willing to learn how to use a table saw, but I really don't feel like building my own table saw first. Right now, audio on Linux really does kinda force the latter, in my opinion. I'm genuinely enjoying myself debating this, though. :-) -- Darren Landrum
Re: [LAD] Something like Processing for audio
Fons Adriaensen wrote: Plus the simple observation that once we had the synths that nobody needed to understand whatsoever about, most synth music, with few exceptions, degraded to junk food quality levels. Being good at using a modular synth still didn't require knowing how to design and solder circuit boards, along with understanding the quantum mechanics of how electrons travel through a semiconductor. Making music by coding one's own software synths *is* a lot like that, though. I seriously doubt Paganini ever felt he needed to make his own violin in order to be a better musician. I'm not trying to fob this off onto someone else. I would actually like to start this project myself, even though I'm a lousy coder and my code will probably look horrible. At this point, though, I fail to see the point, as it appears I would be completely on my own and nobody would care what I come up with. So I may as well stick to doing whatever I can on my own, make my music, and stop caring whether anyone else cares how. You've already basically accused me of not being able to play just because I want to use software synthesis. -- Darren
Re: [LAD] Something like Processing for audio
Sorry for starting this entire argument. I'm just tired of getting nowhere with all of the same tools that everyone else seems to have no problem with. I have a very bad habit of putting myself under a great deal of pressure to exceed everyone's expectations of me. Look, I know that everything I'm asking for exists on the Linux platform. The problem is, it doesn't all exist in one place, or under a single language. I'm convinced at this point that starting over from scratch with a solid design is preferable to trying to use several disparate tools and somehow glue them all together. I've already played around with code here and there to try out some different approaches to this problem, but nothing that I've bothered keeping around. Starting tonight, I'm going to draft a very detailed design, create a code hosting account somewhere (probably Google Code), and get started. I will keep the list apprised of any progress with regular emails. It's been pointed out to me that many people on the list seem to think that I'm trying to get someone else to code this for me. That is not and never was my intention, and I apologize for any miscommunication on my part. I am a very slow and not very good coder, though, and it might take a little while to see any progress. First things first, though: a solid design. -- Darren
Re: [LAD] Specification issues in open systems
Fons Adriaensen wrote: First, why should a complete instrument, taking in MIDI and producing audio, be a plugin in Rosegarden or any other sequencer ? It would be much more useful as a standalone app, and probably *a lot* easier to develop. I wouldn't think for even a fraction of a second to write Aeolus as a plugin - it would be an exercise in self-torture of the third degree. Except the biggest advantage of plug-ins is session state saving: when I have one master app that stores the states of all of my plug-ins, I can save out the session and recall it later exactly as I saved it. Where is the functionality in JACK for that? I know that LASH has been making headway on that issue, but my understanding is that it has been an uphill battle. Believe it or not, this is a major showstopper for a lot of people. When I save out a session and pull it up later, I want it to come back up the way it was when I saved it. I don't want to have to mess with bringing up every program I was using and finding the preset I was using in each one. Of course, my experience with JACK is limited, and if it turns out that session state saving is in there, then I simply haven't found it yet, and you can ignore this email. Indeed, you can take this email for all it's worth. I've just about gotten to the point where I've stopped caring. -- Darren Landrum
Re: [LAD] Specification issues in open systems
Paul Davis wrote: something must be going wrong with the world darren. we're in agreement with each other twice in the same month :)) It must be something in the water. :-P So... why couldn't session states be saved as part of JACK? I realize it can be argued that it isn't within the scope of JACK, but... isn't it, kinda? It's an important feature, and it has to get implemented *somewhere*. Of course, it makes the most sense that session states should be saved and recalled by the host. Isn't that also part of what LASH is trying to accomplish? Unfortunately, I have to plead ignorance here, as I'm no coder, just a math head trying to make some new plug-ins, and getting nowhere, I might add. I still regard LV2 as a potentially powerful system for creating and handling virtual instruments and effects, but the right extensions (events, MIDI, and UI) would have to be implemented by the popular hosts. That's my largely ignorant opinion, anyway. -- Darren Landrum
Re: [LAD] Specification issues in open systems
Fons Adriaensen wrote: Well, a 'rich' plugin standard has to provide almost everything that the operating system provides: audio, midi, GUI, network,... So why not use the system as your host ? All it takes is a good session manager. This is clearly a repeating theme here. Is LASH the solution to this issue, then? I remember looking at the documentation for it and thinking it didn't look too difficult to implement. Reaktor works by having a standalone app for designing new ensembles (a complete instrument, effect, or combination thereof), and the VST plug-in is basically the core engine with the GUI engine running the ensemble without all of the graph-y back-end editing features. I don't know any of the details of how they made this work. I get the impression that the Emu Emulator X/X2 sampler works the same way. Gigasampler is not a plug-in, but used Propellerhead's ReWire, perhaps the closest analog to JACK on Windows. ReWire, though, can save the state of the slave programs wired into the host app. I don't know how they accomplish this. (Big fat lot of help I am, I know.) I'd still like to think that there is still an innovative solution to this problem, and that we are the ones destined to find it. Time for some brainstorming, perhaps? -- Darren Landrum
Re: [LAD] Specification issues in open systems
Darren Landrum wrote: I'd still like to think that there is still an innovative solution to this problem, and that we are the ones destined to find it. Time for some brainstorming, perhaps? Sorry for replying to my own message. If something like this is to be solved, it should be tied to the host, I think. In other words, the state of my previous session has to restore itself upon re-loading the session file in Ardour|Qtractor|Rosegarden (take your pick). That means the host has to have the ability to run the other programs and set up the JACK graph connections. Is this even remotely possible? -- Darren Landrum
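One hedged sketch of what "run the other programs and set up the JACK graph connections" could mean in practice. It assumes the stock `jack_connect` command-line utility that ships with JACK; the `Session` structure and `restore_plan` function are invented purely for illustration, and the sketch only builds the commands rather than executing them:

```cpp
#include <string>
#include <utility>
#include <vector>

// Hypothetical session record: which programs to relaunch and which
// JACK port pairs to reconnect. Names invented for illustration.
struct Session {
    std::vector<std::string> commands;                        // programs to relaunch
    std::vector<std::pair<std::string, std::string>> wires;   // JACK port pairs
};

// Build the shell commands a host/session manager would run to bring
// the session back: launch each client, then rewire the graph with
// the stock `jack_connect` tool.
std::vector<std::string> restore_plan(const Session& s) {
    std::vector<std::string> plan;
    for (const auto& c : s.commands)
        plan.push_back(c + " &");  // launch each client in the background
    for (const auto& w : s.wires)
        plan.push_back("jack_connect " + w.first + " " + w.second);
    return plan;
}
```

The hard parts a real session manager (LASH and its successors) has to solve are exactly what this glosses over: waiting for each client's ports to actually appear, and getting each client to reload its own internal state.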
Re: [LAD] Vamp compilation issues (Kubuntu 64-bit with RT kernel)
Darren Landrum wrote: Chris Cannam wrote: And not true. The build log you just quoted even showed the -fPIC option in it! I actually answered your question on the Vamp forum, but to recap: it's an error in the Makefile, which was shipped with a line that was unwise and should not have been included. If you look in the vamp-plugin-sdk Makefile, you'll see two consecutive lines both starting PLUGIN_LIBS = ..., the first of which is commented out and the second uncommented. Just comment out the second one and uncomment the first instead. Oh, and it did compile and install just fine, once I fixed the Makefile as you described. Thank you! I also went ahead and recompiled and reinstalled Rubber Band. Now it's time to figure out how to use all of this new, nifty software. Regards, Darren Landrum
[LAD] Vamp compilation issues (Kubuntu 64-bit with RT kernel)
I posted this exact message to the Vamp forum, noticed the dates of the last posts there, and decided to try here instead. I'm having a most unusual problem compiling the Vamp-SDK on my machine. This has actually been an ongoing problem for some time now. Looking over the Vamp forums, I see one other person seems to have successfully compiled and installed Vamp on his 64-bit Ubuntu install, so I'm wondering what keeps going wrong here. The first thing I did was to edit the Makefile to install to /usr instead of /usr/local. That was simple enough. Then when I run make, I get the following output (notice the error at the end): - g++ -O2 -Wall -I. -fPIC -c -o vamp-sdk/PluginAdapter.o vamp-sdk/PluginAdapter.cpp g++ -O2 -Wall -I. -fPIC -c -o vamp-sdk/RealTime.o vamp-sdk/RealTime.cpp ar r vamp-sdk/libvamp-sdk.a vamp-sdk/PluginAdapter.o vamp-sdk/RealTime.o g++ -O2 -Wall -I. -fPIC -c -o vamp-sdk/PluginHostAdapter.o vamp-sdk/PluginHostAdapter.cpp g++ -O2 -Wall -I. -fPIC -c -o vamp-sdk/hostext/PluginBufferingAdapter.o vamp-sdk/hostext/PluginBufferingAdapter.cpp g++ -O2 -Wall -I. -fPIC -c -o vamp-sdk/hostext/PluginChannelAdapter.o vamp-sdk/hostext/PluginChannelAdapter.cpp g++ -O2 -Wall -I. -fPIC -c -o vamp-sdk/hostext/PluginInputDomainAdapter.o vamp-sdk/hostext/PluginInputDomainAdapter.cpp g++ -O2 -Wall -I. -fPIC -c -o vamp-sdk/hostext/PluginLoader.o vamp-sdk/hostext/PluginLoader.cpp g++ -O2 -Wall -I. 
-fPIC -c -o vamp-sdk/hostext/PluginWrapper.o vamp-sdk/hostext/PluginWrapper.cpp ar r vamp-sdk/libvamp-hostsdk.a vamp-sdk/PluginHostAdapter.o vamp-sdk/hostext/PluginBufferingAdapter.o vamp-sdk/hostext/PluginChannelAdapter.o vamp-sdk/hostext/PluginInputDomainAdapter.o vamp-sdk/hostext/PluginLoader.o vamp-sdk/hostext/PluginWrapper.o vamp-sdk/RealTime.o ranlib vamp-sdk/libvamp-sdk.a ranlib vamp-sdk/libvamp-hostsdk.a g++ -static-libgcc -shared -Wl,-Bsymbolic -Wl,-soname=libvamp-sdk.so.1 -o vamp-sdk/libvamp-sdk.so vamp-sdk/PluginAdapter.o vamp-sdk/RealTime.o g++ -static-libgcc -shared -Wl,-Bsymbolic -Wl,-soname=libvamp-hostsdk.so.2 -o vamp-sdk/libvamp-hostsdk.so vamp-sdk/PluginHostAdapter.o vamp-sdk/hostext/PluginBufferingAdapter.o vamp-sdk/hostext/PluginChannelAdapter.o vamp-sdk/hostext/PluginInputDomainAdapter.o vamp-sdk/hostext/PluginLoader.o vamp-sdk/hostext/PluginWrapper.o vamp-sdk/RealTime.o g++ -O2 -Wall -I. -fPIC -c -o examples/SpectralCentroid.o examples/SpectralCentroid.cpp g++ -O2 -Wall -I. -fPIC -c -o examples/PercussionOnsetDetector.o examples/PercussionOnsetDetector.cpp g++ -O2 -Wall -I. -fPIC -c -o examples/AmplitudeFollower.o examples/AmplitudeFollower.cpp g++ -O2 -Wall -I. -fPIC -c -o examples/ZeroCrossing.o examples/ZeroCrossing.cpp g++ -O2 -Wall -I. 
-fPIC -c -o examples/plugins.o examples/plugins.cpp g++ -static-libgcc -shared -Wl,-Bsymbolic -Wl,--version-script=vamp-plugin.map -o examples/vamp-example-plugins.so examples/SpectralCentroid.o examples/PercussionOnsetDetector.o examples/AmplitudeFollower.o examples/ZeroCrossing.o examples/plugins.o vamp-sdk/libvamp-sdk.a /usr/lib/gcc/x86_64-linux-gnu/4.2.3/libstdc++.a /usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/4.2.3/libstdc++.a(functexcept.o): relocation R_X86_64_32 against `std::bad_typeid::~bad_typeid()' can not be used when making a shared object; recompile with -fPIC /usr/lib/gcc/x86_64-linux-gnu/4.2.3/libstdc++.a: could not read symbols: Bad value collect2: ld returned 1 exit status make: *** [examples/vamp-example-plugins.so] Error 1 - Is it trying to tell me that libstdc++ is compiled incorrectly on my machine? It's the one that comes with every Ubuntu 64-bit system. Any help would be greatly appreciated. Thank you! My system: AMD64 X2 with Kubuntu 64-bit, 2.6.24-19-rt kernel, 2GB DDR-800 RAM, SATA2 hard disks. Regards, Darren Landrum
[LAD] LV2 in Ardour 3?
I seem to recall some discussion involving the implementation of LV2 as a part of Ardour 3, along with the MIDI functionality. I was just hoping to confirm whether this is true or not. If it is, what LV2 extensions are going to be supported? I'm really hoping for MIDI, GUIs, and probably the port grouping as well. Thanks for the help! Regards, Darren Landrum
Re: [LAD] Writing a library?
[EMAIL PROTECTED] wrote: It's not XML, it's a sort of flat-text-ish thing with various keywords for setting keys, keygroups, mutegroups and so on. Having briefly skimmed the spec over lunch, I'm not in much of a position to say how good it is, but it looks right. Essentially an SFZ file is a text file describing what to do with a bunch of .wav or .ogg files. It's almost worryingly clueful. It wasn't actually created by Cakewalk, but by a small company that Cakewalk had bought, and rather than closing up or squelching SFZ, they decided to keep it open. That's the story as best I was able to divine it, anyway. And now with the news that Tascam is discontinuing all Gigastudio-related development (http://www.filmmusicmag.com/?p=1738 and confirmed on the Legacy section of Tascam's web site: http://www.tascam.com/legacy;37,7.html), it's possible that SFZ might become a new standard for sample libraries to use. Garritan is apparently releasing, or getting set to release, their libraries in SFZ now. My issue now, though, is that I clearly do not have the skills to create a good, usable library. Nor do I want to; I'd rather spend that time creating a working application, even if it's a monolithic one. I'm here because I'm broke and scratching my own itch. If I had the money, I'd go off and buy NI Komplete and be happy actually making music. It can be argued that learning programming and DSP is making me a better person, but it certainly isn't making me a happier one. So now I'm working out a plan for a code framework for making software synths and samplers, likely directed-graph based. I might release that framework separately, but no one will likely get a library out of me. I can only do what I can do with the tools I have, and scratching my own itch comes first. Maybe someone with those skills would like to jump in and help? I realize that code says more on this forum than talk, but surely I can try to gather a team together for a larger project. Right? Maybe? 
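For reference, a minimal fragment in the flat-text SFZ style being described (the file names and values here are invented; the `<group>`/`<region>` headers and `key=value` opcodes are the general shape of the format):

```
// Illustrative SFZ fragment: each <region> maps a sample file
// to a key range; <group> opcodes apply to the regions below it.
<group> ampeg_release=0.4
<region> sample=piano_c4.wav lokey=60 hikey=62 pitch_keycenter=60
<region> sample=piano_e4.wav lokey=63 hikey=65 pitch_keycenter=64
```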
-- Darren Landrum
Re: [LAD] Writing a library?
Julien Claassen wrote: Hi Darren! I'd still suggest going with LinuxSampler. There's a basic framework already. I'm not the skillful programmer myself, otherwise I'd like to help. But reasons for my point: 1. LS already has MIDI and audio drivers working. 2. LS offers a clear structure and an API to go by. 3. LS is in use already. 4. It already has two GUIs and is probably getting more. 5. MY OWN HERE: It's usable for blind people as well with relative ease. 6. I think the people there are a helpful and nice crowd. So six nice reasons to go that way. Perhaps you can also rely on code already written, like take a look and produce similar code in parts, and there are people who know the framework and the matter. And it won't be another standalone app to maintain and adapt to every innovation in the audio world, like audio/MIDI driver APIs changing, etc... One of the delicate remarks: if you don't get along well with LinuxSampler's license, you could make your engine a separate package and say it's LGPL. Is that correct? Some backup. No licensing discussion, just a true or false statement. PLEASE! :-) Kindest regards, Julien I'm not a CS person, I'm a math and engineering person. I truly and honestly believe that it will be easier for me to start from scratch than to try to wrap my head around someone else's codebase. That being said, I mentioned starting with a code framework that would allow the creation of any kind of synth or sampler, not just the one I have in mind (which is inspired by the upcoming Omnisphere more than anything else). I may not be a CS guy, but I do understand the value of planning in advance. Nevertheless, anyone who quotes John Miles in his sig must be a cool person, so I'll certainly wait a bit for other ideas and advice before barreling off on my own. Thank you for the reply. -- Darren Landrum
[LAD] Writing a library?
I've been looking around for a library to read and write SFZ files, an open sampler format released by Cakewalk: http://www.cakewalk.com/DevXchange/sfz.asp Finding none, I thought I might try my hand at writing a library for this myself, as there is no embedded wave information like with Gig files. SFZ is simply a text file to be parsed. Now, I know about writing a good header file, and its associated class, and all that, but I have no knowledge of how to write it as a dynamic library. Google searches on every possible permutation have been worthless to me as well. I would prefer to write it in C++, as that's what I know, and even then not too well, hence my thought to start with something simple like parsing a text file. If anyone has any advice, recommendations, or ideas, I'll happily listen and learn. I have yet to think too much about how the data will be stored in the class, and what methods to make available to access it, so if anyone knows any best practices there, I'd really like to know. Consider this a feeler post. I'd ultimately want this for a future project, which you can guess at by now. Thank you for the help! Regards, Darren Landrum
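On the dynamic-library question: the usual recipe on Linux is to compile the objects with `-fPIC` and link them with `-shared`. A minimal sketch follows, with the parsing deliberately oversimplified and all names (`parse_opcodes`, the file names in the build comment) invented for illustration; this is nothing like the full SFZ grammar, just the `key=value` opcode splitting an SFZ reader would need somewhere:

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

// Toy opcode splitter: breaks a whitespace-separated line of
// "key=value" tokens into a map. Real SFZ parsing must also handle
// <group>/<region> headers and sample paths containing spaces.
//
// To build this translation unit as a dynamic library, a typical
// recipe (file names hypothetical) would be:
//   g++ -fPIC -c sfzparse.cpp -o sfzparse.o
//   g++ -shared -Wl,-soname,libsfzparse.so.0 -o libsfzparse.so.0 sfzparse.o
std::map<std::string, std::string> parse_opcodes(const std::string& line) {
    std::map<std::string, std::string> out;
    std::istringstream in(line);
    std::string token;
    while (in >> token) {
        auto eq = token.find('=');
        if (eq != std::string::npos)
            out[token.substr(0, eq)] = token.substr(eq + 1);
    }
    return out;
}
```

The `-fPIC` flag is what produces position-independent code, which is exactly what the linker was complaining about in the Vamp thread above when a non-PIC static archive was pulled into a shared object.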
Re: [LAD] Prototyping algorithms and ideas
Kjetil S. Matheussen wrote: Thanks, that was simple. I'll try figuring out the rest myself. But what about resampling? The main signal usually needs to be resampled up 5-10 times to get a decent sound. Can I do that with Faust? Something like: process = resample(5,d) I'm quite curious about this as well. How do tools like Faust and CLAM handle up- and down-sampling? -- Darren
Re: [LAD] Prototyping algorithms and ideas
Stéphane Letz wrote: Have a look at Faust: http://faust.grame.fr/ Oh, hey! I'd forgotten about Faust. I might have to give that one a go. Thanks! And thanks to everyone else who replied. Csound was already on my short list, but I'm having trouble getting it working on my AMD64 system for some reason, probably my fault. -- Darren