Hello Miguel,

The answers to your questions are all yes. The one application I know of that uses all these features is Swami. It does its own instrument management (the SoundFont loader API of FluidSynth allows this), so instruments are loaded via callbacks. The interface is abstract enough that other instrument formats can be synthesized, although in a SoundFont-centric fashion. The development version of Swami, for example, already supports DLS and GigaSampler to some extent.
If you are interested in the code for the FluidSynth plugin, you can find it here:

http://swami.svn.sourceforge.net/viewvc/swami/trunk/swami/src/plugins/

The wavetbl_fluidsynth.c file contains the interesting bits. The development plugin has undergone some cleanup, but I'm not sure it would be any more useful than the code in the link above. In any case:

http://swami.svn.sourceforge.net/viewvc/swami/branches/swami-1-0/swami/src/plugins/

Swami itself might also be of interest to you (or libInstPatch, on which it is based), since it is also a loadable library and has a Python binding. It's still in development, but libInstPatch is already quite useful for doing instrument editing from Python. The current plan is to provide an interface to FluidSynth for synthesizing other formats.

Regardless of what you choose, do let me know if you have any questions in this regard, since it is one area of FluidSynth I am actually familiar with.

Best regards,
Josh Green

On Wed, 2007-01-31 at 21:02 +0100, Miguel Lobo wrote:
> Hi,
>
> I'm starting a project to write a GPL module tracker (similar in
> spirit, though not in interface, to Impulse Tracker or FastTracker).
> It will be written in Python, and at a minimum I want to support Linux
> and Windows.
>
> I'm looking at the possibilities for sound generation, and so far
> fluidsynth seems the most promising option: it supports Linux and
> Windows, it is maintained, and it seems to implement most of the
> required functionality.
>
> However, from a cursory look at the code I'm not sure whether that
> functionality is exposed in the API. In particular, are there
> interfaces for the following? If not, how difficult would it be to
> implement them?
>
> * Loading samples in "raw" format from memory into the synthesizer.
> * Building an instrument in memory from loaded samples, frequencies,
>   volume envelope, and so on.
> * Getting information from an instrument loaded from an SF2
>   soundfont.
> * Applying effects such as tone portamento and so on directly, i.e.
>   without going through the MIDI controller change interface.
>
> Many thanks,
> Miguel
>
> _______________________________________________
> fluid-dev mailing list
> [email protected]
> http://lists.nongnu.org/mailman/listinfo/fluid-dev
