On 1/22/2012 10:29 PM, Julian Leviston wrote:

On 23/01/2012, at 4:17 PM, BGB wrote:

as opposed to either manually placing samples on a timeline (as in Audacity or similar), or the stream of note-on/note-off pulses and delays used by MIDI, an alternative idea comes up: one has a number of delayed relative "events", which are in turn piped through any number of filters.

then one can procedurally issue commands of the form "in N seconds from now, do this", with commands being relative to a base time (and with the ability to adjust the base time either by a constant value or by how long a certain "expression" would take to finish playing).

likewise, expressions/events can be piped through filters.
filters could either apply a given effect (add echo or reverb, ...), or could be structural (such as to repeat or loop a sequence, potentially indefinitely), or possibly sounds could be entirely simulated (various waveform patterns, such as sine, box, and triangle, ...).

Heya,

Yeah, I've had that idea for a while, though a more comprehensive version (I write music). Take a look at what Apple did to their own product Final Cut Pro to turn it into Final Cut Pro X, and note the rumors surrounding Logic Pro X; I'm pretty sure you'll see that this idea is where Apple will most likely go when they release Logic Pro X.

In Final Cut Pro, they call it their "magic timeline".


I am pretty sure this costs money, though.
In my case, I was using Audacity, partly because it was free...


By the way, what you're describing CAN be done with Ableton Live without much trouble... also, Ableton Live has the ability to use Max for Live, which is Cycling '74's excellent Max/MSP product inlined into a Live instrument (what you're calling various waveform patterns). It's sine, square/pulse, and triangle, by the way, not "box"... and we can also use all sorts of other waveforms... generated or sampled...


I wasn't claiming originality, though (although, if someone has free software for this, I might be more interested).

I knew it was called a square wave, but for whatever reason (a mental slip...) wrote "box" instead. same idea though:
have a wave-generation function, stream things through filters, ...


although not nearly as advanced, this is sort of how the audio mixer in my 3D engine works (it can also use streams and apply filters through streams, ...). it also implements effects like echoes, Doppler shifting, and delays (sounds will not start playing instantly; instead, the mixer may calculate a delay based on how far away the source is, and start playing then).

[ sadly, I don't know of a good way to do "realistic" spatial echoes in real-time, leaving such cheap tricks as "a box which enables echoes"... likely it would involve "some way to calculate a large FIR filter based on the sound source, listener, and intermediate geometry"... ]
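the distance-based start delay mentioned above is just the sound's travel time; a sketch of the arithmetic (plain physics, not the engine's actual code; the 44100 sample rate is an assumed example):

```c
/* sketch: delay a sound's start by its travel time from the source.
   plain physics; not taken from the engine's actual code. */

#define SPEED_OF_SOUND 343.0   /* m/s, in air at roughly 20 C */
#define SAMPLE_RATE    44100   /* assumed example mix rate */

/* seconds before the sound should start, given distance in meters */
double travel_delay(double dist_m) { return dist_m / SPEED_OF_SOUND; }

/* the same delay expressed as a sample offset into the mix buffer */
long delay_in_samples(double dist_m) {
    return (long)(travel_delay(dist_m) * SAMPLE_RATE + 0.5);
}

/* crude Doppler factor: perceived-frequency ratio for a source moving
   toward (+v m/s) or away from (-v m/s) a stationary listener */
double doppler_factor(double v_source) {
    return SPEED_OF_SOUND / (SPEED_OF_SOUND - v_source);
}
```

so a sound 343 m away would be scheduled about one second (44100 samples) into the future rather than mixed immediately.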


the main differences:
all mixer-streams mix directly to the output;
all streams are simple 1:1 filters (no compositing or sequencing);
the mixer is mostly just used for playing in-game sound effects (from sound files); the stream feature is currently only really used as a means of plugging in things like Ogg/Vorbis, MIDI, and text-to-speech playback (feeding samples through streams or layering them isn't really done);
...

probably, if I were to implement something like my idea, though, it would be functionally independent of the 3D engine, and could probably be used as a standalone mixer (also capable of producing output as WAV files, ...).
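the "in N seconds from now, do this" part of the original idea could look something like the following C sketch: events are stored at a movable base time plus an offset, and the base time can be advanced (say, by however long an expression takes to play). again, a hypothetical illustration, not an existing API:

```c
/* sketch: relative event scheduling against a movable base time.
   hypothetical illustration, not an existing API. */

#define MAX_EVENTS 64

typedef void (*event_fn)(void *userdata);

typedef struct {
    double when;        /* absolute time: base_time + offset at issue */
    event_fn fn;
    void *userdata;
} event_t;

typedef struct {
    double base_time;   /* commands are issued relative to this */
    event_t ev[MAX_EVENTS];
    int count;
} scheduler_t;

/* "in 'offset' seconds from the base time, call fn" */
int sched_at(scheduler_t *s, double offset, event_fn fn, void *ud) {
    if (s->count >= MAX_EVENTS) return -1;
    s->ev[s->count].when = s->base_time + offset;
    s->ev[s->count].fn = fn;
    s->ev[s->count].userdata = ud;
    s->count++;
    return 0;
}

/* move the base time forward, e.g. by an expression's play length */
void sched_advance_base(scheduler_t *s, double dt) { s->base_time += dt; }

/* run and remove every event due at or before 'now' (linear scan) */
void sched_run(scheduler_t *s, double now) {
    int i, j;
    for (i = 0; i < s->count; ) {
        if (s->ev[i].when <= now) {
            s->ev[i].fn(s->ev[i].userdata);
            for (j = i; j < s->count - 1; j++) s->ev[j] = s->ev[j + 1];
            s->count--;
        } else {
            i++;
        }
    }
}

/* example event: increment an int counter passed via userdata */
static void ev_count(void *ud) { (*(int *)ud)++; }
```

a real mixer would presumably keep the events in a priority queue and drive sched_run from the audio callback, but the linear scan keeps the sketch short.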


other features I would probably include (forgot to mention earlier):
ability to play back particular notes from particular instruments (probably reusing code from my MIDI synth);
probable inclusion of spatial positioning and attenuation;
potentially: inclusion of or integration with MIDI (say, ability to pipe streams in as instrument patches, and other MIDI extensions);
...

possible implementation routes (should I pursue this idea) include, among other things:
(hackishly) augmenting my existing MIDI library with new features (namely stream-based mixing, most likely via an API independent of the traditional MIDI command stream);
implementing a new library, which may depend on the MIDI library;
simply copy/pasting any relevant code, but otherwise implementing a new/independent library.

the latter 2 options have the merit of reducing likely code tangle (and avoiding polluting the MIDI synth with what would otherwise be functionality unrelated to MIDI), but at the cost of adding another library to the project. (note: as-is, the MIDI synth does not implement streams or delay-based mixing; it merely mixes directly and operates according to time slices, so most of this would be all-new code anyway.)


or such...


_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
