I like this Process Algebra approach to music:

http://www.cosc.brocku.ca/~bross/research/BrazilSCM95.pdf

And here are some tracks:
http://www.cosc.brocku.ca/~bross/research/LMJ/

Op 23 jan. 2012, om 06:17 heeft BGB het volgende geschreven:

> On 1/22/2012 8:57 PM, Julian Leviston wrote:
>> 
>> 
>> On 23/01/2012, at 2:30 PM, BGB wrote:
>> 
>>> little if anything in that area that generally makes me think "dubstep" 
>>> though...
>>> 
>>> (taken loosely enough, most "gangsta-rap" could be called "dubstep" if one 
>>> turns the sub-woofer up loud enough, but this is rather missing the point...).
>> 
>> Listen to this song. It's dubstep. Popular dubstep has been twisted to mean 
>> "brostep", or what Skrillex plays... but this song is original dubstep.
>> 
>> two cents. mine.
>> 
>> http://www.youtube.com/watch?v=IlEkvbRmfrA
>> 
> 
> sadly... my internet sucks too much recently to access YouTube (it, errm, 
> doesn't even try to go, just says "an error occurred. please try again 
> later."...). at this point, it is hard to access stupid Wikipedia or Google 
> without "connection timed out" errors. this is the free internet provided by 
> the apartment complex, where tracert shows it is apparently behind 3 layers 
> of NAT (the local network, and two different "192.169.*.*" network 
> addresses).
> 
> also no access to usenet (because NNTP is blocked), or to FTP/SSH/... (also 
> blocked).
> and sites like StackOverflow, 4chan, ... are black-listed, ...
> 
> but, yeah, may have to pay money to get "real" internet (like, via a 
> cable-modem or similar).
> 
> 
> but, anyways, I have had an idea (for music generation).
> 
> as opposed to either manually placing samples on a timeline (like in Audacity 
> or similar), or the stream of note-on/note-off pulses and delays used by 
> MIDI, an alternate idea comes up:
> one has a number of delayed relative "events", which are in turn piped 
> through any number of filters.
> 
> then one can procedurally issue commands of the form "in N seconds from now, 
> do this", with commands being relative to a base-time (and the ability to 
> adjust the base-time based either on a constant value or how long it would 
> take a certain "expression" to finish playing).
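the "in N seconds from now, do this" part, plus the adjustable base-time, can be sketched as a small event queue. this is only a minimal Python sketch of the idea; every name in it (the EventQueue class, the string actions) is made up for illustration, not part of any real mixer API:

```python
import heapq

class EventQueue:
    """minimal relative-time scheduler: events fire at base_time + offset."""
    def __init__(self, base_time=0.0):
        self.base_time = base_time
        self.heap = []   # entries are (absolute_time, sequence_no, action)
        self.seq = 0     # tie-breaker so equal-time events stay in FIFO order

    def at(self, offset, action):
        """'in `offset` seconds from the current base-time, do `action`'."""
        heapq.heappush(self.heap, (self.base_time + offset, self.seq, action))
        self.seq += 1

    def advance_base(self, delta):
        """shift the base-time, e.g. by how long an expression takes to play."""
        self.base_time += delta

    def drain(self):
        """pop all events in time order, as (absolute_time, action) pairs."""
        out = []
        while self.heap:
            t, _, action = heapq.heappop(self.heap)
            out.append((t, action))
        return out

q = EventQueue()
q.at(0.5, "kick")        # fires at 0.5s absolute
q.advance_base(1.0)      # base-time moves, e.g. after a bar finishes playing
q.at(0.25, "snare")      # relative to the new base: fires at 1.25s absolute
```

a real mixer would pop events as playback time passes rather than draining the whole queue at once, but the relative-to-base-time bookkeeping is the same.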
> 
> likewise, expressions/events can be piped through filters.
> filters could either apply a given effect (add echo or reverb, ...), or could 
> be structural (such as to repeat or loop a sequence, potentially 
> indefinitely); possibly sounds could also be synthesized entirely (various 
> waveform patterns, such as sine, box/square, triangle, ...).
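to make the synthesis/filter idea concrete, here is a rough Python sketch of a simulated waveform generator plus one simple effect filter (a feed-forward echo); the function names and parameters are invented for the example:

```python
import math

def oscillator(shape, freq, sample_rate=44100, duration=1.0):
    """generate one channel of a basic waveform: 'sine', 'box', or 'triangle'."""
    n = int(sample_rate * duration)
    out = []
    for i in range(n):
        phase = (freq * i / sample_rate) % 1.0   # position within the cycle, 0..1
        if shape == 'sine':
            s = math.sin(2 * math.pi * phase)
        elif shape == 'box':                     # square wave: high half, low half
            s = 1.0 if phase < 0.5 else -1.0
        else:                                    # triangle: ramp up, then down
            s = 4 * phase - 1 if phase < 0.5 else 3 - 4 * phase
        out.append(s)
    return out

def echo(samples, delay_samples, decay=0.5):
    """simple feed-forward echo: mix in a decayed, delayed copy of the signal."""
    out = list(samples)
    for i in range(delay_samples, len(out)):
        out[i] += decay * samples[i - delay_samples]
    return out
```

a structural filter (repeat/loop) would just be concatenation over such sample buffers, so effects and structure can share one "filter" interface.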
> 
> the "main mix" would be either a result of evaluating a top-level expression, 
> or possibly some explicit "send this to output" command.
> 
> evaluation of a script would be a little more complex and expensive than 
> MIDI, but really why should this be a big issue?...
> 
> the advantage would be mostly that it would be easier to specify 
> beat-progressions, and individually tweak things, without the pile of 
> undo/redo, copy/pasting, and saving off temporary audio samples, as would be 
> needed in a multi-track editor.
> 
> 
> it is unclear if this would be reasonably suited to a generic script-language 
> (such as BGBScript in my case), or if this sort of thing would be better 
> suited to a DSL.
> 
> a generic script language would have the advantage of making it easier to 
> implement, but would potentially make the syntax more cumbersome. in such a 
> case, most mix-related commands would likely accept/return "stream handles" 
> (the mixer would probably itself be written in plain C either way). my 
> current leaning is toward this option (if I choose to implement this at all).
> 
> a special-purpose DSL could have a more narrowly defined syntax, but would 
> probably make the implementation more complex (and could ultimately hinder 
> usability if its purpose is too narrow, say, because one can't access files 
> or whatever...).
> 
> 
> say, for example, if written in BGBScript syntax:
> var wub=mixLoadSample("sound/patches/wub.wav");
> var wub250ms=mixScaleTempoLength(wub, 0.25);
> var wub125ms=mixScaleTempoLength(wub, 0.125);
> var drum=mixLoadSample("sound/patches/drum.wav");
> var cymbal=mixLoadSample("sound/patches/cymbal.wav");
> 
> //play sequences of samples
> function beatPlay3Play1(a, b)
>     return mixPlaySequence([a, a, a, b]);
> function beatPlay3Play2(a, b)
>     return mixPlaySequence([a, a, a, b, b]);
> 
> var beat0=mixBassBoost(beatPlay3Play2(wub250ms, wub125ms), 12.0);    //add 12dB of bass
> var beat1=mixPlaySequenceDelay([drum, cymbal], 0.5);    //drum and cymbal
> var beat2=mixPlayTogether([beat0, beat1]);
> 
> mixPlayOutput(beat2);    //mix and send to output device
> 
> 
> or such...
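as a rough sanity check of the stream-handle style, here is the same kind of pipeline in Python, treating a "stream handle" as a plain list of samples; the function names echo the BGBScript sketch but are stand-ins, not a real API:

```python
def mix_play_sequence(parts):
    """concatenate sample buffers end to end (play one after another)."""
    out = []
    for p in parts:
        out.extend(p)
    return out

def mix_play_together(parts):
    """sum buffers sample-by-sample, padding shorter ones with silence."""
    n = max(len(p) for p in parts)
    return [sum(p[i] if i < len(p) else 0.0 for p in parts) for i in range(n)]

def beat_play3_play2(a, b):
    return mix_play_sequence([a, a, a, b, b])

wub = [0.5, -0.5]            # stand-in for a loaded sample
drum = [1.0, 0.0, 0.0]       # likewise
beat0 = beat_play3_play2(wub, wub)       # 5 copies of wub, 10 samples total
beat2 = mix_play_together([beat0, drum])
```

since every operation takes buffers and returns a buffer, the composition nests freely, which is the main property the stream-handle approach needs.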
> 
> _______________________________________________
> fonc mailing list
> [email protected]
> http://vpri.org/mailman/listinfo/fonc
