Re: [fonc] Beats
Hi all--

> live-coding is very boring to watch and there is little connection between what is seen (the source code in a text editor) and what is heard.

I'll chime in with my take on this... :)

http://netjam.org/quoth

-C

--
Craig Latta
www.netjam.org/resume
+31 06 2757 7177
+1 415 287 3547

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Beats
Somewhat similar to this discussion, but more focused on live-coding performances than on studio production, is Text, which was presented by its author at a local computer meetup recently[1]. It's a drag-'n'-drop interface where component names and parameters are typed in and connected together, similar to PureData, but the backend behaves in a different way: all of the text that is entered is Haskell code (e.g. function names), and the connections determine the (partial) order of the tokens.

The really bizarre part is that the connections are not specified by the user. They are determined automatically, based on the proximity of the components (things which are close together get connected, e.g. by one becoming a parameter to the other) and on their types (components are only ever connected in ways that are type-safe, so the program always* has correct semantics).

The reasoning given for creating Text is that live-coding is very boring to watch and there is little connection between what is seen (the source code in a text editor) and what is heard. With Text, the flow of the code and the meaning of the manipulations are more intuitive for the audience. Also, the chaotic nature of the connections gives a lot of unintended output, which the coder can incorporate and follow.

Note that it's not just a music-creation environment: it can be used to write any Haskell code (although it only seems useful for artistic creations, rather than for tasks with more rigid requirements).

Thanks,
Chris

[1] http://yaxu.org/text-update-and-source/

* Not sure how rigorous this is; it might just work most of the time.
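[Editorial sketch] The proximity-plus-types wiring described above can be illustrated with a toy model. This is emphatically not Text's actual algorithm (and it's Ruby, not Haskell); every name, position, and type below is invented for the example. The idea shown: each on-screen token has a 2-D position and a type, and each value-producing token attaches to the nearest function token that accepts its type.

```ruby
# Toy illustration of proximity-based, type-safe wiring, loosely
# inspired by the description of Text above. All names and types here
# are invented; this is not Text's real implementation.

Token = Struct.new(:name, :pos, :in_types, :out_type)

def dist(a, b)
  Math.sqrt((a[0] - b[0])**2 + (a[1] - b[1])**2)
end

# Wire each value-producing token (no inputs) to the nearest function
# token that declares a parameter of a compatible type.
def connect(tokens)
  producers = tokens.select { |t| t.in_types.empty? }
  consumers = tokens.reject { |t| t.in_types.empty? }
  edges = []
  producers.each do |p|
    best = consumers
             .select { |c| c.in_types.include?(p.out_type) }
             .min_by { |c| dist(p.pos, c.pos) }
    edges << [p.name, best.name] if best
  end
  edges
end

tokens = [
  Token.new("sound",   [0, 0], [],                :pattern),
  Token.new("striate", [1, 0], [:int, :pattern],  :pattern),
  Token.new("4",       [1, 1], [],                :int),
]
connect(tokens)   # each value attaches to the nearest type-compatible function
```

Note how "4" can only ever land in the :int slot, so any layout yields a type-correct program, matching the "always* has correct semantics" claim above.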
Re: [fonc] Beats
Ian, as an excellent musician, is making the big important point here: musical time is not about integer ratios. It is often wrongly taught that way, but it is actually about meaning, pulse, emphasis, and phrasing. Musical notation is not a program to be followed literally, but hints from the composer to the player, much as the script of a play is to an actor from the playwright.

To get from hints to something that sounds musical requires a fair amount of knowledge and taste -- again similar to the difference between an even, monotone delivery of words from the printed page and the soaring delivery of a skilled actor. Many of these can be represented by heuristics, more parameters, plus a little more outside advice. For example, the score-authoring system Sibelius has several performing modes, and some of these are great improvements over metronomic playback.

I'm guessing that something could be learned in Fonc by thinking about interpreters that actually can interpret.

Cheers,
Alan

From: Cedric Roux s...@free.fr
To: Fundamentals of New Computing fonc@vpri.org
Sent: Tue, May 17, 2011 12:11:44 AM
Subject: Re: [fonc] Beats

> Josh McDonald j...@joshmcdonald.info wrote:
>> Thought you guys would get a kick out of this YAML-WAV sequencer written in Ruby: https://github.com/jstrait/beats
>
> Hey, I made a little drum machine at some point of my life: http://sed.free.fr/vdm
>
> You write something like:
>
>     void main_rhythm(int argc, char **argv)
>     {
>         tempo = (double)atoi(argv[1]);
>         while (1) {
>             * b * b * b c
>         }
>     }
>
> and you have a three-beat metronome after compilation. Polyrhythms are supported, as in:
>
>     void main_rhythm(void)
>     {
>         int i;
>         tempo = 30;
>         * c
>         for (i = 0; i < 5; i++) {
>             * ( b . b . b ) ( a . a )
>         }
>         * c
>     }
>
> Sorry if OT.
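[Editorial sketch] The core of a notation like Cedric's can be approximated in a few lines of Ruby. The semantics assumed here (each token in the pattern is one beat at the given tempo) are my reading of the example, not his documented behavior:

```ruby
# Sketch in the spirit of the drum-machine notation quoted above.
# Assumed semantics (my reading, not Cedric's documentation): each
# whitespace-separated token is one beat; tempo is beats per minute.

def schedule(pattern, tempo_bpm)
  beat = 60.0 / tempo_bpm
  pattern.split.each_with_index.map do |tok, i|
    { time: (i * beat).round(3), sample: tok }   # when to trigger which sample
  end
end

events = schedule("* b * b * b c", 120)
# one event every 0.5 s at 120 BPM
```

This is exactly the "metronomic" rendering Alan is criticizing, of course; a more musical interpreter would perturb those times and velocities according to phrasing heuristics rather than emit them on a rigid grid.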
Re: [fonc] Beats
On 5/16/2011 9:22 PM, Ian Piumarta wrote:
> Thanks for posting this!
>
>> Thought you guys would get a kick out of this YAML-WAV sequencer written in Ruby: https://github.com/jstrait/beats
>
> I think this is pretty cool. (It puts us well on the way to archiving the entire output of Kraftwerk as ASCII files. ;-) However... Music is one area where direct manipulation clearly wins over the command line. So... I'm curious what you (generically) think about what's missing from this representation, and how it might be added back, to reach the expressiveness of (for example) a well-made MIDI track. (The largest amount of time assembling a nice-sounding MIDI track is spent not inputting the basic timing and pitch/instrument information but rather tweaking the velocities, expression, etc., to make it sound like humans are performing.)

FWIW, in some of my own stuff I had converted MIDI (fairly directly) to/from an ASCII representation, and did a few things with it. Unlike a few other (vaguely similar) things I had seen, I didn't really abstract over it much at all, so the ASCII representation was not all that far removed from the binary command stream.

I considered the possibility of using this to create original music, but, not being particularly musically inclined, it never really got much beyond being a drum machine.

More elaborate was, at one point, trying to use MIDI-style audio synthesis in a TTS (text-to-speech) engine, but the result was nowhere near comprehensible (I was basically trying to use MIDI to represent the formant-control data). Note that this attempt used extended samples (via an extended sample bank), with most of these extended samples containing derived formants and similar.

IIRC, I had at one point considered a tweak (to the synthesizer) to also support a channel bank, so that one could use 32 or 64 channels in a track (vs. 15 or so), and maybe have multiple percussive instruments at once. I never got to this, though. Likewise for allowing customizable (per-track) sample banks, so that arbitrary sound effects could be used in addition to the standard samples (currently there is a compromise, but it involves registering sounds with the synthesizer via API calls, and these samples are global rather than track-specific).

As-is, in my case, my MIDI synthesizer, main mixer, and TTS engine ended up remaining separate components, each doing different things: MIDI manages music; TTS manages speech synthesis; and the mixer manages assigning audio streams to point sources (accounting for spatial location/velocity, doing Doppler shifting, and so on). This is as opposed to, say, some sort of unified super-mixer. It means that one would presently need to play back multiple MIDI streams to place instruments in different spatial locations (but nothing near this elaborate is presently done; I mostly use MIDI for unattenuated background music and similar in a 3D engine).

Yeah, I wrote pretty much all my own code here, as I guess I am funny that way (well, and I want to keep full legal control over my code, which generally means staying clear of the GPL and similar).

> I'm also curious what you (generically) think could/should be added to this to make a full-blown sequencing language, capable of representing (e.g.) anything that can be programmed/manipulated graphically in something like Ableton Live. I've always had a slightly frustrating experience with Ableton (and Garbage Band, etc.), feeling that the semantic content of an assembled track is a lot less than the amount of manipulation required to achieve the final result: copy and paste is a (very) poor substitute for subroutines! On the other hand, I have no idea if a written representation could be much more (or even anything like) as concise. Maybe a combination of the two is needed? FWIW, it's worth following the link to the author's other projects. Degrafa, in particular, is very interesting.

Can't say... little personal experience here.

Some of my efforts (such as the TTS experiment) did involve printing the textually serialized ASCII representation, which was generally handled in C code (IIRC, largely by the usual combination of while() loops and switch() blocks).

A limitation, though, is the inability to produce multiple independent streams and have them combined by the synthesizer just prior to (or during) playback (as-is, it would require using the API to convert them all into binary globs, then other calls to merge them into a single track, which would then be played). Granted, this may not matter as much in musical settings, since there it is more a matter of composition than of needing real-time control over the output. But, as-is, music is still not really my area...
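[Editorial sketch] For readers who haven't seen such a thing, an ASCII representation sitting close to the MIDI command stream might round-trip like this. The line layout below is invented for illustration; the actual format from the message above is not shown in the thread:

```ruby
# Round-tripping a MIDI-ish note-on event through a one-line ASCII
# form. The field layout is invented for this example only.

Event = Struct.new(:tick, :channel, :note, :velocity)

def to_ascii(ev)
  format("%d on ch=%d n=%d v=%d", ev.tick, ev.channel, ev.note, ev.velocity)
end

def from_ascii(line)
  m = line.match(/\A(\d+) on ch=(\d+) n=(\d+) v=(\d+)\z/)
  Event.new(*m.captures.map(&:to_i))
end

ev = Event.new(480, 9, 36, 100)   # a kick-drum hit on the percussion channel
from_ascii(to_ascii(ev)) == ev    # the text form carries the same information
```

The point of such a thin encoding is exactly what the message says: it stays editable as text without abstracting away from the binary command stream.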
Re: [fonc] Beats
Cool! I've been hoping to see some more multimedia stuff happen for Ruby, and I actually like the little DSL they've got going there: it's very visual, and a grid is perfect when what you're emulating is a drum machine, which usually has a grid interface or some such and doesn't know about inexact timing like a drummer does. It looks like fun... too many shiny distractions :)

On Mon, May 16, 2011 at 8:21 PM, Josh McDonald j...@joshmcdonald.info wrote:
> Thought you guys would get a kick out of this YAML-WAV sequencer written in Ruby: https://github.com/jstrait/beats
>
> --
> Therefore, send not to know
> For whom the bell tolls.
> It tolls for thee.
>
> Josh 'G-Funk' McDonald
> - j...@joshmcdonald.info
> - http://twitter.com/sophistifunk
> - http://flex.joshmcdonald.info/

--
Casey Ransberger
Re: [fonc] Beats
I really liked the idea mentioned on Hacker News of using a numeric value in place of the X to indicate velocity. I am going to mess around with a really simple web interface for this over the weekend.

On 5/17/11, Casey Ransberger casey.obrie...@gmail.com wrote:
> Cool! I've been hoping to see some more multimedia stuff happen for Ruby, and I actually like the little DSL they've got going there: it's very visual, and a grid is perfect when what you're emulating is a drum machine, which usually has a grid interface or some such and doesn't know about inexact timing like a drummer does. It looks like fun... too many shiny distractions :)

--
Sent from my mobile device
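[Editorial sketch] The numeric-velocity idea is easy to prototype. A minimal Ruby sketch, assuming (this mapping is my invention, not part of Beats) that "X" means full velocity, a digit 1-9 scales velocity with 9 as maximum, and "." is a rest:

```ruby
# Parse one track of a Beats-style step grid where a digit replaces
# "X" to carry velocity. The 1-9 scale is an assumption made for this
# example, not the Beats project's actual syntax.

def parse_track(grid)
  grid.chars.each_with_index.map do |ch, step|
    case ch
    when "X"      then { step: step, velocity: 1.0 }
    when "1".."9" then { step: step, velocity: ch.to_i / 9.0 }
    end                               # "." and anything else: rest -> nil
  end.compact
end

parse_track("X...7...X...4...")
# full-velocity hits on steps 0 and 8, softer hits on steps 4 and 12
```

One nice property of keeping it a single character per step is that the grid's visual alignment, which Casey praises above, survives unchanged.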
Re: [fonc] Beats
On 18/05/2011, at 8:06 AM, Casey Ransberger wrote:
> Here's something ironic: we've instead focused on ways to *correct* human error in music. Pitch correction for your vocals, but don't use too much, or you'll sound like a fax machine (unless that's what you're going for, in which case you turn it up all the way). I love how the kids these days call it autotune and recognize the sound of it right away.
>
> And then there's quantization, which is what you do to a recording to fix it when your timing was wrong. Among the most interesting slices of life I've seen was a drummer who totally lost it and went aggro when his guitarist said "you want me to quantize that for you?"

They call it autotune because the first tool that did this form of automatic pitch correction in any widespread way was the product Antares Auto-Tune.

Julian.
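[Editorial sketch] Quantization, as mentioned above, is mechanically simple: snap each recorded onset time to the nearest grid line. A minimal sketch, with the grid given as a subdivision length in seconds:

```ruby
# Snap recorded onset times (seconds) to the nearest multiple of
# `grid` (the subdivision length in seconds). This is the basic
# operation the drummer in the anecdote above objected to.

def quantize(onsets, grid)
  onsets.map { |t| ((t / grid).round * grid).round(3) }
end

quantize([0.02, 0.49, 1.07, 1.51], 0.5)
# sloppy hits land exactly on the eighth-note grid
```

Full quantization discards exactly the expressive timing Alan describes earlier in the thread, which is why sequencers usually offer a partial-strength version (move each onset only some fraction of the way toward the grid).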
Re: [fonc] Beats
Dear Josh,

Thanks for posting this!

> Thought you guys would get a kick out of this YAML-WAV sequencer written in Ruby: https://github.com/jstrait/beats

I think this is pretty cool. (It puts us well on the way to archiving the entire output of Kraftwerk as ASCII files. ;-) However... Music is one area where direct manipulation clearly wins over the command line. So...

I'm curious what you (generically) think about what's missing from this representation, and how it might be added back, to reach the expressiveness of (for example) a well-made MIDI track. (The largest amount of time assembling a nice-sounding MIDI track is spent not inputting the basic timing and pitch/instrument information but rather tweaking the velocities, expression, etc., to make it sound like humans are performing.)

I'm also curious what you (generically) think could/should be added to this to make a full-blown sequencing language, capable of representing (e.g.) anything that can be programmed/manipulated graphically in something like Ableton Live. I've always had a slightly frustrating experience with Ableton (and Garbage Band, etc.), feeling that the semantic content of an assembled track is a lot less than the amount of manipulation required to achieve the final result: copy and paste is a (very) poor substitute for subroutines! On the other hand, I have no idea if a written representation could be much more (or even anything like) as concise. Maybe a combination of the two is needed?

FWIW, it's worth following the link to the author's other projects. Degrafa, in particular, is very interesting.

Regards,
Ian

> Therefore, send not to know
> For whom the bell tolls.
> It tolls for thee.

Semper Donne, semper dolens. :-)
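[Editorial sketch] Ian's "copy and paste is a poor substitute for subroutines" point can be made concrete: in a textual sequencing language, a repeated phrase becomes a definition reused with variations, rather than a duplicated region. A toy sketch in Ruby (the DSL here is hypothetical, not a feature of Beats or Ableton):

```ruby
# A musical phrase as a subroutine: define the note list once, then
# reuse it with a transposition parameter instead of copy-pasting and
# hand-editing each occurrence. MIDI note numbers; 60 = middle C.

def phrase(notes)
  ->(transpose = 0) { notes.map { |n| n + transpose } }
end

hook = phrase([60, 64, 67])                        # defined once...
song = hook.call + hook.call(5) + hook.call(-2)    # ...reused with variations
```

Changing the hook now means editing one definition, which is precisely the concision a graphical copy-paste workflow lacks; the open question Ian raises is whether such a text form can also stay as *direct* to manipulate.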