On Sun, 2012-03-04 at 08:38 -0500, Paul Davis wrote:
> On 3/4/12, J. Liles <[email protected]> wrote:
>
> > I personally don't think that the way notes are encoded is the primary
> > limitation imposed by MIDI. A note is a frequency, an attack/decay
> > modulation, and a duration.
>
> apparently you're forgetting or have not been a part of the many
> debates with the music technology community about how to define "a
> note". personally i'm happy with what you wrote, but i know several
> people who have made cogent arguments that defining notes in terms of
> frequencies completely misses one of the most musical semantics.
To me, one of the primary limitations of MIDI is the lack of any way to
control individual notes. For example, a good protocol could let you
start at C, slide up to A, add some vibrato, etc., i.e. you could draw
lines in your sequencer (rough sketch of what I mean in the P.S.).
Identifying a note by its frequency would preclude this, since the
frequency is exactly what you want to change; note numbers are better.

> > The way OSC is used, and in libmapper in particular, is to say things
> > about the input device, not the music, which, as far as the input
> > device is concerned, doesn't exist.
>
> as a receiver of OSC, i'd be entirely happy with such a standard. the
> problem for users is that it leaves the mappings unspecified, and
> although there are some clever solutions for this (several of them),
> from a user's perspective it always adds an extra layer of complexity.
> contrast with MIDI, where almost all the messages that most people
> will generate have a defined meaning even from the sender's
> perspective (though sure, the receiver can still map it to something
> else if it wants to).

The thing is, in practice MIDI has already turned into everything being
learn-based (except notes) anyway. Most of the crap in MIDI can just go
away, since the real use case is "send something with a number or two
in it to wherever the receiver can pick it up". A few things do need to
be standardized, like notes and transport control, but giving every
message a universal meaning is at best dubious, and likely a mistake.
You inevitably need learn and/or controller mappings anyway, so the
protocol can be a simple description of what is happening on the
*sender*. What the receiver does with it is its own business.

This is a pretty controller-centric opinion, but I don't think OSC is
really good for much beyond that anyway, and in the only cases where it
might be (controlling Pd and such), the (power) user is designing their
own messages anyway, so it's a non-issue.

-dr
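P.S. Here's roughly what I mean by per-note control, as OSC messages.
The /note/* address scheme is completely made up for illustration (it
isn't any existing standard), the receiver is assumed to be listening
on port 7770, and sending is done with liblo (link with -llo). Each
note gets an id, so later messages can keep modulating that same note:
glide its pitch from C4 up to A4, wobble it for vibrato, then end it.

/* Hypothetical per-note addressing -- nothing standardized. */
#include <lo/lo.h>
#include <math.h>
#include <unistd.h>

int main(void)
{
    lo_address synth = lo_address_new(NULL, "7770"); /* assumed receiver */
    int id = 1;                                      /* per-note identifier */

    lo_send(synth, "/note/on", "if", id, 261.63f);   /* start at C4 */

    /* glide toward A4 while wobbling the pitch a little (vibrato) */
    for (int step = 0; step < 100; step++) {
        float base    = 261.63f + (440.0f - 261.63f) * step / 99.0f;
        float vibrato = 5.0f * sinf(step * 0.5f);
        lo_send(synth, "/note/pitch", "if", id, base + vibrato);
        usleep(10000);                               /* ~10 ms between updates */
    }

    lo_send(synth, "/note/off", "i", id);
    lo_address_free(synth);
    return 0;
}

P.P.S. And roughly what I mean by learn-based mapping on the receiver
side: the sender only describes itself (a path plus a value), and while
"learn" is armed the receiver binds the next path it sees to one of its
own parameters. All the names here are made up; it's just the idea.

/* Sketch of receiver-side "learn": the sender describes itself,
 * the receiver decides what it means. */
#include <stdio.h>
#include <string.h>

#define MAX_MAPPINGS 32

struct mapping {
    char sender_path[64];   /* whatever the sender calls its control */
    int  param;             /* the receiver parameter it drives */
};

static struct mapping map[MAX_MAPPINGS];
static int n_mappings = 0;
static int learn_param = -1;    /* >= 0 while the user has armed "learn" */

/* Called for every incoming control message, e.g. ("/fader/3", 0.42f). */
static void handle_control(const char *path, float value)
{
    if (learn_param >= 0 && n_mappings < MAX_MAPPINGS) {
        /* learn mode: bind this sender path to the armed parameter */
        strncpy(map[n_mappings].sender_path, path,
                sizeof map[0].sender_path - 1);
        map[n_mappings].param = learn_param;
        n_mappings++;
        learn_param = -1;
        return;
    }
    for (int i = 0; i < n_mappings; i++)
        if (strcmp(map[i].sender_path, path) == 0)
            printf("set parameter %d to %f\n", map[i].param, value);
}

int main(void)
{
    learn_param = 7;                   /* user arms "learn" for parameter 7 */
    handle_control("/fader/3", 0.0f);  /* first message gets bound to it */
    handle_control("/fader/3", 0.42f); /* later ones just drive it */
    return 0;
}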
