On Mon, 2006-07-24 at 13:59 -0700, lazzaro wrote:
> On Jul 24, 2006, at 1:39 PM, linux-audio-dev-
> [EMAIL PROTECTED] wrote:
>
> > what about applying the journal data to an OSC-over-UDP stream. the
> > journal data could be encapsulated in OSC. sounds like a paper and
> > liblo patch waiting to happen ;)
>
> Personally, my suggestion is that the community starts by
> defining OSC profiles for specific classes of gestural input
> and synthesis methods that are widely used in the community.
> These profiles should standardize syntax and semantics. If
> you are working on a music project that is doing something
> that fits a profile, use the profile. Otherwise, do as you do today.
>
> If OSC goes down this route, one can imagine developing a
> recovery-journal system with recovery semantics for all the
> standard profiles. Part of developing a new OSC profile would
> be defining the recovery journal for the profile.
>
> The least of the benefits of a design like this would be
> network resiliency. The big win is that by defining OSC profiles
> with semantics, it starts to make sense to create a hardware
> or software synth that "understands OSC profile X" out of
> the box, in the same way a synth understands MIDI. And
> you can also create mass-market controller hardware that
> "puts out OSC data using profile X". And so, you can
> connect the two boxes up and get plug and play -- just
> like MIDI.
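[The encapsulation idea in the quoted mail can be sketched without liblo. This is a minimal hand-rolled OSC 1.0 encoder that carries opaque journal bytes as a single blob argument; the address "/profileX/journal" is a hypothetical profile-defined path, not part of any existing spec.]

```python
import struct

def osc_string(s: str) -> bytes:
    """OSC 1.0 string: ASCII, null-terminated, zero-padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def osc_blob(data: bytes) -> bytes:
    """OSC 1.0 blob: big-endian int32 byte count, then the data, zero-padded to 4 bytes."""
    b = struct.pack(">i", len(data)) + data
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def journal_message(journal: bytes) -> bytes:
    """One-argument OSC message: address pattern, type tag string ",b", blob payload.
    The journal bytes themselves stay opaque to the transport, as the mail suggests."""
    return osc_string("/profileX/journal") + osc_string(",b") + osc_blob(journal)

msg = journal_message(b"\x01\x02\x03")
assert len(msg) % 4 == 0  # every OSC packet is a multiple of 4 bytes
```

[A real implementation would of course just hand the blob to liblo's message API rather than encode by hand; the point is only that journal data fits naturally into an existing OSC argument type.]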
But you don't "just get plug and play" with MIDI. It's all about learning with MIDI. At the very least, with OSC you need a (dynamically changeable) path prefix for everything (e.g. such a defined "profile" would definitely have to allow for an undefined prefix portion), so no matter how you slice it you end up needing some sort of "learn"-ish system anyway. So even with such profiles, the real problem to be solved is still service discovery and namespace enumeration (i.e. back to square one).

-DR-
