Hey, I was just checking out the Synthesis ToolKit (http://ccrma.stanford.edu/software/stk/information.html), and it got me wondering whether you ever thought of using it for Nova. Tim, have you been implementing all the objects from scratch, mostly porting from PD, or something else? I found it while looking at the PeRColate objects for Max and PD. Has anyone ever tried them out?
Also, reading a bit about the ChucK project got me thinking about two different aspects of Nova.

The first is the idea of being sample-synchronous across virtual machines. In Nova the DSP engine processes in windows of samples. Would it be hard to synchronize multiple engines at the granularity of a window? Does something like this already have to happen when moving to a multi-threaded algorithm, or is it a different problem? At a basic level this would let you do things like run synchronous metronomes on multiple machines, so that a group of computers could control different instruments and effects but still stay in time with each other.

The second thing is supporting live coding. I was wondering whether the Nova engine needs to be specific to a visual programming environment like PD, or whether it could just be a realtime audio system that can be operated on in a number of ways. In the back-end there is the idea of a gobj in a number of places, for example. Does it really have to know about anything graphical, or can it operate purely at the level of interacting audio objects? External programs, whether they are language interpreters, virtual machines, visual programming environments, or whatever else, could just send messages to operate on the DSP graph and modify attributes. I would love to write a live-coding system in Ruby some day...

-Jeff
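P.S. To make the window-level synchronization question a bit more concrete, here is a rough C++ sketch of what I mean by keeping block-processing engines in lockstep. All the names here (dsp_engine, window_sync, the block size) are made up for the sake of the example and have nothing to do with Nova's actual code; across separate machines you would obviously need some kind of network clock rather than a thread barrier, but the per-window rendezvous is the same idea.

    // Purely hypothetical sketch, not Nova's real classes: two
    // block-processing "engines" kept in lockstep, one barrier per window.
    #include <barrier>
    #include <cstdio>
    #include <functional>
    #include <thread>
    #include <vector>

    constexpr int block_size = 64;
    constexpr int num_blocks = 4;

    struct dsp_engine {
        std::vector<float> block = std::vector<float>(block_size);
        void process(int n) {
            for (auto& s : block) s = float(n);  // stand-in for real DSP work
        }
    };

    int main() {
        std::barrier<> window_sync(2);           // two engines per window

        auto run = [&](dsp_engine& e, const char* name) {
            for (int n = 0; n < num_blocks; ++n) {
                e.process(n);
                std::printf("%s finished window %d\n", name, n);
                window_sync.arrive_and_wait();   // nobody starts window n+1
            }                                    // until everyone finished window n
        };

        dsp_engine a, b;
        std::thread ta(run, std::ref(a), "engine A");
        std::thread tb(run, std::ref(b), "engine B");
        ta.join();
        tb.join();
    }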
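P.P.S. And for the live-coding side, the kind of interface I imagine is really just "send a message, the graph changes". Here is an equally made-up sketch with nothing graphical anywhere; the same calls could be driven by a patcher, a Ruby interpreter, or a socket, and the engine would never know the difference.

    // Again, invented names: a graph of audio objects driven purely by messages.
    #include <iostream>
    #include <map>
    #include <string>

    struct dsp_graph {
        // node name -> (attribute name -> value); no notion of a canvas or gobj
        std::map<std::string, std::map<std::string, float>> nodes;

        void add_node(const std::string& name) { nodes[name]; }
        void set_attr(const std::string& name, const std::string& attr, float value) {
            nodes[name][attr] = value;
        }
    };

    int main() {
        dsp_graph graph;
        // these three "messages" could just as well arrive over the network
        graph.add_node("osc1");
        graph.set_attr("osc1", "frequency", 440.0f);
        graph.set_attr("osc1", "amplitude", 0.2f);

        std::cout << "osc1 frequency: " << graph.nodes["osc1"]["frequency"] << "\n";
    }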
