Tim Blechmann wrote:
> I wrote most of the DSP objects from scratch ... mainly because the
> libraries that I had a closer look at had some issues ...
> 
> STK seems to consist of textbook implementations, not optimized for
> performance or anything (IIRC, they use double-precision floating-point
> numbers for sample representation) ...
> some other libraries (SndObj, CSL, CLAM) had other issues ...
> 
> some of the DSP algorithms are implemented as generic C++ templates, so
> they can be reused in other environments (e.g. I'm using some of the
> filter implementations as LADSPA plugins for mastering)

Cool, that makes a lot of sense.  Having it all templated seems like the 
perfect approach.  Actually, if we had the ability to record all of the 
commands in a Nova session, this could be really useful: you could 
record a live session and later play it back through Nova using a 
higher-precision sample format (and maybe a higher sample rate?) for 
rendering into an audio file...

> Nova does the audio computation in blocks of 64 samples. However, there
> is the notion of DSP contexts, which can be used to run a part of the
> DSP graph with different block sizes.

How do these DSP contexts work?

> Sample-accurate synchronization between different Nova interpreters is
> not something that Nova deals with. First, one would have to solve
> hardware issues, such as synchronizing the clocks of the audio
> interfaces. I am also not sure whether this is an issue that Nova
> should deal with ...

Yeah, I don't think sample accuracy is that important for collaborative 
synthesis anyway, but on a local network it should be easy enough to 
have multiple computers sharing synchronized metronomes.  This would let 
a group of people jam together while still splitting the actual CPU load 
of synthesis, rather than having to send all the commands to a single 
machine.

> In theory Nova can be used for live-coding ... however, it is not
> optimized for live-coding, but for runtime performance ... i.e. when DSP
> objects are added, the DSP graph needs to be re-sorted ...

Hmmm, I'm not sure I see the difference.  In live-coding, the synthesis 
engine has to be optimized for live modification of the DSP chain and of 
object attributes as well, right?

> SuperCollider is optimized for DSP live-coding (AFAIK, adding synths to
> the server is a very cheap operation) ...

Cool, maybe I'll check it out to see how they do this.

> Nova uses a dataflow paradigm for the language. Visual programming is
> one way to implement the dataflow paradigm. The term `gobj' does not
> really mean a 'graphical' object, but rather a 'patchable' object ...

So maybe it should be p_obj?

> I would be quite interested in a way to define dataflow graphs in a
> scripting language, but haven't found a decent syntax yet ...
> Last year I tried to write an API to define Nova patches in Python,
> but the syntax was not really usable ... (it can be found in the
> patch-generator subfolder in the git archive)

This will be something to think about then.  It would be especially 
interesting if you could live-code while simultaneously seeing a 
representation of your dataflow graph as it is created and modified.
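Just brainstorming a possible syntax -- this is purely hypothetical 
Python, not the code from the patch-generator subfolder, and the object 
names are made up.  One idea would be to overload the >> operator so a 
signal chain reads left to right, the way a patch cord does:

```python
# Hypothetical sketch of a patch-definition syntax -- not the actual
# patch-generator API from the Nova git archive.  Each node is a named
# object; the >> operator records a connection edge in the patch.

class Node:
    def __init__(self, patch, name):
        self.patch = patch
        self.name = name

    def __rshift__(self, other):
        # "a >> b" connects a's outlet to b's inlet and returns b,
        # so connections can be chained: a >> b >> c
        self.patch.connections.append((self.name, other.name))
        return other

class Patch:
    def __init__(self):
        self.connections = []

    def node(self, name):
        return Node(self, name)

patch = Patch()
osc = patch.node("sine~ 440")
amp = patch.node("*~ 0.1")
dac = patch.node("dac~")

osc >> amp >> dac  # build the signal chain

print(patch.connections)
# [('sine~ 440', '*~ 0.1'), ('*~ 0.1', 'dac~')]
```

Something like this could feed a graphical view at the same time, since 
the connection list is exactly the edge set of the dataflow graph.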

Thanks for the info,
Jeff
_______________________________________________
nova-dev mailing list
[email protected]
http://klingt.org/cgi-bin/mailman/listinfo/nova-dev
http://tim.klingt.org/nova
