On Sat, Mar 8, 2008 at 6:40 PM, Roger Dannenberg <[EMAIL PROTECTED]> wrote:
> Evan,
>     Nyquist is pretty much off-topic for this list, but I'll answer here
>  anyway, and others can skip this if they wish.

Feel free to take it off list if you think that would be more appropriate.

>     I've been using and developing Nyquist pretty actively for a long
>  time now. The latest changes include an option to write in SAL, which
>  uses conventional function notation [ f(x) instead of (f x) ] and infix
>  operators. SAL comes from Taube's Common Music.

Yeah, I saw that, although it looks like a more ad hoc version of the
lisp syntax that's also harder to generate from a program.  Maybe I'm
unusual in that I actually enjoy writing sexprs :)
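
If I remember the manual right, the same expression looks something
like this in the two syntaxes, and the lisp form is the one another
program can emit just by printing a list:

  ;; lisp syntax
  (play (seq (pluck c4) (pluck d4)))

  ;; SAL syntax
  play seq(pluck(c4), pluck(d4))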

>  Although Nyquist doesn't
>  really support real-time *input*, it is pretty amazing on modern
>  machines in terms of real-time *output* -- garbage collection usually
>  runs in a small fraction of a second, so with a few hundred ms of audio
>  buffering, the output streams out with no real delay. Some recent
>  additions include support for Open Sound Control, so you really can
>  change input parameters on-the-fly, and you can set the latency lower to
>  trade off latency with the probability of drop-outs. Various people have
>  contributed phase vocoder, physical modeling, a decent grand piano, and
>  lots of other unit generators and libraries. Nyquist also includes
>  Common Music-like pattern generators and a generalized note-list for
>  saving, editing, and manipulating score data destined for MIDI or
>  synthesis
>  in Nyquist.

All very interesting... I'll definitely check it out.  I remember
wondering at the time whether it couldn't treat a realtime input
signal as just another lazily evaluated Nyquist signal.

>  Your view of XLISP is different from mine: I rarely saw XLISP crash (but
>  eventually fixed a couple of bugs, so now I *never* see it crash,

This was a long time ago, on a 486 that had trouble running Linux in
the first place, and I recall some hackery to get it to compile, so
I'm sure I will have a better experience now.

>  although Nyquist is not quite so bullet-proof). Error reporting in XLISP
>  is basically the ability to walk the stack and see functions and
>  arguments, but I don't remember anything different in Common Lisps that
>  I used. "lack of all nice lisp features" -- maybe I see the small
>  extensible kernel of XLISP as actually a feature, especially as a
>  base to build on.

There is that... though the way I see it, the bonus of Nyquist over
languages like Csound and Reaktor is that it's a real language.  So in
addition to the higher-order programming benefit, you don't need to
divide your app into a score-generating part and a score-playing part,
and you have access to the full power of the signal-generating part
even at the highest level.  You don't need that whole extra step of
thinking "I'm going to want to control this parameter, so I'll export
a knob for it and then manually associate a knob in my score with it."
(In the case of MIDI, you have to pick an arbitrary controller number
with only one data type; even in OSC you still have to export the
control, pick a name, and keep both sides in sync by hand... no type
checking for you!)
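
To be concrete, if I'm reading the new OSC support right, the Nyquist
side would be something like this, with the slider number 5 playing
the role of the arbitrary identifier both sides keep in sync by hand:

  ;; enable OSC input; some external program sends "/slider" messages
  ;; carrying an index and a value
  (osc-enable t)
  ;; turn slider 5 into a control signal, 10 seconds' worth
  (setf depth (snd-slider 5 (local-to-global 0) *control-srate* 10.0))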

Now, score generation and processing can be pretty complex, involving
symbolic processing, often a GUI, maybe IPC with other processes or
computers, etc.  It's nice to have access to basic features like
methods, modules, exceptions, sockets, GUI libs, and so on.  If I
can't write a GUI in Nyquist, I have to use some other language that
generates Nyquist code... and at that point it's basically equivalent
to generating Csound score, isn't it?  It's true I can do symbolic
processing in Nyquist and talk to it at a higher level than I can talk
to Csound, but I still need to draw a line and put the GUI / main app
representation and processing on one side, and the Nyquist, audio, and
"low level" processing on the other.

If I have to do that anyway, I might as well put all the processing
in the app layer, where I have full language support like GUI access,
compilation to native code, and test frameworks or static typing (pick
your poison).  Then I can target Csound and Nyquist and MIDI and
whatever else, without all my high-level code being locked into
Nyquist and restricted to sounds that Nyquist generates.

So it seems to me the main cool thing about Nyquist, the ability to
talk to it at a high level, is lost!  I will be thinking "do I write
this cool arpeggiator in Nyquist and be able to, say, flange every
other note easily, or do I write it in the sequencer and be able to
play it on the Capybara, but be forced to add a 'flanger depth' knob
to every instrument it applies to (not to mention losing the ability
to flange a whole phrase)?"  And though I would like to do spectral
resynthesis directly at the score level, with the ability to
graphically edit each partial as a note, I will have to do some
awkward thing where one program generates and serializes the partials,
ships them to the sequencer (which has to specially understand that
message), and the sequencer then reserializes them and sends them to
Nyquist for resynthesis.  All doable, but awkward.
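
For the record, the Nyquist half of that choice is almost trivially
short, which is what makes it so tempting.  A hypothetical sketch,
with my-flange standing in for whatever flange effect you've built:

  ;; play each pitch in turn, flanging every other note
  (defun arp (pitches dur)
    (seqrep (i (length pitches))
      (if (= (rem i 2) 0)
          (my-flange (pluck (nth i pitches) dur))
          (pluck (nth i pitches) dur))))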

As another example, it would be nice to render a controller curve as
a Nyquist expression that runs at audio rate, instead of doing the
MIDI/OSC thing of interpolating it myself and emitting a million
controller messages.  But as soon as I write my own wacky
interpolation method, or if Nyquist's behaviour differs from mine, I'm
back to sending a million messages.
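
Concretely, a drawn curve could render to a one-liner like:

  ;; ramp from 0 to 0.8 at 1 sec, down to 0.2 at 3 sec, back to 0 at
  ;; 4 sec; pwl starts at 0 implicitly
  (pwl 1.0 0.8 3.0 0.2 4.0)

but only as long as the sequencer and Nyquist agree that the segments
between breakpoints are linear.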

Anyway, I really do think Nyquist is a great piece of work and hope
to use it; I just wish I could write a sequencer with it!  Though if
you really need the deep GC hacks to get good performance, tying it to
a "real" language would have been an endless game of keep-up for you,
so I definitely see the benefit of a small, simple language that isn't
going anywhere.

>  One of the most powerful features of Nyquist is the support for
>  transformations including abstract time warping (abstract because you
>  can choose at what level time warps take place, e.g. should trills or
>  vibrato slow down in a rubato?). In retrospect, I do not see these
>  features in actual use, and even my own programs tend to avoid it. It's
>  elegant but I think it is just too hard to think about and therefore
>  isn't worth it in most cases. This was one of the biggest surprises to
>  me, that something so obviously the right thing to do from a
>  mathematical, programming language, and computer science standpoint just
>  isn't all that useful.

Yeah, that surprises me too.  I plan on supporting composable time
warping as well, because I can easily think of cases where I would
want it: say, the ability to repeat a phrase with rubato at half time,
or to warp the tempo to fit a recorded audio segment that has an
embedded ritard.  Even in a straightforward classical-style piece I
want different tempo tracks for different parts.  As for the abstract
part: since I imagine building instruments out of stretched versions
of notes, and building parts out of stretched sequences, it would be
useful not to stretch the attack section or the vibrato.  But since,
practically speaking, the envelopes may mostly be (unfortunately)
hardcoded in the instruments, maybe that part is not so useful.  Maybe
this would be easier to think about in a graphical context where you
can see the results of the various transformations?  I would think the
ability to transpose a score and have the nonpitched percussion stay
put is nice.
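
The kind of composition I mean, in Nyquist-ish terms, with my-phrase
and my-rubato-map as hypothetical stand-ins:

  ;; state the phrase once with its internal rubato, then reuse it at
  ;; half speed: the outer stretch composes with the inner warp
  (seq (warp (my-rubato-map) (my-phrase))
       (stretch 2.0 (warp (my-rubato-map) (my-phrase))))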

The trickiest thing I'd like to do is have a score, a score derived
from it, and the ability to edit the derived score.  So if you have a
process that generates notes and a tempo stretch that does a ritard,
you can then go and edit the derived version to alter some of the
generated notes or move things around inside the ritard.  Obviously,
when it comes time to regenerate the derived score, the general
problem of merging the hand edits back into the regenerated score is
unsolvable, but I hope that with some heuristics, hints, and hacks I
can make it work well enough for practical use.  It's basically the
patch-merging problem, and there's some interesting work being done on
that in the SCM world.
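
Per note, given stable note ids, it comes down to a three-way merge.
A hypothetical sketch, with resolve-conflict hiding all the hard
parts:

  ;; base = the note as last generated, edited = after hand edits,
  ;; new = the note as freshly regenerated
  (defun merge-note (base edited new)
    (cond ((equal base edited) new)    ; never hand-edited: take new
          ((equal base new) edited)    ; generator unchanged: keep edit
          ((equal edited new) new)     ; both made the same change
          (t (resolve-conflict edited new)))) ; real conflict: heuristics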

See, this is the sort of thing that I would hate to have to write in
XLISP: I can't download edit-distance-type libraries for it, I can't
call ones written in other languages since it has no FFI, and if it
turns out to be computationally heavy I can't compile with
optimization on...

>  Nyquist was intended to be a stepping stone to a real-time interactive
>  system, but there are various hard limitations, so I began working on
>  Aura, more of an object-oriented application framework than a language.
>  I've been using Aura for real-time compositions for many years, but it's
>  so open-ended (and undocumented) that no one else uses it, nor would I
>  encourage it in this state. Aura includes Serpent, a real-time scripting
>  language based on Python that runs stand-alone and can be embedded into
>  other applications. Serpent is my favorite language now. It's very small
>  and very simple.

Yeah, I remember seeing a reference to that a long time ago and being
interested, but there wasn't much info out on it then.  Looks like
there are some papers and such up now, so I'll take a look.