Evan Laforge wrote:

> This sounds like something I've noticed, and if it's the same thing, I
> agree.  But I disagree that you need to separate orchestra and score
> to get it.  Namely that notes are described hierarchically (e.g.
> phrase1 `then` phrase2 :=: part2 or whatever), but that many musical
> transformations only make sense on a flat stream of notes.  For
> example, decide which string a note would be played on and pick a
> corresponding base note + bend.  You can't do this
> without a memory of which notes have been played (to know currently
> sounding strings) and maybe a look a little ways into the future (to
> pick between alternatives).  Hierarchical composition has no access
> to the previous and next notes, so it winds up having to be a
> postprocessing step on the eventual note output stream, which means
> you have to have something in between the score and the sound.  But
> why be limited to one instance of this player and a static score ->
> player -> sound pipeline?
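The string-assignment example can be sketched as a stateful pass over the flat note stream. This is only a toy illustration: the types, the function names, and the crude "a string stays busy for the last three notes" model are all invented here, not taken from any existing library.

```haskell
-- Hypothetical types for illustration only.
type Pitch = Int                     -- MIDI note number

-- Open-string pitches of a violin: G3, D4, A4, E5.
openStrings :: [Pitch]
openStrings = [55, 62, 69, 76]

-- For each note, pick the highest open string at or below its pitch
-- that is not currently sounding, and return (open string, bend in
-- semitones).  The accumulator remembers which strings are busy,
-- which is exactly the memory of previous notes that hierarchical
-- composition alone does not provide.
assignStrings :: [Pitch] -> [(Pitch, Int)]
assignStrings = go []
  where
    go _    []     = []
    go busy (p:ps) =
      let candidates = [s | s <- openStrings, s <= p, s `notElem` busy]
          s | null candidates = head openStrings
            | otherwise       = maximum candidates
      -- Crude model: a string is considered busy for the next
      -- three notes after it was used.
      in (s, p - s) : go (take 3 (s : busy)) ps
```

Running it on two identical pitches shows the state in action: the second D4 cannot reuse the D string and falls back to the G string with a bend.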

For other examples a hierarchical structure is exactly the right thing:
think of a filter sweep or a reverb that shall be applied during a
certain time interval to a certain set of instruments, say all
instruments but drums and the melody. You would have to filter those
events out of the performance stream, and you would have to specify the
overall duration of the filter effect explicitly, since it cannot be
derived from the performance. The performance stores only start times
and durations of individual events, but it does not store trailing
pauses of music sub-trees.
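A toy sketch of the point about trailing pauses, with a music type loosely in the style of Haskore (the names, including the Sweep constructor, are invented for this sketch):

```haskell
type Pitch = Int

data Music
  = Note Double Pitch        -- duration in beats, pitch
  | Rest Double              -- explicit pause, also at the end of a phrase
  | Music :+: Music          -- sequential composition
  | Music :=: Music          -- parallel composition
  | Sweep Music              -- hypothetical: an effect over a whole subtree

-- Total duration of a subtree, trailing rests included.  A flat
-- performance keeps only onsets and durations of sounding notes, so
-- it cannot recover this; the sweep's length must come from the tree.
dur :: Music -> Double
dur (Note d _) = d
dur (Rest d)   = d
dur (m :+: n)  = dur m + dur n
dur (m :=: n)  = max (dur m) (dur n)
dur (Sweep m)  = dur m
```

Here a phrase ending in a rest has duration 2, even though its performance contains a single one-beat note.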

> I'm not totally convinced the integration is valuable, but seeing as
> almost all other systems don't have it, it seems interesting to
> experiment with one that does and see where it leads.  Maybe another
> way of putting it is that different interpretations of abstract
> instructions like legato are not necessarily always along instrument
> (piano vs. violin) lines: it may vary from phrase to phrase, or
> section to section.

I think the hierarchical music structure has its value in being
convertible to a lot of back-ends. The Performance structure is good
for MIDI and Csound. The hierarchical structure is better for
SuperCollider and pure Haskell signal processing, because of effects
such as filter sweeps, speed variation at the signal level, or
reversing parts of the music. The hierarchical structure can easily be
converted to a Performance. I would have thought that the hierarchical
structure is also better for music notation, but the actual
implementations show that it is not.
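The conversion to a Performance can be sketched in a few lines. This is a self-contained toy (its Music type and Event tuple are invented for illustration, not Haskore's actual definitions); note how rests disappear in the flattening, which is where the trailing-pause information is lost:

```haskell
import Data.List (sortOn)

type Pitch = Int
type Event = (Double, Double, Pitch)   -- (onset, duration, pitch)

-- A minimal tree, just enough to show the flattening.
data Music = Note Double Pitch
           | Rest Double
           | Music :+: Music
           | Music :=: Music

dur :: Music -> Double
dur (Note d _) = d
dur (Rest d)   = d
dur (m :+: n)  = dur m + dur n
dur (m :=: n)  = max (dur m) (dur n)

-- Flatten a tree, given its start time: onsets become absolute
-- times, rests only shift the clock and leave no event behind.
perform :: Double -> Music -> [Event]
perform t (Note d p) = [(t, d, p)]
perform _ (Rest _)   = []
perform t (m :+: n)  = perform t m ++ perform (t + dur m) n
perform t (m :=: n)  = sortOn (\(o, _, _) -> o)
                              (perform t m ++ perform t n)
```

The inner rest still shifts the following note to time 2, but the performance itself contains no trace of the pause.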

_______________________________________________
haskell-art mailing list
haskell-art@lurk.org
http://lists.lurk.org/mailman/listinfo/haskell-art
