On 13.01.25 09:33, Kristian Amlie wrote:
But I have one "side question": Does your suggestion relate at all to crossfades between kit items? We had a discussion there a while back about how the crossfades are happening by fading velocity up and down for the two crossfaded parts, but it should really be fading the volume. I've been meaning to look at fixing this, but it's not entirely trivial and I got a bit side-tracked from that project.
Yes indeed, this is spot on.
To recall why I'd like to engage in this project: I am looking for ways to help with the "fine-tuning and voicing" of a patch or instrument, similar to how you'd do that for a physical instrument. An organ stop is a good example. If you just built an array of pipes, they would not work well together in a musical way. If you hit a chord, the individual tones would not sound balanced. And if you played some arpeggio or runs, it would not "run well".
These are the very problems we often encounter with synth sounds. You create an interesting spectrum, but then you'll notice that it does not work well in a score. Sometimes you'd get a much too drastic change of tone within a single octave.
Anyway, we all know these kinds of problems, and we've found ways to work around them. Those workarounds, however, often tend to be limiting in other ways, and at the very least they are quite labour intensive.
A crossfade based on volume, not velocity, would indeed be just another tool in the toolbox that could be used for such "tuning and voicing".
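Just to illustrate what I mean, here is a minimal sketch in C++ of how the crossfade gains could be computed and then applied to the output gain instead of rewriting the velocity. All names are made up for the sake of illustration; nothing here is taken from the actual code base:

    #include <cmath>

    // Gains for the two layers that overlap at the incoming velocity.
    struct LayerGains {
        float lower;   // gain applied to the lower-velocity layer
        float upper;   // gain applied to the higher-velocity layer
    };

    // 'position' is where the incoming velocity sits inside the overlap
    // region of the two layers, normalized to the range 0..1.
    LayerGains crossfadeByVolume( float position )
    {
        const float halfPi = 1.5707963f;
        LayerGains g;
        // An equal-power curve keeps the summed loudness roughly
        // constant across the overlap region.
        g.lower = std::cos( position * halfPi );
        g.upper = std::sin( position * halfPi );
        return g;
    }

The point being that both samples keep their recorded velocity character, and only their playback volume is adjusted.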
Another solution would be to introduce a new, single top-level control that automatically adjusts the note volume based on the MIDI note.
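To make that a bit more concrete, a rough sketch of what I have in mind, again with made-up names and under the assumption that such a control would be expressed as a slope in dB per octave relative to a reference note:

    #include <cmath>

    // 'trackingDbPerOctave' is the single top-level control (0.0 = off).
    // Returns a linear gain factor to apply to the note volume.
    float noteVolumeFactor( int midiNote, float trackingDbPerOctave,
                            int referenceNote = 60 /* middle C */ )
    {
        const float octaves = ( midiNote - referenceNote ) / 12.0f;
        const float dB = trackingDbPerOctave * octaves;
        return std::pow( 10.0f, dB / 20.0f );  // dB -> linear gain
    }

Whether a linear slope is the right shape is of course part of the discussion; this is only meant to show how small such an adjustment could be.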
At this point, the most relevant question for me is: how far can and should we go with implementing such a new feature?
* if we are overly timid, we create yet-another-not-so-useful special feature
* if we approach it in an excessively broad way, we might degrade performance or create a maintenance problem in the code base, and the feature might be too abstract to grasp, at least for people without a technical background.
That is the reason why I'm proposing to handle this as a "project", progressing in several steps, bottom-up. And the secondary issue with per-key aftertouch was just the final idea that also pointed in a similar direction.
Thus I want to start out from a very abstract notion, which is that of a "cross-linking" between control channels. And my proposal was to first find out about the reality in the code base, to look into possible issues, and to conduct a thorough performance investigation. If that goes well, we'd create a base platform for several new features. And we could then hopefully take the time to discuss how to actually provide those features, and to what degree.
And yes, a crossfade based on volume would just be a special case that could be implemented on top of that base feature, because the base feature would foremost require transporting a small record of per-note settings to some parts of the engine, which right now are invoked as simple functions.
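To spell out how I picture that record, purely hypothetical and not actual code, something along these lines:

    // A small record of per-note settings that would be handed down to
    // the rendering functions, which today only receive plain values.
    struct PerNoteSettings {
        float volumeScale = 1.0f;   // e.g. from a volume-based crossfade
        float pitchOffset = 0.0f;   // room for later features
        float pressure    = 0.0f;   // e.g. per-key aftertouch
    };

    // Today (simplified):  renderNote( note, velocity );
    // Proposed:            renderNote( note, velocity, settings );

How exactly such a record would be threaded through is precisely what the initial investigation and the performance measurements should tell us.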