Hello Yoshimi-devs,
First things first: a happy new year to everybody!
<sarcasm>
Some people around the world are engaged with determination to make this
new year a big mess... so let's hope matters go south in mostly
unexpected ways!
</sarcasm>
After we've somehow managed to close out the last round of internal changes
(seemingly without major problems up to now...), I was wondering if it's
time to pick up a topic that has been on my mind for quite some time.
On 31.12.24 19:44, Will Godfrey wrote:
> I'd like to be able to make another minor point release in February,
> so it would be good to have a significant development to hang it on.
> Ideas, most welcome!
My most important wish is that we proceed in a considerate way.
The time and capacity I'm able to put in *are limited*, for sure, because
I have to focus on my main project (Lumiera). So, while I'm proposing
something bold, I'd like to ask that we break it down into several small
increments, and -- above all -- avoid those "*big bang*" changes. If
that's possible at all, that is, since we've all experienced that matters
in the Yoshimi code base can escalate to a much wider scope than was
initially expected. :-/
In a nutshell: I'd like to pick up that topic of *balancing the presence*
of an instrument, or the closely related global parameter links like
*crossfades*. See the threads "Note-Weight-Adjustment" and "Crossfade"
from last summer.
My proposal is to proceed in several steps:
(1) have an up-front discussion about *what* we want to achieve;
    take the necessary time to define the scope of where we're headed.
(2) do a technical prototype to bring the foundational changes into the
    engine. Make performance measurements to see if our guesses were right.
    Recall from the mentioned discussion: we assume it should be possible
    to bring the base extension into the engine without a measurable
    impact on performance, at least in the disabled state (see the sketch
    right after this list).
(3) then merge that change into mainline, test it, and bring it into
    widespread use in some minor release, so that we have the option to
    back out or search for a better approach before anyone starts to use
    the new feature in creative ways.
(4) only then build the GUI part and gradually make the feature visible
    to the users. Consider how it relates to existing features like
    Crossfade and Velocity sensing. Mark it as "experimental" first and
    try to get feedback from the users.
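
To illustrate the "no measurable impact in the disabled state" claim from
step (2): the idea is that the note-on path pays only a single,
well-predicted branch when no link is configured. A minimal sketch in C++,
with invented names (CrossLink, factorFor, effectiveVolume), not actual
Yoshimi code:

    // Hypothetical sketch: the disabled state costs one branch only.
    struct CrossLink
    {
        // LUT lookup + interpolation; a fuller sketch follows further below
        float factorFor(float input) const { return 1.0f; }   // placeholder
    };

    float effectiveVolume(float rawVolume, float midiNote,
                          const CrossLink* link)
    {
        if (!link)                     // disabled: exactly the old code path
            return rawVolume;
        return rawVolume * link->factorFor(midiNote); // e.g. volume as f(note)
    }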
Following this outline, we'd currently be engaged in (1), and we'd be able
to change course and reshape the direction up until point (3), in case it
turns out to be too much for us to handle...
To take the first step, here is how I see this possible bold new feature:
* The point of operation is in the "note on" code. Each Part currently
  contains a basically similar piece of code, which takes the raw
  parameters, does some fusion on them, and computes the parameter values
  effective for this note, i.e. the effective frequency, gain, filter
  cut-off or vowel position.
* The core idea is to allow a /very small number/ of additional control
  parameters to be transported into this code. Right now it is driven only
  by the (MIDI) note value, the velocity and the volume. So the idea is to
  extend that and allow, say, two or three additional controller values as
  input. Whether these are configured per instrument, per patch, or per
  global instance is to be discussed. The prominent examples would be
  channel pressure or per-key aftertouch. This immediately shows that
  under some circumstances we'd have to re-evaluate this piece of code
  while a note is playing; we should carefully plan how to do that.
  A sketch of the extended input set follows below.
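
To make the extension tangible, the extended input set might be bundled
into one small struct handed to the note-on computation (and re-applied
mid-note when e.g. aftertouch changes). All names are invented here:

    // Hypothetical sketch of the extended note-on input set.
    // Today the computation sees only note, velocity and volume;
    // the extension adds a small, fixed number of controller slots.
    struct NoteOnInputs
    {
        int   midiNote;                  // raw MIDI note number
        float velocity;                  // normalised 0..1
        float volume;                    // part volume, normalised 0..1

        static constexpr int MAX_EXTRA = 3;
        float extra[MAX_EXTRA];          // e.g. channel pressure, aftertouch
    };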
* And then, at the heart of the feature, there would be *optional
  cross-links* between those parameters. For performance reasons, these
  would obviously be implemented as a look-up table (LUT) with a suitable
  interpolation. Each would implement a *function* of the input parameter
  and produce a *factor* applied to some output parameter. There are lots
  of fine points to consider here, but overall it's doable, within our
  abilities, and close to what the existing code already does right now.
  For example (see the LUT sketch after this list):
  - Volume could be adjusted as a function of MIDI note (my primary use case!)
  - Volume could be adjusted as a function of channel pressure
  - Frequency could be adjusted as a function of velocity
  - Cut-off could be adjusted as a function of aftertouch
  ...
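
For the LUT itself, here is a minimal sketch of what I have in mind,
using plain linear interpolation; the real thing might instead sample a
bezier curve into the table. Class and method names are invented:

    #include <array>
    #include <algorithm>

    // Hypothetical sketch: a small table mapping a normalised input
    // parameter (0..1) onto a multiplicative factor, interpolating
    // linearly between the sampled points.
    class FactorLUT
    {
        static constexpr int SIZE = 129;   // 128 intervals + endpoint
        std::array<float, SIZE> table{};   // filled when the curve is edited

    public:
        float factorFor(float input) const // input expected in 0..1
        {
            float pos = std::clamp(input, 0.0f, 1.0f) * (SIZE - 1);
            int   idx = int(pos);
            if (idx >= SIZE - 1)
                return table[SIZE - 1];
            float frac = pos - idx;
            return table[idx] * (1.0f - frac) + table[idx + 1] * frac;
        }
    };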
* Needless to say: this is a very advanced feature and will just confuse
  most users. Yet people familiar with the fine points of sound design
  will understand it intuitively, because, in a way, it's obvious that
  you'd want to do that.
* So there would be a full-control GUI in each Part. It's hidden behind a
  button and brings up a matrix of knobs or graph widgets. For example,
  the inputs could be in the columns (note, volume, velocity, ctrl-1,
  ctrl-2), and in the rows there would be the target parameters
  (frequency, gain, filter-freq, filter-q, what else...?). Each "point" in
  that correlation matrix would either be disabled, or a simple knob
  similar to velocity-sensing, or a graph widget with a bezier curve,
  similar to the gamma or colour channels in Gimp. (Yes, I know, it's
  messy to build such a beast, but it is well doable.) A sketch of the
  backing data structure follows below.
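
Behind such a GUI, the per-Part state could be little more than a matrix
of optional curves, reusing the FactorLUT from the sketch above. Again,
all identifiers are invented for illustration:

    #include <memory>

    class FactorLUT;   // as sketched above

    // Hypothetical sketch of the correlation matrix backing the GUI.
    enum Source { SRC_NOTE, SRC_VOLUME, SRC_VELOCITY,
                  SRC_CTRL1, SRC_CTRL2, NUM_SOURCES };
    enum Target { TGT_FREQ, TGT_GAIN, TGT_FILTER_FREQ,
                  TGT_FILTER_Q, NUM_TARGETS };

    struct CrossLinkMatrix
    {
        // nullptr == "disabled" cell; a plain knob would just install a
        // pre-computed curve, a graph widget a hand-edited one.
        std::shared_ptr<FactorLUT> cell[NUM_TARGETS][NUM_SOURCES];
    };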
* We already have some controls which effectively do something similar.
  They can be seamlessly translated into the new machinery. Notably,
  velocity-sensing already adjusts volume (or filter gain) as a function
  of velocity. I'd leave the familiar controls in place and just map them
  down into the new feature (see the sketch below).
  Likewise, all existing cross-fades can be seamlessly translated into
  such a cross-correlation of parameters, because they are just a function
  of the MIDI note value adjusting velocity (or volume!). But then an
  advanced user could go in and tweak the transition curve. Yikes!!
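
As an illustration of that translation, the classical velocity-sensing
knob is essentially a power curve, which could simply be rendered into
such a factor table. A sketch; the exponent mapping here is invented for
illustration and is not Yoshimi's actual velocity-sensing formula:

    #include <cmath>

    // Hypothetical sketch: sample a velocity-sensing style power curve
    // into a factor table, so the familiar knob becomes one filled
    // cell (velocity -> gain) of the cross-link matrix.
    void fillPowerCurve(float* table, int size, float sensing)
    {
        // sensing in 0..1; 0.5 leaves the curve at identity (invented mapping)
        float exponent = std::pow(4.0f, 2.0f * sensing - 1.0f);  // 0.25 .. 4
        for (int i = 0; i < size; ++i)
        {
            float x = float(i) / float(size - 1);
            table[i] = std::pow(x, exponent); // factor as function of velocity
        }
    }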
What I'm proposing is bold, for sure. But IMHO it is not too bold: it is
a generalisation of what we're already doing in some places, and it would
make handling those cases more symmetric. What's more important, I expect
that such a feature will unleash a lot of further possibilities present
in Yoshimi's system of layered parts, because it's a generic component
we'll build once and can then integrate into each part.
It's also a lot of work, but -- as I've said in the introduction -- the
proposal is to proceed in small steps. Moreover, such a feature is
certainly a bold bet that the zombie apocalypse won't happen right now,
and that we all might have some further time left to build interesting stuff.
-- Hermann