On 16.05.21 at 13:52, Rainer Hans Liffers wrote:
All right, let us start a discussion. 😁  There are two major topics, and
they should be discussed separately: (1) Development and description of an
object-oriented model for the /Yoshimi/ synthesiser (in terms of English
prose, major objects, and formal definitions), pretending that there is
presently no code at all (idealistic approach), and (2) further evolutionary
development of the current code base (pragmatic approach).


Hi Rainer, Will, Kristian,

at this point we should talk about one topic
that is of utmost importance for any refactoring, reworking, or rewriting
of an existing code base: compatibility and testing.


As you probably know, effectively I am mostly a *user* of Yoshimi, and I tend
to build extremely complex presets with subtle sonic qualities, drawing from
the rich occidental tradition of building physical music instruments, especially
the pipe organ. I have contributed and will contribute as a developer in the
future, yet my time budget is rather limited; I work part-time as a software
developer in the finance industry to earn my living, and I am already involved
in several artwork and open-source projects.

From a user's POV, Yoshimi as an application is mature.
It is "basically there" and has all you need to build highly inspiring stuff.
Obviously, there is always a lot you might aspire to and dream of to create even
more advanced sonic structures -- but leaving that aside, to be honest, Yoshimi
has all you need in a very usable form. And it is the only free soft synth
offering this kind of musical quality. Using Yoshimi as a platform for
my musical work literally saved me years of private development, which
I was already prepared to spend building something on top of Faust
or SuperCollider in order to implement my musical ideas.

It lies in the very nature of this kind of sound synthesis that we use
those algorithms in a borderline-unstable state of operation. We do that
to squeeze qualities of sonic richness out of the machinery.
On top of that, some users are virtuosos on the keyboard and need low
and stable latency when playing live (something totally out of reach for me).
And in this scenario, they need the last bit of performance to be able
to play without dropouts.

I kindly ask you to realise how much the developer makes the musician
suffer when, as a result of some "code beautification", the real beauty of
the sound is corrupted and poisoned in subtle ways. We had that in the past,
and it made me feel sick and hopeless for weeks.

And I am deeply grateful for Will always being responsive and open-minded
in such a situation. "You can not change the sound of a tin whistle."


Thus -- to state the obvious: Yoshimi sorely lacks automated acceptance tests.

This is the most important goal I see before we start refactoring / rewriting
or reworking anything close to the path of sound generation.

And this is /totally doable/, as I found out just a few weeks ago.
This might come as a bit of a surprise, because we all considered it
to be a tough challenge, due to randomness...

However, as it turns out, if you patch Yoshimi and change the seed of the
random generator from "current time" to a fixed value, then the *very first*
note played is totally deterministic and 100% reproducible down to the last bit.
But as soon as you hit the next key (or send the next MIDI event) while sound
calculation is running, the results turn to true randomness, based on the
random timing of that MIDI event relative to the ongoing sequence of numbers
from our PRNG.
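The principle is easy to demonstrate. Here is a minimal Python sketch (Yoshimi itself is C++, and `render_note` here is just an invented stand-in for one note's sample calculation): a PRNG seeded with a fixed value produces a bit-identical sequence every time, whereas a time-based seed does not.

```python
import random

def render_note(seed):
    """Stand-in for rendering one note: draws PRNG values as fake sample data."""
    rng = random.Random(seed)                 # fixed seed instead of "current time"
    return [rng.random() for _ in range(8)]   # placeholder for audio samples

# Two renders with the same fixed seed are identical down to the last bit...
assert render_note(42) == render_note(42)
# ...while different seeds (e.g. different start times) diverge immediately:
assert render_note(1) != render_note(2)
```

The second observation above is exactly why only the *first* note is reproducible: a later MIDI event arrives at an unpredictable point in the PRNG sequence, which is equivalent to an unpredictable seed.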


Thus, what I propose as a first step is to build at least very basic
automated coverage of the status quo.

I propose to build in some kind of "hook" for testing. Maybe this could
be a new command on the CLI. It would stop the sound engine, reseed
with a fixed value, and then issue one single note internally with the
currently loaded preset. Maybe we can even arrange to write the result
immediately into a file, or better yet, into an output stream?
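To make the intended control flow concrete, here is a Python sketch of the hook sequence. Every name in it (`StubEngine`, `run_test_hook`, the method names) is invented for illustration; the real hook would live inside Yoshimi's C++ engine.

```python
import random

class StubEngine:
    """Stand-in for Yoshimi's engine; all names are invented for this sketch."""
    def __init__(self):
        self.rng = random.Random()        # normally seeded from current time
    def stop(self):
        pass                              # would halt regular audio processing
    def load_preset(self, path):
        self.preset = path
    def reseed(self, seed):
        self.rng = random.Random(seed)    # fixed seed instead of current time
    def render_single_note(self):
        # placeholder for synthesising one note with the loaded preset
        return bytes(self.rng.randrange(256) for _ in range(16))

def run_test_hook(engine, preset, seed=12345):
    """The proposed sequence: stop, load, reseed, render one note."""
    engine.stop()
    engine.load_preset(preset)
    engine.reseed(seed)
    return engine.render_single_note()

# Same preset + same fixed seed -> bit-identical output, run after run
assert run_test_hook(StubEngine(), "organ") == run_test_hook(StubEngine(), "organ")
```

The returned bytes are what the hook would write to a file or output stream for comparison.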

Based on that, we could then easily set up a suite of shell scripts,
each of which loads a preset from the standard collection, issues a test note,
and subtracts the produced sound from a saved sample. If the result is
all zeros, the test is green.
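The comparison step itself is trivial. A sketch in Python, with integer sample values standing in for the rendered audio (a real script would compare the hook's output file against a saved "golden" recording):

```python
def is_green(produced, reference):
    """Green only if the rendered audio matches the reference bit-for-bit."""
    if len(produced) != len(reference):
        return False
    diff = [p - r for p, r in zip(produced, reference)]
    return all(d == 0 for d in diff)      # subtract and check for all zeros

reference = [0, 3, -2, 7, 0]              # previously saved golden sample
assert is_green([0, 3, -2, 7, 0], reference)      # unchanged engine: green
assert not is_green([0, 3, -2, 8, 0], reference)  # one subtly wrong sample: red
```

The point of the bit-exact check is that it catches exactly the subtle sonic corruption described above, which no human can reliably hear in a quick review.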

In a similar vein, I propose to create a variant of that setup which
issues this test note N times and captures the overall time. This would
give us a rough integration micro-benchmark.
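The timing variant could look roughly like this, again in Python with a CPU-burning placeholder where the real note rendering would go:

```python
import time

def render_test_note():
    # placeholder for rendering one test note; just burns a little CPU here
    return sum(i * i for i in range(10_000))

N = 100
start = time.perf_counter()
for _ in range(N):
    render_test_note()
elapsed = time.perf_counter() - start

print(f"{N} notes in {elapsed:.3f}s ({elapsed / N * 1000:.3f} ms/note)")
```

Tracking that one number over time would flag performance regressions on the sound path, which matter for the low-latency live players mentioned above.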


It is probably immediately clear that such a setup would take most
of the tension out of any refactoring effort. Because then we'd
be able to tell objectively whether we're still fine, just by running
a test suite for maybe 10 minutes.

Cheers,
Hermann




_______________________________________________
Yoshimi-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/yoshimi-devel
