On 25.05.2021 17:07, Ichthyostega wrote:
Currently I'd prefer to build some dedicated test commands into the CLI.
But obviously, having a dedicated commandline argument to launch a test
would be another option. A third (less preferable, IMHO) option would be
to build a dedicated test executable, which links against yoshimi code.
On Wed, 26 May 2021 10:53:14 +0200 Kristian Amlie <krist...@amlie.name>
wrote:
What about using LV2? I believe it checks off most of the boxes you
brought up
...
...
* Somewhat secondary but still a bonus: We test LV2 in the process.
* Downside: If you don't know LV2, it is kind of hard to understand at
first, but it does make sense!
Am 26.05.21 um 12:05 schrieb Will Godfrey:
Another way you could immediately get very complete access is running
scripts from the CLI. These are plain text files, such as the one below; it
has a deliberate error to demonstrate how it behaves. It occurs to me that we
could easily add a new command 'seed {n}', which would set the random
generator to a known value for just this sort of thing.
Hi Kristian,
Hi Will,
Very helpful observations; using LV2 seems like another very interesting angle.
In a similar vein, I recall a talk from LAC Berlin by Fons Adriaensen, where
he demonstrated his framework for audio testing based on Jack + Python scripts
https://lac.linuxaudio.org/2018/pages/event/46/
After some more time spent thinking those matters over...
I should probably expand on my preferences for the testing approach, which
are based on my own experience plus a body of common knowledge about testing:
when a test setup is brittle, its maintenance can become a real burden, and
the pattern of the "test pyramid" helps to mitigate those tensions.
Basically we have two conflicting goals:
- as much as possible, we want to run "the real thing" in the tests
- but the simpler the scaffolding, the higher the chances it can evolve
  alongside the code and thus stay alive.
The typical solution (the "test pyramid") is to *decompose* the test subject:
we try to cover as much ground as possible with smaller units of
functionality, which are easier and more robust to test, while the more
complicated integration tests, which are hard to set up and to maintain,
only cover the proper integration of those smaller units.
To translate that into the situation with yoshimi: we should strive to cover
as much ground as possible through direct calls into SynthEngine: noteOn/Off
and SynthEngine::MasterAudio(). Put another way, I think it is even /desirable/
to take the Data-IO and MIDI processing out of the equation for the majority
of the functionality tests, because the interplay of CPU-bound and IO-bound
processing can be complex and highly dependent on the setup and the hardware.
And here I agree with Will: the most natural way to hook that in would be
through the CLI. The test suite would thus use some scripting to organise
the presets and baseline samples and then launch a given yoshimi release
executable with a CLI script. I still have to figure out the details, but
it seems we can pipe "run /path/to/script" into STDIN of yoshimi,
and the rest should then work automatically.
My intention thus is to build some new CLI functions into standard yoshimi:
- set seed (I really like that idea!)
- calculate one test note and pipe the results out into a file descriptor
- calculate N test notes and just throw the audio away but capture the time.
Besides that, we should also try to complement this approach by building
an integration testing setup. Kristian's idea looks really promising here: we
could build a very rudimentary LV2 host (from existing sample code), launch
Yoshimi as a plugin, and produce some simple and totally predictable wave
forms, probably with timing measurements.
-- Hermann
_______________________________________________
Yoshimi-devel mailing list
Yoshimi-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/yoshimi-devel