On 20.09.21 at 02:40, Ichthyostega wrote:
You can see my progress in the new Git repository published on GitHub: https://github.com/Ichthyostega/yoshimi-test.git

On 08.10.21 at 02:23, Ichthyostega wrote:
.... The basic idea is to use a simple linear regression to fit the
samples-vs-runtime dependency over the whole test suite. This gives us
a prediction for each test, and relative to this prediction I store an
"expense factor" for each individual test case, so that

prediction * expense = expectedRuntime
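
The regression idea quoted above could be sketched roughly as follows. This is only an illustration of the concept; the function names and all data values are mine, not taken from yoshimi-test:

```python
# Sketch: fit a linear "platform model"  runtime = offset + slope * samples
# over the whole suite, then record each test's deviation from that model
# as an "expense factor", so that  prediction * expense = expectedRuntime.

def fit_platform_model(samples, runtimes):
    """Ordinary least-squares fit of runtime = offset + slope * samples."""
    n = len(samples)
    mean_x = sum(samples) / n
    mean_y = sum(runtimes) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(samples, runtimes))
    var = sum((x - mean_x) ** 2 for x in samples)
    slope = cov / var
    offset = mean_y - slope * mean_x
    return offset, slope

def expense_factor(offset, slope, samples, measured_runtime):
    """expense = measured / predicted, so prediction * expense recovers the expectation."""
    prediction = offset + slope * samples
    return measured_runtime / prediction
```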



Hi Will and Kristian,

now the question is: how *portable* are our test suite results...

This afternoon, I systematically repeated the test suite execution
on a laptop which differs significantly in performance from my desktop PC.

The good news is: the concept works as intended!
 * you check out the test suite, including the "baseline data",
   which for timings is the *expense factor* for each test case
 * you run the suite a number of times to get an average
 * then you run once with "--calibrate";
   this creates a new linear regression fit for the *platform model*
 * and after that, all tests run GREEN, using the checked-in expense factors
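
The calibrate-then-verify flow described in the steps above might look roughly like this. All function names, field names and the tolerance value are hypothetical, not taken from yoshimi-test:

```python
# Sketch of the calibration + verification flow:
#  - "--calibrate" re-fits the platform model from averaged local measurements
#  - verification then checks each test against  prediction * expense factor.
# Field names and the 20% tolerance are made-up illustrations.

TOLERANCE = 0.20  # allow fluctuations of roughly +/- 20% around the expectation

def calibrate(testcases):
    """Re-fit the platform model (offset, slope) from averaged measurements."""
    samples  = [t['samples'] for t in testcases]
    runtimes = [t['avg_runtime'] for t in testcases]
    n = len(samples)
    mx, my = sum(samples) / n, sum(runtimes) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(samples, runtimes))
             / sum((x - mx) ** 2 for x in samples))
    return my - slope * mx, slope

def verify(testcase, offset, slope):
    """GREEN when the measurement matches prediction * checked-in expense factor."""
    expected = (offset + slope * testcase['samples']) * testcase['expense']
    return abs(testcase['avg_runtime'] - expected) <= TOLERANCE * expected
```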


However, some details are notable.
The measurements are really different, and also slightly different in structure.
The "GREEN" result can really only be achieved with the help of statistics.

The error margin for such measurements is quite high:
after running several times, we typically see fluctuations of around 10-20%.
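
One simple way to quantify such run-to-run fluctuation is the relative standard deviation across repeated runs; a small sketch (the sample values are made-up, real runs showed roughly 10-20% spread):

```python
# Sketch: express run-to-run fluctuation as relative standard deviation
# (sample standard deviation divided by the mean). Data values are illustrative.

def relative_spread(runtimes):
    """Sample standard deviation of the runtimes, as a fraction of the mean."""
    n = len(runtimes)
    mean = sum(runtimes) / n
    var = sum((t - mean) ** 2 for t in runtimes) / (n - 1)
    return (var ** 0.5) / mean
```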

This is no surprise, but common for measurements under real-world conditions.
This well-known fact is also the reason why we typically do not engage in
micro-optimisations just to gain a few percent -- unless we're really desperate.


See attached the corresponding graph, with a legend similar to the image
I showed you yesterday. Note the following:

- different Y axis, since everything is slower on that laptop
- similar curve shapes, with the notable difference that the AddSynth
  becomes faster on that laptop for the long note time (10s, right)

- the "platform model" i.e. the (blue) regression line differs significantly

  Y = 6.61ms + 151ns/smp * samples

 (vs. my Desktop: Y = 4.38ms + 103ns/smp * samples)
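
To make the two quoted platform models directly comparable, one can evaluate them for a given sample count, taking care of the ms/ns units; a tiny sketch (the helper name is mine):

```python
# Sketch: evaluate the two platform models quoted above, Y = offset + slope * samples,
# with the offset in milliseconds and the slope in nanoseconds per sample.

def predict_ms(offset_ms, slope_ns_per_smp, samples):
    """Predicted runtime in milliseconds; converts the ns/sample slope term to ms."""
    return offset_ms + slope_ns_per_smp * samples * 1e-6

laptop  = lambda samples: predict_ms(6.61, 151, samples)   # laptop model from above
desktop = lambda samples: predict_ms(4.38, 103, samples)   # desktop model from above
```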



Another irritating observation, however, is that I get a *sound difference*
in the calculated WAV file. The difference is extremely small,
peak Δ -121.511dB(RMS). I cannot hear it; it only becomes apparent when
applying extreme amplification to the residual WAV. It is basically
slightly coloured white noise, with a faint trace of the actual sound.

However, there should not be *any* difference in the generated sound.
I am not sure whether some difference in the default values might have
caused it, such as a slightly different volume setting somewhere.
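
For reference, a residual figure like the one above can be computed as the RMS of the sample-wise difference between the two renderings, expressed in dB relative to full scale. A minimal sketch (the function name is mine, and this is not necessarily how yoshimi-test computes its Δ value):

```python
# Sketch: RMS level of the residual (sample-wise difference of two audio
# buffers), in dB relative to full scale (1.0). Sample data is illustrative.
import math

def residual_rms_db(a, b):
    """RMS of the element-wise difference, in dBFS; -inf for identical buffers."""
    diff = [x - y for x, y in zip(a, b)]
    rms = math.sqrt(sum(d * d for d in diff) / len(diff))
    return -math.inf if rms == 0.0 else 20 * math.log10(rms)
```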


Which brings me to the next topic I have to address for this testing endeavour:
the test suite must launch Yoshimi with fully controlled default settings.
We all know the topic of configuration is complicated (sigh) ;-)

-- Hermann





_______________________________________________
Yoshimi-devel mailing list
Yoshimi-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/yoshimi-devel
