Hello.

On Mon, Aug 30, 2021 at 15:02, Alex Herbert <alex.d.herb...@gmail.com> wrote:
>
> Hi,
>
> This test for the SimplexOptimizer is not robust.

Indeed; sorry for that.
Perhaps those "standard" test functions (being purposely "difficult"
problems) should be moved to an "integration tests" module (or to
the "examples" module), thus leaving only "simple" functions as
"unit" tests.

> It uses random seeding to
> run optimisation problems. If these do not work from the chosen start point
> then the test fails. Up to 10 repeats of the test are allowed. Sometimes
> this is not enough.
>
> The tests originate from a long time ago in CM. Keeping the tests ensures
> that the library continues to support these problems and avoids regressions.

Yes; but it is not certain that the old test suite ever exercised the
"difficult" nature of the test functions (possibly because only "lucky"
start points were selected).

>
> Gilles has been working on the optimizers and may be able to provide more
> details. Recent work has been successful in enabling many previously
> ignored test cases to be reintroduced to the test suite. However all this
> good progress has not totally eliminated spurious failures.

Failures can be caused by a wrong implementation or by an inherent
weakness of the algorithm(s), which may or may not show up depending on
the input data (start point, initial simplex, ...).
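The retry mechanism mentioned above (up to 10 repeats from random start points) can be sketched roughly as follows. This is a minimal, self-contained illustration only; the class name, method name, and the deterministic failure pattern are hypothetical, not the actual test code:

```java
public class RetryDemo {
    static int attempts = 0;

    // Stand-in for one optimization run from a randomly seeded start
    // point; here we simply pretend the first three start points land
    // outside the basin of attraction and fail, and the fourth succeeds.
    static boolean attemptOptimization() {
        attempts++;
        return attempts >= 4;
    }

    public static void main(String[] args) {
        final int maxRetries = 10; // same cap as in the test suite
        boolean passed = false;
        for (int i = 0; i < maxRetries && !passed; i++) {
            passed = attemptOptimization();
        }
        // The test only fails when all retries are exhausted.
        System.out.println(passed + " after " + attempts + " attempts");
        // prints "true after 4 attempts"
    }
}
```

With a truly random start point the failure pattern is of course not deterministic, which is why even 10 retries occasionally are not enough.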

Gilles

> For the time being you can ignore failures in these tests that are likely
> unrelated to the changes you are making. Any PRs raised against the master
> branch are evaluated in the context of the change. Any failures of these
> tests during the CI build will be noted; often simply restarting the CI
> build is enough for it to pass the next time round.
>
> Alex
>
>
>> [...]

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org
