Simon Tournier <zimon.touto...@gmail.com> writes:

> Since it is computing, we could ask about the bootstrap of such
> generated data.  I think it is a slippery slope because it is totally
> not affordable to re-train for many cases: (1) we would not have the
> hardware resources from a practical point of view, (2) it is almost
> impossible to tackle the source of indeterminism (the optimization is
> too entangled with randomness).  From my point of view, pre-trained
> weights should be considered as the output of a (numerical) experiment,
> similarly as we include other experimental data (from genome to
> astronomy dataset).
>
> 1: https://salsa.debian.org/deeplearning-team/ml-policy
> 2: https://people.debian.org/~lumin/debian-dl.html
>

Hello, zamfofex submitted a package for 'lc0' (Leela Chess Zero, a chess
engine) that ships with an ML model, and it turns out that we already
have 'stockfish' packaged, which similarly comes with a pre-trained
model.  Have we reached a conclusion (so that lc0 can also be accepted)?
Or should we remove 'stockfish'?

Thanks!
