Pjotr Prins <pjotr.publi...@thebird.nl> skribis:
> On Thu, Apr 05, 2018 at 05:24:12PM +0200, Ludovic Courtès wrote:
>> Pjotr Prins <pjotr.publi...@thebird.nl> skribis:
>> > I am *not* suggesting we stop testing and stop writing tests. They are
>> > extremely important for integration (though we could do with a lot
>> > fewer and more focused integration tests - ref Hickey). What I am
>> > writing is that we don't have to rerun tests for everyone *once* they
>> > succeed *somewhere*. If you have a successful reproducible build and
>> > tests on a platform there is really no point in rerunning tests
>> > everywhere for the exact same setup. It is a nice property of our FP
>> > approach. Proof that it is not necessary is the fact that we
>> > distribute substitute binaries without running tests there. What I am
>> > proposing in essence is 'substitute tests'.
>> > If tests are so important to rerun: tell me why we are not running
>> > tests when substituting binaries?
>> Because you have a substitute if and only if those tests already passed
>> somewhere. This is exactly the property we’re interested in, right?
> Yup. Problem is substitutes go away. We don't retain them and I often
> encounter that use case.

I agree this is a problem. We’ve tweaked ‘guix publish’, our nginx
configs, etc. over time to mitigate this, but I suppose we could still
do better.

When that happens, could you try to gather data about the missing
substitutes? Like what packages are missing (where in the stack), and
also how old the Guix commit you’re using is.

More generally, I think there are connections with telemetry as we
discussed recently: we should be able to monitor our build farms to
see concretely how much we’re retaining in high-level terms.

FWIW, today, on mirror.hydra.gnu.org, the nginx cache for nars contains
94G (for 3 architectures).

On berlin.guixsd.org, /var/cache/guix/publish takes 118G (3
architectures as well), and there’s room left.
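
For reference, figures like the ones above are just the disk usage of the
cache directories; a minimal sketch for rechecking them over time (the
path is the one mentioned above and differs per deployment):

```shell
# Report how much disk the substitute cache uses on this machine.
# /var/cache/guix/publish is the ‘guix publish’ cache path mentioned
# above; adjust it for other deployments.
du -sh /var/cache/guix/publish 2>/dev/null \
  || echo "no publish cache at this path on this machine"
```
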
> Providing test-substitutes is much lighter and can be retained
I understand. Now, I agree with Ricardo that this would target the
specific use case where you’re building from source (explicitly
disabling substitutes), yet you’d like to avoid running tests.

We could address this using specific mechanisms (although, like I said,
I really don’t see what it would look like). However, I believe
optimizing substitute delivery in general would benefit everyone and
would also address the running-tests-takes-too-much-time issue.

Can we focus on measuring the performance of substitute delivery and
thinking about ways to improve it?
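
To put a first number on it, one could start by timing substitute fetches
from the client side; a minimal sketch (curl’s timing output is the only
real measurement here, and which URL to probe is an assumption):

```shell
# Print the total time needed to fetch a given URL, e.g. a narinfo or
# nar served by ‘guix publish’ behind nginx.
measure_fetch() {
  curl -s -o /dev/null -w '%{time_total}\n' "$1"
}
```

For instance, ‘measure_fetch https://berlin.guixsd.org/nix-cache-info’
would give a rough per-request latency figure for that server.
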

Thanks for your feedback,