Hi,

Lars-Dominik Braun <[email protected]> writes:

>> --8<---------------cut here---------------start------------->8---
>> $ time guix environment --ad-hoc r-learnr --search-paths
>> export PATH="/gnu/store/n4wxbmqpafjfyawrla8xymzzdm5hxwph-profile/bin${PATH:+:}$PATH"
>>
>> real 0m11.328s
>> user 0m20.155s
>> sys 0m0.172s
>> $ time ./pre-inst-env guix environment --ad-hoc r-learnr --search-paths
>> export PATH="/gnu/store/if6z77la3mx0qdzvcyl4qv9i5cyp48i0-profile/bin${PATH:+:}$PATH"
>>
>> real 0m4.602s
>> user 0m6.189s
>> sys 0m0.136s
>> --8<---------------cut here---------------end--------------->8---
> that’s awesome and brings me much closer to my goal of running all
> applications inside a `guix environment` container for reproducibility.
> Including the protocol fixes from #41720 I’m now down to ~30s from
> ~50s, which may be called somewhat usable. Obviously I’d be very
> interested in further speedups.

That’s over SSH, right?

Probably what’s killing us is the round-trip time for all these small
RPCs. We would need pipelining, but the RPC protocol is not designed to
make that easy.
Perhaps you could “strace -Tt” the thing to check whether this
hypothesis is correct by looking at the time we spend waiting for
replies?
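
Something along these lines should give a rough idea (untested; the
read(2) filter below is a coarse approximation since it also catches
reads from regular files, so you may want to restrict it to the file
descriptor of the daemon socket):

--8<---------------cut here---------------start------------->8---
$ strace -f -Tt -o /tmp/guix.strace \
    ./pre-inst-env guix environment --ad-hoc r-learnr --search-paths
$ grep ' read(' /tmp/guix.strace | \
    sed -e 's/.*<//' -e 's/>$//' | \
    awk '{ total += $1 } END { print total, "seconds spent in read(2)" }'
--8<---------------cut here---------------end--------------->8---
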
As for the CPU cost (i.e., going below the 4.6s above), we should keep
profiling just like you did.
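
FWIW, one way to do that is from a Guile REPL with its built-in
‘,profile’ command. Roughly (assuming ‘guix-environment’ is exported by
(guix scripts environment) and returns instead of exiting the REPL):

--8<---------------cut here---------------start------------->8---
$ ./pre-inst-env guile
scheme@(guile-user)> ,use (guix scripts environment)
scheme@(guile-user)> ,profile (guix-environment "--ad-hoc" "r-learnr" "--search-paths")
--8<---------------cut here---------------end--------------->8---
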
Thanks,
Ludo’.