Ricardo Wurmus <ricardo.wur...@mdc-berlin.de> skribis:
> Ludovic Courtès <l...@gnu.org> writes:
>> Ricardo Wurmus <ricardo.wur...@mdc-berlin.de> skribis:
>>> here’s my problem: I need to have the store on a big slow NFS server
>>> with online compression and deduplication. This means that *everything*
>>> in Guix is slow: downloading binaries, building packages from source,
>>> building a new profile generation — it’s all *very* slow.
>> This is a tricky use case…
ISTR it’s not possible in your case, but in an ideal world we’d build on
a local file system and then export it over NFS (instead of building
directly on NFS).
>>> Could the build hook feature be used for this, maybe by wrapping the
>>> normal build such that a script is run when it finishes?
>> The build hook “protocol” doesn’t work like this. The daemon sends a
>> build request, which the hook can accept, postpone, or decline (see
>> (guix scripts offload)). When it accepts, the hook cannot invoke the
>> daemon (it’s not “reentrant”.) The substituter protocol is similar.
> Hmm, thanks for your input!
> Okay, so the build hook feature couldn’t be (ab)used for this, but would
> it be okay to patch the daemon to optionally run a script upon
> completing a store action?
OK in what sense? :-)
It’s certainly doable.
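For illustration, such a hook could be as small as the sketch below. To
be clear, this is purely hypothetical: no such daemon option exists
today, and the assumption that the daemon passes the finished store path
as the script’s first argument is mine. (“machine-with-slow-fs” is the
host name used later in this message.)

```shell
#!/bin/sh
# Hypothetical post-completion hook; assumes the daemon passes the
# freshly registered store item as "$1".
set -e
item="$1"

# Ship the new item and its closure to the store on the slow NFS
# machine, using the real 'guix archive' export/import interface.
guix archive --export -r "$item" \
  | ssh machine-with-slow-fs guix archive --import
```

The receiving side would of course have to authorize the sender’s
signing key beforehand (see ‘guix archive --authorize’).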
>> Otherwise maybe a file system level hack? Like making /gnu/store a
>> unionfs that writes elsewhere?
> I find the file system to be the wrong level of abstraction. And
> unionfs seems like a brittle solution. I’d much rather operate on
> individual store items on demand. (The localstatedir is small enough to
> copy fully each time something happens.)
Yes, but we’re really working around the slowness of the file system, so
in that sense the file system is precisely the place where (bad) things
happen.
> I’d really like to take advantage of the facts that the store is append
> only (with the exception of “guix gc”) and that the daemon knows what’s
> going on.
A mechanism to invalidate store items that have been reclaimed would
also be needed, though.
Another option would be to have two full-blown stores, one on a fast
file system and the other one on NFS, each with its own database and
possibly guix-daemon instance. You would periodically send any missing
items from the former to the latter, along the lines of:
  guix archive --export \
    `guix gc --list-live | ssh machine-with-slow-fs guix archive --missing` \
    | ssh machine-with-slow-fs guix archive --import
It’s a bit of a sledgehammer, but it would copy only the new items to
the target machine.
(I plan to have a ‘guix copy’ command that does the 3 lines above.)
With the hook that you suggest above, you could run this synchronization
command upon build completion rather than periodically.
Yet another option, assuming you have two separate stores like this,
would be to export the first store with ‘guix publish’ and have the
second one take everything from there.
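Sketched as commands (‘guix publish’ and ‘--substitute-urls’ are real
options, but the host name “fast-machine”, the port, the key file name,
and the package are placeholders, and the publishing machine’s signing
key must be authorized on the NFS machine first):

```shell
# On the machine with the fast store: serve it over HTTP.
guix publish --port=8080 &

# On the NFS machine, once: authorize the publisher's signing key.
guix archive --authorize < signing-key.pub

# Then fetch items from the fast store as substitutes.
guix build --substitute-urls="http://fast-machine:8080" some-package
```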