Re: “guix gc”, auto gcroots, cluster deployments

2021-05-10 Thread Ricardo Wurmus



Hi Roel,

> Maybe I'm making things needlessly complex, but I would really like a
> solution that can be applied to an existing cluster setup. :)


Oh, I’d also like something automated that works for an existing 
cluster setup (like mine that prompted this conversation), but in 
my imagination this does not involve users running commands that 
would become part of the permanent Guix command line footprint.


Instead I imagine a sysadmin would have enough privileges to 
impersonate the users and rewrite the links for them.


Full automation is not possible in my case anyway, because I can’t 
necessarily tell who owns the target file (it’s not just user home 
directories but also some other obscurely named shared 
directories), nor can I figure out automatically on which server 
the link was created (when I see a link like “/local/foo/bar”, 
which is a server-local directory that is not exported on any 
other system).



Even though the two of us seem to agree that this change is 
necessary, it does result in a regression: for systems where none 
of this is a problem (e.g. single-user laptops) it’s rather 
inconvenient to have all these temporary gcroots accumulate. 
Prior to my proposed change they would be freed up for garbage 
collection as soon as the link was removed.


(I also wonder if the implementation of “guix gc 
--delete-generations” would need adjusting; I haven’t looked yet.)


Is there something we can do to have our cake and eat it too?

--
Ricardo



Re: “guix gc”, auto gcroots, cluster deployments

2021-05-10 Thread Roel Janssen
On Mon, 2021-05-10 at 13:59 +0200, Ricardo Wurmus wrote:
> 
> Hi Roel,
> 
> thanks for your feedback!
> 
> > Would it be possible to add an option to retrospectively apply this
> > transformation?  Maybe that could work somewhat like this:
> > 
> > $ ls -lha
> > ... /home/me/projects/mrg1_chipseq/.guix-profile-1-link ->
> > /gnu/store/ap0vrfxjdj57iqdapg8q83l4f7aylqzm-profile
> 
> This wouldn’t work, because we can’t read 
> /home/me/projects/mrg1_chipseq/.guix-profile-1-link centrally.  In 
> this particular case only the user “me” could resolve the link and 
> thus migrate the link.  (I would do this semi-manually by 
> impersonating the users to read their links.)
> 

Indeed, we cannot resolve
/home/me/projects/mrg1_chipseq/.guix-profile-1-link as any user other
than "me".  So "me" would have to run the hypothetical
"guix gc --synchronize-profiles-to-gcroots".

From my point of view, that would be fine.  We can simply ask the users
on the cluster to execute that command once.

This could work if we have a mechanism to determine how complete the
garbage-collection picture is for "root", and to only continue GCing
once that picture is complete.  A starting point for this is
$localstatedir/gcroots/auto, which provides an overview of all profiles
that have been created.  Perhaps we can reuse the filename of the
symlink.  So, in the example given in the previous e-mail, we could add
another link like this, but in a different directory:
8ypp8dmwnydgbsgjcms2wyb32mng0wri -> /gnu/store/...-profile

The corresponding entry already exists in $localstatedir/gcroots/auto:
8ypp8dmwnydgbsgjcms2wyb32mng0wri ->
/home/me/projects/mrg1_chipseq/.guix-profile-1-link

And then a one-time transition can be made by looking up 
8ypp8dmwnydgbsgjcms2wyb32mng0wri in both places.
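
Roughly, I imagine that one-time transition as something like the
sketch below.  This is entirely hypothetical: it assumes the per-user
directory /var/guix/profiles/per-user/$USER/auto from your proposal
(which does not exist today), and each user would run it once for the
links they own:

localstatedir=/var
per_user=$localstatedir/guix/profiles/per-user/$USER/auto
mkdir -p "$per_user"

for link in "$localstatedir"/guix/gcroots/auto/*; do
    name=$(basename "$link")      # e.g. 8ypp8dmwnydgbsgjcms2wyb32mng0wri
    target=$(readlink "$link")    # e.g. /home/me/.../.guix-profile-1-link
    # Only the owning user can resolve this all the way into the store.
    profile=$(readlink -e "$target") || continue
    case $profile in
      /gnu/store/*)
        # Record a direct, root-readable gcroot under the same name...
        ln -sfn "$profile" "$per_user/$name"
        # ...and re-point the user-side link at it, as in your proposal.
        ln -sfn "$per_user/$name" "$target"
        ;;
    esac
done

The stale entries under $localstatedir/gcroots/auto could then be
dropped without losing any roots, because the per-user links keep the
profiles alive.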

Maybe I'm making things needlessly complex, but I would really like a
solution that can be applied to an existing cluster setup. :)

So, your initial proposal already sounds good to me.  How to make it
more useful for retrospectively matching unreachable symlinks to
/gnu/store paths can be treated as a separate issue.

> ~ ~ ~
> 
> Another related problem I found is that the links in unreadable 
> space may have names that only make sense on the system 
> where they were created.  For example, on the server “beast” we 
> may mount the cluster home directory as “/clusterhome/me”, whereas 
> on cluster nodes it would be mounted as “/home/me”.  When I have 
> Guix record a gcroot while working on “beast” I would get a link 
> that is no longer valid when I work on the cluster.
> 

Yes, I agree that this is another reason to implement your suggested
change. :)

Kind regards,
Roel Janssen




Re: GNU Guix 1.3.0rc2 available for testing!

2021-05-10 Thread Leo Famulari
On Sun, May 09, 2021 at 12:25:06PM -0400, Leo Famulari wrote:
> I tested this on Debian x86_64, and it worked well. It's nice that it
> offers to fetch the signing keys for the user.

I also had a successful test on aarch64.



Guix on NFS

2021-05-10 Thread Ricardo Wurmus



Hi Sébastien,


> Please excuse my hijacking this thread: I have been trying out
> configurations to get Guix working on the cluster I have access
> to, without root access for the daemon, and one problem I ran into
> is a curious bug when the store is on an NFS share.  See
> https://lists.gnu.org/archive/html/guix-science/2021-03/msg00010.html
> (Case 3).


This sounds like localstatedir is mismatched.


> In your setup, is the store on an NFS share? If so, did you do
> anything special to get that working?


It is on an NFS share, but I’m about to change that for 
performance reasons.


--
Ricardo



Re: “guix gc”, auto gcroots, cluster deployments

2021-05-10 Thread Sébastien Lerique

Hi Ricardo, all,

On 10 May 2021 at 18:59, Ricardo Wurmus  
wrote:


> On my cluster installation I ran “guix gc --list-dead” out of
> curiosity.  When finding roots, the daemon also removes what it
> considers stale links.  On my cluster installation not all links
> always resolve, because the target files reside on remote file
> systems.  These remote locations are not readable by the root user
> on the server where guix-daemon runs (ignoring local root privileges
> is pretty common for NFS servers), so they cannot possibly be
> resolved.


Please excuse my hijacking this thread: I have been trying out 
configurations to get Guix working on the cluster I have access 
to, without root access for the daemon, and one problem I ran into 
is a curious bug when the store is on an NFS share. See 
https://lists.gnu.org/archive/html/guix-science/2021-03/msg00010.html 
(Case 3).


In your setup, is the store on an NFS share? If so, did you do 
anything special to get that working?


Best,
Sébastien



Re: “guix gc”, auto gcroots, cluster deployments

2021-05-10 Thread Ricardo Wurmus



Hi Roel,

thanks for your feedback!

> Would it be possible to add an option to retrospectively apply this
> transformation?  Maybe that could work somewhat like this:
>
> $ ls -lha
> ... /home/me/projects/mrg1_chipseq/.guix-profile-1-link ->
> /gnu/store/ap0vrfxjdj57iqdapg8q83l4f7aylqzm-profile


This wouldn’t work, because we can’t read 
/home/me/projects/mrg1_chipseq/.guix-profile-1-link centrally.  In 
this particular case only the user “me” could resolve the link and 
thus migrate the link.  (I would do this semi-manually by 
impersonating the users to read their links.)


~ ~ ~

Another related problem I found is that the links in unreadable 
space may have names that only make sense on the system 
where they were created.  For example, on the server “beast” we 
may mount the cluster home directory as “/clusterhome/me”, whereas 
on cluster nodes it would be mounted as “/home/me”.  When I have 
Guix record a gcroot while working on “beast” I would get a link 
that is no longer valid when I work on the cluster.


--
Ricardo



Re: “guix gc”, auto gcroots, cluster deployments

2021-05-10 Thread Roel Janssen
On Mon, 2021-05-10 at 11:59 +0200, Ricardo Wurmus wrote:
> Hi Guix,
> 
> On my cluster installation I ran “guix gc --list-dead” out of 
> curiosity.  When finding roots, the daemon also removes what it 
> considers stale links.  On my cluster installation not all links 
> always resolve, because the target files reside on remote file 
> systems.  These remote locations are not readable by the root user 
> on the server where guix-daemon runs (ignoring local root 
> privileges is pretty common for NFS servers), so they cannot 
> possibly be resolved.
> 
> So the daemon ends up deleting all these links from 
> /var/guix/gcroots/auto/.
> 
> This may not be too bad on its own, but it also means that the 
> next time around “guix gc” would consider the eventual target 
> profiles to be garbage.
> 
> There are two problems here:
> 
> 1) I don’t think “guix gc --list-dead” (or “--list-live”, or more 
> generally “findRoots” in nix/libstore/gc.cc) should delete 
> anything.  It should just list and not clean up.
> 

I agree it would be better if --list-* options don't remove store items
and/or database entries.

> 2) For cluster installations with remote file systems perhaps 
> there’s something else we can do to record gcroots.  We now have 
> this excursion into unreadable space because we use a symlink, but 
> the start ($localstatedir/gcroots/auto) and endpoints 
> (/gnu/store/…) are both accessible by the daemon.  Since these 
> intermediate locations are tied to user accounts, could we not 
> store them in a per-user directory?
> 
> This problem does not exist for user profiles, because the link in 
> unreadable home directories is not all that important; it merely 
> points to $localstatedir, which is always readable by the daemon. 
> Perhaps we could do the same for temporary roots and let *users* 
> decide when to let go of them by giving them a command to erase 
> the important links in $localstatedir.
> 
> So instead of having a link from 
> /gnu/var/guix/gcroots/auto/8ypp8dmwnydgbsgjcms2wyb32mng0wri to 
> /home/me/projects/mrg1_chipseq/.guix-profile-1-link pointing to 
> /gnu/store/ap0vrfxjdj57iqdapg8q83l4f7aylqzm-profile, we would 
> record 
> /var/guix/profiles/per-user/me/auto/8ypp8dmwnydgbsgjcms2wyb32mng0wri 
> pointing to /gnu/store/ap0vrfxjdj57iqdapg8q83l4f7aylqzm-profile, 
> and then point /home/me/projects/mrg1_chipseq/.guix-profile-1-link 
> at that.  Yes, removing 
> /home/me/projects/mrg1_chipseq/.guix-profile-1-link would no 
> longer free up the profile for garbage collection, but removing 
> $(readlink /home/me/projects/mrg1_chipseq/.guix-profile-1-link) 
> would.
> 
> This change would pretty much solve the problem for cluster 
> deployments, which otherwise leads to “guix gc” avoidance.
> 
> What do you think?
> 

We are facing a similar issue on our cluster deployment.  In our case,
directories are only readable by certain users and not by root.  So the
garbage collector can't read the symlinks, and therefore cannot
determine whether a profile is still in use or not.

So this seems like a good idea to me.

Would it be possible to add an option to retrospectively apply this
transformation?  Maybe that could work somewhat like this:

$ ls -lha
... /home/me/projects/mrg1_chipseq/.guix-profile-1-link ->
/gnu/store/ap0vrfxjdj57iqdapg8q83l4f7aylqzm-profile


$ ls -lh $localstatedir/gcroots/auto
$ guix gc --synchronize-profiles-to-gcroots
$ ls -lh $localstatedir/gcroots/auto
... 8ypp8dmwnydgbsgjcms2wyb32mng0wri ->
/gnu/store/ap0vrfxjdj57iqdapg8q83l4f7aylqzm-profile

$ rm /home/me/projects/mrg1_chipseq/.guix-profile-1-link 

$ guix gc --synchronize-profiles-to-gcroots
$ ls -lh $localstatedir/gcroots/auto

Kind regards,
Roel Janssen





“guix gc”, auto gcroots, cluster deployments

2021-05-10 Thread Ricardo Wurmus

Hi Guix,

On my cluster installation I ran “guix gc --list-dead” out of 
curiosity.  When finding roots, the daemon also removes what it 
considers stale links.  On my cluster installation not all links 
always resolve, because the target files reside on remote file 
systems.  These remote locations are not readable by the root user 
on the server where guix-daemon runs (ignoring local root 
privileges is pretty common for NFS servers), so they cannot 
possibly be resolved.
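
For context, this is the standard root_squash behaviour of NFS.  A
typical export on the file server looks something like the line below
(the path and the network are made up; root_squash is the default):

# /etc/exports on the NFS file server; root_squash maps the clients'
# root to an unprivileged user ("nobody"):
/export/home  10.0.0.0/24(rw,root_squash,no_subtree_check)

Root on the machine running guix-daemon is then subject to ordinary
permission checks and cannot descend into home directories that are
not world-readable.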


So the daemon ends up deleting all these links from 
/var/guix/gcroots/auto/.


This may not be too bad on its own, but it also means that the 
next time around “guix gc” would consider the eventual target 
profiles to be garbage.


There are two problems here:

1) I don’t think “guix gc --list-dead” (or “--list-live”, or more 
generally “findRoots” in nix/libstore/gc.cc) should delete 
anything.  It should just list and not clean up.


2) For cluster installations with remote file systems perhaps 
there’s something else we can do to record gcroots.  We now have 
this excursion into unreadable space because we use a symlink, but 
the start ($localstatedir/gcroots/auto) and endpoints 
(/gnu/store/…) are both accessible by the daemon.  Since these 
intermediate locations are tied to user accounts, could we not 
store them in a per-user directory?


This problem does not exist for user profiles, because the link in 
unreadable home directories is not all that important; it merely 
points to $localstatedir, which is always readable by the daemon. 
Perhaps we could do the same for temporary roots and let *users* 
decide when to let go of them by giving them a command to erase 
the important links in $localstatedir.


So instead of having a link from 
/gnu/var/guix/gcroots/auto/8ypp8dmwnydgbsgjcms2wyb32mng0wri to 
/home/me/projects/mrg1_chipseq/.guix-profile-1-link pointing to 
/gnu/store/ap0vrfxjdj57iqdapg8q83l4f7aylqzm-profile, we would 
record 
/var/guix/profiles/per-user/me/auto/8ypp8dmwnydgbsgjcms2wyb32mng0wri 
pointing to /gnu/store/ap0vrfxjdj57iqdapg8q83l4f7aylqzm-profile, 
and then point /home/me/projects/mrg1_chipseq/.guix-profile-1-link 
at that.  Yes, removing 
/home/me/projects/mrg1_chipseq/.guix-profile-1-link would no 
longer free up the profile for garbage collection, but removing 
$(readlink /home/me/projects/mrg1_chipseq/.guix-profile-1-link) 
would.
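
To make the intent concrete, the layout would look roughly like this
(same example paths as above; the per-user "auto" directory is of
course hypothetical at this point):

# Today: one chain of links that passes through unreadable space.
#   /gnu/var/guix/gcroots/auto/8ypp8dmwnydgbsgjcms2wyb32mng0wri
#     -> /home/me/projects/mrg1_chipseq/.guix-profile-1-link
#     -> /gnu/store/ap0vrfxjdj57iqdapg8q83l4f7aylqzm-profile

# Proposed: the root-visible link never leaves $localstatedir...
ln -s /gnu/store/ap0vrfxjdj57iqdapg8q83l4f7aylqzm-profile \
  /var/guix/profiles/per-user/me/auto/8ypp8dmwnydgbsgjcms2wyb32mng0wri
# ...and the user-side link points at that per-user entry instead.
ln -sfn /var/guix/profiles/per-user/me/auto/8ypp8dmwnydgbsgjcms2wyb32mng0wri \
  /home/me/projects/mrg1_chipseq/.guix-profile-1-link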


This change would pretty much solve the problem for cluster 
deployments, where it otherwise leads to “guix gc” avoidance.


What do you think?

--
Ricardo



Re: Succeed to run guile-bash on Guile 3.0.5

2021-05-10 Thread david larsson

On 2021-05-10 02:09, Oleg Pykhalov wrote:

Hello,

I succeeded in running guile-bash on Guile 3.0.5.


[..]


Oleg.


Hi!

This is great news! I tested your instructions and it works for me as 
well.


Thanks and regards,
David



Re: bug#47615: [PATCH 0/9] Add 32-bit powerpc support

2021-05-10 Thread Efraim Flashner
On Thu, May 06, 2021 at 04:38:38PM +0200, Ludovic Courtès wrote:
> Hi Efraim,
> 
> Efraim Flashner  skribis:
> 
> > On Sat, Apr 17, 2021 at 06:04:02PM +0200, Ludovic Courtès wrote:
> 
> [...]
> 
> >>   3. OTOH, what will be the status of this architecture?  I don’t think
> >>  new 32-bit PPC hardware is being made (right?), so I guess we
> >>  probably won’t have substitutes for that architecture.  That means
> >>  it won’t be supported at the same level as other architectures and
> >>  may quickly suffer from bitrot.
> >
> > I don't know about new 32-bit powerpc hardware, I think it's only being
> > newly created for the embedded and networking space. As far as operating
> > systems with support¹ Adélie Linux is the only one I know that's
> > actually targeting the machines.
> >
> > I found that emulation on my desktop (Ryzen 3900XT, 24 threads) is
> > faster than building on native hardware (1 core, 1.5GB of RAM, original
> > 4200 RPM disk), edging it out on single threaded compiling and doing
> > great when it comes to using multiple cores and parallel builds.
> > Ignoring how to create an OS image if we just targeted, say, mesa and
> > maybe one or two other packages, we could have a core set which doesn't
> > change regularly and won't take up too much emulated build time but will
> > save days of compile time.
> 
> [...]
> 
> > The fear of bit-rot is real and I think we should mention in the manual
> > (when I actually write the section) that support is best-effort with
> > minimal substitutes.
> 
> I feel like “best-effort with minimal substitutes” is already more than
> I’d be willing to commit to as a maintainer.

I have also learned that 'best effort' is a grey area in other circles:
does it mean you'll move mountains and spare no expense (The Best
Effort!™), or that you'll give it a shot but make no promises?  I
definitely meant the second.

> We just added POWER9, for which we have actual hardware, and even
> getting to this minimal state where we provide a binary tarball required
> quite some effort.
> 
> Doing the same with 32-bit PowerPC would require us to set up emulation;
> we wouldn’t even have real hardware.
> 
> All in all, my preference would be to take the patches but not mention
> PowerPC 32-bit support anywhere, or at least, not provide substitutes
> and binary installation tarball.  IOW, few people would know whether it
> actually works :-) but tinkerers could still play with it.
> 
> WDYT?
> 
> Ludo’.

How about changing the mips64el documentation to say that there is
minimal support for the two architectures, with no substitutes, and
that they may be fun for tinkerers with the hardware?  Then we could
also change the check in guix.m4 to add mips64el-linux as supported, in
case anyone does actually want to play with it.

Current text:

@item mips64el-linux (deprecated)
little-endian 64-bit MIPS processors, specifically the Loongson series,
n32 ABI, and Linux-Libre kernel.  This configuration is no longer fully
supported; in particular, there is no ongoing work to ensure that this
architecture still works.  Should someone decide they wish to revive this
architecture then the code is still available.

Proposed text:

@item Alternative architectures
In addition to the architectures which are actually supported, there are
a few formally unsupported architectures which may be of interest to
tinkerers.  Namely mips64el-linux, little-endian 64-bit MIPS processors,
specifically the Loongson series, n32 ABI, and powerpc-linux, big-endian
32-bit POWER processors, specifically the PowerPC 74xx series.  There are
no installation tarballs, substitutes or promises that these
architectures are functional.

And then I'd move it lower than the powerpc64le-linux entry.


-- 
Efraim Flashner  אפרים פלשנר
GPG key = A28B F40C 3E55 1372 662D  14F7 41AA E7DC CA3D 8351
Confidentiality cannot be guaranteed on emails sent or received unencrypted




Re: bug#47615: [PATCH 0/9] Add 32-bit powerpc support

2021-05-10 Thread Efraim Flashner
On Thu, May 06, 2021 at 04:45:30PM +0200, Ludovic Courtès wrote:
> Efraim Flashner  skribis:
> 
> > * gnu/packages/guile.scm (guile-3.0)[arguments]: On powerpc add two
> > phases to adjust for 32-bit big-endian systems.
> 
> [...]
> 
> > + (add-after 'unpack 'adjust-bootstrap-flags
> > +   (lambda _
> > + ;; Upstream knows about suggested solution.
> > + ;; https://debbugs.gnu.org/cgi/bugreport.cgi?bug=45214
> 
> The first comment line should preferably mention the problem, like:
> 
>   ;; Change optimization flags to work around crash on
>   ;; 32-bit big-endian architectures: .
> 
> > + (substitute* "bootstrap/Makefile.in"
> > +   (("^GUILE_OPTIMIZATIONS.*")
> > +    "GUILE_OPTIMIZATIONS = -O1 -Oresolve-primitives -Ocps\n"
> > + (add-after 'unpack 'remove-failing-tests
> > +   (lambda _
> > + ;; TODO: Discover why this test fails on powerpc-linux
> > + (delete-file "test-suite/standalone/test-out-of-memory"))
> 
> I’m surprised removing the executable works.  See
> ‘guile-2.2-skip-oom-test.patch’.

I took another look at it and I'm building 3.0.5 again but with
the 'remove-failing-tests phase removed. We'll find out in about 42
hours if it's needed or not.

-- 
Efraim Flashner  אפרים פלשנר
GPG key = A28B F40C 3E55 1372 662D  14F7 41AA E7DC CA3D 8351
Confidentiality cannot be guaranteed on emails sent or received unencrypted




Re: #:cargo-inputs don't honor --with-input

2021-05-10 Thread Efraim Flashner
On Sat, May 01, 2021 at 11:20:51AM +0200, Hartmut Goebel wrote:
> Hi Ludo,
> 
> Am 30.04.21 um 12:45 schrieb Ludovic Courtès:
> 
> > Uh.  More generally, Rust packages kinda create a “shadow dependency
> > graph” via #:cargo-inputs & co., which breaks all the tools that are
> > unaware of it.  It was discussed several times on this list, and
> > apparently it’s unfortunately unavoidable at this time.  :-/
> 
> Maybe we can get rid of #:cargo-inputs at least:
> 
> guix/build-system/cargo.scm says: "Although cargo does not permit cyclic
> dependencies between crates,
> however, it permits cycles to occur via dev-dependencies"

That I don't remember, but it would make it easier.

> So we could change #:cargo-inputs into normal inputs and get at least part
> of the dependencies right.
> 
> I'm aware of the "special treatment" of cargo-inputs. Anyhow we could apply
> the following changes to the cargo build-system:
> 
>  * The cargo build-system copies the "pre-built crate" (more on this
>    below) into a new output called "rlib" or "crate". There already is
>    a phase "packaging" which only needs to be changed to use the other
>    output.
> 
>  * All of today's #:cargo-inputs will be changed into normal inputs
>    using the "rlib/crate" output. (To avoid duplicate assoc-rec keys we
>    might need to change the name/keys, but this should be a minor issue.)
> 
>  * If required, the cargo build-system can easily identify former
>    #:cargo-inputs by being inputs from a "rlib/crate" output.
> 
> Benefits up to here:
> 
>  * The dependency graph would be much more complete - although
>    "#:cargo-development-inputs" would still be missing.

This is the biggest one IMO.

>  * Package transformation options would work - again except for
>    "#:cargo-development-inputs".

IIRC they're pulled in as (package-source rust-foo-0.x) so some of the
transformations should work (I would assume).

>  * If(!) we actually manage to make cargo pick "pre-built" crates,
>    package definitions will already be adjusted to use them.

And cut down on some of the big build times.

> Drawbacks up to here:
> 
>  * Since the "packaging" phase copies the source, there is not much
>    benefit in having a "rlib/crate" output yet. Actually, when a
>    "rlib/crate" output needs to be built, the user will end up with two
>    copies of the source (one from the git checkout, one from packaging).

The benefit of copying the source is that in theory you should be able
to set $GUIX_ENVIRONMENT/share/cargo/registry (or whatever) as a cache for
crates.io when developing, so if you want different features from the
crates you won't have to download the source; it would already be cached
locally.

> About "pre-built" crate: Given the many possible ways to build crates (e.g.
> switching on and off "features", different crate types), we might never be
> able to provide pre-built packages for all cases. Thus we might end up
> always providing the source, even if we manage to make cargo pick up
> pre-built artifacts.

Right now we use the 'default' feature set, which seems to be the
default for most crates when they're used.

> About the output name: Rust has a notion of "rlib" (a specialized .a file),
> which seems to be the pre-built artifacts we are seeking. Thus the proposed
> name.
> 
> WDYT?
> 
> -- 
> Regards
> Hartmut Goebel

When I last touched it I started from rust-apps.scm (or rust-minisign)
and tried transitioning as much as possible, but doing even just the
cargo-inputs would be a very good start.

-- 
Efraim Flashner  אפרים פלשנר
GPG key = A28B F40C 3E55 1372 662D  14F7 41AA E7DC CA3D 8351
Confidentiality cannot be guaranteed on emails sent or received unencrypted

