Re: python-pytest in references graph

2022-07-25 Thread Roel Janssen
On Sun, 2022-07-24 at 23:01 +0200, Maxime Devos wrote:
> 
> On 24-07-2022 22:25, Roel Janssen wrote:
> > I'm trying to understand the output of:
> > $ guix graph --type=references python-rdflib | dot -Tsvg -o rdflib.svg
> > 
> > In particular, I'm trying to understand why python-pytest has an input
> > arrow from python-rdflib, while it's "only" a native-input.  I thought the
> > "references" graph type would only include run-time references, but I
> > don't know what happens in this case.
> > 
> > What am I missing?
> 
> It should, but sometimes there are bugs in the package definition or 
> build system, in this case causing python-rdflib to refer to the 
> native-input python-pytest.  Likely it's the 'add-install-to-path' phase 
> adding too much, a known issue, which could be solved by separating 
> inputs and native-inputs on the build side when compiling natively (and 
> not only when cross-compiling) (currently they are merged together into 
> 'inputs'), though non-trivial.

Thanks for your explanation!  Indeed, I can see in the build output that a
couple of programs include pytest in their "GUIX_PYTHONPATH".
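
For anyone hitting the same question later: a quick way to confirm such a
retained reference (a sketch; the exact store file names will differ) is to
list the references of the built output and grep for pytest:

$ guix gc --references $(guix build python-rdflib) | grep pytest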

Kind regards,
Roel Janssen



python-pytest in references graph

2022-07-24 Thread Roel Janssen
Dear Guix,

I'm trying to understand the output of:
$ guix graph --type=references python-rdflib | dot -Tsvg -o rdflib.svg

In particular, I'm trying to understand why python-pytest has an input arrow
from python-rdflib, while it's "only" a native-input.  I thought the
"references" graph type would only include run-time references, but I don't
know what happens in this case.

What am I missing?

Thank you for your time.

Kind regards,
Roel Janssen




Re: License of your contributions to the blog at guix.gnu.org

2022-04-05 Thread Roel Janssen
Hi all,

Sorry for the late reply.  I didn't realize I had any say in this matter.

On Sat, 2022-02-05 at 14:47 +0100, Ludovic Courtès wrote:
> Hello,
> 
> I am emailing you on behalf of the GNU Guix project because you are the
> author or coauthor of one or more articles to the blog at
> <https://guix.gnu.org/en/blog>.
> 
> With a few exceptions, these articles do not have a clear license, which
> we would like to fix.  We propose to dual-license all the articles under
> CC-BY-SA 4.0 and GFDL version 1.3 or later, with no Invariant Sections,
> no Front-Cover Texts, and no Back-Cover Texts.
> 
> Do you agree with the proposed licensing terms for your contributions to
> the blog?
> 

I agree.


Kind regards,
Roel Janssen



Re: Guix Package Search API Server

2022-01-05 Thread Roel Janssen
On Wed, 2022-01-05 at 08:38 +0100, Tissevert wrote:
> Hi,
> 
> This JSON file sounds nice and useful. By the way, it may only be a
> transient error but the instance of hpcguix-web running at UMC Utrecht
> mentioned in the github repos seems to be down
> (https://hpcguix.op.umcutrecht.nl/).  Is it supposed to be so?
> 
> Tissevert
> 

I can comment on that because I worked at UMC Utrecht and set this particular 
instance up.
It only responds to certain IP addresses (those within the IP range of UMC 
Utrecht).

So this is normal.

Kind regards,
Roel Janssen




Re: “guix gc”, auto gcroots, cluster deployments

2021-05-10 Thread Roel Janssen
On Mon, 2021-05-10 at 13:59 +0200, Ricardo Wurmus wrote:
> 
> Hi Roel,
> 
> thanks for your feedback!
> 
> > Would it be possible to add an option to retrospectively apply 
> > this
> > transformation?  Maybe that could work somewhat like this:
> > 
> > $ ls -lha
> > ... /home/me/projects/mrg1_chipseq/.guix-profile-1-link ->
> > /gnu/store/ap0vrfxjdj57iqdapg8q83l4f7aylqzm-profile
> 
> This wouldn’t work, because we can’t read 
> /home/me/projects/mrg1_chipseq/.guix-profile-1-link centrally.  In 
> this particular case only the user “me” could resolve the link and 
> thus migrate the link.  (I would do this semi-manually by 
> impersonating the users to read their links.)
> 

Indeed we cannot resolve /home/me/projects/mrg1_chipseq/.guix-profile-
1-link as another user than "me".  So "me" would have to run the
hypothetical "guix gc --synchronize-profiles-to-gcroots".

From my point of view, that would be fine.  We can simply ask the users
on the cluster to execute that command once.

This could work if we have a mechanism to determine how complete the
garbage-collection picture for "root" is, and only continue GCing when
the picture is complete.  A starting point for this is 
$localstatedir/gcroots/auto which provides an overview of all profiles
that have been created. Perhaps we can reuse the filename of the
symlink.  So in the example given in the previous e-mail we could add
another link like so, but in a different directory:
8ypp8dmwnydgbsgjcms2wyb32mng0wri -> /gnu/store/...-profile

For which this already existed in $localstatedir/gcroots/auto:
8ypp8dmwnydgbsgjcms2wyb32mng0wri ->
/home/me/projects/mrg1_chipseq/.guix-profile-1-link

And then a one-time transition can be made by looking up 
8ypp8dmwnydgbsgjcms2wyb32mng0wri in both places.
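
To make that a bit more concrete, here is a rough Guile sketch of the
user-side step.  The "resolved" directory name below is made up; a real
implementation would use whatever location the daemon is taught to scan:

(use-modules (ice-9 ftw))

;; Hypothetical sketch: for every auto gcroot that points into this
;; user's home directory, record a link with the same file name that
;; points straight at the resolved /gnu/store item, in a directory that
;; the daemon can read.
(define auto-dir "/var/guix/gcroots/auto")
(define resolved-dir "/var/guix/gcroots/resolved")  ;made-up location

(for-each
 (lambda (name)
   (let ((target (readlink (string-append auto-dir "/" name))))
     (when (string-prefix? (getenv "HOME") target)
       (symlink (canonicalize-path target)
                (string-append resolved-dir "/" name)))))
 (scandir auto-dir (lambda (file) (not (member file '("." ".."))))))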

Maybe I'm making things needlessly complex, but I would really like a
solution that can be applied to an existing cluster setup. :)

So, your initial proposal sounds good to me already. How it could be
made more useful for the retrospective matching of unreachable symlinks
to /gnu/store paths can be considered a separate issue to solve.

> ~ ~ ~
> 
> Another related problem I found is that the links in 
> unreadable space may have names that only make sense on the system 
> where they were created.  For example, on the server “beast” we 
> may mount the cluster home directory as “/clusterhome/me”, whereas 
> on cluster nodes it would be mounted as “/home/me”.  When I have 
> Guix record a gcroot while working on “beast” I would get a link 
> that is no longer valid when I work on the cluster.
> 

Yes, I agree that this is another reason to implement your suggested
change. :)

Kind regards,
Roel Janssen




Re: “guix gc”, auto gcroots, cluster deployments

2021-05-10 Thread Roel Janssen
On Mon, 2021-05-10 at 11:59 +0200, Ricardo Wurmus wrote:
> Hi Guix,
> 
> On my cluster installation I ran “guix gc --list-dead” out of 
> curiosity.  When finding roots, the daemon also removes what it 
> considers stale links.  On my cluster installation not all links 
> always resolve, because the target files reside on remote file 
> systems.  These remote locations are not readable by the root user 
> on the server where guix-daemon runs (ignoring local root 
> privileges is pretty common for NFS servers), so they cannot 
> possibly be resolved.
> 
> So the daemon ends up deleting all these links from 
> /var/guix/gcroots/auto/.
> 
> This may not be too bad on its own, but it also means that the 
> next time around “guix gc” would consider the eventual target 
> profiles to be garbage.
> 
> There are two problems here:
> 
> 1) I don’t think “guix gc --list-dead” (or “--list-live”, or more 
> generally “findRoots” in nix/libstore/gc.cc) should delete 
> anything.  It should just list and not clean up.
> 

I agree it would be better if --list-* options don't remove store items
and/or database entries.

> 2) For cluster installations with remote file systems perhaps 
> there’s something else we can do to record gcroots.  We now have 
> this excursion into unreadable space because we use a symlink, but 
> the start ($localstatedir/gcroots/auto) and endpoints 
> (/gnu/store/…) are both accessible by the daemon.  Since these 
> intermediate locations are tied to user accounts, could we not 
> store them in a per-user directory?
> 
> This problem does not exist for user profiles, because the link in 
> unreadable home directories is not all that important; it merely 
> points to $localstatedir, which is always readable by the daemon. 
> Perhaps we could do the same for temporary roots and let *users* 
> decide when to let go of them by giving them a command to erase 
> the important links in $localstatedir.
> 
> So instead of having a link from 
> /gnu/var/guix/gcroots/auto/8ypp8dmwnydgbsgjcms2wyb32mng0wri to 
> /home/me/projects/mrg1_chipseq/.guix-profile-1-link pointing to 
> /gnu/store/ap0vrfxjdj57iqdapg8q83l4f7aylqzm-profile, we would 
> record 
> /var/guix/profiles/per-user/me/auto/8ypp8dmwnydgbsgjcms2wyb32mng0wri 
> pointing to /gnu/store/ap0vrfxjdj57iqdapg8q83l4f7aylqzm-profile, 
> and then point /home/me/projects/mrg1_chipseq/.guix-profile-1-link 
> at that.  Yes, removing 
> /home/me/projects/mrg1_chipseq/.guix-profile-1-link would no 
> longer free up the profile for garbage collection, but removing 
> $(readlink /home/me/projects/mrg1_chipseq/.guix-profile-1-link) 
> would.
> 
> This change would pretty much solve the problem for cluster 
> deployments, which otherwise leads to “guix gc” avoidance.
> 
> What do you think?
> 

We are facing a similar issue on our cluster deployment. In our case,
directories are only readable to certain users but not root.  So the
garbage collector can't read the symlinks, and therefore cannot
determine whether a profile is still in use or not.

So this seems like a good idea to me.

Would it be possible to add an option to retrospectively apply this
transformation?  Maybe that could work somewhat like this:

$ ls -lha
... /home/me/projects/mrg1_chipseq/.guix-profile-1-link ->
/gnu/store/ap0vrfxjdj57iqdapg8q83l4f7aylqzm-profile


$ ls -lh $localstatedir/gcroots/auto
$ guix gc --synchronize-profiles-to-gcroots
$ ls -lh $localstatedir/gcroots/auto
... 8ypp8dmwnydgbsgjcms2wyb32mng0wri ->
/gnu/store/ap0vrfxjdj57iqdapg8q83l4f7aylqzm-profile

$ rm /home/me/projects/mrg1_chipseq/.guix-profile-1-link 

$ guix gc --synchronize-profiles-to-gcroots
$ ls -lh $localstatedir/gcroots/auto

Kind regards,
Roel Janssen





Re: configure: error: A recent Guile-zlib could not be found; please install it.

2021-04-07 Thread Roel Janssen
On Wed, 2021-04-07 at 15:37 +0100, Paul Garlick wrote:
> Hi Roel,
> 
> > How can I get a working development environment to work on Guix?
> 
> A 'guix pull' within your profile will update the guile-zlib version
> that is used by 'guix environment ...'.  Then the configure script
> requirement will be met.
> 
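
Right, for the archives: the sequence that should do it is then roughly
(assuming a checkout of the Guix source tree):

$ guix pull
$ guix environment -C guix --ad-hoc guile-zlib
[env]$ ./configure --localstatedir=/var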

Thanks!

Cheers,
Roel




configure: error: A recent Guile-zlib could not be found; please install it.

2021-04-07 Thread Roel Janssen
Hi Guix,

I'm at version 2e5ac371e799cb91354ffafaf8af2da37d11fa3f, and when doing
this:
$ guix environment -C guix --ad-hoc guile-zlib
[env]$ ./configure --localstatedir=/var

.. then configure ends with:
checking whether Guile-zlib is available and recent enough... no
configure: error: A recent Guile-zlib could not be found; please
install it.

How can I get a working development environment to work on Guix?

Kind regards,
Roel Janssen




Re: Guix in Debian!

2021-01-25 Thread Roel Janssen
On Sat, 2021-01-23 at 20:04 -0800, Vagrant Cascadian wrote:
> So, a while back I mentioned that Guix was present in Debian
> "experimental":
> 
>   https://lists.gnu.org/archive/html/guix-devel/2020-11/msg00254.html
> 
> And it was useable for a brief window of time, but was broken due to
> some issues with guile-gnutls and guile-3.0:
> 
>   https://bugs.debian.org/964284
> 
> Somewhat deterred, I back-burnered it for a while while I focused on
> other things...
> 
> 
> Just a few days ago, I decided to attempt to get Guix into Debian's
> next
> release, and went with the fallback plan of building it against
> guile-2.2, and a few disabled tests later...
> 
>   https://tracker.debian.org/guix
> 
> 
> If all goes well, it should migrate to "bullseye" in a few
> days. Hopefully in a few months "bullseye" will become Debian's
> stable
> release shipping with guix! Presumably Guix will also eventually
> find
> itself in Ubuntu and other Debian derivatives...
> 
> 
> Now on Debian you should be able to:
> 
>   apt install guix
>   guix install dpkg
>   guix environment --ad-hoc dpkg -- dpkg -i ./guix_1.2.0-3_amd64.deb
> 
> It is almost like symmetry!
> 

This is really awesome.  I'm also grateful that you fixed the guile-gnutls
packaging in Debian.  Thank you!

Kind regards,
Roel Janssen




Re: Updating to latest Bioconductor release

2020-11-20 Thread Roel Janssen
On Fri, 2020-11-20 at 14:42 +0100, Ricardo Wurmus wrote:
> 
> Ricardo Wurmus  writes:
> 
> > There are some open questions: the changes to r-mutationalpatterns
> > invalidate the comment
> > 
> >    ;; These two packages are suggested packages
> > 
> > above r-bsgenome-hsapiens-1000genomes-hs37d5 and
> > r-bsgenome-hsapiens-ucsc-hg19.  Would be good to adjust the comment or
> > to move it down.
> > 
> > I also saw that r-rsubread fails to build.
> 
> And one more:
> 
> The commit that updates %bioconductor-version (“import: cran: Update the
> Bioconductor version to 3.12.”) should also change “bioconductor-uri” in
> (guix build-system r).  This is still at 3.11.
> 

Can/may I rewrite the history of my own commits to fix these?

Kind regards,
Roel Janssen




Re: Updating to latest Bioconductor release

2020-11-20 Thread Roel Janssen
On Thu, 2020-11-19 at 22:13 +0100, Ricardo Wurmus wrote:
> Hi,
> 
> > > > http://logs.guix.gnu.org/guix/2020-11-19.log#182349
> > 
> > > Right, so I shouldn't have pushed to "wip-r" in the first place.
> > 
> > Well, I think it is a lack of synchronisation between all of 3;
> > especially with this work around via external GitHub upstream.
> 
> Yeah, sorry.  I didn’t realize wip-r was worked on by any one other than
> simon and myself.  The commits are still there, they just don’t have a
> named pointer (= branch) to them any longer:
> 
> https://git.savannah.gnu.org/cgit/guix.git/log/?id=8ed6a08a998d4abd58eb67c85699f38f87f76d05
> 
> If that’s the last commit (or if you have another one that was pushed) I
> can simply reset the branch pointer to it and then stay out of it :)
> 

This is the most recent commit I pushed:
https://git.savannah.gnu.org/cgit/guix.git/log/?id=ea92cec586893748b6de32b930117c598225c38c

It'd be great if you could reset the branch pointer! :)

> > 
> > > Perhaps I should do it "the old way" and base my patches on the master
> > > branch and send the gazillion patches to the mailing list. :)
> > 
> > What Ricardo did previously (my rewrite of history on Nov. 10 on
> > GitHub, and then pushed by Ricardo to Savannah today), quoting their
> > word: "delete origin/wip-r, reset my local copy to zimoun/wip-r,
> > rebased on top of origin/master, and pushed origin/wip-r".  Maybe you
> > could do the same.
> > 
> > Or if you have the super power to do that: you can delete the branch
> > and re-push.
> 
> Deleting the branch is “git push -d origin wip-r”.
> 
> > > Then we can discuss each line of the commit messages separately before
> > > pushing to the master branch.
> > 
> > Well, I do not know what Ricardo thinks, but personally I would prefer
> > first a wip-r branch, then merge.  It will avoid the annoyance of
> > possible broken packages; I mean, we could detect them.
> 
> Same.  I prefer having a branch so ci.guix.gnu.org can build things and
> we can keep an eye on the fall-out (if any).

On IRC you noted that the commit messages are wrong.  I interpret that as "my
commit messages are wrong", which I guess will be those where I updated the
inputs of packages.  Could you tell me how I need to format the commit messages?
Then I can perhaps rewrite the most recent history of the wip-r branch to fix my
commit messages.
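
For my own notes, the rewrite would be something along these lines (the base
commit is a guess and still to be confirmed):

$ git checkout wip-r
$ git rebase -i origin/master   # mark my commits as "reword"
$ git push --force origin wip-r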

Kind regards,
Roel Janssen





Re: Updating to latest Bioconductor release

2020-11-19 Thread Roel Janssen
Hi Simon,

On Thu, 2020-11-19 at 18:27 +0100, zimoun wrote:
> Hi Roel,
> 
> Quick heads up of Ricardo from IRC.
> 
> On Thu, 19 Nov 2020 at 17:25, Roel Janssen  wrote:
> 
> > Well, they got removed from Savannah. Now I get this when I try to
> > push:
> 
> http://logs.guix.gnu.org/guix/2020-11-19.log#182429
> 
> > $ git push -f origin wip-r
> > Enumerating objects: 1599, done.
> > Counting objects: 100% (1599/1599), done.
> > Delta compression using up to 4 threads
> > Compressing objects: 100% (362/362), done.
> > Writing objects: 100% (1590/1590), 359.31 KiB | 14.37 MiB/s, done.
> > Total 1590 (delta 1271), reused 1545 (delta 1228), pack-reused 0
> > remote: error: denying non-fast-forward refs/heads/wip-r (you
> > should
> > pull first)
> > To git.savannah.gnu.org:/srv/git/guix.git
> >  ! [remote rejected]   wip-r -> wip-r (non-fast-forward)
> > error: failed to push some refs to
> > 'git.savannah.gnu.org:/srv/git/guix.git'
> > 
> > So, that's not going to work.
> > Perhaps @Ricardo knows more about the setup of the "wip-r" branch.
> 
> http://logs.guix.gnu.org/guix/2020-11-19.log#182349
> 
> Hope that helps,
> 

Right, so I shouldn't have pushed to "wip-r" in the first place.
Perhaps I should do it "the old way" and base my patches on the master
branch and send the gazillion patches to the mailing list. :)

Then we can discuss each line of the commit messages separately before
pushing to the master branch.

I can prepare it this evening and tomorrow evening.

Kind regards,
Roel Janssen





Re: Updating to latest Bioconductor release

2020-11-19 Thread Roel Janssen
On Thu, 2020-11-19 at 17:18 +0100, zimoun wrote:
> On Thu, 19 Nov 2020 at 16:57, Roel Janssen  wrote:
> 
> > My bad, I jumped to a conclusion too quickly. :)
> 
> No worries. :-)
> 
> 
> > So, *something* removed my commits to the wip-r branch. Is it some
> > kind
> > of automation that syncs the Github and the Savannah branches?
> 
> Which upstream?
> I have no access to Savannah, though I think you do.
> My wip-r branch is on my personal GitHub account; there is no
> automation.  A couple of days ago, Ricardo fetched my branch and
> pushed it to Savannah.  Then they reported a tiny mistake, which I
> corrected on my branch by rewriting the history, since it is WIP.
> That was before you started working on it, I guess.
> 
> To be concrete, the last commit on my branch is from 9 days ago, and
> the last commit on Savannah's wip-r is from 30 Oct.

Yeah, because mine got removed.

> > The good news is that in my local checkout I've fixed the build
> > problem
> > with r-rhdf5lib, so I should be able to build the remaining
> > packages
> > this evening.
> 
> Really cool!  Thank you.
> 
> 
> > I am hesitant to push it to the "wip-r" branch, because it seems
> > pointless. :)  So where can I push my updates to?
> 
> Please push your changes to wip-r on Savannah, rewriting the history
> is fine with me.  Then Cuirass will rebuild everything, check.  Then
> we can merge to master.

Well, they got removed from Savannah. Now I get this when I try to
push:
$ git push -f origin wip-r
Enumerating objects: 1599, done.
Counting objects: 100% (1599/1599), done.
Delta compression using up to 4 threads
Compressing objects: 100% (362/362), done.
Writing objects: 100% (1590/1590), 359.31 KiB | 14.37 MiB/s, done.
Total 1590 (delta 1271), reused 1545 (delta 1228), pack-reused 0
remote: error: denying non-fast-forward refs/heads/wip-r (you should
pull first)
To git.savannah.gnu.org:/srv/git/guix.git
 ! [remote rejected]   wip-r -> wip-r (non-fast-forward)
error: failed to push some refs to
'git.savannah.gnu.org:/srv/git/guix.git'

So, that's not going to work. 
Perhaps @Ricardo knows more about the setup of the "wip-r" branch.

Kind regards,
Roel Janssen





Re: Updating to latest Bioconductor release

2020-11-19 Thread Roel Janssen
On Thu, 2020-11-19 at 16:36 +0100, zimoun wrote:
> Hi,
> 
> On Thu, 19 Nov 2020 at 16:31, Roel Janssen  wrote:
> 
> > I fixed the build of r-rhdf5lib.
> 
> Cool!
> 
> > It seems, however, that you removed all of my changes to the wip-r
> > branch. Why?
> 
> Who is "you"? ;-)
> Personally, I did nothing; or I have nights that I am not aware. :-)
> (Even, I do not have commit access.)
> So if I did something wrong, I am sorry and could you point me to
> what?
> 

My bad, I jumped to a conclusion too quickly. :)

So, *something* removed my commits to the wip-r branch. Is it some kind
of automation that syncs the Github and the Savannah branches?

The good news is that in my local checkout I've fixed the build problem
with r-rhdf5lib, so I should be able to build the remaining packages
this evening.

I am hesitant to push it to the "wip-r" branch, because it seems
pointless. :)  So where can I push my updates to?

Kind regards,
Roel Janssen





Re: Updating to latest Bioconductor release

2020-11-19 Thread Roel Janssen
On Wed, 2020-11-18 at 17:59 +0100, zimoun wrote:
> On Wed, 18 Nov 2020 at 17:33, Roel Janssen  wrote:
> 
> > Okay.  Then I'll look into it.  I currently have only these left as
> > changed in my tree:
> > - r-atacseqqc: Needs r-rhdf5lib.
> > - r-cytoml: Needs r-rhdf5lib.
> > - r-scater: Needs r-rhdf5lib.
> > - r-scuttle (new package for r-scran): Needs r-rhdf5lib.
> > - r-scran: Needs r-scuttle.
> 
> Now it rings a bell and I think you missed this email (and I forgot
> to point you to it, sorry):
> 
> # NOT DONE
> r-rhdf5lib: consider removing this native input: hdf5-source
> r-rhdf5lib: consider removing this propagated input: hdf5
> 
> https://lists.gnu.org/archive/html/guix-devel/2020-11/msg00047.html
> 

I fixed the build of r-rhdf5lib.

It seems, however, that you removed all of my changes to the wip-r
branch. Why?

Kind regards,
Roel Janssen





Re: Updating to latest Bioconductor release

2020-11-18 Thread Roel Janssen
On Wed, 2020-11-18 at 17:23 +0100, zimoun wrote:
> Hi Roel,
> 
> On Wed, 18 Nov 2020 at 17:13, Roel Janssen  wrote:
> 
> > I pushed updates and fixes for various packages. Now I'm at r-
> > rhdf5lib.
> 
> Cool!
> 
> 
> > In 38881c9368595c5a894abe3695d98aabb1ef0029 you updated it to 1.12.0,
> > but it seems that building has changed quite a bit.  I wonder how it
> > could've built for you?
> 
> I have not.  The idea proposed by Ricardo was to update everything and
> let Cuirass build.  BTW, I do not have enough power at hand to rebuild
> all the BioConductor packages.
> 

Okay.  Then I'll look into it.  I currently have only these left as
changed in my tree:
- r-atacseqqc: Needs r-rhdf5lib.
- r-cytoml: Needs r-rhdf5lib.
- r-scater: Needs r-rhdf5lib.
- r-scuttle (new package for r-scran): Needs r-rhdf5lib.
- r-scran: Needs r-scuttle.

So it seems we are almost there.

I pushed the rest to the wip-r branch.

Kind regards,
Roel Janssen





Re: Updating to latest Bioconductor release

2020-11-18 Thread Roel Janssen
Hi Simon,

On Wed, 2020-11-18 at 12:50 +0100, zimoun wrote:
> Hi Roel,
> 
> On Wed, 18 Nov 2020 at 10:31, Roel Janssen  wrote:
> 
> > I fixed the build issue with r-delayedarray and a couple of others
> > on
> > my local machine.  I also updated the bioconductor version variable
> > to
> > 3.12.  Can I push these changes to the "wip-r" branch, along with
> > updates of the many packages involved?
> 
> Please go ahead.  Then Berlin will rebuild all since it tracks the
> branch wip-r.

I pushed updates and fixes for various packages.  Now I'm at r-rhdf5lib.

In 38881c9368595c5a894abe3695d98aabb1ef0029 you updated it to 1.12.0,
but it seems that building has changed quite a bit.  I wonder how it
could've built for you?

Kind regards,
Roel Janssen






Re: Updating to latest Bioconductor release

2020-11-18 Thread Roel Janssen
On Tue, 2020-11-17 at 23:58 +0100, zimoun wrote:
> Hi Roel,
> 
> Just to let you know:
> 
>   <https://ci.guix.gnu.org/eval/18656?status=failed>
> 
> I think the failure is coming from IO on the Berlin machine.
> 
> Are you building this branch too?

I fixed the build issue with r-delayedarray and a couple of others on
my local machine.  I also updated the bioconductor version variable to
3.12.  Can I push these changes to the "wip-r" branch, along with
updates of the many packages involved?

I still have about 30 packages to fix the build for before we can
consider merging to master.

Kind regards,
Roel Janssen




Re: Updating to latest Bioconductor release

2020-11-16 Thread Roel Janssen
Hi Simon,

On Mon, 2020-11-16 at 13:29 +0100, zimoun wrote:
> Hi Roel,
> 
> On Mon, 16 Nov 2020 at 13:10, Roel Janssen  wrote:
> 
> > Hehe. I'm building all R packages in the "wip-r" branch now to see
> > what's left for me to fix.
> 
> Cool!  I do not know if Ricardo fetched my branch to update ’wip-r’
> because an error about a missing symbol was there and then I
> corrected
> it in my branch.  Anyway.
> 
> BTW, I would like to add the type ’workflow’ in addition to ’annotation’
> and ’experiment’.  Because I need:
> 
> <https://bioconductor.org/packages/3.11/workflows/html/cytofWorkflow.html>
> 
> Well, I do not know if that should go to ’wip-r’ or directly to
> ’master’.  WDYT?

Well, I see we're getting R packages that fail to build because the
source cannot be downloaded (anymore), due to the outdated Bioconductor
version.  So if we can do an upgrade to 3.12 soon, and add the
'workflow' type later, that'd be my preferred route.

So to achieve that, I think we need to:
1. Update the Bioconductor version to 3.12
2. Update/build the Bioconductor packages that are already in Guix.
3. --> When all build failures have been resolved, push to master?

4. Add the bits for 'workflow' packages.
5. Add workflow packages / adjust the ones we have (do we have any?)
and build/test them.
6. --> Push to master?

WDYT?

Kind regards,
Roel Janssen





Re: Updating to latest Bioconductor release

2020-11-16 Thread Roel Janssen
Hi Simon,

On Mon, 2020-11-16 at 12:27 +0100, zimoun wrote:
> Hi Roel,
> 
> On Mon, 16 Nov 2020 at 11:49, Roel Janssen  wrote:
> 
> > Is anyone working on updating Bioconductor to the latest (3.12)
> > release?  If so, what's the status? :)
> 
> Some work is already done in the branch wip-r pulled from:
> 
>    <https://github.com/zimoun/guix>
> 
> Pareto’s principle says that 80% of the work is remaining. ;-)
> 
> Please go ahead.

Hehe. I'm building all R packages in the "wip-r" branch now to see
what's left for me to fix.

Thanks!

Kind regards,
Roel Janssen




Updating to latest Bioconductor release

2020-11-16 Thread Roel Janssen
Dear Guix,

Is anyone working on updating Bioconductor to the latest (3.12)
release?  If so, what's the status? :)

Kind regards,
Roel Janssen





Re: “guix pack -RR r“ fails?

2020-11-05 Thread Roel Janssen
ry/grDevices/libs/grDevices.so: cannot open shared
> object file: Bad address
> Error: package or namespace load failed for 'graphics' in
> dyn.load(file, DLLpath = DLLpath, ...):
>  unable to load shared object
> '/gnu/store/nqqhaz59gdr5q6mb6mw9dd8jk133rna2-r-minimal-
> 4.0.3/lib/R/library/grDevices/libs/grDevices.so':
>   /gnu/store/nqqhaz59gdr5q6mb6mw9dd8jk133rna2-r-minimal-
> 4.0.3/lib/R/library/grDevices/libs/grDevices.so: cannot open shared
> object file: Bad address
> Error: package or namespace load failed for 'stats' in dyn.load(file,
> DLLpath = DLLpath, ...):
>  unable to load shared object
> '/gnu/store/nqqhaz59gdr5q6mb6mw9dd8jk133rna2-r-minimal-
> 4.0.3/lib/R/library/grDevices/libs/grDevices.so':
>   /gnu/store/nqqhaz59gdr5q6mb6mw9dd8jk133rna2-r-minimal-
> 4.0.3/lib/R/library/grDevices/libs/grDevices.so: cannot open shared
> object file: Bad address
> During startup - Warning messages:
> 1: package 'grDevices' in options("defaultPackages") was not found 
> 2: package 'graphics' in options("defaultPackages") was not found 
> 3: package 'stats' in options("defaultPackages") was not found 
> 4: Setting LC_CTYPE failed, using "C" 
> 5: Setting LC_COLLATE failed, using "C" 
> 6: Setting LC_TIME failed, using "C" 
> 7: Setting LC_MESSAGES failed, using "C" 
> 8: Setting LC_MONETARY failed, using "C" 
> 9: Setting LC_PAPER failed, using "C" 
> 10: Setting LC_MEASUREMENT failed, using "C" 
> > 
> --8<---cut here---end--->8---
> 
> 
> The cluster machine is an old kernel:
> 
> --8<---cut here---start->8---
> HEAD$ uname -a
> Linux HEAD 2.6.32-573.8.1.el6.x86_64 #1 SMP Tue Nov 10 18:01:38 UTC
> 2015 x86_64 x86_64 x86_64 GNU/Linux
> --8<---cut here---end--->8---
> 
> 
> What do I miss?

Perhaps completely misguided, but is this inside an SGE or SLURM job?
I've seen similar errors when starting R on a cluster node with too
little memory allocated to the compute job. In my experience you need
at least 2G of memory available.
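
For example, something along these lines usually gives R enough headroom (the
exact resource names depend on the scheduler and site configuration):

$ srun --mem=4G --pty R        # SLURM
$ qsub -l h_vmem=4G run-r.sh   # SGE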

Kind regards,
Roel Janssen





Re: Using #true and #false everywhere?

2020-10-21 Thread Roel Janssen
On Wed, 2020-10-21 at 11:59 +0200, Ludovic Courtès wrote:
> Hi,
> 
> Andreas Enge  skribis:
> 
> > on the bikeshedding front: I find #true and #false confusing, since
> > everything I see on the Scheme language seems to use #t and #f.
> 
> What material are you referring to?  SICP & co.?
> 

Sorry to interject in the thread, but here's more material that uses #t
and #f:

  $ guile
  GNU Guile 3.0.4
  Copyright (C) 1995-2020 Free Software Foundation, Inc.

  Guile comes with ABSOLUTELY NO WARRANTY; for details type `,show w'.
  This program is free software, and you are welcome to redistribute it
  under certain conditions; type `,show c' for details.

  Enter `,help' for help.
  scheme@(guile-user)> (= 1 1)
  $1 = #t
  scheme@(guile-user)> (= 1 2)
  $2 = #f

I don't remember ever being confused about #t and #f, so perhaps my
opinion doesn't matter much here, but I'd prefer #t and #f because they
are already in widespread use in Scheme.  (And by widespread I mean:
pretty much all Scheme code I read.)

I do remember being confused about the double parentheses in "let" ;).

Kind regards,
Roel Janssen





Re: branch master updated: gnu: Add r-useful.

2020-09-10 Thread Roel Janssen
On Wed, 2020-09-09 at 18:05 +0200, Ricardo Wurmus wrote:
> Hi Roel,
> 
> > This is an automated email from the git hooks/post-receive script.
> > 
> > roelj pushed a commit to branch master
> > in repository guix.
> > 
> > The following commit(s) were added to refs/heads/master by this
> > push:
> >  new a9401b4  gnu: Add r-useful.
> > a9401b4 is described below
> > 
> > commit a9401b4c948552d6a5a95bbd295e61871f4c6d74
> > Author: Roel Janssen 
> > AuthorDate: Wed Sep 9 16:59:42 2020 +0200
> > 
> > gnu: Add r-useful.
> > 
> > * gnu/packages/cran.scm (r-useful): New variable.
> > ---
> >  gnu/packages/cran.scm | 30 ++
> >  1 file changed, 30 insertions(+)
> > 
> > diff --git a/gnu/packages/cran.scm b/gnu/packages/cran.scm
> > index 3b13bbd..8f7e379 100644
> > --- a/gnu/packages/cran.scm
> > +++ b/gnu/packages/cran.scm
> > @@ -3724,6 +3724,36 @@ algorithm.  The interface of @code{ucminf}
> > is designed for easy interchange
> >  with the package @code{optim}.")
> >  (license license:gpl2+)))
> >  
> > +(define-public r-useful
> > +  (package
> > +   (name "r-useful")
> > +   (version "1.2.6")
> > +   (source (origin
> > +(method url-fetch)
> > +(uri (cran-uri "useful" version))
> > +(sha256
> > + (base32
> > +  "0n50v1q75k518sq23id14jphwla35q4sasahrnrnllwrachl67v
> > 1"
> > +   (properties `((upstream-name . "useful")))
> > +   (build-system r-build-system)
> > +   (propagated-inputs
> > +`(("r-assertthat" ,r-assertthat)
> > +  ("r-dplyr" ,r-dplyr)
> > +  ("r-ggplot2" ,r-ggplot2)
> > +  ("r-magrittr" ,r-magrittr)
> > +  ("r-matrix" ,r-matrix)
> > +  ("r-plyr" ,r-plyr)
> > +  ("r-purrr" ,r-purrr)
> > +  ("r-scales" ,r-scales)))
> > +   (home-page "https://github.com/jaredlander/useful")
> > +   (synopsis "A Collection of Handy, Useful Functions")
> 
> “guix lint” should have complained about the leading “A”.  Please
> also
> use lowercase.

Fixed in 811985a7e0b8e7aad5a3c3818482b06996c94d02.
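
(For next time: running "./pre-inst-env guix lint r-useful" locally before
pushing would have flagged at least the leading "A".)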

Kind regards,
Roel Janssen





Re: branch master updated: gnu: Add r-bisquerna.

2020-09-10 Thread Roel Janssen
On Thu, 2020-09-10 at 13:31 +0200, Ricardo Wurmus wrote:
> Roel Janssen  writes:
> 
> > > > +(define-public r-bisquerna
> > > > +  (package
> > > > +   (name "r-bisquerna")
> > > > +   (version "1.0.4")
> > > > +   (source (origin
> > > > +(method url-fetch)
> > > > +(uri (cran-uri "BisqueRNA" version))
> > > > +(sha256
> > > > + (base32
> > > > +  "01g34n87ml7n3pck77497ddgbv3rr5p4153ac8ninpgjijl
> > > > m3jw
> > > > 2"
> > > 
> > > Why is this in (gnu packages bioinformatics) and not in (gnu
> > > packages
> > > cran)?
> > 
> > It seemed so "bioinformatics"-specific.  But you're right, it's a
> > CRAN
> > package, so that may be a better fit.  Shall I move it to CRAN?
> 
> If you have time to do that, yes please.  Some time ago I started a
> half-hearted migration of R packages from (gnu packages
> bioinformatics)
> to (gnu packages cran) and (gnu packages bioconductor).  It’s not
> supremely important, but I think in the long term we’d like to have
> CRAN
> things in (gnu packages cran) and Bioconductor things in (gnu
> packages
> bioconductor), because it’s deliciously unsurprising. :)

I fully agree that it would be nice to have all packages originating
from CRAN in (gnu packages cran) and all things Bioconductor in (gnu
packages bioconductor).

I moved r-bisquerna and lowercased its synopsis in
66be746dc0c0f4ba3d748ed8d0983b2f9afdace8.

Kind regards,
Roel Janssen





Re: branch master updated: gnu: Add r-bisquerna.

2020-09-10 Thread Roel Janssen
Hi Ricardo,

On Wed, 2020-09-09 at 18:04 +0200, Ricardo Wurmus wrote:
> Hi Roel,
> 
> > This is an automated email from the git hooks/post-receive script.
> > 
> > roelj pushed a commit to branch master
> > in repository guix.
> > 
> > The following commit(s) were added to refs/heads/master by this
> > push:
> >  new 0574446  gnu: Add r-bisquerna.
> > 0574446 is described below
> > 
> > commit 0574446be82ef54b925441e4283bf754a86918a9
> > Author: Roel Janssen 
> > AuthorDate: Wed Sep 9 17:02:55 2020 +0200
> > 
> > gnu: Add r-bisquerna.
> > 
> > * gnu/packages/bioinformatics.scm (r-bisquerna): New variable.
> > ---
> >  gnu/packages/bioinformatics.scm | 25 +
> >  1 file changed, 25 insertions(+)
> > 
> > diff --git a/gnu/packages/bioinformatics.scm
> > b/gnu/packages/bioinformatics.scm
> > index f8792ef..9f2fd86 100644
> > --- a/gnu/packages/bioinformatics.scm
> > +++ b/gnu/packages/bioinformatics.scm
> > @@ -7923,6 +7923,31 @@ manipulating genomic intervals and variables
> > defined along a genome.")
> >  on Bioconductor or which replace R functions.")
> >  (license license:artistic2.0)))
> >  
> > +(define-public r-bisquerna
> > +  (package
> > +   (name "r-bisquerna")
> > +   (version "1.0.4")
> > +   (source (origin
> > +(method url-fetch)
> > +(uri (cran-uri "BisqueRNA" version))
> > +(sha256
> > + (base32
> > +  "01g34n87ml7n3pck77497ddgbv3rr5p4153ac8ninpgjijlm3jw
> > 2"
> 
> Why is this in (gnu packages bioinformatics) and not in (gnu packages
> cran)?

It seemed so "bioinformatics"-specific.  But you're right, it's a CRAN
package, so that may be a better fit.  Shall I move it to CRAN?

> > +   (synopsis "Decomposition of Bulk Expression with Single-Cell
> > Sequencing")
> 
> Please use lowercase where it would be normally used.

My mistake. I will take better care of this in the future.

Kind regards,
Roel Janssen





Re: branch master updated: gnu: Add r-loomr.

2020-09-10 Thread Roel Janssen
Hi Ricardo,


On Wed, 2020-09-09 at 18:02 +0200, Ricardo Wurmus wrote:
> Hi Roel,
> 
> > This is an automated email from the git hooks/post-receive script.
> > 
> > roelj pushed a commit to branch master
> > in repository guix.
> > 
> > The following commit(s) were added to refs/heads/master by this
> > push:
> >  new 1f56ec0  gnu: Add r-loomr.
> > 1f56ec0 is described below
> > 
> > commit 1f56ec08af704bdc7aa3e143bf5ce351c5306dea
> > Author: Roel Janssen 
> > AuthorDate: Wed Sep 9 16:56:02 2020 +0200
> > 
> > gnu: Add r-loomr.
> > 
> > * gnu/packages/bioinformatics.scm (r-loomr): New variable.
> 
> This is not free software.  See
> 
>https://github.com/mojaveazure/loomR/pull/24


Oh shoot. I'm sorry I didn't see this discussion!


> Aside from this, I would like to say two things:
> 
> >  gnu/packages/bioinformatics.scm | 26 ++
> 
> Let’s please not add R packages to (gnu packages bioinformatics) when
> it
> can be avoided.  (In this case there’s no CRAN package, so it’s
> fine.)

Where would I add a package that is on neither CRAN nor Bioconductor?
Perhaps this situation won't occur again, and if it does it should raise
a flag; I don't think I've had this case before.

> > +(define-public r-loomr
> > +  (package
> > +   (name "r-loomr")
> > +   (version "0.2.0-beta")
> > +   (source (origin
> > +(method url-fetch)
> > +(uri (string-append
> > +  "https://github.com/mojaveazure/loomR/archive/;
> > +  version ".tar.gz"))
> 
> This is not okay as these generated tarballs are not stable.  I
> haven’t
> seen these patches on guix-patches before — maybe I missed them.  But
> we
> have been avoiding these kind of URLs since a long time and had I
> seen
> the patches on guix-patches I and other would probably have pointed
> this
> out.
> 
> Can you please revert this ASAP?

I see you've reverted it already. Thanks for that!

I will default to submitting patches to guix-patches again.  I thought
it was trivial enough to just push.  My mistake.

Kind regards,
Roel Janssen





Re: Adding a subcommand "load-profile"

2020-04-29 Thread Roel Janssen
On Wed, 2020-04-29 at 15:48 +0200, zimoun wrote:
> Dear Roel,
> 
> On Wed, 29 Apr 2020 at 14:46, Roel Janssen  wrote:
> 
> > > > If there is interest in having this as a "load-profile" subcommand, I
> > > > will
> > > > post
> > > > an initial implementation to the mailing list ASAP.
> > > 
> > > Instead of another subcommand, I would suggest to add an option to
> > > "guix environment".
> > > Then an another further step should maybe combine "--load-profile" and
> > > "--ad-hoc" in order to create a temporary "augmented" profile.
> > 
> > I would strongly prefer to keep it backwards-compatible for our local HPC
> > users.
> > Also, the "environment" command generates a new profile, whereas the
> > proposed
> > "load-profile" merely applies the environment variables of an existing
> > profile
> > to a newly spawned shell.  I think the way they work differs enough to
> > warrant a
> > separate subcommand.
> 
> I understand that you would like to keep backward-compatibility with
> your current tools and ease the switch for your users. But you could
> do so by replacing all the shell code in the 'guixr load-profile' by
> "guix environment --load-profile". :-)

Well, I want to get rid of all the shell code.  Not replace it with less shell
code. :)

And cluttering everyone's environment on our cluster with a bash function that
figures out whether to emit "guix environment --load-profile" or another guix
command seems far from optimal too.

Kind regards,
Roel Janssen





Re: Adding a subcommand "load-profile"

2020-04-29 Thread Roel Janssen
Dear Simon,

Thank you for your ideas!

On Tue, 2020-04-28 at 18:54 +0200, zimoun wrote:
> 
> [...]
> 
> > Would there be any interest from others to have this as well? And also, the
> > shell implementation heavily relies on Bash.  What other shells should I
> > attempt to implement?
> 
> It would be cool!
> However, if I remember correctly the previous discussion on such
> topic, the issue was to respect the user's shell ($SHELL). This
> argument is mitigated by the fact that "guix environment" already uses
> Bash to spawn the new shell, if I understand correctly.

I think we can respect the user's shell for the implementation of "load-
profile".  We could use Guile's "getenv" and "setenv" to prepare the shell
environment in Guile, and then spawn the $SHELL in such a way that it doesn't
load the user's default configuration.
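
To make that concrete, here is a minimal sketch of what I mean, assuming a
Bash-compatible $SHELL (the flags below are Bash-specific, and nothing about
this is final):

;; Minimal sketch: start an interactive shell whose only start-up file
;; is the profile's etc/profile, so the user's own rc files are skipped.
(define (load-profile profile)
  (setenv "GUIX_PROFILE" profile)
  (execlp (or (getenv "SHELL") "/bin/bash") "bash"
          "--rcfile" (string-append profile "/etc/profile") "-i"))

That reproduces the "[env]$ ... exit" behaviour of the guixr script; the hard
part is doing the same for non-Bash shells.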

> Well, I find it annoying to not be able to "go in" and "go out" of one
> profile, and I would rather have a Bash-specific implementation than
> nothing.
> So, thank you for the initiative. :-)
> 
> 
> > If there is interest in having this as a "load-profile" subcommand, I will
> > post
> > an initial implementation to the mailing list ASAP.
> 
> Instead of another subcommand, I would suggest to add an option to
> "guix environment".
> Then an another further step should maybe combine "--load-profile" and
> "--ad-hoc" in order to create a temporary "augmented" profile.

I would strongly prefer to keep it backwards-compatible for our local HPC users.
Also, the "environment" command generates a new profile, whereas the proposed
"load-profile" merely applies the environment variables of an existing profile
to a newly spawned shell.  I think the way they work differs enough to warrant a
separate subcommand.

Kind regards,
Roel Janssen





Adding a subcommand "load-profile"

2020-04-28 Thread Roel Janssen
Dear Guix,

Years ago we implemented GNU Guix on the high-performance computing cluster in
Utrecht.  One of the things we added was a wrapper around the "guix" command
(called "guixr") to enable communication between the guix-daemon (on one node),
and the client-side "guix" command.  (We actually copied the great "guixr"
script from Ricardo at the time.)

Lots of improvements have been made for the HPC use-case that the need for the
"guixr" wrapper script is no longer needed.   Except for one thing.

We added a subcommand in the "guixr" script called "load-profile".  It allows a
user to do the following:

$ guixr package -i ... -p /my/profile
$ guixr load-profile /my/profile
[env]$
  # ... A new shell is spawned here.
  # Inside this shell only the environment variables in
  # /my/profile/etc/profile are set ...
[env]$ exit
  # Return to the normal shell state 


The code of "guixr" is available at [1].

I sometimes wish I had this command available in "guix" itself.  So I'd like to
implement the "load-profile" subcommand in Scheme, so that it can be part of
Guix.

Would there be any interest from others to have this as well? And also, the
shell implementation heavily relies on Bash.  What other shells should I attempt
to implement?

If there is interest in having this as a "load-profile" subcommand, I will post
an initial implementation to the mailing list ASAP.

Thanks all!

Kind regards,
Roel Janssen

[1] 
https://github.com/UMCUGenetics/guix-additions/blob/master/umcu/packages/guix.scm#L191-L339




Re: New signing key

2020-03-05 Thread Roel Janssen
Hello Ludo’ and Guix,

I lost the password of the old key.  I updated my OpenPGP key on
Savannah to the new one (F556FD94FB8F8B8779E36832CBD0CD5138C19AFC).

I am trying to find the revocation key (printed) to revoke the old key
as reassurance that I am still me, and no malice is going on.  As I
moved twice since printing and securely storing the revocation key,
this will take some time.

Is there perhaps a key-signing party for GNU Guix maintainers to build
better trust in the future?

Kind regards,
Roel Janssen


On Thu, 2020-03-05 at 18:13 +0100, Ludovic Courtès wrote:
> Hello Roel,
> 
> You signed commit cc51c03ff867d4633505354819c6d88af88bf919 and its
> parent with OpenPGP key F556FD94FB8F8B8779E36832CBD0CD5138C19AFC,
> which
> differs from the one registered in ‘build-aux/git-authenticate.scm’
> (17CB 2812 EB63 3DFF 2C7F 0452 C3EC 1DCA 8430 72E1) that you used
> previously.
> 
> Could you please reply to this message signed with the old key,
> stating
> that the new key is the right one?
> 
> As a last resort, if you lost control of the old key, could you
> ensure
> your Savannah account contains the new key and send a reply signed
> with
> the new key?
> 
> Thanks in advance,
> Ludo’.


signature.asc
Description: This is a digitally signed message part


Re: Request: Bazel build system (required for Tensorflow update)

2020-03-04 Thread Roel Janssen
On Wed, 2020-03-04 at 16:10 +0100, Pierre Neidhardt wrote:
> Roel Janssen  writes:
> 
> > Alright!  I've sent patches for abseil-cpp to the mailing list, but I
> > haven't
> > been able to get the googletest test suite to work.  Have you figured that
> > out?
> 
> I've reused your patch and fixed the tests.  It's a weird thing that
> we've got with the googletest package, I'm not sure why this is happening.

Well, thanks for fixing that!  It looks better than I thought it would. :)

> > Perhaps we can share the work on Tensorflow 1.15.2.  Would you mind sharing
> > the
> > patch you've got so far?
> 
> Nothing significant, really :(
> 

Alright.

Kind regards,
Roel Janssen




Re: Request: Bazel build system (required for Tensorflow update)

2020-03-04 Thread Roel Janssen
On Wed, 2020-03-04 at 15:01 +0100, Pierre Neidhardt wrote:
> I've actually managed to fix the abseil issue.
> 
> Now I'm stuck at more broken CMake stuff.
> Duh, this will take a while...
> 
> --8<---cut here---start->8---
> -- Could NOT find c-ares (missing: c-ares_DIR)
> -- Found PythonInterp: /gnu/store/l8nphg0idd8pfddyad8f92lx8d1hc053-python-
> wrapper-3.7.4/bin/python (found version "3.7.4") 
> -- Found PythonLibs: /gnu/store/78w7y0lxar70j512iqw8x3nimzj10yga-python-
> 3.7.4/lib/libpython3.7m.so (found version "3.7.4") 
> CMake Error at tf_python.cmake:132 (message):
>   Python proto directory not found: tensorflow/contrib/tpu/profiler
> Call Stack (most recent call first):
>   CMakeLists.txt:613 (include)
> 
> 
> CMake Error at tf_python.cmake:217 (message):
>   Python module not found: tensorflow/contrib/rnn/kernels
> Call Stack (most recent call first):
>   CMakeLists.txt:613 (include)
> 
> 
> CMake Error at tf_python.cmake:217 (message):
>   Python module not found: tensorflow/contrib/rnn/ops
> Call Stack (most recent call first):
>   CMakeLists.txt:613 (include)
> 
> 
> CMake Error at tf_python.cmake:217 (message):
>   Python module not found: tensorflow/contrib/tpu/profiler
> Call Stack (most recent call first):
>   CMakeLists.txt:613 (include)
> 
> 
> -- Found SWIG: /gnu/store/1jamhp01xc911m68j8ndiwlcc55q8ikp-swig-
> 3.0.12/bin/swig (found version "3.0.12") 
> CMake Warning at tf_shared_lib.cmake:136 (export):
>   Cannot create package registry file:
> 
> /homeless-
> shelter/.cmake/packages/Tensorflow/31aad99a164c4099f5d0af2d4ec07d6f
> 
>   No such file or directory
> 
> Call Stack (most recent call first):
>   CMakeLists.txt:616 (include)
> 
> 
> -- Configuring incomplete, errors occurred!
> See also "/tmp/guix-build-tensorflow-1.15.0.drv-
> 0/source/tensorflow/contrib/build/CMakeFiles/CMakeOutput.log".
> See also "/tmp/guix-build-tensorflow-1.15.0.drv-
> 0/source/tensorflow/contrib/build/CMakeFiles/CMakeError.log".
> command "cmake" "../cmake" "-DCMAKE_BUILD_TYPE=Release" "-
> DCMAKE_INSTALL_PREFIX=/gnu/store/wm0iizlnsj55pdhjdm7k7pzz3bad84f0-tensorflow-
> 1.15.0" "-DCMAKE_INSTALL_LIBDIR=lib" "-
> DCMAKE_INSTALL_RPATH_USE_LINK_PATH=TRUE" "-
> DCMAKE_INSTALL_RPATH=/gnu/store/wm0iizlnsj55pdhjdm7k7pzz3bad84f0-tensorflow-
> 1.15.0/lib" "-DCMAKE_VERBOSE_MAKEFILE=ON" "-
> Dprotobuf_STATIC_LIBRARIES=/gnu/store/p77n8kpsl50qlrz5fk0mc9kvfkinh9dq-
> protobuf-3.6.1/lib/libprotobuf.so" "-
> DPROTOBUF_PROTOC_EXECUTABLE=/gnu/store/p77n8kpsl50qlrz5fk0mc9kvfkinh9dq-
> protobuf-3.6.1/bin/protoc" "-
> Dsnappy_STATIC_LIBRARIES=/gnu/store/3xrpbdhhb8nk9p9jqr19ljlyhxnxk18n-snappy-
> 1.1.8/lib/libsnappy.so" "-
> Dsnappy_INCLUDE_DIR=/gnu/store/3xrpbdhhb8nk9p9jqr19ljlyhxnxk18n-snappy-1.1.8"
> "-Djsoncpp_STATIC_LIBRARIES=/gnu/store/3vacq5lrnri4g8a5498qxkgxv4z8jyv8-
> jsoncpp-1.7.3/lib/libjsoncpp.so" "-
> Djsoncpp_INCLUDE_DIR=/gnu/store/3vacq5lrnri4g8a5498qxkgxv4z8jyv8-jsoncpp-
> 1.7.3" "-Dsqlite_STATIC_LIBRARIES=/gnu/store/i6l1579g80387rda658jy9cfqq82643d-
> sqlite-3.28.0/lib/libsqlite.a" "-
> DABSEIL_CPP_LIBRARIES=/gnu/store/j36v4bb2vzy02xfyxp6ryfm4aj0yiaxn-abseil-cpp-
> 20200225/lib/" "-
> DABSEIL_CPP_LIBRARIES_DIR_HINTS:STRING=/gnu/store/j36v4bb2vzy02xfyxp6ryfm4aj0y
> iaxn-abseil-cpp-20200225/lib/" "-Dsystemlib_ALL=ON" "-
> Dtensorflow_ENABLE_POSITION_INDEPENDENT_CODE=ON" "-
> Dtensorflow_BUILD_SHARED_LIB=ON" "-Dtensorflow_OPTIMIZE_FOR_NATIVE_ARCH=OFF"
> "-Dtensorflow_ENABLE_SSL_SUPPORT=OFF" 
> "-Dtensorflow_BUILD_CONTRIB_KERNELS=OFF" 
> failed with status 1
> note: keeping build directory `/tmp/guix-build-tensorflow-1.15.0.drv-8'
> builder for `/gnu/store/z9f8pcng2gsp7g9f93jd40c7g53wyc4s-tensorflow-
> 1.15.0.drv' failed with exit code 1
> build of /gnu/store/z9f8pcng2gsp7g9f93jd40c7g53wyc4s-tensorflow-1.15.0.drv
> failed
> View build log at '/var/log/guix/drvs/z9/f8pcng2gsp7g9f93jd40c7g53wyc4s-
> tensorflow-1.15.0.drv.bz2'.
> guix build: error: build of `/gnu/store/z9f8pcng2gsp7g9f93jd40c7g53wyc4s-
> tensorflow-1.15.0.drv' failed
> --8<---cut here---end--->8---
> 

Alright!  I've sent patches for abseil-cpp to the mailing list, but I haven't
been able to get the googletest test suite to work.  Have you figured that out?

Perhaps we can share the work on Tensorflow 1.15.2.  Would you mind sharing the
patch you've got so far?

Kind regards,
Roel Janssen





Re: Request: Bazel build system (required for Tensorflow update)

2020-03-04 Thread Roel Janssen
On Wed, 2020-03-04 at 06:56 -0500, Julien Lepiller wrote:
> Le 4 mars 2020 05:12:00 GMT-05:00, Pierre Neidhardt  a
> écrit :
> > Hi,
> > 
> > Has anyone worked on a Bazel build system?
> > https://docs.bazel.build/versions/master/install.html
> > 
> > It's required for Tensorflow somewhere between 1.10 and 1.14.
> 
> I and Ricardo have looked at it in the past, and it has a huge dependency
> graph. Their pinning of dependencies also makes it a bit difficult to work
> with, but we can probably find a workaround.

AFAICS, tensorflow 1.15.2 still has the CMake build system files in place.

What has changed since 1.9.0 that makes the CMake build system no longer an
option?

Kind regards,
Roel Janssen




Re: List of failing packages

2019-11-04 Thread Roel Janssen
On Mon, 2019-11-04 at 14:51 +0100, Julien Lepiller wrote:
> Le 4 novembre 2019 13:28:57 GMT+01:00, Roel Janssen  a
> écrit :
> > Dear Guix,
> > 
> > I'm trying to contribute to GNU Guix again, and I'd like to see if
> > I
> > can solve build failures to make for a more stable Guix.  It would
> > be
> > very convenient to have a list of packages that are currently
> > failing
> > to build, and perhaps also some history on it (how many times did
> > the
> > build fail, and since when does it fail).
> > 
> > Does such a list exist, and if so, where can I find it?
> > 
> > Kind regards,
> > Roel Janssen
> 
> I think you're looking for https://ci.guix.gnu.org.  Look for
> evaluations of guix master, and their failures. You can also look for
> evaluations of a package with the search bar at the top right.

But is there a list of *all packages that fail to build*?
So, not just a single evaluation like:
https://ci.guix.gnu.org/eval/8538?status=failed

But a list of ~11000 packages and their build status (like Hydra
showed).  With Hydra, when clicking on a failed build you could see
previous builds of that same package and whether they succeeded or
failed.  I found that very useful.

Kind regards,
Roel Janssen





List of failing packages

2019-11-04 Thread Roel Janssen
Dear Guix,

I'm trying to contribute to GNU Guix again, and I'd like to see if I
can solve build failures to make for a more stable Guix.  It would be
very convenient to have a list of packages that are currently failing
to build, and perhaps also some history on it (how many times did the
build fail, and since when does it fail).

Does such a list exist, and if so, where can I find it?

Kind regards,
Roel Janssen





Re: Towards reproducibly Jupyter notebooks with Guix-Jupyter

2019-10-10 Thread Roel Janssen
On Thu, 2019-10-10 at 10:21 +0200, Ludovic Courtès wrote:
> Hello Guix!
> 
> I’m happy to announce the first release of Guix-Jupyter!
> 
>   
> https://hpc.guix.info/blog/2019/10/towards-reproducible-jupyter-notebooks/
> 
> Guix-Jupyter is a Jupyter “kernel” that is able to interpret
> annotations
> describing the environment in which notebook cells will be executed.
> The end goal is to be able to regard notebooks as pure functions.
> 
> The code is here:
> 
>   https://gitlab.inria.fr/guix-hpc/guix-kernel
> 
> I presented it minutes ago at JCAD, a conference gathering French HPC
> practitioners, and where many talks happen to talk about Jupyter.  :-
> )
> 
>   https://jcad2019.sciencesconf.org/resource/page/id/6
> 
> Feedback welcome!
> 
> Ludo’.

The animated GIFs are really useful!  If I understand this correctly,
the Guix Jupyter kernel allows one to use multiple (completely
distinct) environments in a single Notebook.  So, mix Python, R and
Scheme in a single notebook.  That's pretty neat!

Kind regards,
Roel Janssen







Re: GWL pipelined process composition ?

2018-07-19 Thread Roel Janssen


zimoun  writes:

> Hi Roel,
>
> Thank you for all your comments.
>
>
>> Maybe we can come up with a convenient way to combine two processes
>> using a shell pipe.  But this needs more thought!
>
> Yes, from my point of view, the classic shell pipe `|` has two strong
> limitations for workflows:
>  1. it does not compose at the 'process' level but at the 'procedure' level
>  2. it cannot deal with two inputs.

Yes, and this strongly suggests that shell pipes are indeed limited to
the procedures *the shell* can combine.  So we can only use them at the
procedure level.  They weren't designed to deal with two (or more)
inputs, and if they were, that would make it vastly more complex.

> As an illustration for the point 1., it appears to me more "functional
> spirit" to write one process/task/unit corresponding to "samtools
> view" and another one about compressing "gzip -c". Then, if you have a
> process that filters some fastq, you can easily reuse the compress
> process, and compose it. For more complicated workflows, such as
> RNAseq or other, the composition seems an advantage.

Maybe we could solve this at the symbolic (programming) level instead.

So if we were to try to avoid using "| gzip -c > ..." all over our code,
we could define a function to wrap this.  Here's a simple example:

(define (with-compressed-output command output-file)
  (system (string-append command " | gzip -c > " output-file)))

And then you could use it in a procedure like so:

(define-public A
  (process
(name "A")
(package-inputs (list samtools gzip))
(data-inputs "/tmp/sample.sam")
(outputs "/tmp/sample.sam.gz")
(procedure
 #~(with-compressed-output
 (string-append "samtools view " #$data-inputs)
  #$outputs))))

This isn't perfect, because we still need to include “gzip” in the
‘package-inputs’.  It doesn't allow multiple input files, nor does it
split the “gzip” command from the “samtools” command on the process
level.  However, it does allow us to express the idea that we want to
compress the output of a command and save that in a file without having
to explicitly provide the commands to do that.

>
> As an illustration for the point 2., I do not do with shell pipe:
>
>   dd if=/dev/urandom of=file1 bs=1024 count=1k
>   dd if=/dev/urandom of=file2 bs=1024 count=2k
>   tar -cvf file.tar file1 file2
>
> or whatever process instead of `dd` which is perhaps not the right example 
> here.
> To be clear,
>   process that outputs fileA
>   process that outputs fileB
>   process that inputs fileA *and* fileB
> without write on disk fileA and fileB.

Given the ‘dd’ example, I don't see how that could work without
reinventing the way filesystems work.

> All the best,
> simon

Thanks!

Kind regards,
Roel Janssen



Re: GWL pipelined process composition ?

2018-07-18 Thread Roel Janssen
Hello Simon,

zimoun  writes:

> Hi,
>
> I am asking if it should be possible to optionally stream the
> inputs/outputs when the workflow is processed without writing the
> intermediate files on disk.
>
> Well, a workflow is basically:
>  - some process units (or task or rule) that take inputs (file) and
> produce outputs (other file)
>  - a graph that describes the relationship of theses units.
>
> The simplest workflow is:
> x --A--> y --B--> z
>  - process A: input file x, output file y
>  - process B: input file y, output file z
>
> Currently, the file y is written on disk by A then read by B. Which
> leads to IO inefficiency. Especially when the file is large. And/or
> when there is several same kind of unit done in parallel.
>
>
> Would it be a good idea to have something like the shell pipe `|` to
> compose the process units?
> If yes, how?  I have no clue where to look...

That's an interesting idea.  Of course, you could literally use the
shell pipe within a single process.  And I think this makes sense, because
if a shell pipe is beneficial in your situation, then it is likely to be
beneficial to run the two programs connected by the pipe on a single
computer / in a single job.

Here's an example:
(define-public A
  (process
(name "A")
(package-inputs (list samtools gzip))
(data-inputs "/tmp/sample.sam")
(outputs "/tmp/sample.sam.gz")
(procedure
 #~(system (string-append "samtools view " #$data-inputs
  " | gzip -c > " #$outputs)

> I agree that storing intermediate files avoids computing unmodified
> parts of the workflow again and again.  This saves time when
> developing the workflow.
> However, the storage of temporary files appears unnecessary once the
> workflow is done and when it does not need to run on a cluster.

I think it's either an efficient data transfer (using a pipe), or
writing to disk in between for better restore points.  We cannot have
both.  The former can already be achieved with the shell pipe, and the
latter can be achieved by writing two processes.

Maybe we can come up with a convenient way to combine two processes
using a shell pipe.  But this needs more thought!
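
To make the idea a bit more concrete, here is a very rough sketch of
what such a combinator could look like.  Nothing like this exists
today; the ‘process-’ accessors and the ‘process-command’ field are
pure guesswork on my part:

;; Hypothetical: build a single process that runs A, pipes its standard
;; output into B, and keeps only B's declared outputs.  This assumes
;; each process can expose its shell command as a string.
(define (pipe-processes a b)
  (process
    (name (string-append (process-name a) "+" (process-name b)))
    (package-inputs (append (process-package-inputs a)
                            (process-package-inputs b)))
    (data-inputs (process-data-inputs a))
    (outputs (process-outputs b))
    (procedure
     #~(system (string-append #$(process-command a)
                              " | "
                              #$(process-command b))))))

How the command strings would be derived from the existing ‘procedure’
fields is exactly the part that needs more thought.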

If you have an idea to improve on this, please do share. :-)

> Thank you for all the work about the Guix ecosystem.
>
> All the best,
> simon

Thanks!

Kind regards,
Roel Janssen



Re: [PATCH] profiles: Let canonicalize-profile return an absolute path.

2018-07-12 Thread Roel Janssen


Ludovic Courtès  writes:

> Hi Roel,
>
> Roel Janssen  skribis:
>
>> I'd like to change the way the symlinks to custom profiles are created.
>> Here's what currently happens:
>>
>> $ guixr package -i hello -p guix-profiles/test
>> $ ls -l guix-profiles
>> lrwxrwxrwx. 1 user group 25 Jul  3 19:53 test -> guix-profiles/test-1-link
>> lrwxrwxrwx. 1 user group 51 Jul  3 19:53 test-1-link -> 
>> /gnu/store/...6qbaps-profile
>>
>> Now, that symlink is broken.
>> Instead, I'd like to have it always use absolute paths:
>
> How about instead making the link to the generation file (“test-1-link”)
> always a relative symlink?  Like this:
>
> --8<---cut here---start->8---
> $ ./pre-inst-env guix package -p foo/x -i sed
>
> [...]
>
> $ ls -l foo/*
> lrwxrwxrwx 1 ludo users  8 Jul 11 13:03 foo/x -> x-1-link
> lrwxrwxrwx 1 ludo users 51 Jul 11 13:03 foo/x-1-link -> 
> /gnu/store/qp6dqlbsf0pw9p9fwc3gzdcaxx40rn9v-profile
> --8<---cut here---end--->8---
>
> Patch below.
>
> FWIW I prefer avoiding ‘canonicalize-path’ in general because it’s
> inefficient and because it can surprise the user: you can end up with a
> long file name that you didn’t type in, or you can have ENOENT errors
> because ‘canonicalize-path’ requires the given file to exist.
>
> WDYT?
>
> Thanks,
> Ludo’.

I like your patch a lot better than mine!  It fixes the issue I run
into, so it'd be great to apply your patch soon.

There's one other thing I ran into that is somewhat related to this:
On a multi-user system, where ‘root’ cannot see what's in a user's
directory, it's impossible to keep track of custom profiles.  However,
the default user profiles are fine, because they are actually stored in
the local state dir, and symlinked outside.  Could we do the same with
custom profiles?  The functionality stays the same, and it might even be
cleaner in the user's directory because it only needs a single symlink
to the latest generation of a profile, and we might be able to do
garbage collection again on our cluster!

I'd imagine something like this:

--8<---cut here---start->8---
$ echo $HOME
/home/roel
$ guix package -i hello teeworlds -p ~/my/custom/profile
--> /var/guix/profiles/per-user/roel/home/roel/my/custom/profile -> ...
--> /var/guix/profiles/per-user/roel/home/roel/my/custom/profile-1-link
$ ls -l ~/my/custom
drwxrwxrwx ... profile -> 
/var/guix/profiles/per-user/roel/home/roel/my/custom/profile
--8<-------cut here---end--->8---

That way, if root cannot look into ‘/home/roel’, it can still keep
track of the profile because it can look into ‘/var/guix’.
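
A first rough sketch of that mapping, just to illustrate the idea (the
procedure name is invented, error handling is left out, and it reuses
‘canonicalize-path’ purely for illustration, despite the drawbacks you
mentioned above):

;; Sketch: map a user-supplied profile file name to a mirror location
;; under /var/guix/profiles/per-user/$USER.
(define (per-user-profile-location profile)
  (string-append "/var/guix/profiles/per-user/"
                 (passwd:name (getpwuid (getuid)))
                 (canonicalize-path (dirname profile))
                 "/" (basename profile)))

The file in the user's directory would then just be a symlink to that
location, similar to what already happens for ~/.guix-profile.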

Kind regards,
Roel Janssen



[PATCH] profiles: Let canonicalize-profile return an absolute path.

2018-07-03 Thread Roel Janssen
Dear Guix,

I'd like to change the way the symlinks to custom profiles are created.
Here's what currently happens:

$ guixr package -i hello -p guix-profiles/test
$ ls -l guix-profiles
lrwxrwxrwx. 1 user group 25 Jul  3 19:53 test -> guix-profiles/test-1-link
lrwxrwxrwx. 1 user group 51 Jul  3 19:53 test-1-link -> 
/gnu/store/...6qbaps-profile

Now, that symlink is broken.
Instead, I'd like to have it always use absolute paths:

$ guixr package -i hello -p guix-profiles/test
$ ls -l guix-profiles
lrwxrwxrwx 1 roel users 36  3 jul 19:56 test -> 
/home/user/guix-profiles/test-1-link
lrwxrwxrwx 1 roel users 51  3 jul 19:56 test-1-link -> 
/gnu/store/...6qbaps-profile

This symlink isn't broken.

In this patch I implemented this behavior by modifying
canonicalize-profile to return an absolute path when it's not
“~/.guix-profile”.

I hope we can merge this, or a similar solution so that creating
profiles in custom locations is a little more robust.

Kind regards,
Roel Janssen

>From 95178018beb8c5458c154771ac9d1ff4866cc507 Mon Sep 17 00:00:00 2001
From: Roel Janssen 
Date: Tue, 3 Jul 2018 19:49:04 +0200
Subject: [PATCH] profiles: Let canonicalize-profile return an absolute path.

* guix/profiles.scm (canonicalize-profile): Return an absolute path.
---
 guix/profiles.scm | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/guix/profiles.scm b/guix/profiles.scm
index ebd7da2a2..4a6a0a80e 100644
--- a/guix/profiles.scm
+++ b/guix/profiles.scm
@@ -1547,8 +1547,8 @@ because the NUMBER is zero.)"
 
 (define (canonicalize-profile profile)
   "If PROFILE is %USER-PROFILE-DIRECTORY, return %CURRENT-PROFILE.  Otherwise
-return PROFILE unchanged.  The goal is to treat '-p ~/.guix-profile' as if
-'-p' was omitted."   ; see <http://bugs.gnu.org/17939>
+return PROFILE as an absolute path.  The goal is to treat '-p ~/.guix-profile'
+as if '-p' was omitted."   ; see <http://bugs.gnu.org/17939>
 
   ;; Trim trailing slashes so that the basename comparison below works as
   ;; intended.
@@ -1558,7 +1558,10 @@ return PROFILE unchanged.  The goal is to treat '-p ~/.guix-profile' as if
(dirname %user-profile-directory))
  (string=? (basename profile) (basename %user-profile-directory)))
 %current-profile
-profile)))
+(string-append
+ (canonicalize-path (dirname profile))
+ file-name-separator-string
+ (basename profile)
 
 (define (user-friendly-profile profile)
   "Return either ~/.guix-profile if that's what PROFILE refers to, directly or
-- 
2.17.0



Re: Videos

2018-05-30 Thread Roel Janssen


Alex Vong  writes:

> Catonano  writes:
>
>> 2018-05-29 17:45 GMT+02:00 Julien Lepiller :
>>
>>  Le 2018-05-29 16:48, Ricardo Wurmus a écrit :
>>
>>  Hi Guix,
>>
>>  I’d like us to produce a series of short videos (< 4 mins each) that
>>  introduce functional package management with Guix.
>>
>>  This is supposed to be aimed at people who are intimidated by the manual
>>  and wouldn’t know where to begin reading. Each of the videos should
>>  focus on a single feature and be on the point. The final seconds should
>>  point the viewer to the manual to learn more.
>>
>>  Who would like to be involved in the planning and production of the
>>  videos? There are many tasks such as:
>>
>>  * collecting topics that should be covered
>>  * writing canonical narration scripts for each episode
>>  * translating the scripts into different languages
>>  * recording the narrations in different languages
>>  * drafting the storyboard for each video (i.e. what exactly is to be
>>  shown and for how long)
>>  * recording the video portions
>>  * mixing different audio tracks and the video track
>>  * designing intro and outro frames
>>  * recording or finding freely licensed music for the intro / outro
>>  * coordinating with all volunteers
>>
>>  What do you think?
>>
>>  --
>>  Ricardo
>>
>>  That sounds like a great plan! Of course I'd like to be involved in 
>> translating
>>  the script into French and I could probably record a French version of the 
>> videos
>>  too. I don't have any experience in the other fields, but I guess I could 
>> learn.
>>
>> I think I have demonstrated my aptitude in recording video fragments on the 
>> field 
>>
>> As for storyboarding and scripting, instead, I'd love to receive suggestions.
>>
>> Also, I wouldn't know how to mix audio/video. What software could we use ?
>> I don't know.
>
> You may want to try simplescreenrecorder. I tried it before, it is
> reasonably easy to use.

Or, if you're using GNOME: Ctrl + Alt + Shift + R.  A small red dot will
appear in the upper right corner.  Press the key combination again to
stop the recording, and a webm video will appear in your ‘Videos’
folder.

Kind regards,
Roel Janssen



Re: Help needed to update VLC to 3.0.3 on the master branch

2018-05-30 Thread Roel Janssen


Mark H Weaver  writes:

> Hello Guix,
>
> I just pushed a major VLC update to the 'core-updates' branch, commit
> 76277052939524d1ea3394f83739f06efd0dd8ae.  I pushed it to 'core-updates'
> because I've been living on that branch for months, and so it's the only
> branch that I can easily test for.
>
> However, I strongly suspect that the old VLC on master, and the old
> ffmpeg-2.8 that it depends on, are most likely a security risk.
>
> So, I would be grateful if someone could cherry-pick this commit to
> 'master', do some basic testing on it, and then push it on my behalf if
> it works.

I've had the VLC 3.0.1 commit applied on the master branch and VLC works
fine.  I only use ‘vlc’, and not ‘cvlc’.  We may have to wrap ‘cvlc’ as
well.  But we can do that at a later time if someone finds a problem.
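
If it turns out to be needed, the wrap phase could simply loop over
both executables, something like this sketch (assuming ‘out’ and the
list of Qt plugin directories are bound as in the existing phase):

(for-each (lambda (program)
            ;; Wrap both front-ends so they find the Qt plugins.
            (wrap-program (string-append out "/bin/" program)
              `("QT_PLUGIN_PATH" ":" prefix ,qt-plugin-directories)))
          '("vlc" "cvlc"))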

Kind regards,
Roel Janssen



Re: Cleaning up the /bin for guix.

2018-04-25 Thread Roel Janssen

Ludovic Courtès <l...@gnu.org> writes:

> Roel Janssen <r...@gnu.org> skribis:
>
>> From 9455c7b94e0010ff4038132affc7a5c796313894 Mon Sep 17 00:00:00 2001
>> From: Roel Janssen <r...@gnu.org>
>> Date: Tue, 24 Apr 2018 12:48:32 +0200
>> Subject: [PATCH] gnu: guile-ssh: Move files from bin to examples directory.
>>
>> * gnu/packages/ssh.scm (guile-ssh): Move files from bin to the examples
>>   directory.
>
> [...]
>
>> +(invoke "mv" (string-append bin "/ssshd.scm") 
>> examples)
>> +(invoke "mv" (string-append bin "/sssh.scm") 
>> examples)
>
> Please use ‘rename-file’ instead of invoking “mv”.  :-)

Done, and pushed in d00026429.
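
For reference, the phase now does roughly this, with ‘bin’ and
‘examples’ bound as before (paraphrased, not copied verbatim from the
commit):

;; Use rename-file instead of invoking "mv", so that failures raise an
;; exception instead of being silently ignored.
(rename-file (string-append bin "/ssshd.scm")
             (string-append examples "/ssshd.scm"))
(rename-file (string-append bin "/sssh.scm")
             (string-append examples "/sssh.scm"))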

Thanks!

Kind regards,
Roel Janssen



Re: Update the Guix package/version

2018-04-25 Thread Roel Janssen

Ludovic Courtès <l...@gnu.org> writes:

> Roel Janssen <r...@gnu.org> skribis:
>
>> Ludovic Courtès <l...@gnu.org> writes:
>>
>>> Hello,
>>>
>>> Roel Janssen <r...@gnu.org> skribis:
>>>
>>>> What's the decision process for updating the ‘guix’ package revision,
>>>> like in commit b1fb247b?
>>>
>>> There’s no real process, just do it when there’s a good reason to do it.
>>>
>>>> The reason I ask this is because I'd like to bump the revision so that
>>>> the changes from 5cefb13d to ‘guix-daemon’ are available when I install
>>>> ‘guix’ in a profile.
>>>
>>> Makes sense!
>>>
>>> Note that you can run “make update-guix-package” to do the work.  Be
>>> sure to do that from a commit that is available on Savannah.  And of
>>> course, make sure the package builds locally.  :-)
>>
>> Cool! I didn't know about this build step.  May I commit the change, so
>> that I won't end up with an ambiguous revision number later on?
>
> Sure if you have an update based on an upstream commit that builds,
> you’re welcome to push it.
>
> Perhaps you can also run “guix build -S guix --check” to be 100% sure
> that the commit and hash are correct.

I did:
$ git reset --hard
$ git pull
$ make update-guix-package
$ guix build -S guix --check
$ guix build guix

And after verifying that all went fine, I then pushed the change in
5b862761f.

Thanks!

Kind regards,
Roel Janssen



Re: Cleaning up the /bin for guix.

2018-04-24 Thread Roel Janssen

Roel Janssen <r...@gnu.org> writes:

> Dear Guix,
>
> When installing ‘guix’ in a profile, the ‘bin’ directory of that profile
> contains:
>
> asn1Coding -> 
> /gnu/store/2fg01r58vv9w41kw6drl1wnvqg7rkv9d-libtasn1-4.12/bin/asn1Coding
> asn1Decoding -> 
> /gnu/store/2fg01r58vv9w41kw6drl1wnvqg7rkv9d-libtasn1-4.12/bin/asn1Decoding
> asn1Parser -> 
> /gnu/store/2fg01r58vv9w41kw6drl1wnvqg7rkv9d-libtasn1-4.12/bin/asn1Parser
> certtool -> 
> /gnu/store/5kih0kxmipzjw10c53hhckfzkcs7c8mm-gnutls-3.5.13/bin/certtool
> gnutls-cli -> 
> /gnu/store/5kih0kxmipzjw10c53hhckfzkcs7c8mm-gnutls-3.5.13/bin/gnutls-cli
> gnutls-cli-debug -> 
> /gnu/store/5kih0kxmipzjw10c53hhckfzkcs7c8mm-gnutls-3.5.13/bin/gnutls-cli-debug
> gnutls-serv -> 
> /gnu/store/5kih0kxmipzjw10c53hhckfzkcs7c8mm-gnutls-3.5.13/bin/gnutls-serv
> guix -> 
> /gnu/store/qmc24l49za832zpz4xqx9xsvw3w4hd41-guix-0.14.0-10.486de73/bin/guix
> guix-daemon -> 
> /gnu/store/qmc24l49za832zpz4xqx9xsvw3w4hd41-guix-0.14.0-10.486de73/bin/guix-daemon
> idn2 -> /gnu/store/ksyja5lbwy0mpskvn4rfi5klc00c092d-libidn2-2.0.4/bin/idn2
> nettle-hash -> 
> /gnu/store/x0jf9ckd30k3nhs6bbhkrxsjmqz8phqd-nettle-3.4/bin/nettle-hash
> nettle-lfib-stream -> 
> /gnu/store/x0jf9ckd30k3nhs6bbhkrxsjmqz8phqd-nettle-3.4/bin/nettle-lfib-stream
> nettle-pbkdf2 -> 
> /gnu/store/x0jf9ckd30k3nhs6bbhkrxsjmqz8phqd-nettle-3.4/bin/nettle-pbkdf2
> ocsptool -> 
> /gnu/store/5kih0kxmipzjw10c53hhckfzkcs7c8mm-gnutls-3.5.13/bin/ocsptool
> pkcs1-conv -> 
> /gnu/store/x0jf9ckd30k3nhs6bbhkrxsjmqz8phqd-nettle-3.4/bin/pkcs1-conv
> psktool -> 
> /gnu/store/5kih0kxmipzjw10c53hhckfzkcs7c8mm-gnutls-3.5.13/bin/psktool
> sexp-conv -> 
> /gnu/store/x0jf9ckd30k3nhs6bbhkrxsjmqz8phqd-nettle-3.4/bin/sexp-conv
> srptool -> 
> /gnu/store/5kih0kxmipzjw10c53hhckfzkcs7c8mm-gnutls-3.5.13/bin/srptool
> ssshd.scm -> 
> /gnu/store/g2k7v2wv9w2ybs1glwh42w55jq25zd4h-guile-ssh-0.11.2/bin/ssshd.scm
> sssh.scm -> 
> /gnu/store/g2k7v2wv9w2ybs1glwh42w55jq25zd4h-guile-ssh-0.11.2/bin/sssh.scm
>
> I suspect that the Scheme files don't belong in ‘bin’.  What about the
> others?  Can we do better here than propagate ‘gnutls’ and ‘nettle’?

I attached a patch that moves the ‘guile-ssh’ bin-items to its examples
directory.  Is that OK to push?

Kind regards,
Roel Janssen

>From 9455c7b94e0010ff4038132affc7a5c796313894 Mon Sep 17 00:00:00 2001
From: Roel Janssen <r...@gnu.org>
Date: Tue, 24 Apr 2018 12:48:32 +0200
Subject: [PATCH] gnu: guile-ssh: Move files from bin to examples directory.

* gnu/packages/ssh.scm (guile-ssh): Move files from bin to the examples
  directory.
---
 gnu/packages/ssh.scm | 14 --
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/gnu/packages/ssh.scm b/gnu/packages/ssh.scm
index afd41cd8e..e5702b9b7 100644
--- a/gnu/packages/ssh.scm
+++ b/gnu/packages/ssh.scm
@@ -259,8 +259,18 @@ Additionally, various channel-specific options can be negotiated.")
  (substitute* (find-files "." "\\.scm$")
(("\"libguile-ssh\"")
 (string-append "\"" libdir "/libguile-ssh\"")))
- #t)
-
+ #t
+  (add-after 'install 'remove-bin-directory
+(lambda* (#:key outputs #:allow-other-keys)
+  (let* ((out (assoc-ref outputs "out"))
+ (bin (string-append out "/bin"))
+ (examples (string-append
+out "/share/guile-ssh/examples")))
+(mkdir-p examples)
+(invoke "mv" (string-append bin "/ssshd.scm") examples)
+(invoke "mv" (string-append bin "/sssh.scm") examples)
+(delete-file-recursively bin)
+#t
;; Tests are not parallel-safe.
#:parallel-tests? #f))
 (native-inputs `(("autoconf" ,autoconf)
-- 
2.17.0



Cleaning up the /bin for guix.

2018-04-24 Thread Roel Janssen
Dear Guix,

When installing ‘guix’ in a profile, the ‘bin’ directory of that profile
contains:

asn1Coding -> 
/gnu/store/2fg01r58vv9w41kw6drl1wnvqg7rkv9d-libtasn1-4.12/bin/asn1Coding
asn1Decoding -> 
/gnu/store/2fg01r58vv9w41kw6drl1wnvqg7rkv9d-libtasn1-4.12/bin/asn1Decoding
asn1Parser -> 
/gnu/store/2fg01r58vv9w41kw6drl1wnvqg7rkv9d-libtasn1-4.12/bin/asn1Parser
certtool -> 
/gnu/store/5kih0kxmipzjw10c53hhckfzkcs7c8mm-gnutls-3.5.13/bin/certtool
gnutls-cli -> 
/gnu/store/5kih0kxmipzjw10c53hhckfzkcs7c8mm-gnutls-3.5.13/bin/gnutls-cli
gnutls-cli-debug -> 
/gnu/store/5kih0kxmipzjw10c53hhckfzkcs7c8mm-gnutls-3.5.13/bin/gnutls-cli-debug
gnutls-serv -> 
/gnu/store/5kih0kxmipzjw10c53hhckfzkcs7c8mm-gnutls-3.5.13/bin/gnutls-serv
guix -> 
/gnu/store/qmc24l49za832zpz4xqx9xsvw3w4hd41-guix-0.14.0-10.486de73/bin/guix
guix-daemon -> 
/gnu/store/qmc24l49za832zpz4xqx9xsvw3w4hd41-guix-0.14.0-10.486de73/bin/guix-daemon
idn2 -> /gnu/store/ksyja5lbwy0mpskvn4rfi5klc00c092d-libidn2-2.0.4/bin/idn2
nettle-hash -> 
/gnu/store/x0jf9ckd30k3nhs6bbhkrxsjmqz8phqd-nettle-3.4/bin/nettle-hash
nettle-lfib-stream -> 
/gnu/store/x0jf9ckd30k3nhs6bbhkrxsjmqz8phqd-nettle-3.4/bin/nettle-lfib-stream
nettle-pbkdf2 -> 
/gnu/store/x0jf9ckd30k3nhs6bbhkrxsjmqz8phqd-nettle-3.4/bin/nettle-pbkdf2
ocsptool -> 
/gnu/store/5kih0kxmipzjw10c53hhckfzkcs7c8mm-gnutls-3.5.13/bin/ocsptool
pkcs1-conv -> 
/gnu/store/x0jf9ckd30k3nhs6bbhkrxsjmqz8phqd-nettle-3.4/bin/pkcs1-conv
psktool -> /gnu/store/5kih0kxmipzjw10c53hhckfzkcs7c8mm-gnutls-3.5.13/bin/psktool
sexp-conv -> 
/gnu/store/x0jf9ckd30k3nhs6bbhkrxsjmqz8phqd-nettle-3.4/bin/sexp-conv
srptool -> /gnu/store/5kih0kxmipzjw10c53hhckfzkcs7c8mm-gnutls-3.5.13/bin/srptool
ssshd.scm -> 
/gnu/store/g2k7v2wv9w2ybs1glwh42w55jq25zd4h-guile-ssh-0.11.2/bin/ssshd.scm
sssh.scm -> 
/gnu/store/g2k7v2wv9w2ybs1glwh42w55jq25zd4h-guile-ssh-0.11.2/bin/sssh.scm

I suspect that the Scheme files don't belong in ‘bin’.  What about the
others?  Can we do better here than propagate ‘gnutls’ and ‘nettle’?

Kind regards,
Roel Janssen



Re: Update the Guix package/version

2018-04-23 Thread Roel Janssen

Ludovic Courtès <l...@gnu.org> writes:

> Hello,
>
> Roel Janssen <r...@gnu.org> skribis:
>
>> What's the decision process for updating the ‘guix’ package revision,
>> like in commit b1fb247b?
>
> There’s no real process, just do it when there’s a good reason to do it.
>
>> The reason I ask this is because I'd like to bump the revision so that
>> the changes from 5cefb13d to ‘guix-daemon’ are available when I install
>> ‘guix’ in a profile.
>
> Makes sense!
>
> Note that you can run “make update-guix-package” to do the work.  Be
> sure to do that from a commit that is available on Savannah.  And of
> course, make sure the package builds locally.  :-)

Cool! I didn't know about this build step.  May I commit the change, so
that I won't end up with an ambiguous revision number later on?

Thanks!

Kind regards,
Roel Janssen



Re: Help needed updating vlc to version 3.0.1.

2018-04-23 Thread Roel Janssen

Roel Janssen <r...@gnu.org> writes:

> Roel Janssen <r...@gnu.org> writes:
>
>> Mark H Weaver <m...@netris.org> writes:
>>
>>> Hello Guix,
>>>
>>> Below I've attached a draft patch to update vlc to 3.0.1, and also to
>>> add several more inputs based on reading the output of the 'configure'
>>> script.
>>>
>>> It builds successfully and mostly works except for one problem: the
>>> icons are missing from the control buttons on the main window of the Qt
>>> interface.  The icons in question are .svg files in the source tarball,
>>> but are converted into data structures within C++ source code using
>>> 'rcc'.
>>>
>>> strace reveals that vlc is performing 'stat' system calls on bogus file
>>> names beginning with ":/", e.g. ":/toolbar/play_b.svg".  These
>>> correspond to the missing icons.  According to
>>> <https://doc.qt.io/archives/qt-4.8/resources.html>, these names that
>>> begin with ":/" are meant to be references to resources that were
>>> imported using 'rcc'.
>>>
>>> I can't afford to spend more time on this right now.  I don't use vlc
>>> myself, but for security reasons I think it's important to keep our
>>> media players up-to-date, especially media players like vlc that bundle
>>> their own codecs.  I expect that vlc is quite popular, which makes it
>>> all the more important.
>>>
>>> I'm hoping that someone with more knowledge of Qt will step up to debug
>>> this problem.  Any volunteers?
>>>
>>> Note, this patch is based on core-updates, but hopefully it would work
>>> on 'master' too.
>>
>> Thanks a lot for working on this!  I applied your patch to ‘master’ and
>> built VLC.  It is missing the icons.
>>
>> Then I manually built it inside a ‘guix environment vlc’.
>> Launching it shows the icons.  Leaving the environment and running the
>> same executable misses the icons.
>>
>> Could it be that we need to propagate an input?
>> I'll try to dissect it further.
>
> After setting QT_PLUGIN_PATH outside of the environment, the icons
> appear in the Guix-compiled vlc-3.0.1.  I think the files in
> QT_PLUGIN_PATH do not originate from VLC, but instead from Qt and
> QtSvg.
>
> Should we wrap the executable so that QT_PLUGIN_PATH is defined?

The attached patch adds such a wrap phase, with which the icons appear
again when running ‘vlc’.

Kind regards,
Roel Janssen

>From 47d20a29bd237a211f2805d470fb4db9726103d6 Mon Sep 17 00:00:00 2001
From: Roel Janssen <r...@gnu.org>
Date: Tue, 24 Apr 2018 00:26:42 +0200
Subject: [PATCH] DRAFT: gnu: vlc: Update to 3.0.1, and add more inputs.

---
 gnu/packages/video.scm | 81 +++---
 1 file changed, 69 insertions(+), 12 deletions(-)

diff --git a/gnu/packages/video.scm b/gnu/packages/video.scm
index dc5a37566..e90207185 100644
--- a/gnu/packages/video.scm
+++ b/gnu/packages/video.scm
@@ -1,7 +1,7 @@
 ;;; GNU Guix --- Functional package management for GNU
 ;;; Copyright © 2013, 2014, 2015, 2016 Andreas Enge <andr...@enge.fr>
 ;;; Copyright © 2014, 2015, 2016 David Thompson <da...@gnu.org>
-;;; Copyright © 2014, 2015, 2016 Mark H Weaver <m...@netris.org>
+;;; Copyright © 2014, 2015, 2016, 2018 Mark H Weaver <m...@netris.org>
 ;;; Copyright © 2015 Taylan Ulrich Bayırlı/Kammer <taylanbayi...@gmail.com>
 ;;; Copyright © 2015, 2016, 2017, 2018 Efraim Flashner <efr...@flashner.co.il>
 ;;; Copyright © 2015 Andy Patterson <ajpat...@uwaterloo.ca>
@@ -63,6 +63,7 @@
   #:use-module (gnu packages audio)
   #:use-module (gnu packages autotools)
   #:use-module (gnu packages avahi)
+  #:use-module (gnu packages backup)
   #:use-module (gnu packages base)
   #:use-module (gnu packages bison)
   #:use-module (gnu packages boost)
@@ -94,6 +95,7 @@
   #:use-module (gnu packages image)
   #:use-module (gnu packages imagemagick)
   #:use-module (gnu packages iso-codes)
+  #:use-module (gnu packages libidn)
   #:use-module (gnu packages libreoffice)
   #:use-module (gnu packages linux)
   #:use-module (gnu packages lua)
@@ -110,7 +112,9 @@
   #:use-module (gnu packages python-crypto)
   #:use-module (gnu packages python-web)
   #:use-module (gnu packages qt)
+  #:use-module (gnu packages rdesktop)
   #:use-module (gnu packages ruby)
+  #:use-module (gnu packages samba)
   #:use-module (gnu packages sdl)
   #:use-module (gnu packages serialization)
   #:use-module (gnu packages shells)
@@ -118,6 +122,7 @@
   #:use-module (gnu packages texinfo)
   #:use-module (gnu packages textutils)
   #:use-module (gnu packages tls)
+  #:use-module (gnu packages upnp)
   #:use-module (gnu packages 

Re: Help needed updating vlc to version 3.0.1.

2018-04-23 Thread Roel Janssen

Roel Janssen <r...@gnu.org> writes:

> Mark H Weaver <m...@netris.org> writes:
>
>> Hello Guix,
>>
>> Below I've attached a draft patch to update vlc to 3.0.1, and also to
>> add several more inputs based on reading the output of the 'configure'
>> script.
>>
>> It builds successfully and mostly works except for one problem: the
>> icons are missing from the control buttons on the main window of the Qt
>> interface.  The icons in question are .svg files in the source tarball,
>> but are converted into data structures within C++ source code using
>> 'rcc'.
>>
>> strace reveals that vlc is performing 'stat' system calls on bogus file
>> names beginning with ":/", e.g. ":/toolbar/play_b.svg".  These
>> correspond to the missing icons.  According to
>> <https://doc.qt.io/archives/qt-4.8/resources.html>, these names that
>> begin with ":/" are meant to be references to resources that were
>> imported using 'rcc'.
>>
>> I can't afford to spend more time on this right now.  I don't use vlc
>> myself, but for security reasons I think it's important to keep our
>> media players up-to-date, especially media players like vlc that bundle
>> their own codecs.  I expect that vlc is quite popular, which makes it
>> all the more important.
>>
>> I'm hoping that someone with more knowledge of Qt will step up to debug
>> this problem.  Any volunteers?
>>
>> Note, this patch is based on core-updates, but hopefully it would work
>> on 'master' too.
>
> Thanks a lot for working on this!  I applied your patch to ‘master’ and
> built VLC.  It is missing the icons.
>
> Then I manually built it inside a ‘guix environment vlc’.
> Launching it shows the icons.  Leaving the environment and running the
> same executable misses the icons.
>
> Could it be that we need to propagate an input?
> I'll try to dissect it further.

After setting QT_PLUGIN_PATH outside of the environment, the icons
appear in the Guix-compiled vlc-3.0.1.  I think the files in
QT_PLUGIN_PATH do not originate from VLC, but instead from Qt and
QtSvg.

Should we wrap the executable so that QT_PLUGIN_PATH is defined?
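
If we go that route, I imagine a phase along these lines (untested
sketch; the exact set of Qt inputs whose plugin directories must end up
on QT_PLUGIN_PATH still needs checking):

(add-after 'install 'wrap-vlc
  (lambda* (#:key inputs outputs #:allow-other-keys)
    (let ((out (assoc-ref outputs "out")))
      ;; Let VLC's Qt interface find the Qt plugins at run time, in
      ;; particular the SVG image-format plugin used for the icons.
      (wrap-program (string-append out "/bin/vlc")
        `("QT_PLUGIN_PATH" ":" prefix
          (,(string-append (assoc-ref inputs "qtbase")
                           "/lib/qt5/plugins")
           ,(string-append (assoc-ref inputs "qtsvg")
                           "/lib/qt5/plugins"))))
      #t)))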

Thanks!

Kind regards,
Roel Janssen




Re: Help needed updating vlc to version 3.0.1.

2018-04-23 Thread Roel Janssen

Mark H Weaver <m...@netris.org> writes:

> Hello Guix,
>
> Below I've attached a draft patch to update vlc to 3.0.1, and also to
> add several more inputs based on reading the output of the 'configure'
> script.
>
> It builds successfully and mostly works except for one problem: the
> icons are missing from the control buttons on the main window of the Qt
> interface.  The icons in question are .svg files in the source tarball,
> but are converted into data structures within C++ source code using
> 'rcc'.
>
> strace reveals that vlc is performing 'stat' system calls on bogus file
> names beginning with ":/", e.g. ":/toolbar/play_b.svg".  These
> correspond to the missing icons.  According to
> <https://doc.qt.io/archives/qt-4.8/resources.html>, these names that
> begin with ":/" are meant to be references to resources that were
> imported using 'rcc'.
>
> I can't afford to spend more time on this right now.  I don't use vlc
> myself, but for security reasons I think it's important to keep our
> media players up-to-date, especially media players like vlc that bundle
> their own codecs.  I expect that vlc is quite popular, which makes it
> all the more important.
>
> I'm hoping that someone with more knowledge of Qt will step up to debug
> this problem.  Any volunteers?
>
> Note, this patch is based on core-updates, but hopefully it would work
> on 'master' too.

Thanks a lot for working on this!  I applied your patch to ‘master’ and
built VLC.  It is missing the icons.

Then I manually built it inside a ‘guix environment vlc’.
Launching it shows the icons.  Leaving the environment and running the
same executable misses the icons.

Could it be that we need to propagate an input?
I'll try to dissect it further.

Kind regards,
Roel Janssen



Update the Guix package/version

2018-04-23 Thread Roel Janssen
Dear Guix,

What's the decision process for updating the ‘guix’ package revision,
like in commit b1fb247b?

The reason I ask this is because I'd like to bump the revision so that
the changes from 5cefb13d to ‘guix-daemon’ are available when I install
‘guix’ in a profile.

Kind regards,
Roel Janssen



Re: [PATCH] guix-daemon: Add option to disable garbage collection.

2018-04-19 Thread Roel Janssen

Ludovic Courtès <l...@gnu.org> writes:

> Heya,
>
> Roel Janssen <r...@gnu.org> skribis:
>
>> From fcbe7ebb3d205cf7310700e62b78b9aafd94f76f Mon Sep 17 00:00:00 2001
>> From: Roel Janssen <r...@gnu.org>
>> Date: Thu, 19 Apr 2018 17:11:30 +0200
>> Subject: [PATCH] guix-daemon: Disable garbage collection for remote
>>  connections.
>>
>> * nix/nix-daemon/nix-daemon.cc (isRemoteConnection): New variable.
>>   (performOp): For wopCollectGarbage, throw an error when isRemoteConnection
>>   is set.
>>   (acceptConnection): Set isRemoteConnection when connection is not AF_UNIX.
>> * tests/guix-daemon.sh: Add a test for the new behavior.
>
> LGTM, thanks for the quick reply!

Awesome.  Pushed in 5cefb13dd.

Thanks!

Kind regards,
Roel Janssen




Re: [PATCH] guix-daemon: Add option to disable garbage collection.

2018-04-19 Thread Roel Janssen

Ludovic Courtès <ludovic.cour...@inria.fr> writes:

> Roel Janssen <r...@gnu.org> skribis:
>
>> Ludovic Courtès <ludovic.cour...@inria.fr> writes:
>
> [...]
>
>>>> diff --git a/nix/nix-daemon/nix-daemon.cc b/nix/nix-daemon/nix-daemon.cc
>>>> index deb7003d7..65770ba95 100644
>>>> --- a/nix/nix-daemon/nix-daemon.cc
>>>> +++ b/nix/nix-daemon/nix-daemon.cc
>>>> @@ -529,6 +529,11 @@ static void performOp(bool trusted, unsigned int 
>>>> clientVersion,
>>>>  }
>>>>  
>>>>  case wopCollectGarbage: {
>>>> +if (settings.isRemoteConnection) {
>>>> +throw Error("Garbage collection is disabled for remote 
>>>> hosts.");
>>>> +break;
>>>> +}
>>>>  GCOptions options;
>>>>  options.action = (GCOptions::GCAction) readInt(from);
>>>>  options.pathsToDelete = readStorePaths(from);
>>>
>>> I was wondering if we would like to allow some of the ‘GCAction’ values,
>>> but maybe it’s better to disallow them altogether like this code does.
>>
>> Could we please start with a “disable any GC” and start allowing cases
>> on a case-by-case basis?
>
> Sure, that’s what I was suggesting.  :-)
>
>>> Last thing: could you add a couple of tests?  tests/guix-daemon.sh
>>> already has tests for ‘--listen’, so you could take inspiration from
>>> those.
>>
>> I included a test, but I don't know how I can properly run this test.
>> Could you elaborate on how I can test the test(s)?
>
> Run:
>
>   make check TESTS=tests/guix-daemon.sh
>
> See
> <https://www.gnu.org/software/guix/manual/html_node/Running-the-Test-Suite.html>.

That is really nice.  Thanks for pointing to the manual.

>> From b29d3a90e1487ebda5ac5b6bc146f8c95218eab6 Mon Sep 17 00:00:00 2001
>> From: Roel Janssen <r...@gnu.org>
>> Date: Thu, 19 Apr 2018 14:01:49 +0200
>> Subject: [PATCH] guix-daemon: Disable garbage collection for remote hosts.
>>
>> * nix/nix-daemon/nix-daemon.cc (performOp): Display appropriate error 
>> message;
>>   (acceptConnection): Set isRemoteConnection when connection is over TCP.
>
> Rather:
>
> * nix/nix-daemon/nix-daemon.cc (isRemoteConnection): New variable.
> (performOp): For wopCollectGarbage, throw an error when
> isRemoteConnection is set.
> (acceptConnection): Set isRemoteConnection when connection is not AF_UNIX.
>
>> +output=`GUIX_DAEMON_SOCKET="$socket" guix gc`
>> +if [[ "$output" != *"GUIX_DAEMON_SOCKET=$socket" ]];
>> +then
>> +exit 1
>> +fi
>
> Perhaps simply check the exit code of ‘guix gc’ and fail if it succeeds?

Right.

> Like:
>
>   if guix gc; then false; else true; fi
>
> Also please try to avoid Bash-specific constructs like [[ this ]].

Right.

> Could you send an updated patch?

The attached patch should be fine.

Kind regards,
Roel Janssen

>From fcbe7ebb3d205cf7310700e62b78b9aafd94f76f Mon Sep 17 00:00:00 2001
From: Roel Janssen <r...@gnu.org>
Date: Thu, 19 Apr 2018 17:11:30 +0200
Subject: [PATCH] guix-daemon: Disable garbage collection for remote
 connections.

* nix/nix-daemon/nix-daemon.cc (isRemoteConnection): New variable.
  (performOp): For wopCollectGarbage, throw an error when isRemoteConnection
  is set.
  (acceptConnection): Set isRemoteConnection when connection is not AF_UNIX.
* tests/guix-daemon.sh: Add a test for the new behavior.
---
 nix/nix-daemon/nix-daemon.cc | 10 +-
 tests/guix-daemon.sh | 14 ++
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/nix/nix-daemon/nix-daemon.cc b/nix/nix-daemon/nix-daemon.cc
index deb7003d7..782e4acfc 100644
--- a/nix/nix-daemon/nix-daemon.cc
+++ b/nix/nix-daemon/nix-daemon.cc
@@ -54,7 +54,9 @@ static FdSink to(STDOUT_FILENO);
 
 bool canSendStderr;
 
-
+/* This variable is used to keep track of whether a connection
+   comes from a host other than the host running guix-daemon. */
+static bool isRemoteConnection;
 
 /* This function is called anytime we want to write something to
stderr.  If we're in a state where the protocol allows it (i.e.,
@@ -529,6 +531,11 @@ static void performOp(bool trusted, unsigned int clientVersion,
 }
 
 case wopCollectGarbage: {
+if (isRemoteConnection) {
+throw Error("Garbage collection is disabled for remote hosts.");
+break;
+}
+
 GCOptions options;
 options.action = (GCOptions::GCAction) readInt(from);
 options.pathsToDelete = readStorePaths(from);
@@ -934,6 +941,7 @@ static void acceptConnection(int f

Re: 01/01: gnu: Add guile-curl.

2018-04-19 Thread Roel Janssen

Mark H Weaver <m...@netris.org> writes:

> Hi Roel,
>
> r...@gnu.org (Roel Janssen) writes:
>
>> roelj pushed a commit to branch master
>> in repository guix.
>>
>> commit 5e3010a2ac651397e0cb69239a7d7aa3c0a5703e
>> Author: Roel Janssen <r...@gnu.org>
>> Date:   Wed Apr 18 23:00:41 2018 +0200
>>
>> gnu: Add guile-curl.
>> 
>> * gnu/packages/curl.scm (guile-curl): New variable.
>
> [...]
>
>> +  (modify-phases %standard-phases
>> +(add-after 'install 'patch-extension-path
>> +  (lambda* (#:key outputs #:allow-other-keys)
>> + (let* ((out  (assoc-ref outputs "out"))
>> +(curl.scm (string-append
>> +   out "/share/guile/site/2.2/curl.scm"))
>> +(curl.go  (string-append
>> +   out "/lib/guile/2.2/site-ccache/curl.go"))
>> +(ext  (string-append out "/lib/guile/2.2/"
>> + "extensions/libguile-curl")))
>> +   (substitute* curl.scm (("libguile-curl") ext))
>> +   ;; The build system does not actually compile the Scheme 
>> module.
>> +   ;; So we can compile it and put it in the right place in one 
>> go.
>> +   (system* "guild" "compile" curl.scm "-o" curl.go))
>> +   #t)
>
> Please use 'invoke' instead of 'system*' from now on, so that errors in
> the subprocess will be detected and reported using exceptions.  As you
> have it now, compile failures will be ignored.

Whoops.  I will try to clear my brain's internal cache.
Thanks for letting me know.

>
> Would you like to push a fix?

I pushed a fix in d28e5ad23.
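
For the record, the change essentially replaces the ‘system*’ call:

;; Before: a failure of guild went unnoticed.
(system* "guild" "compile" curl.scm "-o" curl.go)
;; After: 'invoke' raises an exception when guild exits with a
;; non-zero status, so the build fails loudly.
(invoke "guild" "compile" curl.scm "-o" curl.go)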

Kind regards,
Roel Janssen



Re: [PATCH] guix-daemon: Add option to disable garbage collection.

2018-04-19 Thread Roel Janssen

Ludovic Courtès <ludovic.cour...@inria.fr> writes:

> Hello Roel,
>
> Roel Janssen <r...@gnu.org> skribis:
>

[...]

>
>> From 00f489d6303720c65571fdf0bc9ee810a20f70e0 Mon Sep 17 00:00:00 2001
>> From: Roel Janssen <r...@gnu.org>
>> Date: Wed, 11 Apr 2018 09:52:11 +0200
>> Subject: [PATCH] guix-daemon: Disable garbage collection for remote hosts.
>>
>> * nix/libstore/gc.cc (collectGarbage): Check for remote connections.
>> * nix/libstore/globals.hh: Add isRemoteConnection setting.
>> * nix/nix-daemon/nix-daemon.cc (performOp): Display appropriate error 
>> message;
>>   (acceptConnection): Set isRemoteConnection when connection is over TCP.
>
> [...]
>
>> --- a/nix/libstore/gc.cc
>> +++ b/nix/libstore/gc.cc
>> @@ -595,6 +595,10 @@ void LocalStore::removeUnusedLinks(const GCState & 
>> state)
>>  
>>  void LocalStore::collectGarbage(const GCOptions & options, GCResults & 
>> results)
>>  {
>> +if (settings.isRemoteConnection) {
>> +return;
>> +}
>
> I think this is unnecessary since the daemon already checks for that.
> (It’s also nicer to keep ‘LocalStore’ unaware of the connection
> details.)

Right.  It is indeed unnecessary.  In the updated patch, I no longer do this.

>
>> diff --git a/nix/nix-daemon/nix-daemon.cc b/nix/nix-daemon/nix-daemon.cc
>> index deb7003d7..65770ba95 100644
>> --- a/nix/nix-daemon/nix-daemon.cc
>> +++ b/nix/nix-daemon/nix-daemon.cc
>> @@ -529,6 +529,11 @@ static void performOp(bool trusted, unsigned int 
>> clientVersion,
>>  }
>>  
>>  case wopCollectGarbage: {
>> +if (settings.isRemoteConnection) {
>> +throw Error("Garbage collection is disabled for remote hosts.");
>> +break;
>> +}
>>  GCOptions options;
>>  options.action = (GCOptions::GCAction) readInt(from);
>>  options.pathsToDelete = readStorePaths(from);
>
> I was wondering if we would like to allow some of the ‘GCAction’ values,
> but maybe it’s better to disallow them altogether like this code does.

Could we please start with a “disable any GC” and start allowing cases
on a case-by-case basis?  The reason I request this is that it makes it
a lot easier to reason about from a sysadmin point of view.

I'd like to think of it like this: “Garbage collection is effectively
turned off.”  With the current patch, we can reason about it this way.

>
>> @@ -934,6 +939,7 @@ static void acceptConnection(int fdSocket)
>> connection.  Setting these to -1 means: do not change.  
>> */
>>  settings.clientUid = clientUid;
>>  settings.clientGid = clientGid;
>> +settings.isRemoteConnection = (remoteAddr.ss_family != 
>> AF_UNIX);
>
> I think you can make ‘isRemoteConnection’ a static global variable in
> nix-daemon.cc instead of adding it to ‘Settings’.  So it would do
> something like:
>
> --8<---cut here---start->8---
>   /* Fork a child to handle the connection. */
>   startProcess([&]() {
> close(fdSocket);
>
> /* Background the daemon. */
> if (setsid() == -1)
> throw SysError(format("creating a new session"));
>
> /* Restore normal handling of SIGCHLD. */
> setSigChldAction(false);
>
> /* For debugging, stuff the pid into argv[1]. */
> if (clientPid != -1 && argvSaved[1]) {
> string processName = std::to_string(clientPid);
> strncpy(argvSaved[1], processName.c_str(), 
> strlen(argvSaved[1]));
> }
>
> isRemoteConnection = …;  /*  <– this is the new line */
>
> /* Store the client's user and group for this connection. This
>has to be done in the forked process since it is per
>connection.  Setting these to -1 means: do not change.  */
> settings.clientUid = clientUid;
>   settings.clientGid = clientGid;
> --8<---cut here---end--->8---

Right.  I implemented it using a static global variable in
nix-daemon.cc.  This patch gets shorter and shorter. :)

>
> Last thing: could you add a couple of tests?  tests/guix-daemon.sh
> already has tests for ‘--listen’, so you could take inspiration from
> those.

I included a test, but I don't know how I can properly run this test.
Could you elaborate on how I can test the test(s)?

Re: [PATCH] guix-daemon: Add option to disable garbage collection.

2018-04-17 Thread Roel Janssen
Hello there,

I'm not sure this made it to the mailing list.  Is the proposed patch
fine to disable the GC for remote connections?

Thanks!

Kind regards,
Roel Janssen


Roel Janssen <r...@gnu.org> writes:

> Roel Janssen <r...@gnu.org> writes:
>
>> Ludovic Courtès <ludovic.cour...@inria.fr> writes:
>>
>>> Hello Roel,
>>>
>>> Roel Janssen <r...@gnu.org> skribis:
>>>
>>>> The patch adds a “disableGarbageCollection” boolean variable to the
>>>> guix-daemon settings, and on each occasion where a store item may be
>>>> deleted, it checks this option.
>>>>
>>>> This option can be set using “--disable-gc”.
>>>>
>>>> It would be great if someone could review this and discuss whether
>>>> this is the right way to implement such a feature.  And to point out
>>>> what else would be needed to include this option in guix-daemon.
>>>
>>> I suppose the use case is when guix-daemon runs on a machine and is
>>> accessed over TCP/IP (with GUIX_DAEMON_SOCKET=guix://…) from other
>>> machines, right?
>>
>> That's right.
>>
>>> In this case, I thought guix-daemon could explicitly check whether the
>>> peer is remote, and disable GC in that case.  That is, ‘guix gc’ would
>>> still work locally on the machine that runs guix-daemon, but it would no
>>> longer work remotely.
>>>
>>> How does that sound?
>>
>> That sounds like it solves our use-case, but only because in our
>> case the access to the machine running guix-daemon is limited.
>>
>> So, even though I'm not sure how to implement this, your solution is
>> fine with me.
>
> I implemented the solution in the attached patch.  When a connection
> does not come from the UNIX socket, it is treated as “remote”.  So,
> local TCP connections would also be treated as “remote”.
>
> I assumed ‘collectGarbage()’ is the entry point for all garbage collection,
> is that correct?
>
> Kind regards,
> Roel Janssen
>
> From 00f489d6303720c65571fdf0bc9ee810a20f70e0 Mon Sep 17 00:00:00 2001
> From: Roel Janssen <r...@gnu.org>
> Date: Wed, 11 Apr 2018 09:52:11 +0200
> Subject: [PATCH] guix-daemon: Disable garbage collection for remote hosts.
>
> * nix/libstore/gc.cc (collectGarbage): Check for remote connections.
> * nix/libstore/globals.hh: Add isRemoteConnection setting.
> * nix/nix-daemon/nix-daemon.cc (performOp): Display appropriate error message;
>   (acceptConnection): Set isRemoteConnection when connection is over TCP.
> ---
>  nix/libstore/gc.cc   | 4 
>  nix/libstore/globals.hh  | 4 
>  nix/nix-daemon/nix-daemon.cc | 6 ++
>  3 files changed, 14 insertions(+)
>
> diff --git a/nix/libstore/gc.cc b/nix/libstore/gc.cc
> index 72eff5242..1bc6eedb5 100644
> --- a/nix/libstore/gc.cc
> +++ b/nix/libstore/gc.cc
> @@ -595,6 +595,10 @@ void LocalStore::removeUnusedLinks(const GCState & state)
>  
>  void LocalStore::collectGarbage(const GCOptions & options, GCResults & 
> results)
>  {
> +if (settings.isRemoteConnection) {
> +return;
> +}
> +
>  GCState state(results);
>  state.options = options;
>  state.trashDir = settings.nixStore + "/trash";
> diff --git a/nix/libstore/globals.hh b/nix/libstore/globals.hh
> index 1293625e1..83efbcd50 100644
> --- a/nix/libstore/globals.hh
> +++ b/nix/libstore/globals.hh
> @@ -81,6 +81,10 @@ struct Settings {
>  uid_t clientUid;
>  gid_t clientGid;
>  
> +/* Whether the connection comes from a host other than the host running
> +   guix-daemon. */
> +bool isRemoteConnection;
> +
>  /* Whether, if we cannot realise the known closure corresponding
> to a derivation, we should try to normalise the derivation
> instead. */
> diff --git a/nix/nix-daemon/nix-daemon.cc b/nix/nix-daemon/nix-daemon.cc
> index deb7003d7..65770ba95 100644
> --- a/nix/nix-daemon/nix-daemon.cc
> +++ b/nix/nix-daemon/nix-daemon.cc
> @@ -529,6 +529,11 @@ static void performOp(bool trusted, unsigned int 
> clientVersion,
>  }
>  
>  case wopCollectGarbage: {
> +if (settings.isRemoteConnection) {
> +throw Error("Garbage collection is disabled for remote hosts.");
> +break;
> +}
> +
>  GCOptions options;
>  options.action = (GCOptions::GCAction) readInt(from);
>  options.pathsToDelete = readStorePaths(from);
> @@ -934,6 +939,7 @@ static void acceptConnection(int fdSocket)
> connection.  Setting these to -1 means: do not change.  */
>  settings.clientUid = clientUid;
>   settings.clientGid = clientGid;
> +settings.isRemoteConnection = (remoteAddr.ss_family != 
> AF_UNIX);
>  
>  /* Handle the connection. */
>  from.fd = remote;




Re: Paper preprint: Reproducible genomics analysis pipelines with GNU Guix

2018-04-11 Thread Roel Janssen

Ricardo Wurmus <rek...@elephly.net> writes:

> Hey all,
>
> I’m happy to announce that the group I’m working with has released a
> preprint of a paper on reproducibility with the title:
>
> Reproducible genomics analysis pipelines with GNU Guix
> https://www.biorxiv.org/content/early/2018/04/11/298653
>
> We built a collection of bioinformatics pipelines and packaged them with
> GNU Guix, and then looked at the degree to which the software achieves
> bit-reproducibility (spoiler: ~98%), analysed sources of non-determinism
> (e.g. time stamps), discussed experimental reproducibility at runtime
> (e.g. random number generators, kernel+glibc interface, etc) and
> commented on the idea of using “containers” (or application bundles)
> instead.
>
> The middle section is a bit heavy on genomics to showcase the features
> of the pipelines, but I think the introduction and the
> discussion/conclusion may be of general interest.

This looks really great!  I also like how you leverage GNU Autotools.

Finally there is a paper that uses GNU Guix as a deployment tool for
scientific purposes. :)

Kind regards,
Roel Janssen



Re: [PATCH] guix-daemon: Add option to disable garbage collection.

2018-04-11 Thread Roel Janssen

Roel Janssen <r...@gnu.org> writes:

> Ludovic Courtès <ludovic.cour...@inria.fr> writes:
>
>> Hello Roel,
>>
>> Roel Janssen <r...@gnu.org> skribis:
>>
>>> The patch adds a “disableGarbageCollection” boolean variable to the
>>> guix-daemon settings, and on each occasion where a store item may be
>>> deleted, it checks this option.
>>>
>>> This option can be set using “--disable-gc”.
>>>
>>> It would be great if someone could review this and discuss whether
>>> this is the right way to implement such a feature.  And to point out
>>> what else would be needed to include this option in guix-daemon.
>>
>> I suppose the use case is when guix-daemon runs on a machine and is
>> accessed over TCP/IP (with GUIX_DAEMON_SOCKET=guix://…) from other
>> machines, right?
>
> That's right.
>
>> In this case, I thought guix-daemon could explicitly check whether the
>> peer is remote, and disable GC in that case.  That is, ‘guix gc’ would
>> still work locally on the machine that runs guix-daemon, but it would no
>> longer work remotely.
>>
>> How does that sound?
>
> That sounds like it solves our use-case, but only because in our
> case the access to the machine running guix-daemon is limited.
>
> So, even though I'm not sure how to implement this, your solution is
> fine with me.

I implemented the solution in the attached patch.  When a connection
does not come from the UNIX socket, it is treated as “remote”.  So,
local TCP connections would also be treated as “remote”.

I assumed ‘collectGarbage()’ is the entry point for all garbage collection,
is that correct?

Kind regards,
Roel Janssen

>From 00f489d6303720c65571fdf0bc9ee810a20f70e0 Mon Sep 17 00:00:00 2001
From: Roel Janssen <r...@gnu.org>
Date: Wed, 11 Apr 2018 09:52:11 +0200
Subject: [PATCH] guix-daemon: Disable garbage collection for remote hosts.

* nix/libstore/gc.cc (collectGarbage): Check for remote connections.
* nix/libstore/globals.hh: Add isRemoteConnection setting.
* nix/nix-daemon/nix-daemon.cc (performOp): Display appropriate error message;
  (acceptConnection): Set isRemoteConnection when connection is over TCP.
---
 nix/libstore/gc.cc   | 4 
 nix/libstore/globals.hh  | 4 
 nix/nix-daemon/nix-daemon.cc | 6 ++
 3 files changed, 14 insertions(+)

diff --git a/nix/libstore/gc.cc b/nix/libstore/gc.cc
index 72eff5242..1bc6eedb5 100644
--- a/nix/libstore/gc.cc
+++ b/nix/libstore/gc.cc
@@ -595,6 +595,10 @@ void LocalStore::removeUnusedLinks(const GCState & state)
 
 void LocalStore::collectGarbage(const GCOptions & options, GCResults & results)
 {
+if (settings.isRemoteConnection) {
+return;
+}
+
 GCState state(results);
 state.options = options;
 state.trashDir = settings.nixStore + "/trash";
diff --git a/nix/libstore/globals.hh b/nix/libstore/globals.hh
index 1293625e1..83efbcd50 100644
--- a/nix/libstore/globals.hh
+++ b/nix/libstore/globals.hh
@@ -81,6 +81,10 @@ struct Settings {
 uid_t clientUid;
 gid_t clientGid;
 
+/* Whether the connection comes from a host other than the host running
+   guix-daemon. */
+bool isRemoteConnection;
+
 /* Whether, if we cannot realise the known closure corresponding
to a derivation, we should try to normalise the derivation
instead. */
diff --git a/nix/nix-daemon/nix-daemon.cc b/nix/nix-daemon/nix-daemon.cc
index deb7003d7..65770ba95 100644
--- a/nix/nix-daemon/nix-daemon.cc
+++ b/nix/nix-daemon/nix-daemon.cc
@@ -529,6 +529,11 @@ static void performOp(bool trusted, unsigned int clientVersion,
 }
 
 case wopCollectGarbage: {
+if (settings.isRemoteConnection) {
+throw Error("Garbage collection is disabled for remote hosts.");
+break;
+}
+
 GCOptions options;
 options.action = (GCOptions::GCAction) readInt(from);
 options.pathsToDelete = readStorePaths(from);
@@ -934,6 +939,7 @@ static void acceptConnection(int fdSocket)
connection.  Setting these to -1 means: do not change.  */
 settings.clientUid = clientUid;
 		settings.clientGid = clientGid;
+settings.isRemoteConnection = (remoteAddr.ss_family != AF_UNIX);
 
 /* Handle the connection. */
 from.fd = remote;
-- 
2.16.3



Re: [PATCH] guix-daemon: Add option to disable garbage collection.

2018-04-03 Thread Roel Janssen

Ludovic Courtès <ludovic.cour...@inria.fr> writes:

> Hello Roel,
>
> Roel Janssen <r...@gnu.org> skribis:
>
>> The patch adds a “disableGarbageCollection” boolean variable to the
>> guix-daemon settings, and on each occasion where a store item may be
>> deleted, it checks this option.
>>
>> This option can be set using “--disable-gc”.
>>
>> It would be great if someone could review this and discuss whether
>> this is the right way to implement such a feature.  And to point out
>> what else would be needed to include this option in guix-daemon.
>
> I suppose the use case is when guix-daemon runs on a machine and is
> accessed over TCP/IP (with GUIX_DAEMON_SOCKET=guix://…) from other
> machines, right?

That's right.

> In this case, I thought guix-daemon could explicitly check whether the
> peer is remote, and disable GC in that case.  That is, ‘guix gc’ would
> still work locally on the machine that runs guix-daemon, but it would no
> longer work remotely.
>
> How does that sound?

That sounds like it solves our use-case, but only because in our
case the access to the machine running guix-daemon is limited.

So, even though I'm not sure how to implement this, your solution is
fine with me.

>
> Thanks,
> Ludo’.

Thanks!

Kind regards,
Roel Janssen



Re: 01/01: gnu: Add perl-inline-c.

2018-04-03 Thread Roel Janssen

Roel Janssen <r...@gnu.org> writes:

> Ludovic Courtès <l...@gnu.org> writes:
>
>> Hi Roel,
>>
>> r...@gnu.org (Roel Janssen) skribis:
>>
>>> +(license (package-license perl
>>
>> Could you use (license perl-license) instead?  It doesn’t make any
>> difference in this case but it’s generally “safer” (see (guix
>> licenses)).
>
> Of course!  If I may ask, is the coreutils input and the substitution OK?

Nevermind, I'm mixing this up with another Perl package.

>
>> Thanks,
>> Ludo’.




Re: 01/01: gnu: Add perl-inline-c.

2018-04-03 Thread Roel Janssen

Ludovic Courtès <l...@gnu.org> writes:

> Hi Roel,
>
> r...@gnu.org (Roel Janssen) skribis:
>
>> +(license (package-license perl
>
> Could you use (license perl-license) instead?  It doesn’t make any
> difference in this case but it’s generally “safer” (see (guix
> licenses)).

Of course!  If I may ask, is the coreutils input and the substitution OK?

> Thanks,
> Ludo’.




Re: [PATCH] guix-daemon: Add option to disable garbage collection.

2018-04-03 Thread Roel Janssen

Adam Van Ymeren <a...@vany.ca> writes:

> Just out of curiosity, what is your situation where the daemon can't
> see all GC roots?  Are you sharing the store over NFS or something?

Yes.  And the underlying storage system is configured in such a way that
“root” is not allowed to look into users' folders.

Kind regards,
Roel Janssen



[PATCH] guix-daemon: Add option to disable garbage collection.

2018-04-03 Thread Roel Janssen
Dear Guix,

I have an interesting situation where the guix-daemon cannot see all
directories that (may) contain Guix profiles, and therefore is not able
to judge whether a GC root is gone and can be collected, or whether it's
just inaccessible.

To be on the safe side, I'd like to disable garbage collection on this
system.  To achieve this, I wrote the attached patch.  Even though “it
works for me”, I don't think it's good to be added as-is.  At least I
need to figure out how to make the error message “Garbage collection is
disabled.” translatable.

The patch adds a “disableGarbageCollection” boolean variable to the
guix-daemon settings, and on each occasion where a store item may be
deleted, it checks this option.

This option can be set using “--disable-gc”.

It would be great if someone could review this and discuss whether
this is the right way to implement such a feature.  And to point out
what else would be needed to include this option in guix-daemon.

Thank you for your time.

Kind regards,
Roel Janssen

>From d842f320f0ee911d7d219bba7baa45240edcbe6d Mon Sep 17 00:00:00 2001
From: Roel Janssen <r...@gnu.org>
Date: Tue, 3 Apr 2018 11:22:16 +0200
Subject: [PATCH] guix-daemon: Add option to disable garbage collection.

* nix/libstore/gc.cc: Return early on deleterious functions.
* nix/libstore/globals.hh (disableGarbageCollection): New settings variable.
* nix/nix-daemon/guix-daemon.cc: Implement new settings variable.
* nix/nix-daemon/nix-daemon.cc: Show appropriate message.
---
 nix/libstore/gc.cc| 20 
 nix/libstore/globals.hh   |  3 +++
 nix/nix-daemon/guix-daemon.cc |  6 ++
 nix/nix-daemon/nix-daemon.cc  |  5 +
 4 files changed, 34 insertions(+)

diff --git a/nix/libstore/gc.cc b/nix/libstore/gc.cc
index 72eff5242..b811cacd1 100644
--- a/nix/libstore/gc.cc
+++ b/nix/libstore/gc.cc
@@ -393,6 +393,10 @@ bool LocalStore::isActiveTempFile(const GCState & state,
 
 void LocalStore::deleteGarbage(GCState & state, const Path & path)
 {
+if (settings.disableGarbageCollection) {
+return;
+}
+
 unsigned long long bytesFreed;
 deletePath(path, bytesFreed);
 state.results.bytesFreed += bytesFreed;
@@ -401,6 +405,10 @@ void LocalStore::deleteGarbage(GCState & state, const Path & path)
 
 void LocalStore::deletePathRecursive(GCState & state, const Path & path)
 {
+if (settings.disableGarbageCollection) {
+return;
+}
+
 checkInterrupt();
 
 unsigned long long size = 0;
@@ -513,6 +521,10 @@ bool LocalStore::canReachRoot(GCState & state, PathSet & visited, const Path & p
 
 void LocalStore::tryToDelete(GCState & state, const Path & path)
 {
+if (settings.disableGarbageCollection) {
+return;
+}
+
 checkInterrupt();
 
 if (path == linksDir || path == state.trashDir) return;
@@ -552,6 +564,10 @@ void LocalStore::tryToDelete(GCState & state, const Path & path)
the link count. */
 void LocalStore::removeUnusedLinks(const GCState & state)
 {
+if (settings.disableGarbageCollection) {
+return;
+}
+
 AutoCloseDir dir = opendir(linksDir.c_str());
 if (!dir) throw SysError(format("opening directory `%1%'") % linksDir);
 
@@ -595,6 +611,10 @@ void LocalStore::removeUnusedLinks(const GCState & state)
 
 void LocalStore::collectGarbage(const GCOptions & options, GCResults & results)
 {
+if (settings.disableGarbageCollection) {
+return;
+}
+
 GCState state(results);
 state.options = options;
 state.trashDir = settings.nixStore + "/trash";
diff --git a/nix/libstore/globals.hh b/nix/libstore/globals.hh
index 1293625e1..54e04f218 100644
--- a/nix/libstore/globals.hh
+++ b/nix/libstore/globals.hh
@@ -218,6 +218,9 @@ struct Settings {
 /* Whether the importNative primop should be enabled */
 bool enableImportNative;
 
+/* Whether the disable garbage collection. */
+bool disableGarbageCollection;
+
 private:
 SettingsMap settings, overrides;
 
diff --git a/nix/nix-daemon/guix-daemon.cc b/nix/nix-daemon/guix-daemon.cc
index b71b100f6..13811cd99 100644
--- a/nix/nix-daemon/guix-daemon.cc
+++ b/nix/nix-daemon/guix-daemon.cc
@@ -89,6 +89,7 @@ builds derivations on behalf of its clients.");
 #define GUIX_OPT_TIMEOUT 18
 #define GUIX_OPT_MAX_SILENT_TIME 19
 #define GUIX_OPT_LOG_COMPRESSION 20
+#define GUIX_OPT_DISABLE_GC 21
 
 static const struct argp_option options[] =
   {
@@ -133,6 +134,8 @@ static const struct argp_option options[] =
   n_("disable automatic file \"deduplication\" in the store") },
 { "disable-store-optimization", GUIX_OPT_DISABLE_DEDUPLICATION, 0,
   OPTION_ALIAS | OPTION_HIDDEN, NULL },
+{ "disable-gc", GUIX_OPT_DISABLE_GC, 0, 0,
+  n_("disable garbage collection.") },
 
 { "impersonate-linux-2.6", GUIX_OPT_IMPERSO

Re: Use guix to distribute data & reproducible (data) science

2018-02-17 Thread Roel Janssen

Amirouche Boubekki writes:

> Hello again Ludovic,
>
> On 2018-02-09 18:13, ludovic.cour...@inria.fr wrote:
>> Hi!
>> 
>> Amirouche Boubekki <amirou...@hypermove.net> skribis:
>> 
>>> tl;dr: Distribution of data and software seems similar.
>>>Data is more and more important in software and reproducible
>>>science. Data science ecosystem lakes resources sharing.
>>>I think guix can help.
>> 
>> I think some of us especially Guix-HPC folks are convinced about the
>> usefulness of Guix as one of the tools in the reproducible science
>> toolchain (that was one of the themes of my FOSDEM talk).  :-)
>> 
>> Now, whether Guix is the right tool to distribute data, I don’t know.
>> Distributing large amounts of data is a job in itself, and the store
>> isn’t designed for that.  It could quickly become a bottleneck.
>
> What does it mean technically that the store “isn't designed for that”?
>
>> That’s one of the reasons why the Guix Workflow Language (GWL)
>> does not store scientific data in the store itself.
>
> Sorry, I did not follow the engineering discussion around GWL.
> Looking up the web brings me [0]. That said the question I am
> asking is not answered there. In particular there is no rationale
> for that in the design paper.
>
> [0] http://lists.gnu.org/archive/html/guix-devel/2016-10/msg01248.html
>
>> I think data should probably be stored and distributed out-of-band 
>> using
>> appropriate storage mechanisms.
>
> Then, in a follow up mail, you reply to Konrad:
>
>>> Konrad Hinsen <konrad.hin...@fastmail.net> skribis:
>> 
>> [...]
>> 
>>> It would be nice if big datasets could conceptually be handled in the
>>> same way while being stored elsewhere - a bit like git-annex does for
>>> git. And for parallel computing, we could have special build daemons.
>> 
>> Exactly.  I think we need a git-annex/git-lfs-like tool for the store.
>> (It could also be useful for things like secrets, which we don’t want
>> to have in the store.)
>> 

To answer your question:
> What does it mean technically that the store “isn't designed for that”?

I speak only from my own experience with “big data sets”, so maybe it
is different for other people, but we use a separate storage system for
storing large amounts of data.  This separate storage is fault-tolerant
and is optimized for large files, meaning higher latency for file access
to reduce the financial footprint of such a system.

If we were to put data inside the store, we would need to optimize the
storage system for both low latency for small files, and a high storage
capacity.  This is extremely expensive.

Another issue I faced when providing datasets in the store is that
it's quite easy to end up with duplicated copies of the same dataset.

For example, I use the GNU build system for extracting a tarball that
contains a couple of files.  Whenever a package changes that affects the
GNU build system, the data package will be rebuilt.

So I could use the trivial build system instead, but then I'd still need tar
and gzip to unpack the tarball.  Any change to these and the datasets
get duplicated.  This is not ideal.
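
To make the trade-off concrete, such a data package tends to look roughly
like the sketch below.  Everything in it (name, URL, hash, license) is a
placeholder; tar and gzip are the package variables from (gnu packages base)
and (gnu packages compression), and license: assumes the usual
((guix licenses) #:prefix license:) import:

(define-public example-dataset                    ;hypothetical package
  (package
    (name "example-dataset")
    (version "1.0")
    (source (origin
              (method url-fetch)
              (uri "http://example.org/example-dataset-1.0.tar.gz")
              (sha256
               (base32
                ;; Placeholder hash.
                "0000000000000000000000000000000000000000000000000000"))))
    (build-system trivial-build-system)
    (arguments
     `(#:modules ((guix build utils))
       #:builder
       (begin
         (use-modules (guix build utils))
         (let ((source (assoc-ref %build-inputs "source"))
               (tar    (assoc-ref %build-inputs "tar"))
               (gzip   (assoc-ref %build-inputs "gzip"))
               (out    (assoc-ref %outputs "out")))
           ;; Unpacking still needs tar and gzip on PATH.
           (setenv "PATH" (string-append tar "/bin:" gzip "/bin"))
           (mkdir-p out)
           (with-directory-excursion out
             (zero? (system* "tar" "xvf" source)))))))
    (inputs
     `(("tar" ,tar)
       ("gzip" ,gzip)))
    (home-page "http://example.org")
    (synopsis "Example data set")
    (description "Example data set unpacked verbatim into the store.")
    (license license:cc0)))

Even in this minimal form the unpacking step drags tar and gzip into the
derivation, so any change to either of them yields a new store item with an
identical copy of the data — exactly the duplication described above.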

Kind regards,
Roel Janssen



Re: Dinner in Brussels?

2018-01-30 Thread Roel Janssen

Ludovic Courtès writes:

> Hello Guix!
>
> To those going to the Guix workshop in Brussels this Friday: who’s in
> for dinner (+ drink) on Friday evening?

I'd like to join if that's possible.

>
> Even better: who would like to book something (I’m looking at you,
> Brusselers ;-))?
>
> Actually I’m arriving on Thursday afternoon, so if people are around,
> I’d be happy to have dinner on Thursday evening too!  :-) Let’s arrange
> something.

I will be in Brussels around 10 PM or so.  So I won't join on Thursday.

>
> Ludo’.

Thanks!

Kind regards,
Roel Janssen



Re: Guix Workflow Language ?

2018-01-25 Thread Roel Janssen

zimoun writes:

> Dear Roel,
>
> Thank you for your comments.
>
> I was imaging your point 2. And the softwares come from Guix.
> The added benefit was: a controlled and reproducible environment.
> In other words, the added benefit came from the GuixWorkflow (the
> engine of workflow), and not from the Language (lisp EDSL).
> But maybe it is a wrong way.

I get that point.  Maybe it's then a better idea to write the workflow
in CWL (like you would do), and use Guix to generate Docker containers.

Then you do get the benefit of Guix's strong reproducibility and
composability for scientific software, plus you get to keep writing the
workflow in CWL. :-)

>
> From my experience, the classical strategy of writing pipelines is to
> adapt an already existing workflow for one another particular
> question. We fetch bits here and there, do some ugly and dirty hacks
> to have some results; then depending on them, a cleaner pipeline is
> written (or not! :-) or other pieces are tested.
> Again from my experience, there is (at least) 3 issues: the number of
> tools to learn and know enough to be able to adapt; the bits/pieces
> already available; the environment/dependencies and how they are
> managed.
>
> In this context, since 'lispy' syntax is not mainstream (and will
> never be), it appears to me as a hard position. That's why I asked if
> a Guix-backend workflow engine for CWL specs is doable. Run CWL specs
> workflow on the top of the GWL engine.

This is a good question, but how can you describe the origin of a
software package in CWL?  In the GWL, we use the Scheme symbols, and the
Guix programming interface directly, but that is unavailable in CWL.

This is a real problem that I don't see we can easily solve.


>
> However, I got your point, I guess.
> You mean: it is a lot of work with unclear benefits over existing engines.

So, I think it's impossible to express the deployment of a software
program in CWL.  It is not as expressive as GWL in this regard.
Translating what we can write in CWL into a precise Guix package recipe
and its dependencies is very hard.

If I am mistaken here, please let me know.  Maybe we can figure
something out.

>
>
> Therefore, your point 1. reverses "my issue".
> Once the pipeline is well-established, write it with GWL! :-)
> Next, if it is possible to convert this GWL specs pipeline to CWL one
> [+ Docker] (with softwares coming from Guix), then we can enjoy the
> CWL-world engine capabilities.
> The benefit of that is from two sides: run the pipeline with different
> engines; and produce a clean docker image.
>
> So , instead of working on improving the GWL engine (adding features
> about efficiency, Grid,  Amazon, etc.) which is a very tough task, the
> doable plan would be to add an "exporter".
> Right ?

The plan is to implement back-ends, or 'process-engines' for GWL to work
with AWS, Kubernetes, Grid (this one is already supported).

These back-ends are surprisingly easy to write, because the Guix
programming interface allows us to generate virtual machines,
containers, or simply store items if Guix is available locally.

We also implemented a Bash-engine that can generate Bash scripts for
every step of the workflow.  That in combination with the variety of
deployment options solves most of the challenges.

>
>
> Another question, do you think it is doable to write "importers" ?
>
> I am not sure that the metaphor is good enough, but do you think it is
> a feasible goal from the existing GWL to go towards a kind of `Pandoc
> of workflows` ? also packing the softwares.
>
> And a start should be:
>  - write a parser for (subset of) CWL yaml file and obtain the GWL
> representation of the workflow
>  - write a exporter to CWL + Docker image
>
> What do you think ?

Maybe.  But in CWL we cannot describe precise software packages.  So
translating these things to Guix is hard.

>
>
> About the parser, I haven't found yet an easy-to-use Guile lib for
> parsing YAML-like files. Any pointer ? Adapt some Racket ones ?

I don't know of one, sorry.


> Thank you for your insights.
>
> All the best,
> simon

Thanks!

Kind regards,
Roel Janssen



Re: 01/01: gnu: vlc: Enable libdvdread and libdvdcss support.

2018-01-04 Thread Roel Janssen

Mark H Weaver writes:

> Roel Janssen <r...@gnu.org> writes:
>
>> Danny Milosavljevic writes:
>>
>>> Hi Roel,
>>>
>>> On Thu, 04 Jan 2018 14:59:53 +0100
>>> Roel Janssen <r...@gnu.org> wrote:
>>>
>>>> I can confirm that this fixes the build of gnome-disk-utility.
>>>> 
>>>> Should we fix dvdread.pc, or propagate it with libdvdread?
>>>
>>> I think we should propagate.  If libdvdread is requiring libdvdcss
>>> (whether private or not) then libdvdcss ['s pc file] should be there
>>> when libdvdread is used...
>>
>> In that case, may I apply the attached patch?
>
> Looks good to me.  Thanks to you both for working on it!
>
>  Mark

Thanks both for reporting and fixing the problem.
I pushed the new patch in e21f34735.

Kind regards,
Roel Janssen



Re: 01/01: gnu: vlc: Enable libdvdread and libdvdcss support.

2018-01-04 Thread Roel Janssen

Danny Milosavljevic writes:

> Hi Roel,
>
> On Thu, 04 Jan 2018 14:59:53 +0100
> Roel Janssen <r...@gnu.org> wrote:
>
>> I can confirm that this fixes the build of gnome-disk-utility.
>> 
>> Should we fix dvdread.pc, or propagate it with libdvdread?
>
> I think we should propagate.  If libdvdread is requiring libdvdcss (whether 
> private or not) then libdvdcss ['s pc file] should be there when libdvdread 
> is used...

In that case, may I apply the attached patch?

Thanks!

Kind regards,
Roel Janssen

From b7fa57648b5cd45a5ce4f234c99c42fad42366b4 Mon Sep 17 00:00:00 2001
From: Roel Janssen <r...@gnu.org>
Date: Thu, 4 Jan 2018 16:25:44 +0100
Subject: [PATCH] gnu: vlc: Enable libdvdread and libdvdcss support.

* gnu/packages/video.scm (libdvdread): Compile with libdvdcss support;
  (vlc): Add libdvdread as input.
---
 gnu/packages/video.scm | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/gnu/packages/video.scm b/gnu/packages/video.scm
index 2d638abfe..77a82bb9d 100644
--- a/gnu/packages/video.scm
+++ b/gnu/packages/video.scm
@@ -22,6 +22,7 @@
 ;;; Copyright © 2017 Clément Lassieur <clem...@lassieur.org>
 ;;; Copyright © 2017 Gregor Giesen <gie...@zaehlwerk.net>
 ;;; Copyright © 2017 Rutger Helling <rhell...@mykolab.com>
+;;; Copyright © 2018 Roel Janssen <r...@gnu.org>
 ;;;
 ;;; This file is part of GNU Guix.
 ;;;
@@ -1357,6 +1358,12 @@ players, like VLC or MPlayer.")
(base32
 "0ayqiq0psq18rcp6f5pz82sxsq66v0kwv0y55dbrcg68plnxy71j"
 (build-system gnu-build-system)
+(arguments
+ `(#:configure-flags '("--with-libdvdcss=yes")))
+(native-inputs
+ `(("pkg-config" ,pkg-config)))
+(propagated-inputs
+ `(("libdvdcss" ,libdvdcss)))
 (home-page "http://dvdnav.mplayerhq.hu/;)
 (synopsis "Library for reading video DVDs")
 (description
-- 
2.15.1



Re: 01/01: gnu: vlc: Enable libdvdread and libdvdcss support.

2018-01-04 Thread Roel Janssen

Danny Milosavljevic writes:

> Hi Mark,
>
> thanks for the heads-up!
>
> The fix would be in our libdvdread:
>
> diff --git a/gnu/packages/video.scm b/gnu/packages/video.scm
> index e64c1e089..e46ec15f8 100644
> --- a/gnu/packages/video.scm
> +++ b/gnu/packages/video.scm
> @@ -1365,7 +1365,7 @@ players, like VLC or MPlayer.")
>   `(#:configure-flags '("--with-libdvdcss=yes")))
>  (native-inputs
>   `(("pkg-config" ,pkg-config)))
> -(inputs
> +(propagated-inputs
>   `(("libdvdcss" ,libdvdcss)))
>  (description
>   "Libdvdread provides a simple foundation for reading DVD video
>
> ... because dvdread.pc Requires.private libdvdcss.
>
> Not sure what's up with meson's unhelpful error message...

I can confirm that this fixes the build of gnome-disk-utility.

Should we fix dvdread.pc, or propagate it with libdvdread?

Kind regards,
Roel Janssen



Re: Performance regression on NFS with new manifest version

2017-11-08 Thread Roel Janssen

Ludovic Courtès writes:

> Hello,
>
> Roel Janssen <r...@gnu.org> skribis:
>
>> I couldn't install *all* R packages, but I used it on our shared R
>> profile:
>>
>> $ guixr package --list-installed -p /gnu/profiles/per-language/r
>> ncurses  6.0 out 
>> /gnu/store/djvxj8r1xwvrm89xqjrd44wwaxc02i74-ncurses-6.0
>> coreutils8.27out 
>> /gnu/store/ps92fz5p6l3mz9ddi388p1891r2q3fva-coreutils-8.27
>> grep 3.0 out /gnu/store/bxnxmg6vamnlp95skrgdqw7s86ag1f51-grep-3.0
>> sed  4.4 out /gnu/store/673v5pxadfdj1zkmpm90s6j89367w4af-sed-4.4
>> r-sparql 1.16out 
>> /gnu/store/5qhr4va0af65a0jrpj6nc7xdnw9s4345-r-sparql-1.16
>
> Unfortunately most of these packages are not in Guix proper AFAICS.
> Could you come up with a simple way for me to reproduce the issue on
> Guix master?

Just installing a lot of R packages in a profile, including R itself, would
be sufficient to reproduce it.


>
>> $ strace -c guixr package --search-paths -p /gnu/profiles/per-language/r 
>>   
>> export PATH="/gnu/profiles/per-language/r/bin"
>> export R_LIBS_SITE="/gnu/profiles/per-language/r/site-library/"
>> export TERMINFO_DIRS="/gnu/profiles/per-language/r/share/terminfo"
>> % time seconds  usecs/call callserrors syscall
>> -- --- --- - - 
>>  98.310.139510   1162612 6 wait4
>>   0.770.001087  3630 9 open
>>   0.430.000615  2129 8 stat
>
> I think you’re tracing ‘guixr’, which forks and just waits for ‘guix’
> and other commands, no?

Indeed, here's a new strace, without using 'guixr', but instead using
'guix' with 'guix-daemon' listening on a TCP port:

$ time strace -c guix package --search-paths -p /gnu/profiles/per-language/r

  
export PATH="/gnu/profiles/per-language/r/bin"
export R_LIBS_SITE="/gnu/profiles/per-language/r/site-library/"
export TERMINFO_DIRS="/gnu/profiles/per-language/r/share/terminfo"
% time seconds  usecs/call callserrors syscall
-- --- --- - - 
 30.150.010014  30   334   162 open
 24.380.008100   5  1518  1285 stat
 23.290.007738  9086   read
 11.210.003723  12   31474 futex
  2.990.000994   5   220   mmap
  2.490.000826   5   175   mprotect
  1.230.000407   2   175   close
  0.720.000238   466   fstat
  ...
-- --- --- - - 
100.000.033219  3335  1535 total

real1m12.196s
user1m10.090s
sys 0m0.377s

So, I don't think the real issue is on display here, because strace only
thinks the command took 0.033219 seconds, but it actually took 72.196
seconds.

What worries me is that almost all stat calls are erroneous, and
I think the number of calls to "open" is a bit high for this command.

So I stripped my Bash environment (unset LD_LIBRARY_PATH, only put the
path to 'guix' in PATH), and ran the strace again:

$ /usr/bin/strace -c guix package --search-paths -p 
/gnu/profiles/per-language/r

   
export PATH="/gnu/profiles/per-language/r/bin"
export R_LIBS_SITE="/gnu/profiles/per-language/r/site-library/"
export TERMINFO_DIRS="/gnu/profiles/per-language/r/share/terminfo"
% time seconds  usecs/call callserrors syscall
-- --- --- - - 
 35.750.011242   7  1510  1279 stat
 34.080.010718  35   305   135 open
  8.450.002659   8   33683 futex
  5.240.001649  2083   read
  3.940.001238   6   220   mmap
  2.990.000941   5   175   mprotect
  1.850.000582 582 1   readlink
  1.430.000450   764 2 lstat
  1.410.000445   3   170   close
  1.020.000320   564   fstat
  ...
-- --- --- - - 
100.000.031449  3283  1509 total

Even though the number of calls to "stat" and "open" is slightly lower
now, it's still a lot.

>
> TIA!
>
> Ludo’.




Re: Creating Docker containers in Scheme

2017-11-08 Thread Roel Janssen

Ludovic Courtès writes:

> Hi,
>
> Roel Janssen <r...@gnu.org> skribis:
>
>> I'd like to create a Docker container from Scheme.  Looking at
>> guix/scripts/pack.scm, I believe something like this should be possible:
>>
>>   (docker-image "my-container"
>> (profile-derivation
>>   (packages->manifest (list hello coreutils
>
> Move precisely:
>
>   (mlet %store-monad ((profile (profile-derivation …)))
> (docker-image "my-container" profile))
>

Oh, of course! :)
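
Spelled out end to end, a sketch of that could look like the following.  It
assumes 'docker-image' is exported from (guix scripts pack), which is what is
proposed below, and uses hello and coreutils from (gnu packages base) as
stand-in packages:

(use-modules (guix store)
             (guix monads)
             (guix derivations)
             (guix profiles)
             (guix scripts pack)        ;assumes docker-image is exported
             (gnu packages base))       ;hello, coreutils

(with-store store
  (run-with-store store
    (mlet* %store-monad ((profile (profile-derivation
                                   (packages->manifest
                                    (list hello coreutils))))
                         (drv     (docker-image "my-container" profile)))
      (mbegin %store-monad
        (built-derivations (list drv))
        ;; Return the path of the Docker image tarball.
        (return (derivation->output-path drv))))))

The result should be the same kind of image tarball that the Docker backend
of 'guix pack' produces, so it can be loaded with 'docker load'.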

Unfortunately, I cannot seem to get it to work.
Here's what I do (mind the GWL process- stuff):

--8<---cut here---start->8---
(define* (process->docker-derivation proc #:key (guile (default-guile)))
  "Return a Docker container that can run the PROCEDURE described in PROC, with
PROCEDURE's imported modules in its search path."
  (let ((name (process-full-name proc))
(exp (process-procedure proc))
(out (process-output-path proc))
(packages (process-package-inputs proc)))
(let ((out-str (if out (format #f "(setenv \"out\" ~s)" out) "")))
      (mlet %store-monad ((set-load-path
                           (load-path-expression (gexp-modules exp)))
                          (container (docker-image
                                      (string-append (process-full-name proc)
                                                     "-docker")
                                      (profile-derivation
                                       (packages->manifest packages)))))
        (gexp->derivation
         name
         (gexp
          (call-with-output-file (ungexp output)
            (lambda (port)
              (format port "# Docker image: ~a~%" (ungexp container)))))
         #:graft? #f)))))
--8<---cut here---end--->8---

And the error I get is:
  wrong-type-arg: string-prefix?

Is there anything obviously wrong here?

>> Is this something we could add to the the public interface of a module?
>
> Sure.  For now the easiest solution would be to export ‘docker-image’
> from (guix scripts pack).
>
> Longer-term, we could rename (guix docker) to (guix build docker) and
> move ‘docker-image’ to a new (guix docker) module, but perhaps we’d also
> need a (guix pack) modules containing tools that are shared between the
> docker and tarball backends of ‘guix pack’.
>
> WDYT?

It'd be nice to keep the (guix scripts ...) modules small, so that they
only do command-line handling.  So I think a (guix build docker) and a
(guix pack) module would be good.

>
> Ludo’.

Thanks for your time!

Kind regards,
Roel Janssen



Re: Let’s meet before FOSDEM!

2017-11-07 Thread Roel Janssen

Ludovic Courtès writes:

> Hello Guix!
>
> Since we didn’t get a devroom this time in spite of the efforts of
> Manolis and Pjotr, what about holding a Guix meeting let’s say on Friday
> before FOSDEM (Feb. 2nd, in Brussels)?
>
> We could meet for the whole day, in a place that remains to be defined,
> to discuss about all things Guix{,SD}.  The event could be a mixture of
> short talks on specific topics, discussions on technical and less
> technical topics (multiple bootloader support & GuixSD on ARM, making
> ‘guix pull’ great again, Cuirass, reaching out to more
> users/contributors, Guix & HPC, improving our infrastructure and
> organization, etc.), and possibly hacking sessions.
>
> Who would be willing to join?  (You can reply privately if you prefer.)

Count me in!

Kind regards,
Roel Janssen




Creating Docker containers in Scheme

2017-11-02 Thread Roel Janssen
Dear Guix,

I'd like to create a Docker container from Scheme.  Looking at
guix/scripts/pack.scm, I believe something like this should be possible:

  (docker-image "my-container"
(profile-derivation
  (packages->manifest (list hello coreutils

Is this something we could add to the public interface of a module?

Kind regards,
Roel Janssen



Re: IcedTea is not linking correctly with libjvm.so

2017-11-01 Thread Roel Janssen
Hi Ricardo,

Ricardo Wurmus writes:

> Hi Roel,
>
>> I used (symlink ...), added a FIXME, rebuilt to see if it worked, and
>> pushed in 491dc2fb1.
>
> Did you successfully build icedtea@2 with this commit?  I failed to
> build it on my laptop:
>
> --8<---cut here---start->8---
> …
> make[5]: Leaving directory 
> '/tmp/guix-build-icedtea-2.6.11.drv-0/icedtea-2.6.11/openjdk/jdk/make/java/version'
> make[5]: Entering directory 
> '/tmp/guix-build-icedtea-2.6.11.drv-0/icedtea-2.6.11/openjdk/jdk/make/java/jvm'
> logname: no login name
> INFO: ENABLE_FULL_DEBUG_SYMBOLS=1
> INFO: 
> ALT_OBJCOPY=/gnu/store/nnykzgwfy8mwh2gmxm715sjxykg8qjwn-binutils-2.28/bin/objcopy
> INFO: /gnu/store/nnykzgwfy8mwh2gmxm715sjxykg8qjwn-binutils-2.28/bin/objcopy 
> cmd found so will create .debuginfo files.
> INFO: STRIP_POLICY=no_strip
> INFO: ZIP_DEBUGINFO_FILES=1
> /gnu/store/m9l0j7apf9ac7shqwi5sh4hsn12x4dnk-coreutils-8.27/bin/mkdir -p 
> /tmp/guix-build-icedtea-2.6.11.drv-0/icedtea-2.6.11/openjdk.build/include
> rm -f 
> /tmp/guix-build-icedtea-2.6.11.drv-0/icedtea-2.6.11/openjdk.build/include/jni.h
> /gnu/store/m9l0j7apf9ac7shqwi5sh4hsn12x4dnk-coreutils-8.27/bin/cp 
> ../../../src/share/javavm/export/jni.h 
> /tmp/guix-build-icedtea-2.6.11.drv-0/icedtea-2.6.11/openjdk.build/include/jni.h
> make[5]: *** No rule to make target 
> '/tmp/guix-build-icedtea-2.6.11.drv-0/icedtea-2.6.11/openjdk.build/include/linux/jni_md.h',
>  needed by 'build'.  Stop.
> make[5]: Leaving directory 
> '/tmp/guix-build-icedtea-2.6.11.drv-0/icedtea-2.6.11/openjdk/jdk/make/java/jvm'
> make[4]: *** [Makefile:63: all] Error 1
> make[4]: Leaving directory 
> '/tmp/guix-build-icedtea-2.6.11.drv-0/icedtea-2.6.11/openjdk/jdk/make/java'
> make[3]: *** [Makefile:253: all] Error 1
> make[3]: Leaving directory 
> '/tmp/guix-build-icedtea-2.6.11.drv-0/icedtea-2.6.11/openjdk/jdk/make'
> make[2]: *** [make/jdk-rules.gmk:93: jdk-build] Error 2
> make[2]: Leaving directory 
> '/tmp/guix-build-icedtea-2.6.11.drv-0/icedtea-2.6.11/openjdk'
> make[1]: *** [Makefile:251: build_product_image] Error 2
> make[1]: Leaving directory 
> '/tmp/guix-build-icedtea-2.6.11.drv-0/icedtea-2.6.11/openjdk'
> make: *** [Makefile:2463: stamps/icedtea.stamp] Error 2
> phase `build' failed after 12663.3 seconds
> …
> --8<---cut here---end--->8---

Yes, I built icedtea@2 successfully with this commit:
~ λ guix build icedtea@2 --no-grafts
/gnu/store/cbbn89cggf86fq57h7ya7jb70qckq49j-icedtea-2.6.11-doc
/gnu/store/xcaxjgafjip9pkfrnnrj18wfyykyjcrw-icedtea-2.6.11-jdk
/gnu/store/vk6llk5zmvwysc9jcixj7hvxprazmri0-icedtea-2.6.11

And to confirm:
~ λ ls -lh 
/gnu/store/vk6llk5zmvwysc9jcixj7hvxprazmri0-icedtea-2.6.11/lib/amd64/ | grep 
libjvm
lrwxrwxrwx   2 root   root   85 1970-01-01  1970 libjvm.so -> 
/gnu/store/vk6llk5zmvwysc9jcixj7hvxprazmri0-icedtea-2.6.11/lib/amd64/server/libjvm.so

Kind regards,
Roel Janssen



Re: IcedTea is not linking correctly with libjvm.so

2017-10-30 Thread Roel Janssen

Ricardo Wurmus writes:

> Roel Janssen <r...@gnu.org> writes:
>
>>  gnu/packages/java.scm | 12 
>>  1 file changed, 12 insertions(+)
>>
>> diff --git a/gnu/packages/java.scm b/gnu/packages/java.scm
>> index 95fba20e8..81cfdc132 100644
>> --- a/gnu/packages/java.scm
>> +++ b/gnu/packages/java.scm
>> @@ -1404,6 +1404,18 @@ bootstrapping purposes.")
>>   (copy-recursively "openjdk.build/j2re-image" jre)
>>   (copy-recursively "openjdk.build/j2sdk-image" jdk))
>> #t))
>> +   ;; Some of the libraries in the lib/amd64 folder link to 
>> libjvm.so.  But that
>> +   ;; shared object is located in the server/ folder, so it cannot 
>> be found.
>> +   ;; This phase creates a symbolic link in the lib/amd64 folder so 
>> that the
>> +   ;; other libraries can find it.
>> +   ;;
>> +   ;; See 
>> https://lists.gnu.org/archive/html/guix-devel/2017-10/msg00169.html
>> +   (add-after 'install 'install-libjvm
>> + (lambda* (#:key inputs outputs #:allow-other-keys)
>> +   (let* ((lib-path (string-append (assoc-ref outputs "out") 
>> "/lib/amd64")))
>> + (system* "ln" "--symbolic"
>> +  (string-append lib-path "/server/libjvm.so")
>> +  (string-append lib-path "/libjvm.so")
>
> Please use (symlink foo bar) instead of calling the “ln” tool.  Also end
> the phase with #t.
>
> Other than that I think it’s fine as a workaround.  Please also add a
> FIXME to the comment, so that we can revisit this later.
>
> Thanks!

I used (symlink ...), added a FIXME, rebuilt to see if it worked, and
pushed in 491dc2fb1.
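
For reference, with those changes the phase should look roughly like this
(reconstructed from the patch earlier in the thread; the authoritative
version is what commit 491dc2fb1 contains):

   ;; FIXME: Some of the libraries in lib/amd64 link against libjvm.so,
   ;; which lives in lib/amd64/server; create a symlink so they can find
   ;; it.  See
   ;; <https://lists.gnu.org/archive/html/guix-devel/2017-10/msg00169.html>.
   (add-after 'install 'install-libjvm
     (lambda* (#:key inputs outputs #:allow-other-keys)
       (let* ((lib-path (string-append (assoc-ref outputs "out")
                                       "/lib/amd64")))
         (symlink (string-append lib-path "/server/libjvm.so")
                  (string-append lib-path "/libjvm.so")))
       #t))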

Thanks!

Kind regards,
Roel Janssen





Re: IcedTea is not linking correctly with libjvm.so

2017-10-24 Thread Roel Janssen

Ludovic Courtès writes:

> Hi!
>
> Roel Janssen <r...@gnu.org> skribis:
>
>> Looking into this shared object, I found that it cannot find libjvm.so:
>> $ ldd 
>> /gnu/store/q9ad5zvxpm2spiddcj01sw3jkm5vpgva-icedtea-3.5.1/lib/amd64/libnet.so
>> linux-vdso.so.1 =>  (0x7ffe355ab000)
>> libdl.so.2 => 
>> /gnu/store/20jhhjzgyqkiw1078cyy3891amqm8d4f-glibc-2.25/lib/libdl.so.2 
>> (0x7f3984931000)
>> libjvm.so => not found
>> libpthread.so.0 => 
>> /gnu/store/20jhhjzgyqkiw1078cyy3891amqm8d4f-glibc-2.25/lib/libpthread.so.0 
>> (0x7f39846f8000)
>> libjava.so => 
>> /gnu/store/q9ad5zvxpm2spiddcj01sw3jkm5vpgva-icedtea-3.5.1/lib/amd64/libjava.so
>>  (0x7f39844cc000)
>> libgcc_s.so.1 => 
>> /gnu/store/0ss2akh5grfdfqnik6mm3lj4yyyb08np-gcc-5.4.0-lib/lib/libgcc_s.so.1 
>> (0x7f39842b4000)
>> libc.so.6 => 
>> /gnu/store/20jhhjzgyqkiw1078cyy3891amqm8d4f-glibc-2.25/lib/libc.so.6 
>> (0x7f3983f15000)
>> /lib64/ld-linux-x86-64.so.2 (0x7f3984d4d000)
>> libjvm.so => not found
>> libverify.so => 
>> /gnu/store/q9ad5zvxpm2spiddcj01sw3jkm5vpgva-icedtea-3.5.1/lib/amd64/libverify.so
>>  (0x7f3983d05000)
>> libjvm.so => not found
>
> Oh, bad!  :-)
>
> The package has this:
>
>;; The DSOs use $ORIGIN to refer to each other, but (guix build
>;; gremlin) doesn't support it yet, so skip this phase.
>#:validate-runpath? #f
>
> The comment was first added in fb799cb72e, when it was true, but shortly
> after (guix build gremlin) gained support for that.  So we should
> probably set this to #t once the package is fixed.
>
> Ludo’.

With the patch I submitted earlier today, I tried to set the
#:validate-runpath? to #t, but the build failed.

It seems there are more libraries with linking errors:
$ ldd 
/gnu/store/fy8krxbj60z9kx9hwsp4f08qjgr2cb20-icedtea-2.6.11/lib/amd64/xawt/libmawt.so
linux-vdso.so.1 (0x7ffc94ed3000)
...
libawt.so => not found
libjava.so => not found
libjvm.so => not found
...

So, again libjvm.so, because it is in a different directory.
libmawt.so is used by libjawt.so (and libjawt.so cannot find libmawt.so
either).

So there's definitely more to investigate/fix here.

Nevertheless, my immediate need is to fix that libjvm.so linkage
error.  So I would like to go ahead with the patch I submitted
earlier.

I am investigating the remaining linkage errors and whether there's a
better fix than creating symbolic links.

Kind regards,
Roel Janssen



Re: IcedTea is not linking correctly with libjvm.so

2017-10-24 Thread Roel Janssen

Chris Marusich writes:

> Roel Janssen <r...@gnu.org> writes:
>
>> Chris Marusich writes:
>>
>>> Roel Janssen <r...@gnu.org> writes:
>>>
>>>> 1. Fix the recipe to make sure libjvm.so is found, and thus libnet.so is
>>>> linked correctly.
>>>>
>>>> 2. Copy or make a symlink of libjvm.so to the parent directory
>>>>(lib/amd64), where the other libraries are.  Maybe then libnet.so can
>>>>find libjvm.so.
>>>
>>> (1) seems better than (2) if possible, but either of those solutions
>>> seem OK to me.  But to be honest, I don't understand why this isn't a
>>> problem for icedtea outside of Guix.  What are we doing that is
>>> different which prevents the library from being found?
>>
>> Thanks for your reply!
>>
>> I agree that (1) would be better than (2).  The only problem I see with
>> this is that I don't see how to achieve (1), but I do see how to achieve
>> (2).
>>
>> I tried running that Java app with CentOS's Java (openjdk 1.7.0), and it
>> has the exact same problem:
>>
>> $ ldd /usr/lib/jvm/java-1.7.0-openjdk/jre/lib/amd64/libnet.so 
>> linux-vdso.so.1 =>  (0x7ffcc153d000)
>> libjvm.so => not found
>> libpthread.so.0 => /lib64/libpthread.so.0 (0x7f0b70d7e000)
>> libgconf-2.so.4 => /lib64/libgconf-2.so.4 (0x7f0b70b4d000)
>> libglib-2.0.so.0 => /lib64/libglib-2.0.so.0 (0x7f0b70816000)
>> libgobject-2.0.so.0 => /lib64/libgobject-2.0.so.0 
>> (0x7f0b705c5000)
>> libgio-2.0.so.0 => /lib64/libgio-2.0.so.0 (0x7f0b70245000)
>> libjava.so => 
>> /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.121-2.6.8.0.el7_3.x86_64/jre/lib/amd64/./libjava.so
>>  (0x7f0b70019000)
>> libc.so.6 => /lib64/libc.so.6 (0x7f0b6fc57000)
>> /lib64/ld-linux-x86-64.so.2 (0x7f0b711ce000)
>> libgmodule-2.0.so.0 => /lib64/libgmodule-2.0.so.0 
>> (0x7f0b6fa53000)
>> libdbus-glib-1.so.2 => /lib64/libdbus-glib-1.so.2 
>> (0x7f0b6f82b000)
>> libdbus-1.so.3 => /lib64/libdbus-1.so.3 (0x7f0b6f5e2000)
>> libffi.so.6 => /lib64/libffi.so.6 (0x7f0b6f3da000)
>> libdl.so.2 => /lib64/libdl.so.2 (0x7f0b6f1d6000)
>> libz.so.1 => /lib64/libz.so.1 (0x7f0b6efbf000)
>> libselinux.so.1 => /lib64/libselinux.so.1 (0x7f0b6ed98000)
>> libresolv.so.2 => /lib64/libresolv.so.2 (0x7f0b6eb7e000)
>> libjvm.so => not found
>> libverify.so => 
>> /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.121-2.6.8.0.el7_3.x86_64/jre/lib/amd64/./libverify.so
>>  (0x7f0b6e96e000)
>> librt.so.1 => /lib64/librt.so.1 (0x7f0b6e765000)
>> libpcre.so.1 => /lib64/libpcre.so.1 (0x7f0b6e504000)
>> libjvm.so => not found
>
> Can you share a minimal program that reproduces the issue?  If it
> happens on Guix's Java, built with Icedtea, and also on another
> distro's, too, then maybe it's a genuine bug that can be fixed upstream.

Unfortunately, I don't have a simple example to reproduce it.  However,
it looks a lot like this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1212151

The program producing the error can be found here:
https://github.com/hartwigmedical/hmftools

For which I have a package here:
https://github.com/UMCUGenetics/guix-additions/blob/master/umcu/packages/hmf.scm#L117

However, it needs various data inputs before the tool will run.  Hence
the difficulty of making a simple example to reproduce it.

I attached a patch for solution (2), which seems to work.  Running ldd
on libnet.so in the output directory:
$ ldd /gnu/store/0p35h1dq956h4axal8cc9as1y7qxchqv-icedtea-2.6.11/lib/amd64/libnet.so
linux-vdso.so.1 (0x7ffdda9ab000)
libjvm.so => 
/gnu/store/0p35h1dq956h4axal8cc9as1y7qxchqv-icedtea-2.6.11/lib/amd64/./libjvm.so
 (0x7f2def98c000)
libpthread.so.0 => 
/gnu/store/n6nvxlk2j8ysffjh3jphn1k5silnakh6-glibc-2.25/lib/libpthread.so.0 
(0x7f2def76e000)
libdl.so.2 => 
/gnu/store/n6nvxlk2j8ysffjh3jphn1k5silnakh6-glibc-2.25/lib/libdl.so.2 
(0x7f2def56a000)
libgio-2.0.so.0 => 
/gnu/store/qzmyyj0jx6n14vsffa66jgsnnvwhby3n-glib-2.52.3/lib/libgio-2.0.so.0 
(0x7f2def1d3000)
libgobject-2.0.so.0 => 
/gnu/store/qzmyyj0jx6n14vsffa66jgsnnvwhby3n-glib-2.52.3/lib/libgobject-2.0.so.0 
(0x7f2deef81000)
libglib-2.0.so.0 => 
/gnu/store/qzmyyj0jx6n14vsffa66jgsnnvwhby3n-glib-2.52.3/lib/libglib-2.0.so.0 
(0x7f2deec6e000)
libjava.so => 
/gnu/store/0p35h1dq956h4axal8cc9as1y7qxchqv-ice

Re: IcedTea is not linking correctly with libjvm.so

2017-10-22 Thread Roel Janssen

Chris Marusich writes:

> Roel Janssen <r...@gnu.org> writes:
>
>> 1. Fix the recipe to make sure libjvm.so is found, and thus libnet.so is
>> linked correctly.
>>
>> 2. Copy or make a symlink of libjvm.so to the parent directory
>>(lib/amd64), where the other libraries are.  Maybe then libnet.so can
>>find libjvm.so.
>
> (1) seems better than (2) if possible, but either of those solutions
> seem OK to me.  But to be honest, I don't understand why this isn't a
> problem for icedtea outside of Guix.  What are we doing that is
> different which prevents the library from being found?

Thanks for your reply!

I agree that (1) would be better than (2).  The only problem I see with
this is that I don't see how to achieve (1), but I do see how to achieve
(2).

I tried running that Java app with CentOS's Java (openjdk 1.7.0), and it
has the exact same problem:

$ ldd /usr/lib/jvm/java-1.7.0-openjdk/jre/lib/amd64/libnet.so 
linux-vdso.so.1 =>  (0x7ffcc153d000)
libjvm.so => not found
libpthread.so.0 => /lib64/libpthread.so.0 (0x7f0b70d7e000)
libgconf-2.so.4 => /lib64/libgconf-2.so.4 (0x7f0b70b4d000)
libglib-2.0.so.0 => /lib64/libglib-2.0.so.0 (0x7f0b70816000)
libgobject-2.0.so.0 => /lib64/libgobject-2.0.so.0 (0x7f0b705c5000)
libgio-2.0.so.0 => /lib64/libgio-2.0.so.0 (0x7f0b70245000)
libjava.so => 
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.121-2.6.8.0.el7_3.x86_64/jre/lib/amd64/./libjava.so
 (0x7f0b70019000)
libc.so.6 => /lib64/libc.so.6 (0x7f0b6fc57000)
/lib64/ld-linux-x86-64.so.2 (0x7f0b711ce000)
libgmodule-2.0.so.0 => /lib64/libgmodule-2.0.so.0 (0x7f0b6fa53000)
libdbus-glib-1.so.2 => /lib64/libdbus-glib-1.so.2 (0x7f0b6f82b000)
libdbus-1.so.3 => /lib64/libdbus-1.so.3 (0x7f0b6f5e2000)
libffi.so.6 => /lib64/libffi.so.6 (0x7f0b6f3da000)
libdl.so.2 => /lib64/libdl.so.2 (0x7f0b6f1d6000)
libz.so.1 => /lib64/libz.so.1 (0x7f0b6efbf000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x7f0b6ed98000)
libresolv.so.2 => /lib64/libresolv.so.2 (0x7f0b6eb7e000)
libjvm.so => not found
libverify.so => 
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.121-2.6.8.0.el7_3.x86_64/jre/lib/amd64/./libverify.so
 (0x7f0b6e96e000)
librt.so.1 => /lib64/librt.so.1 (0x7f0b6e765000)
libpcre.so.1 => /lib64/libpcre.so.1 (0x7f0b6e504000)
libjvm.so => not found

>
> Solution (3) feels like more of a hack than (2), so I'm not sure about
> it.  Maybe others have other opinions?

I agree that we should avoid solution (3).

Kind regards,
Roel Janssen



IcedTea is not linking correctly with libjvm.so

2017-10-21 Thread Roel Janssen
Dear Guix,

I ran into a problem with a Java program running with Guix's icedtea-3.
The error message looks like this:

Exception in thread "main" java.lang.UnsatisfiedLinkError: 
/gnu/store/q9ad5zvxpm2spiddcj01sw3jkm5vpgva-icedtea-3.5.1/lib/amd64/libnet.so: 
/gnu/store/q9ad5zvxpm2spiddcj01sw3jkm5vpgva-icedtea-3.5.1/lib/amd64/libnet.so: 
failed to map segment from shared object
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1845)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
at java.net.InetAddress$1.run(InetAddress.java:294)
at java.net.InetAddress$1.run(InetAddress.java:292)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.InetAddress.(InetAddress.java:291)
at 
org.apache.logging.log4j.core.util.NetUtils.getLocalHostname(NetUtils.java:53)
at 
org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:539)
at 
org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:617)
at 
org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:634)
at 
org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:229)
at 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:152)
at 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
at org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:551)
...

Looking into this shared object, I found that it cannot find libjvm.so:
$ ldd 
/gnu/store/q9ad5zvxpm2spiddcj01sw3jkm5vpgva-icedtea-3.5.1/lib/amd64/libnet.so
linux-vdso.so.1 =>  (0x7ffe355ab000)
libdl.so.2 => 
/gnu/store/20jhhjzgyqkiw1078cyy3891amqm8d4f-glibc-2.25/lib/libdl.so.2 
(0x7f3984931000)
libjvm.so => not found
libpthread.so.0 => 
/gnu/store/20jhhjzgyqkiw1078cyy3891amqm8d4f-glibc-2.25/lib/libpthread.so.0 
(0x7f39846f8000)
libjava.so => 
/gnu/store/q9ad5zvxpm2spiddcj01sw3jkm5vpgva-icedtea-3.5.1/lib/amd64/libjava.so 
(0x7f39844cc000)
libgcc_s.so.1 => 
/gnu/store/0ss2akh5grfdfqnik6mm3lj4yyyb08np-gcc-5.4.0-lib/lib/libgcc_s.so.1 
(0x7f39842b4000)
libc.so.6 => 
/gnu/store/20jhhjzgyqkiw1078cyy3891amqm8d4f-glibc-2.25/lib/libc.so.6 
(0x7f3983f15000)
/lib64/ld-linux-x86-64.so.2 (0x7f3984d4d000)
libjvm.so => not found
libverify.so => 
/gnu/store/q9ad5zvxpm2spiddcj01sw3jkm5vpgva-icedtea-3.5.1/lib/amd64/libverify.so
 (0x7f3983d05000)
libjvm.so => not found

It seems that libjvm.so is in the lib/amd64/server folder of the icedtea
package, so after setting LD_LIBRARY_PATH, the application runs fine:
$ export 
LD_LIBRARY_PATH=/gnu/store/q9ad5zvxpm2spiddcj01sw3jkm5vpgva-icedtea-3.5.1/lib/amd64/server

I looked into the package recipe, but I cannot find a place where
a change in the compilation process could fix this problem.  My only
guess is that in the 'patch-jni-libs phase, we could change the way
dynamically loaded libraries are found.

The way I see it, we could do three things:
1. Fix the recipe to make sure libjvm.so is found, and thus libnet.so is
linked correctly.

2. Copy or make a symlink of libjvm.so to the parent directory
   (lib/amd64), where the other libraries are.  Maybe then libnet.so can
   find libjvm.so.

3. Propagate LD_LIBRARY_PATH with a path to lib/amd64/server.  It would
   work like in my quick test, but this is my least favorite solution.

Which way would be preferable?  I can prepare a patch for option 2, and
see if that works.  But maybe option 1 would be better.

Thanks for your time.

Kind regards,
Roel Janssen



Re: Search paths in packages

2017-10-09 Thread Roel Janssen

Ludovic Courtès writes:

> Hi Roel,
>
> Roel Janssen <r...@gnu.org> skribis:
>
>> I have a question about how search paths are handled in Guix.
>> So, I have a package that tries to set an environment variable:
>> DRMAA_LIBRARY_PATH.
>>
>> So, I tried:
>> (native-search-paths
>>  (list (search-path-specification
>> (variable "DRMAA_LIBRARY_PATH")
>> (files '("lib/libdrmaa.so")
>
> If you want to match a regular file instead of a directory (the
> default), you must write:
>
>   (search-path-specification
> (variable "DRMAA_LIBRARY_PATH")
> (files '("lib/libdrmaa.so"))
> (file-type 'regular))
>
> This will match all the lib/libdrmaa.so files found in the environment.
>
>> But after running:
>> $ guix environment --container --ad-hoc  bash coreutils
>>
>> ... the DRMAA_LIBRARY_PATH is not set.
>
> That’s because none of the packages listed after --ad-hoc contains a
> lib/libdrmaa.so file.
>
> You can do this experiment with GIT_SSL_CAINFO:
>
> --8<---cut here---start->8---
> $ guix environment -C --ad-hoc git coreutils -- env |grep GIT
> GIT_EXEC_PATH=/gnu/store/m5baadh2m4kgvzgxc5m3phw9f6pyhwnv-profile/libexec/git-core
> $ guix environment -C --ad-hoc git coreutils nss-certs -- env |grep GIT
> GIT_SSL_CAINFO=/gnu/store/x6f5ywznnjzwa81a3g7rcs5riippx2zh-profile/etc/ssl/certs/ca-certificates.crt
> GIT_EXEC_PATH=/gnu/store/x6f5ywznnjzwa81a3g7rcs5riippx2zh-profile/libexec/git-core
> --8<---cut here---end--->8---
>
> In the first run, there was no etc/ssl/ca-certificates.crt file, so
> GIT_SSL_CAINFO was undefined.
>
> HTH!
>
> Ludo’.

Thanks for explaining this!  Now I've got it working as I intended it,
indeed.
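
For the record, the working specification is just the original one plus the
'regular file type — something along these lines (the placement inside the
grid-engine-core package definition is assumed):

   (native-search-paths
    (list (search-path-specification
           (variable "DRMAA_LIBRARY_PATH")
           (files '("lib/libdrmaa.so"))
           (file-type 'regular))))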

Thanks!

Kind regards,
Roel Janssen



Search paths in packages

2017-10-09 Thread Roel Janssen
Dear Guix,

I have a question about how search paths are handled in Guix.
So, I have a package that tries to set an environment variable:
DRMAA_LIBRARY_PATH.

So, I tried:
(native-search-paths
 (list (search-path-specification
(variable "DRMAA_LIBRARY_PATH")
(files '("lib/libdrmaa.so")

But after running:
$ guix environment --container --ad-hoc  bash coreutils

... the DRMAA_LIBRARY_PATH is not set.

However, the manifest contains the search path.  From inside the container:
$ cat $GUIX_ENVIRONMENT/manifest
(manifest
  (version 3)
  (packages
(("coreutils"
  "8.27"
  "out"
  "/gnu/store/m9l0j7apf9ac7shqwi5sh4hsn12x4dnk-coreutils-8.27"
  (propagated-inputs ())
  (search-paths ()))
 ("bash"
  "4.4.12"
  "out"
  "/gnu/store/b7y66db86k57vbb03nr4bfn9svmks4gf-bash-4.4.12"
  (propagated-inputs ())
  (search-paths
(("BASH_LOADABLES_PATH"
  ("lib/bash")
  ":"
  directory
  #f
 ("grid-engine-core"
  "8.1.9"
  "out"
  "/gnu/store/jw80iilm964q2y0krnc1r67fxi07fix2-grid-engine-core-8.1.9"
  (propagated-inputs ())
  (search-paths
(("DRMAA_LIBRARY_PATH"
  ("lib/libdrmaa.so")
  ":"
  directory
  #f)))

Why isn't DRMAA_LIBRARY_PATH set in this case?

Kind regards,
Roel Janssen



Re: Performance regression on NFS with new manifest version

2017-09-22 Thread Roel Janssen

Ludovic Courtès writes:

> Hello Roel,
>
> I’m finally going back to this issue…
>
> Roel Janssen <r...@gnu.org> skribis:
>
>> Ludovic Courtès writes:
>>
>>> Hi,
>>>
>>> Roel Janssen <r...@gnu.org> skribis:
>>>
>>>> Ricardo Wurmus writes:
>>>>
>>>>> Hi Roel,
>>>>>
>>>>>> Looking into the manifests ($GENERATION_15/manifest and
>>>>>> $GENERATION_16/manifest), I noticed that generation 15 uses manifest
>>>>>> version 2, and generation 16 uses manifest version 3.
>>>>>>
>>>>>> What has changed, so that it takes a lot longer to run the same command
>>>>>> as before?  (this is probably disk-related, because that is a known
>>>>>> cause for trouble on network-mounted stores..)
>>>>>
>>>>> Commit 55b4715fd4c03e46501f123c5c9bc6072edf12a4 bumped the manifest
>>>>> version to 3.  The goal was to represent propagated inputs as manifest
>>>>> entries so that we can anticipate conflicts from propagated inputs and
>>>>> refuse to build a profile when there would be conflicts.
>>>>
>>>> Thanks for pointing to that commit.  It's much better this way. :-)
>>>>
>>>> So, what makes 'guix package --search-paths' so slow?  It doesn't have
>>>> to check for conflicts because that's already done on profile creation
>>>> time.  All it has to do is combine the search-path data and output
>>>> that..
>>>
>>> Could it be that you have lots of propagated inputs in your profile
>>> (Python, etc.)?  Are you sure (per ‘strace’) that it has to do with file
>>> system accesses?
>>
>> Yes, I have lots of propagated inputs in that profile (R packages..).
>>
>> I haven't checked with strace, but everything else on this machine is
>> fast and plenty (24 cores, 128GB ram).  The only troublesome thing is
>> the NFS-mounted store.
>
> I tried to reproduce the problem like this:
>
> --8<---cut here---start->8---
> $ ./pre-inst-env guix package -p foo -i r $(guix package -A '^r-a'|cut -f1)
> guix package: warning: Your Guix installation is 10 days old.
> guix package: warning: Consider running 'guix pull' followed by
> 'guix package -u' to get up-to-date packages and security updates.
>
> The following packages will be installed:
>r  3.4.1   /gnu/store/k0q4b6nq1cdyfh3267nmgkwspf7hv6pb-r-3.4.1
>r-acepack  1.4.1   
> /gnu/store/2lnpmwk5n3g2567q0rj1cz2hfwmcaj4v-r-acepack-1.4.1
>r-acsnminer0.16.8.25   
> /gnu/store/dv7mrnh8nm0cga5caqay5hmx4cc5355a-r-acsnminer-0.16.8.25
>r-adaptivesparsity 1.4 
> /gnu/store/vkiq4knbqm1rm6hsnbkq4ad0pgdsr653-r-adaptivesparsity-1.4
>r-ade4 1.7-8   /gnu/store/3455karydz6sfn5d78r088f812w5z99y-r-ade4-1.7-8
>r-affy 1.54.0  
> /gnu/store/zv5fj0c5gdw27carm7mdvyisdnrnirl9-r-affy-1.54.0
>r-affyio   1.46.0  
> /gnu/store/xgd50xlbq6nsjjk628y1n8scf4hwrrd8-r-affyio-1.46.0
>r-annotate 1.54.0  
> /gnu/store/qbbwjr7k12ss32d7545pwr31lhg1ng07-r-annotate-1.54.0
>r-annotationdbi1.38.2  
> /gnu/store/vcmxw3plg8yygp58q4crr85710hr9mqk-r-annotationdbi-1.38.2
>r-annotationfilter 1.0.0   
> /gnu/store/w2n1lz9051fhchc8v2c6yyq7dv7bsxgh-r-annotationfilter-1.0.0
>r-annotationforge  1.18.1  
> /gnu/store/9vmr1lfx4g6v3h85d8jgaqn7cs2by8wy-r-annotationforge-1.18.1
>r-annotationhub2.8.2   
> /gnu/store/xyanxcl5dgj8fq415dr4ihm1prlsrz03-r-annotationhub-2.8.2
>r-ape  4.1 /gnu/store/0cy2kw2vifirbhp399db8lndfkyqgjva-r-ape-4.1
>r-aroma-light  3.6.0   
> /gnu/store/rq0xawz5l9nq07cxz9kisk63yxasfcnl-r-aroma-light-3.6.0
>r-assertthat   0.2.0   
> /gnu/store/dqq16x94nwkgyrliavf0w6gnhp1vxbha-r-assertthat-0.2.0
>r-auc  0.3.0   /gnu/store/fmzah3jzzy756sh4dir78nb8kcrn25pn-r-auc-0.3.0
>
> [...]
>
> The following environment variable definitions may be needed:
>export PATH="foo/bin${PATH:+:}$PATH"
>export R_LIBS_SITE="foo/site-library/${R_LIBS_SITE:+:}$R_LIBS_SITE"
> ludo@ribbon ~/src/guix$ time ./pre-inst-env guix package --search-paths -p foo
> export PATH="foo/bin"
> export R_LIBS_SITE="foo/site-library/"
>
> real  0m0.170s
> user  0m0.107s
> sys   0m0.016s
> ludo@ribbon ~/src/guix$ time ./pre-inst-env guix package --search-paths -p foo
> export PATH="foo/bin"
> export R_LIBS_SITE="foo/site-library/"
>
> real  0m0.158s
> user  0m0.097s
> sys   0m0.021s
> ludo@ribbon ~/src/guix$ ./pre-

Re: On packaging old versions of libraries

2017-08-23 Thread Roel Janssen

Mike Gerwitz writes:

> There is a game my kids love playing named Secret Maryo
> Chronicles.  Unfortunately, it's been unmaintained since 2012, and it
> was removed from Debian because it is no longer compatible with newer
> versions of libraries they package.[0]  There is a maintained fork of
> the game, but it's quite different from the original (intentionally).
>
> I have the option of compiling it using old libraries (I would have to
> compile the old libraries' dependencies as well, as needed), upgrade the
> game by backporting changes from the fork (which I honestly doubt I have
> the time for right now, but I'll look into it), or run the game within a
> VM/container running an old Debian version.
>
> I'm going to look into what is required to backport, but if I decided to
> go the first route, I would probably use Guix.  Would such a
> contribution be accepted considering it packages older libraries, which
> would add some cruft?  At the least, I would have to compile CEGUI0.7,
> but that might need older versions of libraries itself to compile.
>
>
> [0]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=812096

Regardless of whether older versions of libraries would be accepted
upstream, you can also keep them in a separate repository or directory
and use the environment variable GUIX_PACKAGE_PATH to include them in
your Guix.
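
As a sketch of what that can look like in practice — the module layout,
version, URL, hash, and license here are all hypothetical placeholders to
adapt — an out-of-tree file such as ~/guix-packages/old/cegui.scm could hold
the old release:

;; ~/guix-packages/old/cegui.scm  (the ~/guix-packages directory is what
;; GUIX_PACKAGE_PATH should point at)
(define-module (old cegui)
  #:use-module (guix packages)
  #:use-module (guix download)
  #:use-module (guix build-system gnu)
  #:use-module ((guix licenses) #:prefix license:))

(define-public cegui-0.7
  (package
    (name "cegui")
    (version "0.7.9")                             ;placeholder version
    (source (origin
              (method url-fetch)
              (uri (string-append "http://example.org/CEGUI-"
                                  version ".tar.gz")) ;placeholder URL
              (sha256
               (base32
                ;; Placeholder hash.
                "0000000000000000000000000000000000000000000000000000"))))
    (build-system gnu-build-system)
    (home-page "http://cegui.org.uk/")
    (synopsis "Old CEGUI release kept out of tree")
    (description "Old CEGUI release kept around for a game that needs it.")
    (license license:expat)))                     ;placeholder license

With GUIX_PACKAGE_PATH pointing at ~/guix-packages, 'guix build cegui@0.7'
should then pick up this definition, and any old dependencies the library
needs can be pinned as additional packages in the same module.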

Kind regards,
Roel Janssen



Re: Performance regression on NFS with new manifest version

2017-08-22 Thread Roel Janssen

Ludovic Courtès writes:

> Hi,
>
> Roel Janssen <r...@gnu.org> skribis:
>
>> Ricardo Wurmus writes:
>>
>>> Hi Roel,
>>>
>>>> Looking into the manifests ($GENERATION_15/manifest and
>>>> $GENERATION_16/manifest), I noticed that generation 15 uses manifest
>>>> version 2, and generation 16 uses manifest version 3.
>>>>
>>>> What has changed, so that it takes a lot longer to run the same command
>>>> as before?  (this is probably disk-related, because that is a known
>>>> cause for trouble on network-mounted stores..)
>>>
>>> Commit 55b4715fd4c03e46501f123c5c9bc6072edf12a4 bumped the manifest
>>> version to 3.  The goal was to represent propagated inputs as manifest
>>> entries so that we can anticipate conflicts from propagated inputs and
>>> refuse to build a profile when there would be conflicts.
>>
>> Thanks for pointing to that commit.  It's much better this way. :-)
>>
>> So, what makes 'guix package --search-paths' so slow?  It doesn't have
>> to check for conflicts because that's already done on profile creation
>> time.  All it has to do is combine the search-path data and output
>> that..
>
> Could it be that you have lots of propagated inputs in your profile
> (Python, etc.)?  Are you sure (per ‘strace’) that it has to do with file
> system accesses?

Yes, I have lots of propagated inputs in that profile (R packages..).

I haven't checked with strace, but everything else on this machine is
fast and plenty (24 cores, 128GB ram).  The only troublesome thing is
the NFS-mounted store.

>
> That could be a quadratic thing that popped up somewhere in that commit.
>
> Ludo’.

Thanks for looking at this message.

Kind regards,
Roel Janssen



Re: Performance regression on NFS with new manifest version

2017-08-10 Thread Roel Janssen

Ricardo Wurmus writes:

> Hi Roel,
>
>> Looking into the manifests ($GENERATION_15/manifest and
>> $GENERATION_16/manifest), I noticed that generation 15 uses manifest
>> version 2, and generation 16 uses manifest version 3.
>>
>> What has changed, so that it takes a lot longer to run the same command
>> as before?  (this is probably disk-related, because that is a known
>> cause for trouble on network-mounted stores..)
>
> Commit 55b4715fd4c03e46501f123c5c9bc6072edf12a4 bumped the manifest
> version to 3.  The goal was to represent propagated inputs as manifest
> entries so that we can anticipate conflicts from propagated inputs and
> refuse to build a profile when there would be conflicts.

Thanks for pointing to that commit.  It's much better this way. :-)

So, what makes 'guix package --search-paths' so slow?  It doesn't have
to check for conflicts because that's already done at profile creation
time.  All it has to do is combine the search-path data and output
that.

Anyway, I worked around it by using the $PROFILE/etc/profile file and
unsetting the environment variables before setting them, which takes
less than a second.

Kind regards,
Roel Janssen



Performance regression on NFS with new manifest version

2017-08-08 Thread Roel Janssen
Dear Guix,

On our cluster, the following command took less than a second to run:
guix package --search-paths

At some point, the same command took 30 seconds to run.

So, I tried previous versions of the profile and found generation 15
returned the search paths very quickly (as before), and generation 16
returned the search paths only after about 30 seconds (the regression).

Looking into the manifests ($GENERATION_15/manifest and
$GENERATION_16/manifest), I noticed that generation 15 uses manifest
version 2, and generation 16 uses manifest version 3.

What has changed, so that it takes a lot longer to run the same command
as before?  (this is probably disk-related, because that is a known
cause for trouble on network-mounted stores..)

Thanks for your time!

Kind regards,
Roel Janssen



Re: Xorg tearing fix on Intel HD Graphics 4000

2017-07-20 Thread Roel Janssen

Chris Marusich writes:

> Roel Janssen <r...@gnu.org> writes:
>
>> Chris Marusich writes:
>>
>>> Roel Janssen <r...@gnu.org> writes:
>>>
>>>> Ricardo Wurmus writes:
>>>>
>>>>> Hi Roel,
>>>>>
>>>>>> With the following patch to the Xorg configuration file, I have a
>>>>>> tear-free GuixSD experience.  I wonder if this is upstreameable in some
>>>>>> way.  This patch is probably too broad in effect.  Can I change it so
>>>>>> that only the graphics card I have will be affected by this patch?
>>>>>
>>>>> I’m not sure about this, but you can apply it only to your system by
>>>>> changing the slim-service’s “startx” value like this:
>>>>>
>>>>> --8<---cut here---start->8---
>>>>> (modify-services %desktop-services
>>>>>   (slim-service-type
>>>>>config => (slim-configuration
>>>>>   (inherit config)
>>>>>   (startx (xorg-start-command
>>>>>#:configuration-file
>>>>>(xorg-configuration-file
>>>>> #:extra-config
>>>>> (list your-fix)))
>>>>> --8<---cut here---end--->8---
>>>>>
>>>>> But I suppose what you want is to apply it unconditionally in Guix and
>>>>> have the X server ignore it for all but this one graphics card, right?
>>>>
>>>> No, not necessarily.  I could no longer do 'guix pull && guix system
>>>> reconfigure ...', which I attempted to solve by upstreaming this patch.
>>>
>>> Why wouldn't you be able to do a 'guix pull && guix system reconfigure'?
>>
>> Because that would build a system generation which doesn't contain the
>> patched Xorg config.  Ricardo's snippet solved that.
>>
>>>
>>>> I wonder if anyone else is having the same problem on this hardware..
>>>
>>> Yes, I have this problem.  I use a Lenovo X200.  Like Mark, graphical
>>> Emacs doesn't display characters right, and it's difficult to tell what
>>> the buffer actually contains, sometimes.  I've reconfigured my system to
>>> use the extra Xorg config you've provided in this thread, and I'll let
>>> you know in a week or two if it seems to have fixed the problem.
>>
>> Thanks.
>>
>> Kind regards,
>> Roel Janssen
>
> Just wanted to close the loop here: I have not had any tearing problems
> since applying the patch.  Sounds like the problem has been resolved
> through a slightly different means, though (with commit
> b049ae2f9708794f83c41171c19ffdfe4f11807e).  Accordingly, I've removed
> the extra xorg configuration from my operating system configuration file
> and simply reconfigured using the latest origin/master.

I also removed the extra Xorg configuration snippet from my system
configuration, and I too don't have the tearing problem anymore.

>
> Thank you for starting this discussion!  It's really nice to be able to
> use graphical emacs now without needing to frequently invoke M-x
> redraw-display.

Thanks for confirming that this issue has been fixed.

Kind regards,
Roel Janssen



Re: RPC pipelining

2017-07-12 Thread Roel Janssen

Ludovic Courtès writes:

> Hi Roel,
>
> Roel Janssen <r...@gnu.org> skribis:
>
>> substitute: guix substitute: warning: ACL for archive imports seems to be 
>> uninitialized, substitutes may be unavailable
>> substitute: ;;; Failed to autoload make-session in (gnutls):
>> substitute: ;;; ERROR: missing interface for module (gnutls)
>> substitute: Backtrace:
>> substitute:1 (primitive-load 
>> "/gnu/repositories/guix/scripts/guix")
>> substitute: In guix/ui.scm:
>> substitute:   1352:12  0 (run-guix-command _ . _)
>> substitute: 
>> substitute: guix/ui.scm:1352:12: In procedure run-guix-command:
>> substitute: guix/ui.scm:1352:12: In procedure module-lookup: Unbound 
>> variable: make-session
>> guix environment: error: build failed: writing to file: Broken pipe
>
> This is because ‘guix substitute’, called by ‘guix-daemon’, doesn’t find
> (gnutls) in its GUILE_LOAD_PATH.
>
> Use either “sudo -E ./pre-inst-env guix-daemon …” so that guix-daemon
> inherits GUILE_LOAD_PATH and GUILE_LOAD_COMPILED_PATH, or
> --no-substitutes.
>
> Thanks for testing!
>
> Ludo’.

Thanks for the pointer.  I think it was a bad idea to try it with the
pre-inst-env, because before it even got to this point I had to modify
the location of guile in the header of scripts/guix.

So, maybe I shouldn't be using the pre-inst-env at all and just
properly install the patched version.  I can always --roll-back anyway.

So I applied the patch, and ran:
make dist

Which produced a tarball.
I then modified the guix recipe to use the tarball instead of a git
checkout.  But unfortunately, building it is again troublesome:

...
  GUILEC   guix/scripts/import/gem.go
  GUILEC   guix/scripts/import/pypi.go
  GUILEC   guix/scripts/import/stackage.go
  GUILEC   guix/ssh.go
  GUILEC   guix/scripts/copy.go
  GUILEC   guix/store/ssh.go
  GUILEC   guix/scripts/offload.go
  GUILEC   guix/config.go
  GUILEC   guix/tests.go
  GUILEC   guix/tests/http.go
;;; Failed to autoload make-page-map in (charting):
;;; ERROR: missing interface for module (charting)
;;; Failed to autoload make-page-map in (charting):
;;; ERROR: missing interface for module (charting)
;;; Failed to autoload read-pid-file in (shepherd service):
;;; ERROR: missing interface for module (shepherd service)
;;; Failed to autoload read-pid-file in (shepherd service):
;;; ERROR: missing interface for module (shepherd service)
;;; Failed to autoload exec-command in (shepherd service):
;;; ERROR: missing interface for module (shepherd service)
;;; Failed to autoload read-pid-file in (shepherd service):
;;; ERROR: missing interface for module (shepherd service)
random seed for tests: 1499856249
;;; Failed to autoload make-page-map in (charting):
;;; ERROR: missing interface for module (charting)
;;; Failed to autoload make-page-map in (charting):
;;; ERROR: missing interface for module (charting)
;;; Failed to autoload make-page-map in (charting):
;;; ERROR: missing interface for module (charting)
guix/scripts/size.scm:211:2: warning: possibly unbound variable `make-page-map'
;;; Failed to autoload make-page-map in (charting):
;;; ERROR: missing interface for module (charting)
;;; Failed to autoload make-page-map in (charting):
;;; ERROR: missing interface for module (charting)
;;; Failed to autoload read-pid-file in (shepherd service):
;;; ERROR: missing interface for module (shepherd service)
;;; Failed to autoload read-pid-file in (shepherd service):
;;; ERROR: missing interface for module (shepherd service)
;;; Failed to autoload exec-command in (shepherd service):
;;; ERROR: missing interface for module (shepherd service)
;;; Failed to autoload read-pid-file in (shepherd service):
;;; ERROR: missing interface for module (shepherd service)
;;; Failed to autoload read-pid-file in (shepherd service):
;;; ERROR: missing interface for module (shepherd service)
;;; Failed to autoload exec-command in (shepherd service):
;;; ERROR: missing interface for module (shepherd service)
;;; Failed to autoload read-pid-file in (shepherd service):
;;; ERROR: missing interface for module (shepherd service)
gnu/build/shepherd.scm:98:13: warning: possibly unbound variable `read-pid-file'
gnu/build/shepherd.scm:159:32: warning: possibly unbound variable `exec-command'
gnu/build/shepherd.scm:170:14: warning: possibly unbound variable 
`read-pid-file'
;;; Failed to autoload read-pid-file in (shepherd service):
;;; ERROR: missing interface for module (shepherd service)
;;; Failed to autoload exec-command in (shepherd service):
;;; ERROR: missing interface for module (shepherd service)
;;; Failed to autoload read-pid-file in (shepherd service):
;;; ERROR: missing interface for module (shepherd service)
;;; Failed to autoload read-pid-file in (shepherd service):
;;; ERROR: missing interface for module (shepherd service)
;;; Failed to autoload exec-command in (sh

Re: RPC pipelining

2017-07-11 Thread Roel Janssen
Hello Ludo’!

Thanks for working so hard on this.
I run into trouble with my test setup..

[roel@hpcguix ~]$ time ./guixr environment --ad-hoc coreutils -- true

;;; (flush-pending-rpcs 170)

;;; (flush-pending-rpcs 4)
substitute: guix substitute: warning: ACL for archive imports seems to be 
uninitialized, substitutes may be unavailable
substitute: ;;; Failed to autoload make-session in (gnutls):
substitute: ;;; ERROR: missing interface for module (gnutls)
substitute: Backtrace:
substitute:1 (primitive-load "/gnu/repositories/guix/scripts/guix")
substitute: In guix/ui.scm:
substitute:   1352:12  0 (run-guix-command _ . _)
substitute: 
substitute: guix/ui.scm:1352:12: In procedure run-guix-command:
substitute: guix/ui.scm:1352:12: In procedure module-lookup: Unbound variable: 
make-session
guix environment: error: build failed: writing to file: Broken pipe

real0m8.679s
user0m1.199s
sys 0m0.202s


But FWIW, I think the delay before the "substitute: ..." output
appears is dramatically shorter.

I'll report back when I have a better testing environment ready.

Kind regards,
Roel Janssen


Ludovic Courtès writes:

> Hello Guix!
>
> One of the main sources of slowness when talking to a remote daemon, as
> with GUIX_DAEMON_SOCKET=guix://…, is the many RPCs that translate in
> lots of network round trips:
>
> --8<---cut here---start->8---
> $ GUIX_PROFILING=rpc ./pre-inst-env guix build inkscape -d --no-grafts
> /gnu/store/iymxyy5sn0qrkivppl6fn0javnmr3nss-inkscape-0.92.1.drv
> Remote procedure call summary: 1006 RPCs
>   built-in-builders  ... 1
>   add-to-store   ...   136
>   add-text-to-store  ...   869
> --8<---cut here---end--->8---
>
> In this example we’re making ~1,000 round trips; not good!
>
> Before changing the protocol, an idea that came to mind is to do “RPC
> pipelining”: send as many RPC requests at once, then read all the
> corresponding responses.
>
> It turns out to necessitate a small change in the daemon, though, but
> the attached patch demonstrates it: the client buffers all
> ‘add-text-to-store’ RPCs, and writes them all at once when another RPC
> is made (because other RPCs, which are not buffered, might depend on the
> effect of those ‘add-text-to-store’ RPCs) or when the connection is
> closed.  In practice, on the example above, it manages to buffer all 869
> RPCs and send them all at once.
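[To make the buffering just described concrete, here is a toy sketch in
plain Guile.  The names are hypothetical and this is not the actual
(guix store) code; it only mimics the queue-then-flush behaviour of the
patch below.]

(define pending-rpcs '())

(define (queue-rpc! send!)
  ;; Buffer SEND!, a thunk that would write one request to the daemon,
  ;; instead of sending it immediately.
  (set! pending-rpcs (cons send! pending-rpcs)))

(define (flush-pending-rpcs!)
  ;; Write every buffered request in submission order, in a single batch.
  (format #t ";;; (flush-pending-rpcs ~a)~%" (length pending-rpcs))
  (for-each (lambda (send!) (send!)) (reverse pending-rpcs))
  (set! pending-rpcs '()))

(define (blocking-rpc send!)
  ;; An RPC whose reply must be read right away: flush the queue first so
  ;; that earlier requests take effect before this one.
  (flush-pending-rpcs!)
  (send!))

;; Three 'add-text-to-store'-style requests collapse into one flush:
(queue-rpc! (lambda () (display "add-text-to-store #1\n")))
(queue-rpc! (lambda () (display "add-text-to-store #2\n")))
(queue-rpc! (lambda () (display "add-text-to-store #3\n")))
(blocking-rpc (lambda () (display "build-things\n")))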
>
> To estimate the effectiveness of this approach, I introduced delay on
> the loopback device with tc-netem(8) and measured execution time (the
> first run uses pipelining, the second doesn’t):
>
> --8<---cut here---start->8---
> $ sudo tc qdisc add dev lo root netem delay 150ms
> $ time GUIX_DAEMON_SOCKET=guix://localhost ./pre-inst-env guix build inkscape 
> -d --no-grafts
> accepted connection from 127.0.0.1
> /gnu/store/iymxyy5sn0qrkivppl6fn0javnmr3nss-inkscape-0.92.1.drv
>
> ;;; (flush-pending-rpcs 869)
>
> real  0m47.796s
> user  0m1.307s
> sys   0m0.056s
> $ time GUIX_DAEMON_SOCKET=guix://localhost guix build inkscape -d --no-grafts
> accepted connection from 127.0.0.1
> /gnu/store/iymxyy5sn0qrkivppl6fn0javnmr3nss-inkscape-0.92.1.drv
>
> real  5m7.226s
> user  0m1.392s
> sys   0m0.056s
> $ sudo tc qdisc del dev lo root
> --8<---cut here---end--->8---
>
> So the wall-clock time is divided by 6 thanks to ‘add-text-to-store’
> pipelining, but it’s still pretty high due to the 136 ‘add-to-store’
> RPCs which are still *not* pipelined.
>
> It’s less clear what to do with these.  Buffering them would require
> clients to compute the store file name of the files that are passed to
> ‘add-to-store’, which involves computing the hash of the files itself,
> which can be quite costly and redundant with what the daemon will do
> eventually anyway.  The CPU cost might be compensated for when latency
> is high, but not when latency is low.
>
> Anyway, food for thought!
>
> For now, if those using Guix on clusters are willing to test the patch
> below (notice that you need to run the patched guix-daemon as well), I’d
> be interested in seeing how representative the above test is!
>
> Ludo’.
>
> diff --git a/guix/store.scm b/guix/store.scm
> index b15da5485..1ba22cf2d 100644
> --- a/guix/store.scm
> +++ b/guix/store.scm
> @@ -40,6 +40,7 @@
>#:use-module (ice-9 regex)
>#:use-module (ice-9 vlist)
>#:use-module (ice-9 popen)
> +  #:use-module (ice-9 format)
>#:use-module (web uri)
>#:export (%daemon-socket-uri
>  %gc-roots-directory
> @@ -322,7 +323,7 @@

Re: Xorg tearing fix on Intel HD Graphics 4000

2017-06-25 Thread Roel Janssen

Marius Bakke writes:

> William <w...@vieta.uk> writes:
>
>> The Arch Wiki says that Debian and some others suggest uninstalling
>> xf86-video-intel and relying on the modesetting driver.
>>
>> I have personally found this to help with tearing, but naturally YMMV.
>>
>> See https://wiki.archlinux.org/index.php/Intel_graphics#Installation
>> for more details.
>
> Many distros default to using the built-in xorg modesetting driver.
>
> https://tjaalton.wordpress.com/2016/07/23/intel-graphics-gen4-and-newer-now-defaults-to-modesetting-driver-on-x/
> https://fedoraproject.org/wiki/Features/IntelKMS
>
> Could those affected by this bug see if it works for them? Maybe we
> should follow suit.

Maybe I did this the wrong way, but here's what I placed in my config.scm:

(kernel-arguments (list "modprobe.blacklist=pcspkr" "quiet" "rhgb"
"thinkpad_acpi.fan_control=1" "i195.modeset=1"))

I can say that this does not solve the problem in my case.

Kind regards,
Roel Janssen
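[Side note: `i915.modeset=1` (note the module name, i915) is a kernel
modesetting parameter, whereas the suggestion above is about Xorg's
user-space "modesetting" driver, which is selected in the Xorg
configuration rather than on the kernel command line.  A sketch of
trying the latter, assuming the #:drivers parameter of
xorg-configuration-file and the slim-service mechanism shown elsewhere
in this thread:]

(modify-services %desktop-services
  (slim-service-type
   config => (slim-configuration
              (inherit config)
              (startx (xorg-start-command
                       #:configuration-file
                       (xorg-configuration-file
                        ;; Use the driver built into xorg-server instead
                        ;; of xf86-video-intel.
                        #:drivers '("modesetting")))))))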



Re: Xorg tearing fix on Intel HD Graphics 4000

2017-06-25 Thread Roel Janssen

Chris Marusich writes:

> Mark H Weaver <m...@netris.org> writes:
>
>> However, your proposed workaround is not a proper fix, and I don't think
>> we should apply it system-wide in Guix.
>
> Can you elaborate on why it is not a proper fix?  It isn't obvious to
> me.

Because it changes Xorg in order to fix a bug in Emacs.  A proper fix
would be in Emacs.

Unless every other piece of software (GTK+, GNOME, Qt) has worked
around the bug somehow.  In that case it might be an Xorg thing after
all, but that seems unlikely.

Kind regards,
Roel Janssen



Re: Xorg tearing fix on Intel HD Graphics 4000

2017-06-25 Thread Roel Janssen

Chris Marusich writes:

> Roel Janssen <r...@gnu.org> writes:
>
>> Ricardo Wurmus writes:
>>
>>> Hi Roel,
>>>
>>>> With the following patch to the Xorg configuration file, I have a
>>>> tear-free GuixSD experience.  I wonder if this is upstreameable in some
>>>> way.  This patch is probably too broad in effect.  Can I change it so
>>>> that only the graphics card I have will be affected by this patch?
>>>
>>> I’m not sure about this, but you can apply it only to your system by
>>> changing the slim-service’s “startx” value like this:
>>>
>>> --8<---cut here---start->8---
>>> (modify-services %desktop-services
>>>   (slim-service-type
>>>config => (slim-configuration
>>>   (inherit config)
>>>   (startx (xorg-start-command
>>>#:configuration-file
>>>(xorg-configuration-file
>>> #:extra-config
>>> (list your-fix)))
>>> --8<---cut here---end--->8---
>>>
>>> But I suppose what you want is to apply it unconditionally in Guix and
>>> have the X server ignore it for all but this one graphics card, right?
>>
>> No, not necessarily.  I could no longer do 'guix pull && guix system
>> reconfigure ...', which I attempted to solve by upstreaming this patch.
>
> Why wouldn't you be able to do a 'guix pull && guix system reconfigure'?

Because that would build a system generation which doesn't contain the
patched Xorg config.  Ricardo's snippet solved that.

>
>> I wonder if anyone else is having the same problem on this hardware..
>
> Yes, I have this problem.  I use a Lenovo X200.  Like Mark, graphical
> Emacs doesn't display characters right, and it's difficult to tell what
> the buffer actually contains, sometimes.  I've reconfigured my system to
> use the extra Xorg config you've provided in this thread, and I'll let
> you know in a week or two if it seems to have fixed the problem.

Thanks.

Kind regards,
Roel Janssen



Re: Xorg tearing fix on Intel HD Graphics 4000

2017-06-21 Thread Roel Janssen

Mark H Weaver writes:

> Hi Roel,
>
> Roel Janssen <r...@gnu.org> writes:
>
>> Ricardo Wurmus writes:
>>
>>> Hi Roel,
>>>
>>>> With the following patch to the Xorg configuration file, I have a
>>>> tear-free GuixSD experience.  I wonder if this is upstreamable in some
>>>> way.  This patch is probably too broad in effect.  Can I change it so
>>>> that only the graphics card I have will be affected by this patch?
>>>
>>> I’m not sure about this, but you can apply it only to your system by
>>> changing the slim-service’s “startx” value like this:
>>>
>>> --8<---cut here---start->8---
>>> (modify-services %desktop-services
>>>   (slim-service-type
>>>config => (slim-configuration
>>>   (inherit config)
>>>   (startx (xorg-start-command
>>>#:configuration-file
>>>(xorg-configuration-file
>>> #:extra-config
>>> (list your-fix)))
>>> --8<---cut here---end--->8---
>>>
>>> But I suppose what you want is to apply it unconditionally in Guix and
>>> have the X server ignore it for all but this one graphics card, right?
>>
>> No, not necessarily.  I could no longer do 'guix pull && guix system
>> reconfigure ...', which I attempted to solve by upstreaming this patch.
>>
>> I wonder if anyone else is having the same problem on this hardware..  
>
> I have the same problem on my Thinkpad X200.  For me, it mostly only
> happens in Emacs graphical frames, and only within GNOME (and I suppose
> maybe other compositing window managers, though I haven't tried), but
> the problem for me is quite severe.  I've resorted to running Emacs in
> text mode within GNOME Terminal, because otherwise I cannot trust my
> editing at all (e.g. I'm not sure if I'm deleting the messages that I
> intend to delete in Gnus).
>
> However, your proposed workaround is not a proper fix, and I don't think
> we should apply it system-wide in Guix.  I don't think it would be
> accepted upstream.  I think there's a real bug somewhere, most likely in
> Emacs itself, but possibly in the Intel graphics drivers.

Thanks for your response!  I look forward to finding out what this bug
is.  If you track it down, please let us know.

>
> It's good to have the workaround though.  I may apply it to my own
> system and see how it affects graphics performance.  Thank you!

FWIW, I get the same frames per second in SuperTuxKart and Armagetron
with and without the workaround.  Anyway, there's nothing like
experiencing it yourself, of course.

Kind regards,
Roel Janssen



Re: Xorg tearing fix on Intel HD Graphics 4000

2017-06-21 Thread Roel Janssen

Ricardo Wurmus writes:

> Hi Roel,
>
>> With the following patch to the Xorg configuration file, I have a
>> tear-free GuixSD experience.  I wonder if this is upstreamable in some
>> way.  This patch is probably too broad in effect.  Can I change it so
>> that only the graphics card I have will be affected by this patch?
>
> I’m not sure about this, but you can apply it only to your system by
> changing the slim-service’s “startx” value like this:
>
> --8<---cut here---start->8---
> (modify-services %desktop-services
>   (slim-service-type
>config => (slim-configuration
>   (inherit config)
>   (startx (xorg-start-command
>#:configuration-file
>(xorg-configuration-file
> #:extra-config
> (list your-fix)))
> --8<---cut here---end--->8---
>
> But I suppose what you want is to apply it unconditionally in Guix and
> have the X server ignore it for all but this one graphics card, right?

No, not necessarily.  I could no longer do 'guix pull && guix system
reconfigure ...', which I attempted to solve by upstreaming this patch.

I wonder if anyone else is having the same problem on this hardware..  

Thanks for your snippet!  I've done a system reconfigure, and the extra
configuration was applied as expected.

If there is anyone with the same problem, we could look further into
upstreaming it.

Thanks,
Roel



Xorg tearing fix on Intel HD Graphics 4000

2017-06-21 Thread Roel Janssen
Dear Guix,

For a long time now, I have had a tearing issue on GuixSD (parts of the
screen do not get updated while others do, resulting in disappearing
text in Emacs).

With the following patch to the Xorg configuration file, I have a
tear-free GuixSD experience.  I wonder if this is upstreamable in some
way.  This patch is probably too broad in effect.  Can I change it so
that only the graphics card I have will be affected by this patch?

Kind regards,
Roel Janssen

>From 25b431d23071b325b50c584977fcd6c1f9d790af Mon Sep 17 00:00:00 2001
From: Roel Janssen <r...@gnu.org>
Date: Wed, 21 Jun 2017 09:44:46 +0200
Subject: [PATCH] gnu: services: xorg: Fix tearing issue.

* gnu/services/xorg.scm (xorg-configuration-file): Fix tearing issue.
---
 gnu/services/xorg.scm | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/gnu/services/xorg.scm b/gnu/services/xorg.scm
index 5bae8c18e..97f92ab53 100644
--- a/gnu/services/xorg.scm
+++ b/gnu/services/xorg.scm
@@ -133,6 +133,13 @@ EndSection
 Section \"ServerFlags\"
   Option \"AllowMouseOpenFail\" \"on\"
 EndSection
+
+Section \"Device\"
+  Identifier  \"Intel Graphics\"
+  Driver  \"intel\"
+  Option  \"AccelMethod\" \"uxa\" #sna
+  Option  \"DRI\" \"2\"
+EndSection
 "
   (string-join (map device-section drivers) "\n") "\n"
   (string-join (map (cut screen-section <> resolutions)
-- 
2.13.1
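[On the question raised above of limiting the patch to a single card: an
Xorg "Device" section can be pinned to one adapter with a BusID entry.
A sketch of a narrower variant of the same section follows; the PCI
address is a placeholder that would have to be read off 'lspci':]

(define %intel-tearing-fix
  "Section \"Device\"
  Identifier  \"Intel Graphics\"
  Driver      \"intel\"
  BusID       \"PCI:0:2:0\"
  Option      \"AccelMethod\" \"uxa\"
  Option      \"DRI\" \"2\"
EndSection
")

[Passed through #:extra-config, as in the slim-service snippet quoted
earlier in this thread, it would only affect the device at that address
rather than every system using the default Xorg configuration.]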



Re: Performance on NFS

2017-06-17 Thread Roel Janssen

Ludovic Courtès writes:

> Hi!
>
> Roel Janssen <r...@gnu.org> skribis:
>
>> I applied the patch, and here are the results:
>>
>> [roel@hpcguix guix]$ time guixr environment --ad-hoc coreutils --pure -- true
>> The following derivations will be built:
>>/gnu/store/0hz8g844432b5h9zbqr9cpsjy0brg15h-profile.drv
>>/gnu/store/wkksb7bbx3jr0p6p5cj4kkphbwday0yd-info-dir.drv
>>/gnu/store/cd2mwx9qprdy23p7j3pik2zs14nifn36-manual-database.drv
>> Creating manual page database for 1 packages... done in 1.816 s
>>
>> real 1m14.686s
>> user 0m5.761s
>> sys  0m0.498s
>> [roel@hpcguix guix]$ time guixr environment --ad-hoc coreutils --pure -- true
>>
>> real 0m34.100s
>> user 0m5.599s
>> sys  0m0.414s
>> [roel@hpcguix guix]$ time guixr environment --ad-hoc coreutils --pure -- true
>>
>> real 0m33.821s
>> user 0m5.140s
>> sys  0m0.432s
>
> You’re telling me it’s just as bad as before, right?

Sorry for the somewhat hasty response.

Well, before, the speed was more variable.  Now it seems to be pretty
stable: around ~30 to ~35 seconds with grafting, and ~15 to ~20 seconds
without grafting.

I really appreciate the effort put into optimizing, and I feel it is
improving.

>
>> [roel@hpcguix guix]$ time guixr environment --ad-hoc coreutils --pure 
>> --no-substitutes --no-grafts -- true
>> The following derivations will be built:
>>/gnu/store/rvh0imjdimwm90nzr0fmr5gmp97lyiix-profile.drv
>>/gnu/store/5hm3v4afjf9gix92ixqzv9bwc11a608s-fonts-dir.drv
>>
>> real 0m37.200s
>> user 0m3.408s
>> sys  0m0.284s
>> [roel@hpcguix guix]$ time guixr environment --ad-hoc coreutils --pure 
>> --no-substitutes --no-grafts -- true
>>
>> real 0m19.415s
>> user 0m3.466s
>> sys  0m0.306s
>> [roel@hpcguix guix]$ time guixr environment --ad-hoc coreutils --pure 
>> --no-substitutes --no-grafts -- true
>>
>> real 0m18.850s
>> user 0m3.536s
>> sys  0m0.346s
>> [roel@hpcguix guix]$ time guixr environment --ad-hoc coreutils --pure 
>> --no-grafts -- true
>>
>> real 0m16.003s
>> user 0m3.246s
>> sys  0m0.301s
>> [roel@hpcguix guix]$ time guixr environment --ad-hoc coreutils --pure 
>> --no-grafts -- true
>>
>> real 0m18.205s
>> user 0m3.470s
>> sys  0m0.314s
>> [roel@hpcguix guix]$ time guixr environment --ad-hoc coreutils --pure 
>> --no-substitutes -- true
>>
>> real 0m33.731s
>> user 0m5.111s
>> sys  0m0.428s
>> [roel@hpcguix guix]$ time guixr environment --ad-hoc coreutils --pure 
>> --no-substitutes -- true
>>
>> real 0m30.993s
>> user 0m5.049s
>> sys  0m0.458s
>>
>> Why is grafting so slow, even if it doesn't have to graft anything?
>
> Grafting leads to a bunch of additional RPCs:
>
> --8<---cut here---start->8---
> $ GUIX_PROFILING=rpc ./pre-inst-env guix build coreutils
> /gnu/store/mskh7zisxa313anqv68c5lr4hajldjc5-coreutils-8.27-debug
> /gnu/store/xbvwxf4k5njnb3hn93xwqlppjkiz4hdv-coreutils-8.27
> Remote procedure call summary: 379 RPCs
>   build-things   ... 1
>   built-in-builders  ... 1
>   valid-path?... 5
>   query-substitutable-path-infos ... 8
>   query-references   ...22
>   query-valid-derivers   ...48
>   add-text-to-store  ...   294
> $ GUIX_PROFILING=rpc ./pre-inst-env guix build coreutils --no-grafts
> /gnu/store/mskh7zisxa313anqv68c5lr4hajldjc5-coreutils-8.27-debug
> /gnu/store/xbvwxf4k5njnb3hn93xwqlppjkiz4hdv-coreutils-8.27
> Remote procedure call summary: 294 RPCs
>   built-in-builders  ... 1
>   query-substitutable-path-infos ... 1
>   build-things   ... 1
>   valid-path?... 5
>   add-text-to-store  ...   286
> --8<---cut here---end--->8---
>
> So the problem is probably not NFS in this case but rather RPC
> performance.
>
> However, I can’t help with this until you drop ‘guixr’ and use
> GUIX_DAEMON_SOCKET=guix:// instead.  Hint hint.  ;-)

This is what guixr is already doing, see:
https://github.com/UMCUGenetics/guix-additions/blob/master/umcu/packages/guix.scm#L95
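[The gist of that wrapper, reduced to a hypothetical three-line Guile
sketch; the host, port and state directory are the example values from
this thread, not universal defaults:]

;; Point the client at the shared daemon and state directory, then hand
;; over to the real 'guix' with the user's arguments.
(setenv "GUIX_DAEMON_SOCKET" "guix://hpcguix:1234")
(setenv "NIX_STATE_DIR" "/gnu")
(apply execlp "guix" "guix" (cdr (command-line)))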

So I went a little bit further and did this:
[roel@hpcguix ~]$ export GUIX_DAEMON_SOCKET="/gnu/daemon-socket/socket"
[roel@hpcguix ~]$ export NIX_STATE_DIR=/gnu

This means that if I run "guix" on the same machine as where guix-daemon
is running, and communicate over the UNIX socket, I should not
experience a performance problem, othe

Re: Performance on NFS

2017-06-17 Thread Roel Janssen
real    0m19.415s
user    0m3.466s
sys     0m0.306s
[roel@hpcguix guix]$ time guixr environment --ad-hoc coreutils --pure 
--no-substitutes --no-grafts -- true

real    0m18.850s
user    0m3.536s
sys     0m0.346s
[roel@hpcguix guix]$ time guixr environment --ad-hoc coreutils --pure 
--no-grafts -- true

real    0m16.003s
user    0m3.246s
sys     0m0.301s
[roel@hpcguix guix]$ time guixr environment --ad-hoc coreutils --pure 
--no-grafts -- true

real    0m18.205s
user    0m3.470s
sys     0m0.314s
[roel@hpcguix guix]$ time guixr environment --ad-hoc coreutils --pure 
--no-substitutes -- true

real    0m33.731s
user    0m5.111s
sys     0m0.428s
[roel@hpcguix guix]$ time guixr environment --ad-hoc coreutils --pure 
--no-substitutes -- true

real    0m30.993s
user    0m5.049s
sys     0m0.458s

Why is grafting so slow, even if it doesn't have to graft anything?

So, because grafting is disk-intensive rather than CPU-intensive, it might
be a good idea to be able to disable grafting globally.  (It would
considerably reduce the time spent after the actual builds on our cluster.)

Kind regards,
Roel Janssen



Re: Performance on NFS

2017-06-12 Thread Roel Janssen

Ludovic Courtès writes:

> Hi Roel,
>
> Roel Janssen <r...@gnu.org> skribis:
>
>> You should know that we have 'submit' nodes that use the guixr wrapper
>> script to connect to the guix-daemon that runs on the 'hpcguix' node.
>>
>> Both have a /gnu mounted by a storage subsystem.
>>
>> I couldn't run the second command on a 'submit' node.  But I could run
>> it in the 'hpcguix' node.
>
> OK.
>
> Side note: I think you can replace your ‘guixr’ wrapper by just doing:
>
>   export GUIX_DAEMON_SOCKET=guix://hpcguix:1234
>
> See <https://www.gnu.org/software/guix/manual/html_node/The-Store.html>.
>
>> The first command:
>> --
>>
>> [roel@hpc-submit1 ~]$ time guixr environment --ad-hoc coreutils --pure -- 
>> true
>>
>> real    0m38.415s
>> user    0m6.075s
>> sys     0m0.611s
>>
>> [roel@hpcguix ~]$ time guix environment --ad-hoc coreutils --pure -- true
>>
>> real0m27.054s
>> user0m4.254s
>> sys 0m0.383s
>>
>>
>> The second command:
>> ---
>>
>> [roel@hpcguix ~]$ time guix environment --ad-hoc -e '(@ (gnu packages base) 
>> coreutils)' --pure --no-substitutes --no-grafts -- true  
>>  
>>
>> The following derivations will be built:
>>/gnu/store/9wczighnyz1bz43j4wawf09z180g3ywv-profile.drv
>>/gnu/store/ffsyhajbdcp1lcq6x65czghya1iydly8-info-dir.drv
>>/gnu/store/5gyl3l23ps6f8dgay4awybwq7n9j9pzk-fonts-dir.drv
>>/gnu/store/l2mwj2q4vnq2v5raxz64ra7jyphd2jyd-manual-database.drv
>> Creating manual page database for 1 packages... done in 5.524 s
>>
>> real    1m6.812s
>> user    0m2.969s
>> sys     0m0.325s
>> [roel@hpcguix ~]$ time guix environment --ad-hoc -e '(@ (gnu packages base) 
>> coreutils)' --pure --no-substitutes --no-grafts -- true
>>
>> real    0m23.357s
>> user    0m2.802s
>> sys     0m0.340s
>>
>>
>> I suspect that the difference between the two commands is that one only
>> looks for one module, while the other looks in all modules.  Looking at
>> the second run, I suppose the difference is quite small.
>
> Yeah, -e doesn’t seem to be much faster (there are still a lot of
> modules to load anyway.)
>
> At any rate, let’s see what we can do; 23 seconds is not okay.
>
> I did a quick experiment:
>
> --8<---cut here---start->8---
> $ strace -o ,,s -s 123 guix environment --ad-hoc -e '(@ (gnu packages base) 
> coreutils)' --pure --no-substitutes --no-grafts -- true
> $ grep ^open ,,s |wc -l
> 1095
> $ grep '^open.*ENOENT' ,,s |wc -l
> 136
> $ grep -E '^(open|stat|lstat).*patches/gcc-arm-bug' ,,s |wc -l
> 27
> $ grep -E '^(open|stat|lstat).*guix/build/utils' ,,s |wc -l
> 2190
> --8<---cut here---end--->8---
>
> After the patch below, I get:
>
> --8<---cut here---start->8---
> $ grep -E '^(open|stat|lstat).*guix/build/utils' ,,s2 |wc -l
> 14
> $ grep -E '^(open|stat|lstat).*patches/gcc-arm-bug' ,,s2 |wc -l
> 4
> --8<---cut here---end--->8---
>
> Here’s the big picture before and after:
>
> --8<---cut here---start->8---
> $ strace -c guix environment --ad-hoc -e '(@ (gnu packages base) coreutils)' 
> --pure --no-substitutes --no-grafts -- true
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  32.55    0.009781           1     10463      9158 stat
>  15.55    0.004673           1      8780           write
>  11.26    0.003385        3385         1           wait4
>   7.94    0.002387          20       122        12 futex
>   6.38    0.001917           0      5052         4 lstat
>   5.70    0.001713           2      1095       136 open
>   5.54    0.001664           1      2919           read
>   3.02    0.000909           2       525           mmap
>   2.96    0.000889         148         6           clone
>   2.50    0.000751           2       481           mprotect
>   2.00    0.000600           1       959           close
>   1.56    0.000469           1       898         3 lseek
>   1.10    0.000330           3       100           sendfile
>   0.88    0.000264           0       541           fstat
>   0.42    0.000127           1       175           brk
>   0.15    0.000044           2        22           rt_sigaction
>   0.09    0.000026

Re: Performance on NFS

2017-06-07 Thread Roel Janssen

Ludovic Courtès writes:

> How does:
>
>   time guix environment --ad-hoc coreutils --pure -- true
>
> compare to:
>
>   time guix environment --ad-hoc -e '(@ (gnu packages base) coreutils)' 
> --pure -- true
>
> ?  That would give us an estimate of how much the cache I describe would
> help.
>
> Thanks,
> Ludo’.

You should know that we have 'submit' nodes that use the guixr wrapper
script to connect to the guix-daemon that runs on the 'hpcguix' node.

Both have a /gnu mounted by a storage subsystem.

I couldn't run the second command on a 'submit' node.  But I could run
it in the 'hpcguix' node.


The first command:
--

[roel@hpc-submit1 ~]$ time guixr environment --ad-hoc coreutils --pure -- true

real    0m38.415s
user    0m6.075s
sys     0m0.611s

[roel@hpcguix ~]$ time guix environment --ad-hoc coreutils --pure -- true

real    0m27.054s
user    0m4.254s
sys     0m0.383s


The second command:
---

[roel@hpcguix ~]$ time guix environment --ad-hoc -e '(@ (gnu packages base) 
coreutils)' --pure --no-substitutes --no-grafts -- true 
 
The following derivations will be built:
   /gnu/store/9wczighnyz1bz43j4wawf09z180g3ywv-profile.drv
   /gnu/store/ffsyhajbdcp1lcq6x65czghya1iydly8-info-dir.drv
   /gnu/store/5gyl3l23ps6f8dgay4awybwq7n9j9pzk-fonts-dir.drv
   /gnu/store/l2mwj2q4vnq2v5raxz64ra7jyphd2jyd-manual-database.drv
Creating manual page database for 1 packages... done in 5.524 s

real    1m6.812s
user    0m2.969s
sys     0m0.325s
[roel@hpcguix ~]$ time guix environment --ad-hoc -e '(@ (gnu packages base) 
coreutils)' --pure --no-substitutes --no-grafts -- true

real    0m23.357s
user    0m2.802s
sys     0m0.340s


I suspect that the difference between the two commands is that one only
looks for one module, while the other looks in all modules.  Looking at
the second run, I suppose the difference is quite small.

Kind regards,
Roel Janssen


