Re: Non-bootstrappable NPM packages

2024-02-20 Thread Jelle Licht
Nicolas Graves  writes:

> On 2019-07-24 15:41, Jelle Licht wrote:
>
>> Timothy Sample  writes:
>>
>> [snip]
>>
>>> I’ve come to think that bootstrapping JavaScript might be easier than it
>>> looks.  As time goes on, Node gets better at the newer JavaScript
>>> features.  This removes the need for things like Babel or Rollup, since
>>> with some care, Node can run the source directly without any
>>> transformations or bundling.  That being said, TypeScript looks to be a
>>> major issue, as it is used in many fundamental JavaScript packages and
>>> it is not bootstrappable.
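Timothy's observation is easy to check: a recent Node executes modern ECMAScript syntax directly, with no Babel or Rollup pass. A quick illustration from the shell (the snippet itself is made up):

```shell
# Recent Node (>= 14) handles optional chaining and nullish coalescing
# natively, so no transpilation step is needed for code like this:
node -e 'const pkg = { name: "demo" }; console.log(pkg?.version ?? "unversioned")'
```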
>>
>> Very recently (i.e., about 94 minutes ago), I found out something
>> interesting that might be helpful; Sucrase[0] is, among other things, a
>> typescript transpiler that does not do any type checking, and it only
>> has some runtime dependencies.
>>
>> I created some “fiio”-packages as well [1], and I have confirmed that
>> it actually works! My next step was of course to compile TypeScript
>> proper, and this worked with one tiny snag that I reported at [2]. After
>> manually fixing these problems in the TypeScript source tree, I was able
>> to transpile the TypeScript sources using guix-packaged
>> `node-sucrase-bootstrap'.
>
> Hi Jelle!
>
> Has anyone made progress on the build system since then to allow
> for this to be taken into account? If you still have them, could you share
> your "fiio" packages once again? The paste link has expired. Thanks!

I don't have them anymore, but I could re-generate them.
Are you still interested, or am I too slow in responding, and are your
other node-related emails more relevant by now?

KR,
- Jelle




Re: Potential removal of unmaintained node-openzwave-shared

2023-03-30 Thread Jelle Licht


Hi,

"Philip McGrath"  writes:

> Hi,
>
> On Wed, Mar 22, 2023, at 8:08 AM, Jelle Licht wrote:
>> Hey guix,
>>
>> In getting `Node.js' updated to a more recent LTS version[0], I found
>> out that node-openzwave-shared no longer builds with modern versions of
>> node [1]; random people on the Internet seem to indicate that
>> the hip new thing is Z-Wave JS [2].
>>
>> Long story short, what is our de facto policy here?
>>
>> 1) Keep around a copy of Node 14 and all node-openzwave-shared deps,
>>even after the End Of Life of 2023-04-30
>> 2) Remove node-openzwave-shared, and move to Node 18 whenever possible
>>without this package.
>> 3) Patch `node-openzwave-shared' so it builds with newer versions of
>>Node, and move to Node 18.
>> 4) Remove node-openzwave-shared, move to Node 18, package the relevant
>>parts of Z-Wave JS.
>>
>> I don't have the time nor means for anything but option 2) myself, so if
>> consensus deems any of the other options a better way forward,
>> volunteers are invited to apply :-)
>>
>> [0]: https://issues.guix.gnu.org/59188
>> [1]: https://github.com/OpenZWave/node-openzwave-shared/issues/398
>> [2]: https://github.com/zwave-js?type=source
>
> I added this package, so I have some interest in trying options 3 or 4, but I 
> don't think this should block Node 18 in any case.
>
> Is there a log message for the build failure somewhere? I don't see any 
> details at [2], and I'm a bit surprised by the failure.

I've played around with it, and was able to get it to compile (and pass
tests?) in v6 of the node patch series:

https://issues.guix.gnu.org/59188#83

Could you see if that works for your workflows?

Thanks!
- Jelle




Potential removal of unmaintained node-openzwave-shared

2023-03-22 Thread Jelle Licht
Hey guix,

In getting `Node.js' updated to a more recent LTS version[0], I found
out that node-openzwave-shared no longer builds with modern versions of
node [1]; random people on the Internet seem to indicate that
the hip new thing is Z-Wave JS [2].

Long story short, what is our de facto policy here?

1) Keep around a copy of Node 14 and all node-openzwave-shared deps,
   even after the End Of Life of 2023-04-30
2) Remove node-openzwave-shared, and move to Node 18 whenever possible
   without this package.
3) Patch `node-openzwave-shared' so it builds with newer versions of
   Node, and move to Node 18.
4) Remove node-openzwave-shared, move to Node 18, package the relevant
   parts of Z-Wave JS.

I don't have the time nor means for anything but option 2) myself, so if
consensus deems any of the other options a better way forward,
volunteers are invited to apply :-)

- Jelle


[0]: https://issues.guix.gnu.org/59188
[1]: https://github.com/OpenZWave/node-openzwave-shared/issues/398
[2]: https://github.com/zwave-js?type=source



Guix driver paths for icecat RDD sandbox

2023-01-15 Thread Jelle Licht
Hi guix,

I was playing around trying to get hardware-enabled video decoding
working in icecat and/or firefox in guix, and found out that the fine
folks working on Nix have already gotten a patch upstreamed that allows
stuff in /nix/store to be loaded[0]. (Grep around for '/nix/store' in
our icecat sources to see what I mean).

From what I can see, the RDD whitelist reads through symlinks, so the
actual target file needs to be whitelisted before the file is loaded in
the sandbox.
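That resolution step can be sketched on its own; in the snippet below, /tmp/demo-store and the driver file are made-up stand-ins for /gnu/store and a VA-API driver:

```shell
# Illustrative only: the whitelist compares the *resolved* path, so a
# symlink into the store must be dereferenced before the prefix check.
store=/tmp/demo-store
mkdir -p "$store/abc123-libva/lib"
touch "$store/abc123-libva/lib/demo_drv_video.so"
ln -sf "$store/abc123-libva/lib/demo_drv_video.so" /tmp/demo-driver.so

target=$(readlink -f /tmp/demo-driver.so)
case $target in
  "$store"/*) echo whitelisted ;;
  *)          echo rejected ;;
esac
```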

Without this (or a similar fix), we'd need a custom package per possible
value of LIBVA_DRIVERS_PATH, as loadable libraries for hardware
acceleration do not seem directly configurable via
'browser/app/profile/icecat.js' at runtime. I may be wrong here, but
this also seems to imply that a recompilation of icecat would be
required every time one of these 'inputs' changes :/.

OTOH, it would have some drawbacks:
- It hardcodes /gnu/store, instead of $MY_MAGIC_STORE_LOCATION
- It allows loading of pretty much anything in the store by the
sandboxed process.

The second drawback seems pretty iffy, but the current suggested
workaround is to disable the sandbox entirely.

So that leaves us with 2 questions:

1. Do we want to apply a patch to whitelist '/gnu/store'?
2. If so, would we also want to send this patch upstream to Firefox? They
seem open to accepting it.

I've opened an upstream issue for a similar treatment of /gnu/store,
which may also simplify the 'build-sandbox-whitelist' phase of our
icecat package[1] if accepted. I'm not entirely sure if that is
ultimately a good thing yet though.

Happy to hear any thoughts on this subject.

- Jelle

[0]: https://bugzilla.mozilla.org/show_bug.cgi?id=1761692
[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1808408



Re: RFC: new syntax for inline patches

2022-01-05 Thread Jelle Licht
Liliana Marie Prikler  writes:

> Hi Ricardo,
>
> Am Dienstag, dem 04.01.2022 um 17:50 +0100 schrieb Ricardo Wurmus:
>> Hi Guix,
>> 
>> does this pattern look familiar to you?
>> 
>>   (arguments
>>     (list
>>      #:phases
>>      '(modify-phases %standard-phases
>>         (add-after 'unpack 'i-dont-care
>>           (lambda _
>>             (substitute* "this-file"
>>               (("^# some unique string, oh, careful! gotta \\(escape\\) this\\." m)
>>                (string-append m "\nI ONLY WANTED TO ADD THIS LINE!\n"))))))))
>> 
>> This is a lot of boilerplate just to add a line to a file.  I’m using
>> “substitute*” but actually I don’t want to substitute anything.  I
>> just know that I need to add a line somewhere in “this-file”.
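Outside Guix, the operation this boilerplate encodes really is tiny; a stand-alone illustration with GNU sed (file name and anchor string made up):

```shell
# Create a toy file with a unique anchor line.
printf '%s\n' 'line one' '# some unique string' 'line three' > /tmp/this-file
# Append the new line directly after the anchor, as the build phase does.
sed -i '/^# some unique string$/a I ONLY WANTED TO ADD THIS LINE!' /tmp/this-file
cat /tmp/this-file
```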
> Now many of these substitute*s are actually sane.  E.g. those which
> match the beginning of a defun in Emacs terms.  However, there for sure
> are cases in which I think "when all you have is a substitute*, every
> problem you have starts to look like... oh shit, I can't match this
> multi-line string, back to square 0".
>
>> CMakeLists.txt
> I feel your pain.
>
>> We have a lot of packages like that.  And while this boilerplate
>> pattern looks familiar to most of us now, it is really unclear.  It
>> is imperative and abuses regular expression matching when really it
>> should have been a patch.
> Now imperative is not really the bad thing here, but I'll get to that a
> little bit later when discussing your mockup.  I do however agree that
> it's too limited for its task.
>
>> There are a few reasons why we don’t use patches as often:
>> 
>> 1. the source code is precious and we prefer to modify the original
>> sources as little as necessary, so that users can get the source code
>> as upstream intended with “guix build -S foo”.  We patch the sources
>> primarily to get rid of bundled source code, pre-built binaries, or
>> code that encroaches on users’ freedom.
>> 
>> 2. the (patches …) field uses patch files.  These are annoying and
>> inflexible.  They have to be added to dist_patch_DATA in
>> gnu/local.mk, and they cannot contain computed store locations.  They
>> are separate from the package definition, which is inconvenient.
>> 
>> 3. snippets feel like less convenient build phases.  Snippets are not
>> thunked, so we can’t do some things that we would do in a build phase
>> substitution.  We also can’t access %build-inputs or %outputs.  (I
>> don’t know if we can use Gexps there.)
> Both patches and snippets serve the functions you've outlined in 1. 
> Now 2. is indeed an issue, but it's still an issue if you include patch
> as native input as well as the actual patch file you want to apply
> (assuming it is free of store locations, which can be and has been
> worked around).
>
>> It may not be possible to apply patches with computed store locations
>> — because when we compute the source derivation (which is an input to
>> the package derivation) we don’t yet know the outputs of the package
>> derivation.  But perhaps we can still agree on a more declarative way
>> to express patches that are to be applied before the build starts;
>> syntax that would be more declarative than a series of brittle
>> substitute* expressions that latch onto hopefully unique strings in
>> the target files.
>> 
>> (We have something remotely related in etc/committer.scm.in, where we
>> define a record describing a diff hunk.)
>> 
>> Here’s a colour sample for the new bikeshed:
>> 
>>   (arguments
>>     (list
>>      #:patches
>>      #~(patch "the-file"
>>          ((line 10)
>>           (+ "I ONLY WANTED TO ADD THIS LINE"))
>>          ((line 3010)
>>           (- "maybe that’s better")
>>           (+ (string-append #$guix " is better"))
>>           (+ "but what do you think?")))))
> Now this thing is again just as brittle as the patch it encodes and if
> I know something about context-less patches then it's that they're
> super not trustworthy.

What do you mean here, with 'brittle' and 'trustworthy'? Is it the fact
that line numbers are hardcoded, compared to the substitute* approach?

> However, have you considered that something similar has been lying
> around in our codebase all this time and 99% of the packages ignore it
> because it's so obscure and well hidden?  Why yes, of course I'm
> talking about emacs-batch-edit-file.
>
> Now of course there are some problems when it comes to invoking Emacs
> inside Guix build.  For one, not every package has emacs-minimal
> available at build time and honestly, that's a shame.  Even then, you'd
> have to dance around and add emacs-utils to enable it.  And after that?
> You code Emacs Lisp in the middle of Guile code!  Now certainly, this
> UI can be improved.

> Particularly, I'd propose a set of monadic functions that operate on a
> buffer in much the same way Emacs does.  Now we wouldn't need to
> implement all of Emacs in that way (though we certainly could if given
> enough time).
>
> Basic primitives would be the 

Re: importers and input package lookup

2022-01-03 Thread Jelle Licht
Ludovic Courtès  writes:

> Hi!
>
> Attila Lendvai  skribis:
>
>> there are two, independent namespaces:
>> 1) the scheme one, and
>> 2) the guix package repository.
>>
>> when i work on an importer (golang), it skips the packages that are already 
>> available in 2), but then it has no clue under what variable name they are 
>> stored in 1), and in which scheme module.
>
> Does the variable name matter though?  In general what matters for the
> importer is whether the package/version exists, regardless of the
> variable name.

Since these packages often end up in an `inputs' list, the variable name
seems pretty relevant.

>> should the dependency lists in the package forms be emitted as 
>> (specification->package "pkgname@0.1.0") forms?
>
> No, not for packages in Guix proper.
>
> [...]
>
>> a bit of a tangent here, and a higher-level perspective, but... shouldn't 
>> the package definition DSL have support for this? then most package 
>> descriptions could be using package specifications instead of scheme 
>> variables, and 1) could be phased out. or would that be more error prone? 
>> maybe with a tool that warns for the equivalent of undefined variable 
>> warnings?
>
> Package specs are ambiguous compared to variable references (they depend
> on external state, on the set of chosen channels, etc.) so in general we
> want to refer to variables.

They are pretty explicit at one point in time: the moment we run an
importer. So at some point in the course of running the importer, we
know we have a package definition somewhere that is pretty much
equivalent to the result of (specification->package "pkgname@0.1.0");
can we somehow serialize this package object, perhaps using a heuristic?

If someone fiddles with GUIX_PACKAGE_PATH, GUILE_LOAD_PATH or any amount
of channels this impacts the result of running an importer, but
importers already can give different results depending on these
settings.



Re: Convention for new “guix style“?

2021-12-22 Thread Jelle Licht
Hi,

zimoun  writes:

> Hi,
>
> This could be part of a RFC process. :-)
>
>
> The Big Change introduces a new style; in other words, this old
>
> --8<---cut here---start->8---
>  (native-inputs
>  `(("perl" ,perl)
>("pkg-config" ,pkg-config)))
> --8<---cut here---end--->8---
>
> is replaced by this new,
>
> --8<---cut here---start->8---
>  (native-inputs
>   (list perl pkg-config))
> --8<---cut here---end--->8---
>
> It removes all the labels. \o/  More details [1].
>
>
> We had a discussion on IRC starting here [2].  This proposal is to
> document in the manual and adapt ‘guix style’ to have one input per line
> – as it was the case with the old style.
>
> Aside preference, for instance, I find easier to read,
>
> --8<---cut here---start->8---
> (inputs ;required for test
>  (list julia-chainrulestestutils
>julia-finitedifferences
>julia-nanmath
>julia-specialfunctions))
> (propagated-inputs
>  (list julia-chainrulescore
>julia-compat
>julia-reexport
>julia-requires))
> --8<---cut here---end--->8---
>
> than
>
> --8<---cut here---start->8---
> (inputs ;required for test
>  (list julia-chainrulestestutils julia-finitedifferences julia-nanmath
>julia-specialfunctions))
> (propagated-inputs
>  (list julia-chainrulescore julia-compat julia-reexport
>julia-requires))
> --8<---cut here---end--->8---
>
> but this is somehow bikeshed.  However, the current situation leads to
> non-uniform or ambiguity.
>
> For example, the comments as here:
>
> --8<---cut here---start->8---
> (inputs
>  (list libx11 libiberty ;needed for objdump support
>zlib))   ;also needed for objdump support
> --8<---cut here---end--->8---
Yuck indeed!

> when the comments apply to only one line as it was:
>
> --8<---cut here---start->8---
>  `(("libx11" ,libx11)
>("libiberty" ,libiberty)   ;needed for objdump support
>("zlib" ,zlib)))   ;also needed for objdump support
> --8<---cut here---end--->8---
>
> In other words, this looks better:
>
> --8<---cut here---start->8---
> (inputs
>  (list libx11
>libiberty ;needed for objdump support
>zlib));also needed for objdump support
> --8<---cut here---end--->8---
>
> Obviously, we could come up with a rule depending on comments, number of
> inputs, length, etc.  It was not the case with the old style, when
> nothing prevented us from doing it.  Because one item per line is, IMHO,
> easier to maintain.

You seem to be putting the cart before the horse here; we should not let
our (lack of) tooling determine our styling preferences.

> Consider the case,
>
> (inputs
>  (list bar foo1 foo2 foo3 foo3 foo4))
>
> then another ’baz’ inputs is added, do we end with,
>
> (inputs
>  (list bar foo1 foo2 foo3 foo3 foo4
>baz))
>
> to minimize and ease reading the diff, or do we end with,
>
> (inputs
>  (list bar
>baz
>foo1
>foo2
>foo3
>foo3
>foo4))

The second, ideally.

> ?  And the converse is also true, consider the case just above and what
> happens if foo1, foo2 and foo3 are removed.

Everything gets put on a single line again.

> One item per line solves all these boring cosmetic questions.

To be fair, any policy that can be automatically applied solves those
very same boring cosmetic questions. I agree that whatever style we end
up with, we should be able to automatically apply it. 

> Therefore, I propose to always have only one item per line.  To be
> concrete, for one item:
>
>   (inputs
>(list foo))
>
> or not
>
>   (inputs (list foo))
>
> And for more than one item:
>
>   (inputs
>(list foo
>  bar))
>
> This would avoid “cosmetic” changes when adding/removing inputs and/or
> comments.

This is not a convincing argument to me; I very much doubt that we have
that many packages that switch back and forth between having <= 4 and >
4 inputs constantly. That is not to say that I think we won't see it
happen; I just don't think it happens often enough to warrant what you
are proposing :-).

> Sadly, it implies another Big Change.  But earlier is better and we
> should do it as soon as possible while the conversion is not totally
> done yet.

I agree, so here's a 

Re: Flag day for simplified package inputs

2021-12-13 Thread Jelle Licht
Hello,

Ludovic Courtès  writes:

> Hi,
>
> Jelle Licht  skribis:
>
>> I will work on that. Do we already have a suitable 'bulk change' in the
>> repo? Or should we first run `guix style', and subsequently use that
>> commit as the first entry in the .git-blame-ignore-revs file?
>
> The latter I guess.

Attached what I was thinking of: I decided to go with integrating the
git-blame-ignore-revs file with our existing gitconfig situation. Let me
know what you think of the workflow and the documented changes after
running `guix style'.

From 69926c94fb576e503d7838836cfd83066c39abcc Mon Sep 17 00:00:00 2001
From: Jelle Licht 
Date: Mon, 13 Dec 2021 16:08:22 +0100
Subject: [PATCH] maint: Ignore specified bulk changes in git blame.

* etc/git/git-blame-ignore-revs: New file.
* etc/git/gitconfig (blame): Add ignoreRevsFile.
* doc/guix.texi ("Invoking guix style"): Document git-blame-ignore-revs usage.

Signed-off-by: Jelle Licht 
---
 doc/guix.texi | 5 +++++
 etc/git/git-blame-ignore-revs | 0
 etc/git/gitconfig | 3 +++
 3 files changed, 8 insertions(+)
 create mode 100644 etc/git/git-blame-ignore-revs

diff --git a/doc/guix.texi b/doc/guix.texi
index 59651f996b..0c0293cc8e 100644
--- a/doc/guix.texi
+++ b/doc/guix.texi
@@ -12769,6 +12769,11 @@ Invoking guix style
 trigger any package rebuild.
 @end table
 
+When applying automated changes to many packages, consider adding that
+particular commit hash to @file{etc/git/git-blame-ignore-revs} in a
+follow-up commit.  This will allow @command{git blame}
+(@pxref{Configuring Git}) to automatically ignore the specified commits.
+
 @node Invoking guix lint
 @section Invoking @command{guix lint}
 
diff --git a/etc/git/git-blame-ignore-revs b/etc/git/git-blame-ignore-revs
new file mode 100644
index 00..e69de29bb2
diff --git a/etc/git/gitconfig b/etc/git/gitconfig
index c9ebdc8fa8..afa598c4e3 100644
--- a/etc/git/gitconfig
+++ b/etc/git/gitconfig
@@ -1,3 +1,6 @@
+[blame]
+	ignoreRevsFile = etc/git/git-blame-ignore-revs
+
 [diff "scheme"]
 	xfuncname = "^(\\(define.*)$"
 

base-commit: e765ad091d861c99eae9fdd402214a2e2e90ed4d
-- 
2.34.0


- Jelle


Re: Flag day for simplified package inputs

2021-11-29 Thread Jelle Licht


Hello there,

Ludovic Courtès  writes:

> Hi,
>
> Jelle Licht  skribis:
>
>> Ludovic Courtès  writes:
>>
>>> Hi,
>>>
>>> Jelle Licht  skribis:
>>>
>>>> When applying this and future bulk changes, could we perhaps list the
>>>> specific commits (+ commented shortlog line) in a file. To clarify, if
>>>> we were to store these commits in `.git-blame-ignore-revs', later on we
>>>> can instruct users to run:
>>>>
>>>> git config blame.ignoreRevsFile .git-blame-ignore-revs
>>>>
>>>> to ignore the bulk change for git blame purposes.
>>>
>>> Sounds like a good idea.  There’s no particular convention regarding the
>>> file name of ‘.git-blame-ignore-revs’, right?
>>
>> As far as I know there is no established convention, although after
>> searching around a bit I found that Sarah Morgensen proposed the exact
>> same thing in September. It seems the chromium project also follows the
>> same naming scheme, so we won't be totally idiosyncratic at least :).
>
> Good.  :-)  Could you (or Sarah?) propose a patch, including a sentence
> or two in the manual with the ‘git config’ snippet?

I will work on that. Do we already have a suitable 'bulk change' in the
repo? Or should we first run `guix style', and subsequently use that
commit as the first entry in the .git-blame-ignore-revs file?

- Jelle



Re: Flag day for simplified package inputs

2021-11-22 Thread Jelle Licht


Hello there,

Ludovic Courtès  writes:

> Hi,
>
> Jelle Licht  skribis:
>
>> When applying this and future bulk changes, could we perhaps list the
>> specific commits (+ commented shortlog line) in a file. To clarify, if
>> we were to store these commits in `.git-blame-ignore-revs', later on we
>> can instruct users to run:
>>
>> git config blame.ignoreRevsFile .git-blame-ignore-revs
>>
>> to ignore the bulk change for git blame purposes.
>
> Sounds like a good idea.  There’s no particular convention regarding the
> file name of ‘.git-blame-ignore-revs’, right?

As far as I know there is no established convention, although after
searching around a bit I found that Sarah Morgensen proposed the exact
same thing in September. It seems the chromium project also follows the
same naming scheme, so we won't be totally idiosyncratic at least :).



Re: Using G-Expressions for public keys (substitutes and possibly more)

2021-11-22 Thread Jelle Licht
Ludovic Courtès  writes:

> Hi,
>
> Liliana Marie Prikler  skribis:
>
>> I think we would probably want to improve on this end in the guile-
>> gcrypt module, i.e. have a public-key "constructor" that returns a
>> canonical-sexp and so on.  WDYT?
>
> I don’t find it very compelling given there’s already
> ‘sexp->canonical-sexp’ & co.  WDYT?

Well, the issue here is 'knowing' what sexp to pass along to that
function in the first place. Are Liliana & I missing something
obvious here?

I had to take a string representation of a valid canonical-sexp, and
pass it through string->canonical-sexp and canonical-sexp->sexp. It's
definitely not an issue for managing my local configuration, but it
seems silly to force _anyone_ wanting to write a canonical-sexp as a
sexp through this REPL adventure, for each kind of canonical-sexp.

- Jelle



Re: Using G-Expressions for public keys (substitutes and possibly more)

2021-11-20 Thread Jelle Licht
Hey folks,

Liliana Marie Prikler  writes:

> Hi Ludo,
>
> Am Donnerstag, den 21.10.2021, 22:13 +0200 schrieb Ludovic Courtès:
>> Hi!
>> 
>> Liliana Marie Prikler  skribis:
>> 
>> > let's say I wanted to add my own substitute server to my
>> > config.scm. 
>> > At the time of writing, I would have to add said server's public
>> > key to
>> > the authorized-keys of my guix-configuration like so:
>> >   (cons* (local-file "my-key.pub") %default-authorized-guix-keys)
>> > or similarily with append.  This local-file incantation is however
>> > pretty weak.  It changes based on the current working directory and
>> > even if I were to use an absolute path, I'd have to copy both that
>> > file
>> > and the config.scm to a new machine were I to use the same
>> > configuration there as well.
>> 
>> Note that you could use ‘plain-file’ instead of ‘local-file’ and
>> inline the key canonical sexp in there.
> Yes, but for that I'd have to either write a (multi-line) string
> directly, which visibly "breaks" indentation of the rest of the file,
> or somehow generate a string which adds at least one layer of
> indentation.  The former is imo unacceptable, the latter merely
> inconvenient.

Would some arbitrary s-expression be a workable solution? See below for
an example of what I understood to be your current use-case.

>> > However, it turns out that the format for said key files is some
>> > actually pretty readable Lisp-esque stuff.  For instance, an ECC
>> > key reads like
>> >   (public-key (ecc (curve CURVE) (q #Q#)))
>> > with spaces omitted for simplicity.
>> > Were it not for the (q #Q#) bit, we could construct it using
>> > scheme-file.  In fact, it is so simple that in my local config I
>> > now do exactly that.
>> 
>> Yeah it’s frustrating that canonical sexps are almost, but not quite,
>> Scheme sexps.  :-)
>> 
>> (gcrypt pk-crypto) has a ‘canonical-sexp->sexp’ procedure:
>> 
>> --8<---cut here---start->8---
>> scheme@(guile-user)> ,use(gcrypt pk-crypto)
>> scheme@(guile-user)> ,use(rnrs io ports)
>> scheme@(guile-user)> (string->canonical-sexp
>>(call-with-input-file
>> "etc/substitutes/ci.guix.info.pub"
>>  get-string-all))
>> $18 = #
>> scheme@(guile-user)> ,pp (canonical-sexp->sexp $18)
>> $19 = (public-key
>>   (ecc (curve Ed25519)
>>(q #vu8(141 21 111 41 93 36 176 217 168 111 165 116 26 132 15
>> 242 210 79 96 247 182 196 19 72 20 173 85 98 89 113 179 148
>> --8<---cut here---end--->8---
>> 
>> > (define-record-type* <ecc-key> ...)
>> > (define-gexp-compiler (ecc-key-compiler (ecc-key <ecc-key>) ...)
>> > ...)
>> > 
>> > (ecc-key
>> >   (name "my-key.pub")
>> >   (curve 'Ed25519)
>> >   (q "ABCDE..."))
>> > 
>> > Could/should we support such formats out of the box?  WDYT?
>> 
>> With this approach, we’d end up mirroring all the canonical sexps
>> used by libgcrypt, which doesn’t sound great from a maintenance POV.
> Given that we can use canonical sexps, what about a single canonical-
> sexp compiler then?  I'd have to think about this a bit more when I
> have the time to, but having a way of writing the canonical sexp
> "directly" would imo be advantageous.

What about something such as the following?

--8<---cut here---start->8---
(use-modules (gcrypt base16)
 (gcrypt pk-crypto))

(define-record-type <canonical-sexp-wrapper>
  (canonical-sexp-wrapper name sexp)
  canonical-sexp-wrapper?
  (name canonical-sexp-wrapper-name)
  (sexp canonical-sexp-wrapper-sexp))

(define-gexp-compiler (canonical-sexp-wrapper-compiler
                       (wrapper <canonical-sexp-wrapper>) system target)
  (match wrapper
    (($ <canonical-sexp-wrapper> name sexp)
     (text-file name (canonical-sexp->string
                      (sexp->canonical-sexp sexp)) '()))))
--8<---cut here---end--->8---

This would still leave constructing your s-expression as an exercise for
the reader, which is definitely not amazing. In this specific instance,
I had to look at the output of canonical-sexp->sexp, which is of course
whatever the opposite of discoverable and good UX :).

For the Ed25519 key:
--8<---cut here---start->8---
(define my-public-key
  (canonical-sexp-wrapper
   "my-key.pub"
   `(public-key
      (ecc
       (curve Ed25519)
       (q ,(base16-string->bytevector
            (string-downcase
             "C9F307AE...")))))))
--8<---cut here---end--->8---

To improve on this, is one sexp-based-gexp-compiler + N helper functions
to construct the most-used value types a direction worth exploring?

 - Jelle



Re: Flag day for simplified package inputs

2021-11-19 Thread Jelle Licht
Hey Guix,

Ludovic Courtès  writes:

> Hello Guix!
>
> As a reminder, the plan I proposed¹ is to have a “flag day” shortly
> before ‘core-updates-frozen’ is merged into ‘master’ when we would run
> ‘guix style’ on the whole repo, thereby removing input labels from most
> packages.

> If everything goes well, this could happen in a few days.
>
> Thoughts?
Slightly OT, but I think we should think about it now rather than after
the merge.

As with any bulk change, this change will clobber git blame for the
forseeable future for a big chunk of the code.

When applying this and future bulk changes, could we perhaps list the
specific commits (+ commented shortlog line) in a file? To clarify, if
we were to store these commits in `.git-blame-ignore-revs', later on we
can instruct users to run:

--8<---cut here---start->8---
git config blame.ignoreRevsFile .git-blame-ignore-revs
--8<---cut here---end--->8---

to ignore the bulk change for git blame purposes.

It seems like a maintainable way to mitigate some of the (IMO) major
disadvantages that we bring in by applying bulk changes.
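For the record, the whole mechanism can be tried out in a throwaway repository; the paths below are illustrative, and the file name follows the convention discussed here:

```shell
# Set up a scratch repository and point git blame at an ignore file.
git init -q /tmp/demo-repo && cd /tmp/demo-repo
touch .git-blame-ignore-revs           # one bulk-change commit hash per line
git config blame.ignoreRevsFile .git-blame-ignore-revs
git config blame.ignoreRevsFile        # prints: .git-blame-ignore-revs
```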

NB, we could still ponder and discuss this approach at a later point in
time, but perhaps we want to deal with some of these issues in a
different way that cannot be delayed as easily.

WDYT?
- Jelle



Re: Patches that should be applied in the Future

2021-11-02 Thread Jelle Licht
Ludovic Courtès  writes:

> Hi!
>
> Jelle Licht  skribis:
>
>> What can we do to make sure we won't simply forget to apply this and
>> other such changes? 
>
> I’d suggest making this change right away in ‘core-updates’.
We need to override a change that has not landed in core-updates yet; it
only exists on c.u.f. last I checked.

I could merge c.u.f. into c.u. this week and subsequently fix this issue
in either the merge commit itself or a fixup immediately after, if that
is what you are saying.

> After all, the reason we introduced ‘-frozen’ is so we could keep having
> fun on ‘core-updates’ while things are being stabilized elsewhere.

For some definitions of "fun" :-). 

- Jelle



Patches that should be applied in the Future

2021-10-28 Thread Jelle Licht
Hello there guix,

All of this is about core-updates-frozen at revision dac8d013bd1.

`c-ares' in gnu/packages/adns.scm has its tests disabled; on my x86_64
machine, when I re-enable them, the tests for c-ares pass just fine
because of the defined GTEST_FILTER.

With my limited git-foo, it seems that this particular change comes from
merging signed/master back into c.u.f. in e486b2b674b. To me it seems
that (re-?)introducing the `tests? #f' might have been something silly
done by git-merge.

This is still a pretty good situation to be in; we have working tests
that we simply don't run :). Practically, I wonder what the next step
would be to make sure things go well in the future;

We can't 'fix' c.u.f. this late in the cycle without rebuilding pretty
much everything. In my limited understanding of juggling many branches,
it also seems that merging c.u.f. into master (and later, core-updates)
will propagate this `tests? #f'.

What can we do to make sure we won't simply forget to apply this and
other such changes? 

- Jelle



Re: Hero culture among Guix maintainers

2021-05-02 Thread Jelle Licht
Hello Ryan,

tl;dr: (!= 'accountability 'blame), but accountability is essential to
any social endeavour.

Ryan Prior  writes:

> Hey Guix. There's a specific thing I'm motivated to address after the
> recent security incident with the "cosmetic" patches & all the fallout
> of that.

Some proposals and ideas have been put forward on how to prevent
situations such as the recent one from getting to the ugly point they
did. I do understand the value of the proposed ideas, yet I think they
are an orthogonal concern to what actually went on here, and that is
vagueness on the nature of accountability (and trust, to a lesser
extent).

> One way or another, part of the subtext I got from that thread is that
> Mark, an established and respected senior contributor here, believes
> making an error like the one Léo and Raghav made is beneath the level
> of somebody who's entrusted with membership in the committer
> group. That reminds me of a common attitude I've seen in operations at
> a lot of companies, that ops are heroes who take their duties very
> seriously and feel an extreme responsibility to follow procedures and
> avoid using their power destructively.

If you can indulge me a bit and allow me to make a snide remark: I
sincerely hope we hold folks to the standard of not using their power
destructively, in the general case.

What do you think trust is, essentially? To me, it means exactly what
you describe as being a hero; being expected to be accountable for your
actions and commitments. 

That is not to say that anyone has the expectation you won't be making
mistakes; you will, inevitably. But it is your own responsibility to
either decline the accountability or live up to it. Is that scary?
Yes, but so is driving a car the first time, or caring for a newborn, or
operating heavy machinery etc etc.

> That attitude is a liability at any organization, because we're all
> fallible and guaranteed to fault on occasions, but I think especially
> so in a high-trust inclusive atmosphere like what Guix has been
> building. I noticed that Léo joined, got really engaged with improving
> the software, and was quickly accepted as a committer, which I thought
> was really cool. I haven't applied for commit access myself yet, both
> because I have anxiety about acting out of line and thus want more
> time to learn the norms of the community, and also because I feel
> reasonably at ease with the tools and processes for non-committers to
> contribute. But I saw that and thought it was a great sign that a
> committed contributor like Léo was able to gain the group's trust so
> quickly. It's a strength and would be a shame to lose that.

What you describe as hero culture comes across as an uncharitable
interpretation of accountability in general, IMHO. Things only get iffy
if accountability is seen as a zero-sum game; 'someone else's mistakes
are my Golden Ticket to a promotion'. Guix does not seem to be such a
project to me.

> But if everyone who's entrusted with commit access is also expected to
> live up to the heroic responsibilities of the classic ops/sysadmin
> role, then I think we're setting people up for failure. Ops at the
> best companies are guaranteed to make mistakes, and they have the
> cultural training to be Very Sorry about it and Learn Their Lesson and
> so on. But how do we expect every committer to a volunteer open source
> project to behave that way? Blaming a volunteer for a bad commit,
> calling them out on the mat to fess up and say they're sorry, is big
> "blame the intern" energy and it's ultimately not conducive to our
> security as an organization. I think that's still true even if you
> assume good faith and use only factual statements throughout.

Because of language barriers this might not be a clear distinction (at
least in my mother tongue), but accountability in a functional
organisation is not at all about blame. It is about being critical,
reflective, and open-minded about your role and how one can contribute to
reach a desired outcome. Very often, that involves stating "I forked up"
in case you made an honest mistake. From my point of view, folks are
given the benefit of the doubt in most cases in the Guix community.

> It felt to me like Mark was expecting (demanding?) a certain cultural
> performance from Léo: acknowledgement of what was done wrong,
> contrition for his part in it, and a promise not to repeat it. This is
> typical in ops organizations, but it's not necessarily humane, and I
> don't think it's a reasonable expectation in a volunteer project. A
> reexamination of the hero culture among the Guix developers might be
> in order to avoid similar confrontations in the future.

What you refer to as a cultural performance, might be just that, but
it does serve a set of important functions;

1 (short) confirmation that there was no act of bad faith (e.g. "Oops,
  my mistake")

2 clarify continued dedication to the goals of the organisation/group

3 actionable steps to 

Re: npm global prefix needs to be a writeable directory

2020-11-26 Thread Jelle Licht
Ryan Prior  writes:

> On November 26, 2020, Jelle Licht  wrote:
>> On other distros it defaults to a location that is not writable by
>> normal users either;
>
> Indeed I can confirm that Ubuntu node also has this problem.
>
>> Another way folks solved this problem has been using "nvm" which in
>> practice boiled down to [setting a
>> custom global prefix, just managed by nvm now].
>
> I think Guix should work more like nvm than like other distros in this
> case. If this is something we could handle automatically per-profile,
> then that gives us the opportunity to do the right thing and save the
> user some hassle.

As long as we don't change existing behaviour by ignoring custom global
prefixes explicitly requested by the user, this seems fine to me.

Do other language-specific package managers packaged in guix not run
into similar issues?

- Jelle



Re: npm global prefix needs to be a writeable directory

2020-11-25 Thread Jelle Licht
Hey,

Ryan Prior  writes:

> Hi folks! I stumbled across an issue with the node package today and
> wanted to send a report before I forget.
>
> npm assumes that the global prefix is a writeable folder. Operations
> like `npm link` will fail if it isn't. Right now our node package
> doesn't set a prefix, so it defaults to the package's directory in the
> store, which isn't good.

On other distros it defaults to a location that is not writable by
normal users either; it has been considered bad form to install packages
into this default global prefix using sudo for about as long as node has
been released. So I would rather say; "npm expects you to use sudo for
everything" or "npm expects you to manage your global prefix yourself"
:-)

> Maybe the solution is to select a folder inside the user's Guix profile
> (or perhaps in their XDG_CACHE_HOME, if any) and set that explicitly as
> the node global prefix using a profile hook.

Node doesn't do this on other distros either, correct?

> In my case, I ran `npm config set prefix /home/ryan/.cache/npm` as a
> workaround.

Having users set up a valid prefix is already the canonical solution
to this exact problem, so I don't see why Guix preventing you from
doing what you shouldn't be doing in the first place should require
extra patching.

Another way folks solved this problem has been using "nvm" which in
practice boiled down to exactly the same thing (that is, setting a
custom global prefix, just managed by nvm now).

Just my 2 cents though.

- Jelle



Re: wip-node-14 branch

2020-11-16 Thread Jelle Licht
Ludovic Courtès  writes:

> Jelle Licht  skribis:
>
>> As a small extra, I have also worked on getting Timothy Sample's
>> 'binary' npm importer to work with the contemporary guix import and
>> guile-json APIs; I'd like some insight into whether this binary importer
>> could still hold some value for inclusion in guix proper[3]. I could
>> still add this code to the branch as well if there is interest.
>
> I think it would make sense to include this importer.  The manual should
> warn about the fact that it does not yield built-from-source packages
> and perhaps give pointers on how to address that, but it can still be
> useful, and probably even more useful if it’s in Guix proper.
>
> Thoughts?

Agreed! I still need to rework some of the recursive importer-related
plumbing in Guix proper for everything to play nicely with the multitude
of versions of packages that are required for pretty much anything.

 - Jelle



Re: Release: Docker Image? DockerHub? skopeo?

2020-11-04 Thread Jelle Licht
Hello,

zimoun  writes:

> Some days ago, we discussed on #guix about releasing Docker images
> ready-to-use.  And you pointed your project:
>
>   https://gitlab.com/daym/guix-on-docker/

[snip]

> Fourth question for Jelle: do you have time to add examples to the
> Cookbook using the package ’skopeo’?  If not, who could?

I was actually putting that on the back burner for now, as I felt I
first needed some time to understand Danny's `guix-on-docker' better in
order for the skopeo stuff to warrant a cookbook section. The entire
workflow as I use it now can be summarized by the following three-liner:

--8<---cut here---start->8---
REGISTRY_IMAGE=registry.gitlab.com/group/project 
IMG=$(guix pack --save-provenance -f docker --entry-point=bin/hello hello)
skopeo copy --dest-creds USERNAME[:PASSWORD] \
   "docker-archive://$IMG" \
   "docker://${REGISTRY_IMAGE}:latest"
--8<---cut here---end--->8---

> This is a follow-up to the thread, started here [1] and arguments about
> DockerHub and co. there [2] and Ludo’s comments [3].

Our timing is pretty bad here, as Dockerhub updated some of their
policies only two days ago regarding usage limits for both 'anonymous'
and free tier registered users [4]. 

> 1: https://lists.gnu.org/archive/html/guix-devel/2020-09/msg00212.html
> 2: https://lists.gnu.org/archive/html/guix-devel/2020-10/msg00352.html
> 3: https://lists.gnu.org/archive/html/guix-devel/2020-10/msg00352.html

[4]: 
https://www.docker.com/blog/what-you-need-to-know-about-upcoming-docker-hub-rate-limiting/

- Jelle



wip-node-14 branch

2020-10-23 Thread Jelle Licht
Hey Guix!

Depending on whom you ask, I come with what I see as good news! I just
pushed a series of patches with the long-term goal of getting the
Node.js situation in guix unstuck. You may find it on the `wip-node-14'
branch on Savannah.

First things first: most of the hard work has been done by others. I'd
like to specifically thank Timothy Sample for their
`other-node-build-system' and Ryan Prior for getting `esbuild' packaged
and telling me about it. A shout out goes to Giacomo Leidi for making me
grumpy enough with the existing node-build-system to finally sit down
and fix this. I added Timothy as a Co-author on the first commit, so I
hope that is Good Enough for the copyright situation.

The branch contains the following changes:

- A revised `node-build-system', shelling out to `npm' for pretty much all
  phases of building a package. This also solves some inconsistencies
  our way of installing node packages led to before [0]. It makes the
  current npm packages a bit more verbose, but if we ever end up with
  'properly' built npm packages, they should be easy to write.

- For now, I took the most recent Node.js package we had available that
  we could still build from source, and arbitrarily made an alias for it
  dubbed `node-bootstrap'.

- I used `esbuild' and some good old fashioned distro-packager moxie to
  bend the packages we need to run `llhttp's TypeScript and generate C
  files. I have also verified that the generated C files are identical
  to what is distributed upstream.

- I added a libuv-node package that tracks libuv upstream at a pretty
  fast pace. Help wanted on how/where/if we should manage this.

- ... and then I packaged Node.js version 14.14, which should last us
  for a good couple of years :-).


Some open issues/challenges:
- We used to patch out references to bundled dependencies in Node.js; this
  is no longer easily possible in some cases, although I have verified
  that the `--shared-XXX' flags do work; in practice this just means we
  waste some space in storing the sources for Node.js 14.

- There is a wrapper in use by Node.js for libuv called uvwasi; Once [1]
  is closed, we should look into packaging and subsequently unbundling
  this.

- Currently it is not possible to build a shared library of `llhttp'[2]:
  I worked around this by generating both the sources and the
  static library of `llhttp', and copying over our generated
  `llhttp.{c,h}' into the Node.js source tree. It works, but it ain't
  pretty. Once we can build llhttp as a shared library and Node.js
  supports a `--shared-llhttp' configure flag, we should do that.

- There are two (disabled) tests with a "TODO" comment above them. As a
  result of me not being clever enough and my laptop not being fast
  enough to compensate, I have not been able to figure out why these tests
  fail (and if that is a problem in practice).

- There is _a lot_ of almost-duplication going on between our `node' and
  `node-14.14' packages; I don't like it whatsoever.

As a small extra, I have also worked on getting Timothy Sample's
'binary' npm importer to work with the contemporary guix import and
guile-json APIs; I'd like some insight into whether this binary importer
could still hold some value for inclusion in guix proper[3]. I could
still add this code to the branch as well if there is interest.

I won't be able to commit significant chunks of time on my end in the
upcoming month, but I've learned that it makes sense to share once you
have something worth sharing, instead of when you think it's
done. Reviews, tests and improvements very much welcome! I don't think
it makes sense to still target the upcoming release for all of this fun
stuff to be merged into master, but if somebody wants to pick up the
slack and champion that cause; go right ahead!

Thanks!

 - Jelle

[0]: http://issues.guix.gnu.org/41219
[1]: https://github.com/nodejs/node/issues/35339
[2]: https://github.com/nodejs/llhttp/issues/52
[3]: http://logs.guix.gnu.org/guix/2020-10-23.log#123831



Re: hide more output

2020-05-18 Thread Jelle Licht
Pierre Neidhardt  writes:

> Ludovic Courtès  writes:
>
>> So in effect, it wouldn’t display anything upfront, only the size of the
>> stuff downloaded right?  Plus maybe the list of things to build?
>
> I think the list of things to build is a lower level details.  The users
> want to know what's going to end up in their profile, they care less
> about the store.

I care very much about whether this will be a quick 5-minute thing, or
whether I will have to leave my laptop crunching for the next 36 hours.

Of course this information is only useful to me because I happen to know
that e.g. ungoogled-chromium takes ages to build. A new user or one with
slightly less Stockholm syndrome probably can't make many informed
decisions with just a list of "things to build".

 - Jelle



Re: Guix System on a GCE VM -- Success

2020-04-08 Thread Jelle Licht
Hey there,

elaexuo...@wilsonb.com writes:

> To simplify the process I created a script capable of setting up a GCE 
> instance
> from a local qcow2 image, which I am also attaching. The script is intentionally
> not a simple turnkey solution, since you should be somewhat familiar with the
> GCE infrastructure before using it. However, feel free to ask questions if
> anything is unclear.

This all looks pretty nifty! Just to be clear, under which license did
you make this script available?

Thanks!
- Jelle



Working on network-manager-l2tp

2020-03-18 Thread Jelle Licht
Hey guix,

Seeing as a lot of people might be working from home at the moment, it
seems a good moment to look at issues that might prevent us from using
guix for that exact purpose. Right now, I cannot connect to my company
VPN, which makes several processes much more difficult than they need to
be.

The issue with network-manager-l2tp is that because of licensing
incompatibilities, versions built with OpenSSL < 3.0 cannot be
redistributed in binary form. As people informed me previously at [1],
it seems that it is currently not possible to prevent the distribution
of substitutes using guix.

I still would like to get network-manager-l2tp packaged, as well as make
a simple service definition available. A hack that would allow us to
truly disable substitutions of certain store paths would be one
solution. Another option is to simply wait until OpenSSL 3.0 is
released, although this might still take until Q4 of 2020 [3].

Any ideas?

- Jelle


[1]: http://logs.guix.gnu.org/guix/2019-10-15.log#171645
[2]: https://github.com/nm-l2tp/NetworkManager-l2tp
[3]: https://www.openssl.org/blog/blog/2019/11/07/3.0-update/



Re: branch master updated (0aa0e1f -> 9b7f9e6)

2020-02-24 Thread Jelle Licht
Heya,

Ludovic Courtès  writes:

> [...]
> I see reluctance to the proposed changes in
>  (I
> agree with Ricardo’s concerns).
>
> To me, that suggests at least that further discussion would have been
> needed before pushing these three commits.
>
> What should we do now?

Please do not re-break guix search in eshell by reverting 9b7f9e6f9;
I've enjoyed the roughly 2 hours of using it for the first time in what
seems like forever.

There is no reasonable workaround for this issue that I know of that has
been proposed, so let us not make the perfect the enemy of the good.

Thanks,
- Jelle



Getting docker cli-plugins in guix

2020-01-03 Thread Jelle Licht
Hey guix,

Along with many other folks, I find myself in the situation of having to
interact with docker often. The latest docker command line client
supports a way to add new subcommands via a cli-plugins system[1].

This way of working is easy, but relies on building programs and putting
them in kind of random places in either my home directory or the top
level `/usr/...' [2]. I'd much rather have this be self-contained in my
guix-profiles instead.

Patching the docker command line client to look in "the right places"
should IMHO not be too difficult, but finding the right way to go about
this is a different issue.

My current idea is as follows:
- Patch our docker-cli package to respect e.g. `GUIX_DOCKER_PLUGIN_PATH', and 
add it as a search-path
- Attempt to package some docker-cli compatible plugins, which would
install their binaries according to the aforementioned search path.
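A hedged sketch of what the first step could look like as a package variant. The variable name `GUIX_DOCKER_PLUGIN_PATH' comes from the idea above, but the plugin sub-directory and the exact field usage are my assumptions, not something the docker-cli package actually does:

```scheme
;; Sketch only: a docker-cli variant declaring a plugin search path.
;; The "libexec/docker/cli-plugins" directory is an assumption.
(define-public docker-cli/with-plugin-path
  (package
    (inherit docker-cli)
    (native-search-paths
     (list (search-path-specification
            (variable "GUIX_DOCKER_PLUGIN_PATH")
            (files '("libexec/docker/cli-plugins")))))))
```

Plugin packages would then simply install their binaries under that sub-directory of their output.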

The advantages of this approach is that it plays nicely with all the
guix commands. The biggest drawback would be that we would have to
maintain a bespoke patch for docker-cli. Another potential problem would
be any new security issues introduced using this simplistic approach.

I'd appreciate some feedback from anyone using docker, because this does
seem like some fairly low-hanging fruit.

- Jelle

[1]: https://github.com/docker/cli/issues/1534
[2]: The suggested installation method implies downloading some random
fat binaries and copying them to your system as-is. 




Re: “Guix Profiles in Practice”

2019-10-27 Thread Jelle Licht
Konrad Hinsen  writes:

> Maybe we should start a Guix CLI nursery. A project/repository separate
> from Guix itself that contains a copy of the "guix" script under a
> different name ("guixx" for guix-extras?) and with the same interface
> for scripting modules. We could then use this to play collectively with
> ideas, and if something turns out to work well, migrate it to the
> official Guix CLI.
>
> Does this sound like a good idea? Would anyone else participate in such
> an experiment?

This could even work as a proper guix channel, once
http://issues.guix.info/issue/37399 is fixed!



Re: Loading modules built using linux-module-build-system

2019-10-21 Thread Jelle Licht


ping :-)

Jelle Licht  writes:

> Hello Guix,
>
> Not too long ago, the linux-module-build-system was introduced. I ran
> into some code in the wild written by Alex Griffin that defines a
> shepherd service that does the following for a given kernel-module
> package:
>
> - set the LINUX_MODULE_DIRECTORY environment variable to
>   /lib/modules
> - call modprobe on the .ko file (without .ko)
>
> I have verified this way of loading modules to work, but was wondering
> whether we should rather provide a `out-of-tree-kernel-module' service
> of sorts to do this.

I saw a contribution for a kernel module built using
linux-module-build-system today, and I have to wonder, how are people
making use of these modules on their Guix System installation? I
currently use a hacked-together shepherd one-shot service that simply
calls modprobe, but was wondering whether folks have a nicer solution
for this.

> [snip]
> Is there a way by which a service can refer to the
> e.g. `operating-system-kernel' of the operating-system it is embedded
> in?

I actually figured this one out: As long as your service snippet
is directly embedded in the operating-system declaration, one can use:
`(operating-system-kernel this-operating-system)'.
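For context, a minimal sketch of how that looks in a configuration; `my-module-service-type' is a hypothetical service, not something shipped with Guix:

```scheme
(operating-system
  ;; ...host-name, bootloader, file-systems etc. elided...
  (kernel linux-libre)
  (services
   (cons (service my-module-service-type
                  ;; Refers to the kernel of this very declaration.
                  (operating-system-kernel this-operating-system))
         %base-services)))
```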

- Jelle



How to keep biber working

2019-10-21 Thread Jelle Licht
Guix,

I've noticed some time ago that the biber version that we currently have
is incompatible with the texlive revision we have packaged.

Incidentally, since the latest core-updates merge, biber does not pass
its test suite anymore due to the hardcoded
`perl-unicode-collate'-related hashes. I think we need a more
sustainable (and dynamic) fix for this, as this is not the first time
this has happened.

As far as I know, biber has no dependents other than itself. I could
also downgrade it to 2.11 so it works with our texlive packages
again. Should I simply revert the "gnu: biber: Update to 2.12." commit,
or only adjust the [source] and [version] fields of the biber package?

I can push a short-term fix that involves disabling the test suite for
biber nonetheless, as this seems to be required for both version 2.12
and 2.11.
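The short-term fix I have in mind is roughly the following sketch, assuming biber's build system honours `#:tests?' (names and the exact commit shape are mine):

```scheme
(define-public biber/no-tests
  (package
    (inherit biber)
    (arguments
     (substitute-keyword-arguments (package-arguments biber)
       ;; Disable the test suite until the perl-unicode-collate
       ;; hash mismatch has a proper fix.
       ((#:tests? _ #t) #f)))))
```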

Thoughts?




Re: i686-linux GCC package on x86_64

2019-10-14 Thread Jelle Licht
Jelle Licht  writes:
> Would `guix build cross-gcc-i686-unknown-linux-gnu' work?
>

My mail reader did not expand your snippet fully, so I overlooked the fact
that you already overrode the name field. Sorry for the noise!



Re: i686-linux GCC package on x86_64

2019-10-14 Thread Jelle Licht
Pierre Neidhardt  writes:

>[snip]
> --8<---cut here---start->8---
> (define-public cross-gcc
>   (package
> (inherit ((@@ (gnu packages cross-base) cross-gcc)
>   "i686-unknown-linux-gnu"
>   #:libc (cross-libc "i686-unknown-linux-gnu")))
> (name "cross-gcc")))
> --8<---cut here---end--->8---
>
> Then
>
> --8<---cut here---start->8---
> $ guix build cross-gcc
> ...
> guix build: error: cross-gcc: unknown package
> --8<---cut here---end--->8---
>
> Is this expected?

Would `guix build cross-gcc-i686-unknown-linux-gnu' work?

I *think* the guix command line interface uses the package's name field
to resolve the right package objects, not the guile variable name.

In `(gnu packages cross-base)' -> `cross-gcc':

--8<---cut here---start->8---
(name (string-append "gcc-cross-"
 (if libc "" "sans-libc-")
 target))
--8<---cut here---end--->8---





Re: Joint statement on the GNU Project

2019-10-11 Thread Jelle Licht
Taylan Kammer  writes:

> [snip]
>
> All other political conflicts should IMO be decided on a case by case 
> basis with the goal of reaching mutual compromise within the confines of 
> the communication channels of the GNU project.  That is, 1. no favorites 
> on who gets to silence who and 2. the silencing shall be limited to the 
> project's communication channels.  For example let's take homosexuality 
> and religion.  A gay community member could request another member to 
> refrain from expressing religious views critical of homosexuality within 
> the project's communication channels, as it offends her or him.  On the 
> flip side, a religious person could request another member to refrain 
> from expressing political views in support of normalizing homosexuality 
> within society, because that in turn offends them.  

The difference being that in this example, the bigotry can have
disastrous effects on the safety of the individuals in question, sadly
still in many places in the world.

This is in no shape or way comparable to simply "being offended". To
equate it to a simple difference of opinion does a great injustice to
those who struggle, and have struggled in the past for the right to
simply exist as they are.

I understand this is simply an example, and will give you the benefit of
the doubt that you only meant to illustrate different perspectives on
the interactions that can exist between individuals. I respectfully
disagree with it being a good example though :-)

- Jelle



Re: i686-linux GCC package on x86_64

2019-10-04 Thread Jelle Licht
Mathieu Othacehe  writes:

>
> --8<---cut here---start->8---
> (native-inputs
>  `(,@(if (not (string-prefix? "i686" (%current-system)))
>`(("cross-gcc" ,(cross-gcc "i686-unknown-linux-gnu"))
>  ("cross-binutils" ,(cross-binutils "i686-unknown-linux-gnu")))
>'(
> --8<---cut here---end--->8---
>
> that uses the current gcc if you're already building on an i686 system,
> or define and use a cross-gcc targeting i686 systems otherwise.

This snippet might make a lot of sense to seasoned schemers/guixfolk,
with the multiple levels of (un)quoting and what not. It does not seem
like what somebody with little experience in either would think of by
themselves.
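For readers less versed in quasiquotation, here is the same snippet again with my own annotations spelling out what each level does (the closing empty list is how I read the truncated quote above):

```scheme
(native-inputs
 ;; ,@ splices the generated list into the surrounding quasiquoted list.
 `(,@(if (not (string-prefix? "i686" (%current-system)))
         ;; Not on i686: pull in a cross toolchain targeting it.  The
         ;; inner quasiquote builds ("label" . package) entries; the
         ;; unquotes evaluate the cross-gcc/cross-binutils calls.
         `(("cross-gcc" ,(cross-gcc "i686-unknown-linux-gnu"))
           ("cross-binutils" ,(cross-binutils "i686-unknown-linux-gnu")))
         ;; Already on i686: splice in nothing.
         '())))
```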

Would it make sense to have a section/stub in the cookbook about cross
compilation?

- Jelle



Re: Let’s merge ‘core-updates’!

2019-09-28 Thread Jelle Licht
Ludovic Courtès  writes:

> Hello Guix!
>
> The ‘core-updates’ branch is in a good shape now, and I think we should
> go ahead and merge in the coming days!
>
> Please try to upgrade your system and your user profile to see if
> anything’s wrong for you (I did that a few days ago and I’m happy!).
>
> You can test, for example, with:
>
>   guix pull --branch=core-updates -p /tmp/new
>   /tmp/new/bin/guix upgrade

Building gnucash using core-updates fails:

building /gnu/store/zsx2szs631q5p6dsa03zh8x7hqgv9x9p-gnucash-3.5.drv...
'configure' phase: builder for
`/gnu/store/zsx2szs631q5p6dsa03zh8x7hqgv9x9p-gnucash-3.5.drv' failed with
exit code 1
build of /gnu/store/zsx2szs631q5p6dsa03zh8x7hqgv9x9p-gnucash-3.5.drv failed
View build log at 
'/var/log/guix/drvs/zs/x2szs631q5p6dsa03zh8x7hqgv9x9p-gnucash-3.5.drv.bz2'.
guix upgrade: error: build of 
`/gnu/store/zsx2szs631q5p6dsa03zh8x7hqgv9x9p-gnucash-3.5.drv' failed

The specific error message is: Unknown CMake command "check_symbol_exists".
Relevant part of build log:
https://paste.debian.net/1103105/

Reconfiguring my system and upgrading the other packages in my profile
seems to work very well.




Re: Error cross-compiling Mesa: failing test

2019-09-19 Thread Jelle Licht
Pierre Neidhardt  writes:

> Jelle Licht  writes:
>
>> Pierre Neidhardt  writes:
>>
>>> (define mesa32
>>>   (package
>>> (inherit (to32 mesa))
>>> (arguments
>>>  (substitute-keyword-arguments (package-arguments mesa)
>> ^
>> you use (64bit?)
>> mesa here.
>
> What's the problem? O.o

It seems you inherit from the `(to32 mesa)', but then override the
arguments to come from `(package-arguments mesa)', so you always
ignore the `#:system "i686-linux"' in this setup, if I understand
correctly.
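In other words, a fix could look like this sketch: bind the 32-bit variant once, and take both the inherited fields and the arguments from it:

```scheme
(define mesa32
  (let ((base (to32 mesa)))          ; keep a handle on the i686 variant
    (package
      (inherit base)
      (arguments
       ;; Derive the arguments from BASE so that its
       ;; #:system "i686-linux" setting is preserved.
       (substitute-keyword-arguments (package-arguments base)
         ((#:phases phases)
          `(modify-phases ,phases
             (add-after 'unpack 'cross-disable-failing-test
               (lambda _
                 (substitute* "src/gallium/tests/unit/meson.build"
                   (("'u_format_test',") ""))
                 #t)))))))))
```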



Re: Error cross-compiling Mesa: failing test

2019-09-19 Thread Jelle Licht
Pierre Neidhardt  writes:

> (define mesa32
>   (package
> (inherit (to32 mesa))
> (arguments
>  (substitute-keyword-arguments (package-arguments mesa)
^
you use (64bit?)
mesa here.
>((#:phases phases)
> `(modify-phases ,phases
>(add-after 'unpack 'cross-disable-failing-test
>  (lambda _
>(substitute* "src/gallium/tests/unit/meson.build"
>  (("'u_format_test',") ""))
>#t
> --8<---cut here---end--->8---
>
> still produces a x86_64 build of mesa.  Any clue what I'm missing?
>
> -- 
> Pierre Neidhardt
> https://ambrevar.xyz/



Re: Website translation

2019-08-23 Thread Jelle Licht
"pelzflorian (Florian Pelz)"  writes:

> [snip]

> We could of course translate the URLs instead, we would then add a
> procedure url->localized-href that calls gettext to return the
> localized URL for a given URL and replace each
>
> (@ (href "/help"))
>
> by
>
> (@ ,(url->localized-href "/help"))
>
> and add --keyword='url->localized-href' to the call to xgettext.  The
> downside is that the logic for nginx would need to look up the
> translation instead of looking up the locale, that the logic for Haunt
> would need to look up the filename in the localized builder for pages,
> and that the translator would need to translate all of "/help",
> "../help", "../../help" etc.
>
> (There needs to be some mapping from the lingua “es” to the locale
> identifier “es_ES.utf8” in the call to setlocale.  Currently the code
> uses linguas like “es_ES” instead of “es”, which may or may not
> complicate the logic for nginx, but this could easily be changed.)

I think it would be important to make sure that these URLs do not change
after they are published for the first time, in order to make sure that
links to them still work at a later point. See [1] for more elaborate
arguments against changing URLs.

Being able to change (translated) URLs while also having the old URLs
redirecting to the new ones would be nice, but this seems like it would
actually be much more challenging to do with our current setup.

Is there a way in which to state that certain translations should only
be done once, and not change afterwards?

Regards,
Jelle

[1]: https://www.w3.org/Provider/Style/URI




Re: Non-bootstrappable NPM packages

2019-07-24 Thread Jelle Licht
Timothy Sample  writes:

[snip]

> I’ve come to think that bootstrapping JavaScript might be easier than it
> looks.  As time goes on, Node gets better at the newer JavaScript
> features.  This removes the need for things like Babel or Rollup, since
> with some care, Node can run the source directly without any
> transformations or bundling.  That being said, TypeScript looks to be a
> major issue, as it is used in many fundamental JavaScript packages and
> it is not bootstrappable.

Very recently (i.e. about 94 minutes ago), I found out something
interesting that might be helpful; Sucrase[0] is, among other things, a
TypeScript transpiler that does not do any type checking, and it only
has some runtime dependencies.

I created some “fiio”-packages as well [1] , and I have confirmed that
it actually works! My next step was of course to compile TypeScript
proper, and this worked with one tiny snag that I reported at [2]. After
manually fixing these problems in the TypeScript source tree, I was able
to transpile the TypeScript sources using guix-packaged
`node-sucrase-bootstrap'.

> I’m not sure in what capacity I want to pursue this.  It’s been sitting
> dormant on my computer for while, so I thought sharing it would be
> better than letting it fall by the wayside.  I hope it proves useful one
> way or another.
>
> If you got this far, thanks for reading!  :)
Thank you for sending this informative email :)
>
>
> -- Tim

[0]: https://github.com/alangpierce/sucrase
[1]: https://paste.debian.net/1092893/
[2]: https://github.com/alangpierce/sucrase/issues/464



Re: IceWeasel-UXP and IceDove-UXP

2019-07-19 Thread Jelle Licht


Hey guixuser,

guixuser  writes:

> [use this thread]
>
> Budget? Are you serious? Any distro has to focus on their basic things like 
> email and browser for new user; at least if project wants to grow and 
> accommodate new users.

For at least your browsing needs, I know of two options.

Guix currently has IceCat, which is as close to Firefox as you can get
while still being an FSDG distro. I assume that if one is used to
IceWeasel, using IceCat should feel very familiar. The only iffy part is
working with extensions that are not yet available in the IceCat
upstream list of supported web extensions.

There is also an `ungoogled-chromium' package, but be aware that this
can take quite some time to build on your local machine if no binary
substitutes are available.

Getting used to the guix way of doing things can take some time and
effort, but it is IMHO definitely worth the investment. Guix empowers
users to exert control over their computing in a way that few other
programs do.

If you run into any unclear steps in the documentation or are missing
packages, there are usually people available on the mailing list or on
#guix on the Freenode IRC network to help you figure out how to get
something working.




Loading modules built using linux-module-build-system

2019-07-08 Thread Jelle Licht
Hello Guix,

Not too long ago, the linux-module-build-system was introduced. I ran
into some code in the wild written by Alex Griffin that defines a
shepherd service that does the following for a given kernel-module
package:

- set the LINUX_MODULE_DIRECTORY environment variable to
  /lib/modules
- call modprobe on the .ko file (without .ko)
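The two steps above can be sketched as a one-shot shepherd service roughly like this (the package and module names are made up, and the exact service wiring is an assumption):

```scheme
(simple-service 'load-foo-module shepherd-root-service-type
  (list (shepherd-service
         (provision '(load-foo-module))
         (requirement '(udev))
         (one-shot? #t)
         (start #~(lambda _
                    ;; Step 1: point modprobe at the out-of-tree modules.
                    (setenv "LINUX_MODULE_DIRECTORY"
                            (string-append #$foo-module "/lib/modules"))
                    ;; Step 2: load the module by name, without ".ko".
                    (zero? (system* #$(file-append kmod "/bin/modprobe")
                                    "foo")))))))
```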

I have verified this way of loading modules to work, but was wondering
whether we should rather provide a `out-of-tree-kernel-module' service
of sorts to do this.

To resolve out-of-tree kernel module dependencies, I guess we would need
to construct a union of all outputs so we can pass along one value for
LINUX_MODULE_DIRECTORY that contains all out-of-tree modules that might
be needed for one invocation of modprobe.
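To make that union idea concrete, here is a small, purely illustrative
Python sketch (in Guix itself this would presumably be done with
`union-build`; the directory layout and file names below are made up):

```python
import os
import tempfile

def union_modules(*module_dirs):
    """Symlink the contents of several <package>/lib/modules directories
    into one union directory, so that a single LINUX_MODULE_DIRECTORY
    value covers every out-of-tree module one might pass to modprobe."""
    union = tempfile.mkdtemp(prefix="modules-union-")
    for d in module_dirs:
        for entry in os.listdir(d):
            target = os.path.join(union, entry)
            if not os.path.exists(target):  # first package wins on clashes
                os.symlink(os.path.join(d, entry), target)
    return union

# Demo with two fake out-of-tree module trees:
a = tempfile.mkdtemp()
open(os.path.join(a, "foo.ko"), "w").close()
b = tempfile.mkdtemp()
open(os.path.join(b, "bar.ko"), "w").close()
union = union_modules(a, b)
print(sorted(os.listdir(union)))  # ['bar.ko', 'foo.ko']
```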

Another issue is working with custom kernels; Alex' approach allows one
to override the module package by providing an expression that resolves
to a package object, which can use guile's `(inherit my-module)'
approach in combination with `substitute-keyword-arguments' to override
the `#:linux' argument. This works and has the benefit of being pretty
explicit in defining what we want to happen. The drawback is that one
could possibly try to load a module that was built against a different
kernel version than the one in your operating system expression.

Is there a way by which a service can refer to the
e.g. `operating-system-kernel' of the operating-system it is embedded
in?

- Jelle




Re: guix import crate wraps #:cargo-inputs twice

2019-07-02 Thread Jelle Licht
Ivan Petkov  writes:

> [...]
> I tried building some of the crates and most of the errors I saw were 
> something
> like “failed to select a version for the requirement `foo = ^0.6
> candidate versions found which didn’t match: 0.8.5”.
>
> Please note that the crate importer always picks the latest version available
> on crates.io  but existing crates may depend on an earlier 
> version. In this case
> you’ll need to manually pull in the right version and update the package 
> definition
> of the consumer (ideally we can update the importer to be smart about these 
> kinds
> of imports, but this hasn’t been done yet!).

Shameless plug; I have some guile code lying around for dealing
with exactly these problems; `guile-semver'[1] might give you a start
for handling these cases.
>
> Hope this helps,
> —Ivan

[1]: https://notabug.org/jlicht/guile-semver



Re: Guix trademarked by Express Logic

2019-03-11 Thread Jelle Licht
mikadoZero  writes:

> Jelle Licht writes:
>> I really think that "software" is much too broad a category to consider
>> for a trademark clash in this case. From what I can see, there is barely
>> any overlap between our Guix and the GUIX product that Express Logic is
>> working on. This might just be my vocational bias in action as a
>> software engineer though, and of course; I Am Not A Lawyer.
>
> There was likely no overlap between Jade the free software project and
> Jade the company's offerings.  Regardless Jade the company forced Jade
> the free software project to rename itself.  
>

It rather seems that under threat of litigation, the Jade project was
bullied into changing their name. I understand why the Jade maintainers
decided on the name change, but I think the wrong lesson to take home
from this story is to actively enable these litigation-happy companies
to continue this behaviour. Additionally, it seems Guix-the-project has
earlier mentions than GUIX-the-product online as well (but again, IANAL).



Re: Guix trademarked by Express Logic

2019-03-11 Thread Jelle Licht
mikadoZero  writes:

> I have search guix-devel for this and did not find it.  I would like to
> [ snip ] 
Thanks for looking into this.

> # Proactive name change
>
> Looking at the pug thread above shows that it would have been nice if
> Jade had not been forced to change their name so quickly and could have
> engaged it's community further on ideas for a new name.
>
> This raises the idea that proactively changing Guix's name might be
> better than reacting to a forced name change.  A benefit to a proactive
> name change is being able to chose the timing.  So for example the name
> change could be planned to coincide with the 1.0 release which I have
> heard is approaching.  Similar to a butterfly emerging from a
> chrysalis.  Maybe there is a opportunity here and this could be turned
> into a nice announcement.

I humbly disagree with proactively doing anything of the sort; first of
all, there are two separate issues (as you mentioned):
- Are we allowed to call Guix Guix?
- Do we want to call Guix Guix?

As such, I think it is premature to proactively change something which
*might* not even be a (legal) problem at all, let alone something
desired by the community. I *do* agree that these questions should
probably be answered before 1.0 comes around.

> [ snip ]
> # Contacting Express Logic
>
> Also it might be good to reach out to Express Logic as they may not
> actually have any problem with the Guix free software project using the
> name they have trademarked.

I really think that "software" is much too broad a category to consider
for a trademark clash in this case. From what I can see, there is barely
any overlap between our Guix and the GUIX product that Express Logic is
working on. This might just be my vocational bias in action as a
software engineer though, and of course; I Am Not A Lawyer.

>
> # Summary
>
> I am not recommending any specific course of action.  I just want to
> start a discussion.

Point taken :-).



Re: Meeting in Brussels on Wednesday evening?

2019-01-30 Thread Jelle Licht
Apologies for the undesired ML noise.
Because of a last-minute cancellation on the side of my intended
sleeping venue, I also decided to join at ICAB. I have a reservation,
but will only be there around 22.30.

Could one of the attendees at ICAB joining the group at the bar pick up
my keys as well? Contact me if you feel like helping me out, and thanks
in advance.

Regards,
Jelle Licht


Re: 07/07: gnu: emacs-ghub: Update to 3.2.0.

2019-01-11 Thread Jelle Licht


Mark H Weaver  writes:

> Hi Jelle,
> [...]
>
> In toplevel form:
> magit-reset.el:30:1:Error: Cannot open load file: No such file or directory, 
> graphql
> make[1]: *** [Makefile:65: magit-reset.elc] Error 1
> make[1]: Leaving directory 
> '/tmp/guix-build-emacs-magit-2.13.0.drv-0/magit-2.13.0/lisp'
> make: *** [Makefile:72: lisp] Error 2

I am not quite sure what goes wrong, but I do know that ghub and magit
should probably be updated in lockstep. I reverted the patch for now and
will push it once I have verified other things still work. Later I will
update everything at once when emacs-forge is properly released.

BTW, how does our one-change-per-patch policy apply when running into
situations where multiple changes have to be made at once in order to
keep everything building correctly?

Thanks,

- Jelle



Re: Help wanted with recursive npm import returning #f

2018-12-04 Thread Jelle Licht
On Tue, Dec 4, 2018 at 22:07, Jelle Licht wrote:

> Hi swedebugia,
>
> Super swell to see you take an interest in this! :D.
>
> Some points;
>
> It seems you wrote your own sanitize-npm-version, but this is not (at
> all) how npm deals with versions; I implore you to have a look at
> https://semver.org/ again to see what all the silly squiggles mean :).
>
Oops, I meant https://www.npmjs.com/package/semver

>
> [...]
>


Re: Help wanted with recursive npm import returning #f

2018-12-04 Thread Jelle Licht
Hi swedebugia,

Super swell to see you take an interest in this! :D.

Some points;

It seems you wrote your own sanitize-npm-version, but this is not (at
all) how npm deals with versions; I implore you to have a look at
https://semver.org/ again to see what all the silly squiggles mean :).

A general question on your blacklist approach; I like and appreciate
this attempt at 'breaking the cycle', but could we not just as well
define some dummy packages? As far as the importer is concerned, a dummy
package would still be seen as "dependency resolved, my work here is
done", or am I missing an advantage of your approach?

On Tue, Dec 4, 2018 at 21:44, wrote:

> Hi
>
> Introduction
> 
> Inspired by Ricardos commit here I rewrote most of the npm importer.
>
> Added memoization, receive, stream->list, values and rewrote the tarball
> fetcher to use only npm-uri and tarballs from the registry. Additionally
> I implemented handling of scoped packages (e.g. @babel/core).
>

> It contains less lines of code than Jelles importer.
>
And hopefully fewer places for bugs to hide in.


> The single import works and is a lot faster and more reliable than
> before when fuzzy matching on github was used. See it in action:
> http://paste.debian.net/1054384/


I think it is a step in the _wrong_ direction to depend in major ways on
the npm registry, besides for meta-information where needed. Nonetheless,
the fuzzy matching was really brittle as you say, and could have been a
lot faster indeed.


> Caveats:
> 1) we don't know if the registry-tarballs are reproducible.
>
Back in the day, they most definitely were not. Seeing as npm-land does
not put an emphasis on this at all, I think it is unwise to assume that
any reproducibility features they offer today will still be available in
the future.

> 2) filename is the same as the upstream tarball -> we should convert it
> to guix-name.
> 3) we have to download the tarball because sha256 is not among the
> hashes computed by npm. (I emailed n...@npmjs.org to ask for them to
> compute it for all their tarballs :) )
>
The result of the importer will probably be a package that we will be
building in the near future, right?

>
> Help wanted
> ---
>
> There is a bug which only affects the recursive importer. I tried hard
> finding it but I'm in way over my head and my guile-foo does not seem to
> cut it with this one. :)
>

> For recursive output it downloads but outputs #f in the end instead of
> the sexps. See example output: http://paste.debian.net/1054383/
>
> Trying to debug via the REPL I met this:
> scheme@(guile-user) [1]> (npm-recursive-import "async")
> $3 = #
>
> Any ideas?


I think I also ran into this. Could you please make your work available on a
public repo somewhere? It would be easier to look at your changes and play
around with it that way.

>
> --
> Cheers
> Swedebugia


Re: Cyclic npm dependencies

2018-11-24 Thread Jelle Licht
On Sat, Nov 24, 2018 at 16:38, Pjotr Prins wrote:

> On Sat, Nov 24, 2018 at 03:41:35PM +0000, Jelle Licht wrote:
> >Hey swedebugia,
> >I will still send a more elaborate reply to the general npm-importer
> >thread later this week, but we can assume that generally these
> >recursive dependencies can be untangled by looking at the different
> >versions of the dependencies.
> >So in your example, I imagine an input chain like:
> >node-glob 0.1  -> node-rimraf 0.1 -> node-glob 0.2 -> node-rimraf 0.2
> >->  -> node-glob 1.0 -> node-rimraf 1.0
> >While *extremely* annoying to untangle, this is definitely doable.
>
> Appears to me that it would suffice to pick the latest version. In
>
What do you mean? In my specific example, you would need to package
and build each version in succession in order to actually be able to
use recent versions of either of these packages. IOW, you can not
choose any; you need to choose each.

> case there is no clear version info just pick whatever is there.
>
> In general this should work. Any unit tests should show breakage.
>

If only we could run unit tests for most packages. Most test frameworks
have huge lists of transitive dependencies, although not nearly as bad
as the build tooling.


>
> Circular dependencies are (unfortunately) getting more common. Not
> only in npm, but in all ad hoc package managers. I think their
> assumption is too that you pick the latest.
>
I agree that we should expose the latest-and-greatest (and secure)
version of most packages, but we would still need to build older
intermediate releases in order to have reproducible builds from source
in a lot of cases. Whether we should expose these bootstrap packages
as well is another issue. Am I perhaps missing the point you are making?


> Pj.


Re: Cyclic npm dependencies

2018-11-24 Thread Jelle Licht
Hey swedebugia,

I will still send a more elaborate reply to the general npm-importer
thread later this week, but we can assume that generally these
recursive dependencies can be untangled by looking at the different
versions of the dependencies.

So in your example, I imagine an input chain like:
node-glob 0.1 -> node-rimraf 0.1 -> node-glob 0.2 -> node-rimraf 0.2 ->
… -> node-glob 1.0 -> node-rimraf 1.0

While *extremely* annoying to untangle, this is definitely doable.
Problems arise if this chain does not actually exist, which basically
means that we have to hunt down commits [1] which are steps in these
chains. Another complication is the versioning scheme used by many npm
packages, the semver [2] + ranges notation [3]. This makes this kind of
'versioning archeology' even harder to do.
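For a taste of what those range notations mean, here is a minimal Python
sketch of npm's caret ranges only (illustrative: real npm semver also
handles tilde ranges, prerelease tags, partial versions such as `^0.6`,
and more, which this ignores):

```python
def parse(v):
    """Parse a plain 'major.minor.patch' string into a comparable tuple."""
    return tuple(int(x) for x in v.split("."))

def caret_range(spec):
    """Return (lower, upper) bounds for a caret range like '^0.6.0'.

    ^X.Y.Z allows changes that do not modify the left-most non-zero
    component: ^1.2.3 -> >=1.2.3 <2.0.0, ^0.6.0 -> >=0.6.0 <0.7.0,
    ^0.0.3 -> >=0.0.3 <0.0.4.
    """
    major, minor, patch = parse(spec.lstrip("^"))
    if major > 0:
        upper = (major + 1, 0, 0)
    elif minor > 0:
        upper = (0, minor + 1, 0)
    else:
        upper = (0, 0, patch + 1)
    return (major, minor, patch), upper

def satisfies(version, spec):
    low, high = caret_range(spec)
    return low <= parse(version) < high

print(satisfies("0.6.2", "^0.6.0"))  # True
print(satisfies("0.8.5", "^0.6.0"))  # False
```

The last line mirrors the kind of mismatch importers run into: a
dependency spec of `^0.6.0` is not satisfied by a packaged version
0.8.5, because the upper bound is 0.7.0.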

For the case where this chain does exist, I have been working on a
semi-npm-compatible semver parser for guile [4], which I was hoping to
integrate in the npm importer or a standalone tool to assist people
wanting to untangle these dependency chains. The goal would be to
reconstruct the needed versions to package by parsing data in the
package.json files of historic versions of these packages.

HTH,

Jelle

[1]: ... or even more granular, parts of commits!
[2]: https://semver.org/
[3]: https://docs.npmjs.com/misc/semver
[4]: https://git.fsfe.org/jlicht/guile-semver


On Sat, Nov 24, 2018 at 15:10, swedebugia wrote:

> Hi
>
> We need to wonder about what to do with cyclic dependencies.
>
> In the case below node-rimraf has node-glob in input and node-glob has
> node-rimraf in NATIVE-input.
>
> I have no idea how to solve this. Both packages are by the same author.
>
> Same problem with jasmine and jasmine-core
>
> ~/guix-tree/pre-inst-env guix build -K node-rimraf
> allocate_stack failed: Cannot allocate memory
> Warning: Unwind-only `stack-overflow' exception; skipping pre-unwind
> handler.
> allocate_stack failed: Cannot allocate memory
> Warning: Unwind-only `stack-overflow' exception; skipping pre-unwind
> handler.
> allocate_stack failed: Cannot allocate memory
> Warning: Unwind-only `stack-overflow' exception; skipping pre-unwind
> handler.
>
>
> --
> Cheers Swedebugia
>
>


Re: Promoting the GNU Kind Communication Guidelines?

2018-10-31 Thread Jelle Licht
Hello!


HiPhish  writes:

> If you don't want to continue the discussion then so be it, but I cannot leave
> my points misrepresented. When I say "you" I don't necessarily mean you
> personally, but rather the larger discussion. You don't have to respond if you
> don't want to, I believe we have both made our points and it's up to the
> readers to draw their conclusions.
>
> On Wednesday, 31 October 2018 13:46:49 CET you wrote:
>> After this email I'm done with the conversation.  I have tried to
>> provide you with evidence.  You make it clear you have a bone to pick
>> with people concerned with gender equality.  This will go around in
>> circles.
> I have no issue with gender equality, but this is not what feminism is doing.

   ^^
   Good to hear that! I think you can leave debates about the actual or
   intended goals of any feminism movements to mailing lists or other
   platforms devoted to that topic though.


> Let's do an analogy: strong nations are good, fascism promotes strong nations,

  Let's not, as the points being discussed are specific, not abstract,
  and quite real. Analogies have a time and place for being useful, but
  this is not one of them.

> therefore if you believe in a strong nation you are naturally a fascist. Oh,
> those death camps? Well, that's not *real* fascism, that was just Nazism. And
> now we have reached Godwin's law. You presuppose that feminism is acting for a
> good cause (gender equality), so therefore the actions of feminists must be
> good. There is your problem: never listen to what people say, always look at
> what they do (this is a rule for life in general, not just this issue). Of
> course comparing fascism and feminism is a hyperbole, the point is not to look
> at the labels of a group, but at their actions.
>
>> The TUC is the trade union congress.  They are not a feminist
>> organisation.  The Belgian government is not a feminist organization.
>> The Guardian is a newspaper and the EEOC is a US government office.
> You can have a strong political bias and still not be an activist group.
> Organizations cooperate, their members can be friends with one another.
> Happens all the time in all areas.

This confused me. You mean collectives of people are made up of people,
and therefore associate with other people?

>
>> My line of argument above was precisely that this does not only happen
>> in a field with "awkward nerds".  Also I find your assertion that
>> "nerds" are unable to behave decently to other people an insult to
>> myself and "nerds" as a whole.
> Anyone can behave, but anyone can also slip up. And some people slip up more
> often than others. Why? I don't know, I'm not a psychologist, I just know
> that's they way it is. Again, this is not limited to the issue at hand.
> Everyone knows that hitting people is wrong, but some people are more prone to
> losing their tamper then others. Why? Again, I don't know, all I know is that
> you are less likely to be slapped on the head at a university than at a trade
> job.
>
>> I find it shocking you are basically telling people who are being
>> mis-treated by others to just suck it up.
>>
>> It's because of these attitudes I'm glad we have a code of conduct.
> Everyone has hardships to put up with. It's about the severity of hardships.
> This is like looking at workplace accidents and putting a papercut right next
> to a cut to the bone as if they were comparable. If you have a papercut you
> suck it up, put a band aid on it so you don't bleed over the papers and get
> back to work. But if you have a cut to the bone you need the wound to get
> disinfected and stitched up. It would be absurd to say that an office job is
> more hazardous than a construction site job because people in the office 
> suffer
> paper cuts more often. I would rather suffer a hundred paper cuts than one cut
> to the bone.
>
>> Here's the problem with your argument.  These findings are reproduced
>> over and over: women are disproportionately affected by harassment,
>> especially of a gendered kind.  Even if you find an issue with a
>> specific study, the consensus of virtually all these studies find the
>> same thing.
>>
>> You might have better results if you actually pointed to studies that
>> overturned the consensus.  Good luck with that.
> I am not saying these studies cannot be reproduced, I am doubting the severity
> of the issue. If we suppose that certain people tend to slip up more often
> (which I did above) then of course you will find these patterns more often. 
> But
> again, how severe of a problem is Steve making a stupid joke at coffee break?

The problem is not only Steve making a stupid joke; the problem is the
environment that led to Steve thinking it is okay to make statements
like these in the first place. The only way to 'fix' this problem is to
change the environment so that people are less likely to slip up, and to
keep each other honest about (tiny) mistakes that 

Re: Promoting the GNU Kind Communication Guidelines?

2018-10-31 Thread Jelle Licht


Hello,

HiPhish  writes:

> I am really trying to understand the other side here, so please help me out on

Without attributing malice to your statement here, I think it is
disingenuous to talk about "the other side". We are all part of
communities we interact with, there is no need for any additional
"othering" here :).

> this one. Let's say you have two people for the sake of simplicity, we can
> call them Alice and Bob. Alice and Bob hate each other's guts, Alice is
> unwilling to work on the team if Bob stays on the team, but Bob is willing to
> work on the team regardless of Alice. Furthermore, Bob has already worthwhile
> contributions under his belt, whereas Alice has done nothing yet, but she
> might if Bob were to be remove.
>
Again, while some people might be calling out for these cases to happen,
this is not what the discussion is about; _any_ document that describes
our norms and policies is intended to create a welcoming environment,
where anyone can decide to become an active member of the community.

That the means through which this can happen, at its most extreme,
involves actively removing potentially harmful elements from the
community is in that sense a means to achieve these goals.

> And your choice would be to remove Bob from the team. Am I correct so far?

You are correct in the sense that what you state is not really false,
but at the same time also far removed from the actuality of any
realistic social setting.

To me it seems that you only consider what one might call the
"worst-case", and I'd rather state that any community pledge/policy
document is first of all intended to prevent these situations from
arising in the first place, and give the often-powerless some semblance
of equal opportunity to become active participants, while still offering
a safety net if push comes to shove.


> What sense does it make to remove someone who 1) has already a proven track-
> record and 2) has shown that he is willing to control his emotions to
> focus on

Again, if part of this "proven track-record" includes something that
could reasonably be seen as being in direct opposition of our norms as a
community, it would make sense to have an honest and direct dialogue in
order to resolve this situation. In extreme cases, it might still make
sense to exclude harmful elements of the community, even if they are
otherwise considered productive/effective/efficient. Nobody is above the
rules we set ourselves as a community.


> the task, all in the hopes that 3) the other person might perhaps fill in the
> void and 4) already has show to let emotions override work duty, and 5) has a
> track-record of wanting people remove from the project?

If we are going to play an open hand here, number 3 is literally the
goal of having this discussion in the first place: We want *anyone* to
feel like they could fill a perceived void in our community, if they so
choose.

Number 4 seems a very weird point to make. We all have emotions, and
some of us are more in touch with them than others, but somehow
insinuating that having emotions influence you is a bad thing is
confusing me. For me, most of the projects I undertake are labours of
love.

The rudest point I will make; number 5 comes across to me as an almost
hostile way of viewing any critique. If "wanting people removed from the
project" is done for legitimate reasons (after careful consideration),
this is IMHO a good thing. If this does not apply, the people should not
be removed in the first place, so I do not see a problem with opening
each of our own behaviour up to criticism.

>
> Please explain to me how kicking Bob out of the team is supposed to improve
> the project. I am really trying hard to wrap my head around the issue, but
> this logic is entirely alien to me. Wouldn't it make more sense to just tell
> people to keep any personal grudges out of the workplace and carry on? It is
> not that the project management is preventing Alice from joining, she refuses
> out of her own volition.

I appreciate you writing up your thoughts in a concise and clear
manner. I would advise you to consider less of this as cold, reasoned
logic, and look more into the community-building aspects.


* Collaboration is about community.

* Communities are about people, so telling them to leave their "personal
  grudges" at the door is wholly unreasonable.

* Fostering welcoming, productive and even fun environments to do work
  in is an active and on-going task. Just look at most of human history
  to see what happens if this is not an actively sought-after goal.

Kind regards,

Jelle



Re: GSoC update

2018-07-11 Thread Jelle Licht
2018-07-11 0:40 GMT+02:00 Ludovic Courtès :

> Hi Ioannis,
>
> Ioannis Panagiotis Koutsidis  skribis:
>
> > This patch adds initial support for .socket unit files. It does not
> > currently work but is near completion.
>
> Could you expound a bit?  That’s a very short summary for all the sweat
> you’ve put in it.  :-)
>
> Also, what is the patch against?  It’s not against ‘master’; I suppose
> it’s against the previous state of your own branch, do you have a copy
> of your repo on-line?
>
> > During the past month I also worked on a patch that adds signalfd and
> > fiber support but these are currently way too unstable and for that
> > reason I have not included them in this patch.
>
> It’s OK that the thing doesn’t quite work—we knew it was not an easy
> task.  What’s disappointing though is that you didn’t come to us to
> discuss the issues until now.  GSoC is not about working in all
> loneliness separately from the rest of the group; it’s about becoming
> part of the group.
>
> On IRC Jelle and I (and possibly others) offered help on the ‘signalfd’
> issue; I also outlined reasonable milestones (first, only use
> signalfd(2) instead of SIGCHLD, then discuss together what it would take
> to Fiberize the whole thing.)  It’s sad that you remained stuck instead
> of taking this opportunity to discuss it with us.
>

Ioannis, could you perhaps share some of your w.i.p. code regarding
signalfd-based signal handling in guile? Adding to what Ludo'
mentioned, I imagine you are running into some peculiarities regarding
guile's way of handling signals, so I would recommend to start
lurking on #guile if you did not do this before now, so you can interact
with the folks with the most expertise regarding the problems you
might be facing :-)

>
> > From cd260ae65056b53749e7c03f2498a28af2525934 Mon Sep 17 00:00:00 2001
> > From: Ioannis Panagiotis Koutsidis 
> > Date: Tue, 10 Jul 2018 20:03:21 +0300
> > Subject: [PATCH] .socket units
> >
> > ---
> >  modules/shepherd.scm |  44 +++--
> >  modules/shepherd/service.scm | 170 ++---
> >  modules/shepherd/systemd.scm | 354 ---
> >  3 files changed, 368 insertions(+), 200 deletions(-)
>
> The patch changes lots of things and unfortunately, without
> explanations, I do not understand what to do with it.  Like what’s the
> new feature?  How is it used?  What implementation choices were made?
> What’s left to be done?…
>
> Thank you,
> Ludo’.
>
>


Re: GSoC: Adding a web interface similar to the Hydra web interface

2018-07-04 Thread Jelle Licht
2018-07-04 22:54 GMT+02:00 Tatiana Sholokhova :

> Hi all,
>
>
Hi Tatiana,


> I just committed the code I wrote trying to improve pagination. I screwed
> up a bit with the new pagination.
> The problem I encountered is following. If we want to maintain a link to
> the previous page we have to filter the database table entries with to
> types of filters: one with lower bound on the id, and the other with the
> upper bound. As we do so simultaneously with limiting the query output to
> the maximal page size, we need to change the ordering type depending on the
> type of the filter. At the moment I am not sure, what it the best way to
> implement database query with such properties. Could you please take a look
> on the commit and share your ideas on that?
>

> The current implementation of pagination works correctly but it does not
> support link to the previous page (first, and next only).
>

It has been some time since I last had a look at databases, so you have
my apologies in advance if what I am saying does not really apply, or is
even not entirely correct.

You could perhaps have a look at reversing the sort order, and replacing
">" with "<" (and "<" with ">") in your WHERE clauses. The query for the
previous page would be similar to retrieving the next page, but
basically reversing the order in which you page through the results, if
that makes any sense.

If this works, you could also hack together a maybe-efficient query to
retrieve the items for the last page; simply insert the maximum possible
value in your query, and retrieve the previous page with regard to that
maximum value. In the current case, you could enter the highest possible
value any id can have.

If it is possible for new items to show up on previous pages as well (so
with ids that are lower than the highest id), you would need your
sorting and filtering to act on a composite value of e.g. , instead of
on only the id value.
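For what it's worth, the idea sketched above (keyset pagination, with
the ordering reversed for the "previous" link) can be illustrated with a
throwaway SQLite table; all names here are made up, not taken from the
actual web interface code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE builds (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO builds VALUES (?)",
                 [(i,) for i in range(1, 11)])

PAGE = 3

def next_page(after_id):
    # Keyset pagination: filter past the last id seen, ascending order.
    rows = conn.execute(
        "SELECT id FROM builds WHERE id > ? ORDER BY id ASC LIMIT ?",
        (after_id, PAGE)).fetchall()
    return [r[0] for r in rows]

def prev_page(before_id):
    # Same filter reversed: take the PAGE ids just below `before_id` in
    # descending order, then flip them back into ascending display order.
    rows = conn.execute(
        "SELECT id FROM builds WHERE id < ? ORDER BY id DESC LIMIT ?",
        (before_id, PAGE)).fetchall()
    return [r[0] for r in reversed(rows)]

print(next_page(0))       # [1, 2, 3]   first page
print(next_page(3))       # [4, 5, 6]   next page
print(prev_page(4))       # [1, 2, 3]   back to the previous page
print(prev_page(10**9))   # [8, 9, 10]  "maximum value" trick: last page
```

The `prev_page(10**9)` call demonstrates the "insert the maximum
possible value" trick for jumping straight to the last page.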


>
> I have been trying to improve pagination for a while, and I now am
> thinking about starting the parallel work on the implementation of the
> features we listed before. What do you think about it?
>

> Best regards,
> Tatiana

Good luck, and HTH!

- Jelle


Re: Maintaining implementations of similar utility functions like json-fetch

2018-06-10 Thread Jelle Licht
Ludovic Courtès  writes:

> Hey,
>
> Jelle Licht  skribis:
>
>> I basically added the robust features of `json-fetch*' to the exported
>> `json-fetch'
>> instead, and all existing functionality seems to work out as far as I can
>> see.
>
> So are you saying that we can get rid of ‘json-fetch*’?
>
>> I did notice that I now produce hash-tables by default, and some of the
>> existing usages of `json-fetch*' expect an alist instead. What would be a
>> guile-
>> appropriate way of dealing with this? I currently have multiple
>> `(hash-table->alist (json-fetch <...>))' littered in my patch which seems
>> suboptimal,
>> but always converting the parsed json into an alist seems like it might
>> also not be
>> what we want.
>
> Why insist on having an alist?  Perhaps you can just stick to hash
> tables?   :-)
>
> Ludo’.

Hey hey,

Sorry for the delay. Cue the drum roll; Attached is my initial draft of
this patch. I initially wanted to split it up into 2 or more patches, but
could not make this work in a way that I could wrap my head around.

Also, there is yet another 'json-fetch'-like function implemented in
`guix/ci.scm', but I was not sure whether the error-handling facilities
would be applicable there.

Anyway, I am open to comments. I have verified that at least the
(tests of the) importers still work as they did before. After the
comments, I could push it myself if that is okay.
From c60686975df206118c3a26cc9c2cef2a93b2 Mon Sep 17 00:00:00 2001
From: Jelle Licht 
Date: Sun, 10 Jun 2018 20:35:39 +0200
Subject: [PATCH] import: json: Consolidate duplicate json-fetch functionality.

* guix/import/json.scm (json-fetch): Return a list or hash table.
  (json-fetch-alist): New procedure.
* guix/import/github.scm (json-fetch*): Remove.
  (latest-released-version): Use json-fetch.
* guix/import/cpan.scm (module->dist-name): Use json-fetch-alist.
  (cpan-fetch): Likewise.
* guix/import/crate.scm (crate-fetch): Likewise.
* guix/import/gem.scm (rubygems-fetch): Likewise.
* guix/import/pypi.scm (pypi-fetch): Likewise.
* guix/import/stackage.scm (stackage-lts-info-fetch): Likewise.
---
 guix/import/cpan.scm |  9 +
 guix/import/crate.scm|  4 ++--
 guix/import/gem.scm  |  2 +-
 guix/import/github.scm   | 19 ++-
 guix/import/json.scm | 24 +---
 guix/import/pypi.scm |  4 ++--
 guix/import/stackage.scm |  2 +-
 7 files changed, 30 insertions(+), 34 deletions(-)

diff --git a/guix/import/cpan.scm b/guix/import/cpan.scm
index 58c051e28..08bed8767 100644
--- a/guix/import/cpan.scm
+++ b/guix/import/cpan.scm
@@ -88,9 +88,10 @@
   "Return the base distribution module for a given module.  E.g. the 'ok'
 module is distributed with 'Test::Simple', so (module->dist-name \"ok\") would
 return \"Test-Simple\""
-  (assoc-ref (json-fetch (string-append "https://fastapi.metacpan.org/v1/module/"
-module
-"?fields=distribution"))
+  (assoc-ref (json-fetch-alist (string-append
+"https://fastapi.metacpan.org/v1/module/"
+module
+"?fields=distribution"))
  "distribution"))
 
 (define (package->upstream-name package)
@@ -113,7 +114,7 @@ return \"Test-Simple\""
   "Return an alist representation of the CPAN metadata for the perl module MODULE,
 or #f on failure.  MODULE should be e.g. \"Test::Script\""
   ;; This API always returns the latest release of the module.
-  (json-fetch (string-append "https://fastapi.metacpan.org/v1/release/" name)))
+  (json-fetch-alist (string-append "https://fastapi.metacpan.org/v1/release/" name)))
 
 (define (cpan-home name)
  (string-append "http://search.cpan.org/dist/" name "/"))
diff --git a/guix/import/crate.scm b/guix/import/crate.scm
index a7485bb4d..3724a457a 100644
--- a/guix/import/crate.scm
+++ b/guix/import/crate.scm
@@ -51,7 +51,7 @@
   (define (crate-kind-predicate kind)
 (lambda (dep) (string=? (assoc-ref dep "kind") kind)))
 
-  (and-let* ((crate-json (json-fetch (string-append crate-url crate-name)))
+  (and-let* ((crate-json (json-fetch-alist (string-append crate-url crate-name)))
  (crate (assoc-ref crate-json "crate"))
  (name (assoc-ref crate "name"))
  (version (assoc-ref crate "max_version"))
@@ -63,7 +63,7 @@
  string->license)
   '()))   ;missing license info
  (path (string-append "/" version "/dependencies"))
- (deps-json (json-fetch (string-append crate-url name path)))
+ (deps-json (json-fetch-alist (string-append crate-url name path)))

Re: Improving Shepherd

2018-02-15 Thread Jelle Licht


Ludovic Courtès <l...@gnu.org> writes:


> Heya,
>
> Jelle Licht <jli...@fsfe.org> skribis:
>
>> Good news: signalfd seems to work as far as I can see. I am not quite
>> sure how to make it work consistently with guile ports yet though.
>
> Good!  What do you mean by “work with guile ports” though?

It seems that I am running into problems with the way guile handles
signals atm. As far as I understood the good people of #guile on
freenode, guile handles signals with a separate thread that actually
makes sure signal handling is done at the 'right' time. As such, it
seems that there is no easy way to set the mask of blocked signals for
all guile threads.

My approach was to wrap `pthread_sigmask' (initially `sigprocmask') in
combination with a call to `signalfd', but it seems that "my" guile
thread only receives the signal about ~two-thirds of the time. This
only happens when triggering the signal via 'external' means, such as
the kill command. Using the `raise' function from within my guile
repl/program did always reliably trigger events coming in on my
signalfd-based port.

Without being able to block all relevant signals via `pthread_sigmask'
from the other guile threads, it seems very difficult to reliably use
signalfd-based ports to handle signals. Some (ugly) code at [1]
demonstrates this: run the guile script, find the pid of the guile
process via `pgrep', and then send a SIGCHLD signal via `kill -17 PID'.
You should still see the signal handler for the supposedly blocked
signal be triggered.

tl;dr: I cannot seem to block signals from being handled by guile in
some way, which to me seems a prerequisite for using signalfd-based
signal handling. My uneducated guess is that guile needs to support a
way to set signal masks for all threads in order to deal with this.

>> To make use of signalfd, one normally masks signals so that these can
>> be handled via signalfd instead of the default signal handlers; any
>> forked process starts out with the same signal mask, so we would need
>> to make sure to reset the signal mask for spawned processes.
>
> Right, we could do that in ‘exec-command’, which is the central place
> for fork+exec.

Right, this does not seem as difficult as I initially thought. If the
earlier things I mentioned are resolved/worked around, this should be
easy to implement.

> Well, let us know what to do next, then!  :-)
>
> Ludo’.

-Jelle

[1]: https://paste.debian.net/1010454/



Re: Improving Shepherd

2018-02-10 Thread Jelle Licht
Hey all,

2018-02-05 14:08 GMT+01:00 Ludovic Courtès :

> Hello!
>
> [...]
>
> Currently shepherd monitors SIGCHLD, and it’s not supposed to miss
> those; in some cases it might handle them later than you’d expect, which
> means that in the meantime you see a zombie process, but otherwise it
> seems to work.
>
> ISTR you reported an issue when using ‘shepherd --daemonize’, right?
> Perhaps the issue is limited to that mode?
>

Playing around with signalfd(2) for a bit, it seems that implementations
are allowed to coalesce several 'pending' signals into one. In the case
of SIGCHLD, this means the parent process might never be properly
informed of *multiple* signals being received around the same time. Could
it have something to do with this problem as well?

>
> > Concurrency/parallelism - I think Jelle was planning to work on this,
> > but I might be wrong about that. Maybe I volunteered? We're keen to
> > see Shepherd starting services in parallel, where possible. This will
> > require some changes to the way we start/stop services (because at the
> > moment we just send a "start" signal to a single service to start it,
> > which makes it hard to be parallel), and will require us to actually
> > build some sort of real dependency resolution. Longer-term our goal
> > should be to bring fibers into Shepherd, but Efraim mentioned that
> > fibers doesn't compile on ARM at the moment, so we'll have to get that
> > working first at least.
>
> I’d really like to see that happen.  I’ve become more familiar with
> Fibers, and I think it’ll be perfect for the Shepherd (and we’ll fix the
> ARM build issue, no doubt.)
>
> One thing I’d like to do is to handle SIGCHLD via signalfd(2) instead of
> an actual signal handler like we do now.  That would make it easy to
> have signal handling part of the main event loop and thus, it would
> integrate well with Fibers.
>
> It seems that signalfd(2) is Linux-only though, which is a bummer.  The
> solution might be to get over it and have it implemented on GNU/Hurd…
> (I saw this discussion:
> ; I
> suspect it’s within reach.)
>

Good news: signalfd seems to work as far as I can see. I am not quite sure
how to make it work consistently with guile ports yet though.

To make use of signalfd, one normally masks signals so that these can be
handled via signalfd instead of the default signal handlers; any forked
process starts out with the same signal mask, so we would need to make
sure to reset the signal mask for spawned processes.

>
> [...]
>
> Ludo’.
>
>
Jelle


Re: Maintaining implementations of similar utility functions like json-fetch

2018-01-31 Thread Jelle Licht
Hi Ludo',


2018-01-27 17:09 GMT+01:00 Ludovic Courtès <l...@gnu.org>:

> Hello!
>
> Jelle Licht <jli...@fsfe.org> skribis:
>
> > I noticed that there are currently two very similar functions for
> fetching
> > json data; `json-fetch' in (guix import json) and `json-fetch*' in (guix
> > import github).
> >
> > Some things I noticed:
> > - Dealing with http error codes seems to be a bit more robust in
> > `json-fetch*'.
> > - Making sure that (compliant) servers give responses in the proper
> format
> > seems more robust in `json-fetch' due to using Accept headers.
> > - Dealing with the fact that json responses are technically allowed to be
> > lists of objects, which `json-fetch' does not handle gracefully.
> >
> > For this issue specifically, would it make sense to combine the two
> > definitions into a more general one?
>
> Definitely, we should just keep one.  It’s not even clear how we ended
> up with the second one.
>

I even had a third one in my local tree which happened to have a conflict,
which
is how I found out in the first place, so I understand how these things can
happen.

>
> > My more general concern would be on how we can prevent bug fixes only
> being
> > applied to one of several nearly identical functions. IOW, should we try
> to
> > prevent situations like this from arising, or is it okay if we somehow
> make
> > sure that fixes should be applied to both locations?
>
> We should prevent such situations from arising, and I think we do.
>
> The difficulty is that avoiding duplication requires knowing the whole
> code base well enough.  Sometimes you just don’t know that a utility
> function is available so you end up writing your own, and maybe the
> reviewers don’t notice either and it goes through; or sometimes you need
> a slightly different version so you duplicate the function instead of
> generalizing it.
>
> Anyway, when we find occurrences of this pattern, we should fix them!
>

I basically added the robust features of `json-fetch*' to the exported
`json-fetch' instead, and all existing functionality seems to work out
as far as I can see.

I did notice that I now produce hash tables by default, and some of the
existing usages of `json-fetch*' expect an alist instead. What would be
a guile-appropriate way of dealing with this? I currently have multiple
`(hash-table->alist (json-fetch <...>))' littered in my patch, which
seems suboptimal, but always converting the parsed json into an alist
seems like it might also not be what we want.


> Thanks,
> Ludo’.
>

- Jelle


Re: Dinner in Brussels?

2018-01-30 Thread Jelle Licht
Michiel and I would also like to be there on Friday.

2018-01-30 14:20 GMT+01:00 Amirouche Boubekki 
:

> In for dinner on Friday.
>
> Le 30 janv. 2018 2:11 PM, "Gábor Boskovits"  a
> écrit :
>
>> 2018-01-30 14:09 GMT+01:00 Julien Lepiller :
>>
>>> Le 30 janvier 2018 13:34:46 GMT+01:00, l...@gnu.org a écrit :

 Hello Guix!

 To those going to the Guix workshop in Brussels this Friday: who’s in
 for dinner (+ drink) on Friday evening?

 Even better: who would like to book something (I’m looking at you,
 Brusselers ;-))?

 Actually I’m arriving on Thursday afternoon, so if people are around,
 I’d be happy to have dinner on Thursday evening too!  :-) Let’s arrange
 something.

 Ludo’.


>>> I'm in for dinner!
>>>
>>
>> I'm in for dinner also!
>>
>>> --
>>> Envoyé de mon appareil Android avec Courriel K-9 Mail. Veuillez excuser
>>> ma brièveté.
>>>
>>
>>


Re: Improving Shepherd

2018-01-29 Thread Jelle Licht
2018-01-29 22:14 GMT+01:00 Carlo Zancanaro :

> I'm keen to do some work on shepherd. Partially this is driven by me using
> it to manage my user session and having it not always work right, and
> partially this is driven by me grepping the code for "FIXME" (which was
> slightly overwhelming). If anyone is keen to chat about it on Friday,
> please find me! I have some ideas about things I'd like to do, but I don't
> really have any idea what I'm doing. Any help/advice/encouragement you can
> give me will be appreciated!
>

Count me in! I am currently not using GNU Shepherd for my user session yet,
but would like to collaborate on some future direction on making it easier
to use. I'll only be there after/around lunch though ;-).

>
> Carlo
>
- Jelle


Maintaining implementations of similar utility functions like json-fetch

2018-01-26 Thread Jelle Licht
Hello!

I noticed that there are currently two very similar functions for fetching
json data; `json-fetch' in (guix import json) and `json-fetch*' in (guix
import github).

Some things I noticed:
- Dealing with http error codes seems to be a bit more robust in
`json-fetch*'.
- Making sure that (compliant) servers give responses in the proper format
seems more robust in `json-fetch' due to using Accept headers.
- Dealing with the fact that json responses are technically allowed to be
lists of objects, which `json-fetch' does not handle gracefully.

For this issue specifically, would it make sense to combine the two
definitions into a more general one?

My more general concern would be on how we can prevent bug fixes only being
applied to one of several nearly identical functions. IOW, should we try to
prevent situations like this from arising, or is it okay if we somehow make
sure that fixes should be applied to both locations?

-- 
Jelle


Re: [RFC] A simple draft for channels

2018-01-19 Thread Jelle Licht
2018-01-19 9:24 GMT+01:00 Ricardo Wurmus :

> Hi Guix,
>
> I’d like to retire GUIX_PACKAGE_PATH as the main way to get third-party
> packages, because we can’t really keep track of packages that were added
> or redefined in this way.  I want to replace it with slightly more
> formal “channels”.
>
> As a first implementation of channels I’d just like to have a channel
> description file that records at least the following things:
>
> * the channel name (all lower case, no spaces)
> * a URL from where package definitions can be loaded (initially, this
>   can be restricted to git repositories)
>
> Optional fields:
>
> * a description of the channel
>
> * a URL from where substitutes for the packages can be obtained (this
>   will be backed by “guix publish”)
>
> * a mail address or URL to contact the maintainers of the channel, or to
>   view the status of the channel
>
> * the Guix git commit that was used when this channel was last
>   updated.  This is useful when Guix upstream breaks the ABI or moves
>   packages between modules.
>
> On the Guix side we’d need to add the “guix channel” command, which
> allows for adding, removing, updating, and downgrading channels.  Adding
> a channel means fetching the channel description from a URL and storing
> state in ~/.config/guix/channels/, and fetching the git repo it
> specifies (just like what guix pull does: it’s a git frontend).  It also
> authorizes the substitute server’s public key.
>
> Internally, it’s just like GUIX_PACKAGE_PATH in that the repos are used
> to extend the modules that Guix uses.  Unlike GUIX_PACKAGE_PATH,
> however, we now have a way to record the complete state of Guix,
> including any extensions: the version of Guix and all active channels
> with their versions.  We would also have a way to fetch substitutes from
> channels without having to “globally” enable new substitute servers and

> authorize their keys.

[...]

> (Is this safe?  Can we have per-user extensions
> to the set of public keys that are accepted?)
>

I am not sure, but I think we need to be able to ensure that these 'new'
substitute servers will only be used to get substitutes for the
derivations in that specific channel.

I am not sure how easy it will be to make sure this will be the case,
but I guess we do not want to give any user-defined channel the
possibility to 'overwrite' substitutes for existing derivations from
system-trusted substitute servers.

> Downsides: Guix has no stable ABI, so channels that are not up-to-date
> will break with newer versions of Guix.  Moving around packages to
> different modules might break channels.  That’s okay.  It’s still an
> improvement over plain GUIX_PACKAGE_PATH.
>
We might be able to mitigate this by using Semantic Versioning [1] on a
best-effort basis. Perhaps (some) changes to the ABI could even be
picked up and warned about by a tool not unlike the one used to generate
the package listings for new releases. I am thinking of things like:
- A package was renamed (so the previously named version no longer exists)
- A package was moved



> I don’t think it has to be more complicated than that.  What do you
> think?
>
> --
> Ricardo
>
In general, I like it and would love to play around with this soon.

- Jelle

[1]: https://semver.org/


Re: license naming

2017-12-22 Thread Jelle Licht

ng0  writes:

> I've just read this link: 
> https://www.fsf.org/blogs/rms/rms-article-for-claritys-sake-please-dont-say-licensed-under-gnu-gpl-2
>
> Full Quote:
>
>> In this article, For Clarity's Sake, Please Don't Say "Licensed under GNU 
>> GPL 2"!, Free Software Foundation president Richard Stallman (RMS) explains 
>> how to properly identify what GNU license your work is under. Whenever a 
>> developer releases their work under a GNU license, they have the option to 
>> either release it under that version of the license only, or to make it 
>> available under any later version of that license. This option ensures that 
>> software can remain compatible with future versions of the license. But what 
>> happens if someone just says their program is under GNU GPL version 2, for 
>> example?
>>
>>>[T]hey are leaving the licensing of the program unclear. Is it released 
>>> under GPL-2.0-only, or GPL-2.0-or-later? Can you merge the code with 
>>> packages released under GPL-3.0-or-later?
>>
>> Thus, it is vitally important that developers indicate in their license 
>> notices whether they are licensing their work under that version "only" or 
>> under "any later version." Of course, these days it is also helpful for 
>> license notices to be machine-readable. The Software Package Data Exchange 
>> (SPDX) specification sets a standardized way of identifying licenses on 
>> software packages. They are updating their license identifiers to include 
>> this distinction in their upcoming version. For example, for GNU GPL version 
>> 2, the identifiers are now "GPL-2.0-only or GPL-2.0-or-later." The old 
>> identifiers (e.g. "GPL-2.0") are now deprecated and should no longer be 
>> used. Based on the changes SPDX says are coming in the SPDX specification 
>> and its Web site, the FSF expects to endorse the new version of the SPDX. We 
>> thank SPDX and their community for making these helpful changes.
>
>
> Maybe we could make use of what https://spdx.org/licenses/
> provides. I didn't compare the names with our names, I'll do
> this on the train next week.
> Good idea, bad idea?

We already have a `spdx-string->license' function in
`(guix import utils)', in case you need a starting point. It
makes sense to me to use a de facto way of referring to licenses,
but I am not sure whether this has some disadvantages compared to the
currently used way of referring to licenses.

- Jelle




Re: Dualbooting with guixsd not handling grub installation

2017-12-09 Thread Jelle Licht
2017-12-10 0:35 GMT+01:00 Martin Castillo :

> Hi guixers,
>
> I want to dualboot into GuixSD. My main os is currently NixOS.
> Currently, I don't want to let guixsd control my grub setup. So my
> situation is similar to [1].
>
> One solution is to use the unreliable chainloading with blocklists by
> invoking grub-install --force /dev/sda3 after every guix system
> reconfigure config.scm. (The config.scm has sda3 as grub target.)
>
> The second (and IMHO the right) solution I am aware of is adding the
> following in the grub.cfg which is handled by nix:
>menuentry "GuixSD - Configloader" {
>  configfile (hd0,gpt3)/boot/grub/grub.cfg
>}
>
> This way, grub loads the newest grub config file created from GuixSD.
> There is only a minor annoyance:
> guix system reconfigure config.scm returns non-zero and spits out an
> error (because grub-install wants --force to use blocklists). But it
> succeeds in everything else, especially in creating a new
> /boot/grub/grub.cfg.
> The alternative (guix system reconfigure --no-bootloader config.scm)
> doesn't update /boot/grub/grub.cfg.
> I'd like to have a way to have /boot/grub/grub.cfg updated without
> reinstalling grub on the disk/partition and without having a command
> return non-zero.
>
> This could be done by adding a cli argument for reconfigure or allowing
> an empty string in (grub-configuration (target "")).
>
> WDYT?
>
> Martin Castillo
>
>
> [1]: https://lists.gnu.org/archive/html/guix-devel/2014-12/msg00046.html
> --
> GPG: 7FDE 7190 2F73 2C50 236E  403D CC13 48F1 E644 08EC
>

This seems like a useful change. I am currently running into a similar issue
using GuixSD on a laptop with libreboot, in a way similar to what is done at
[2].
Reading your email just now reminded me that living in mediocrity is
something that can be changed when you run only/mostly free software :-).

Maybe the orphaned patch at [3] can be ad{o,a}pted to address both of these
use-cases?

- Jelle

[2]: https://lists.gnu.org/archive/html/help-guix/2017-04/msg00083.html
[3]: https://lists.gnu.org/archive/html/guix-devel/2016-02/msg00116.html


Re: Automatically checking commit messages

2017-09-21 Thread Jelle Licht
2017-09-20 17:47 GMT+02:00 Arun Isaac :

>
> I have been working on a guile script to automatically check commit
> messages -- something like `guix lint' but for commit messages instead
> of package definitions. This could help us enforce our commit message
> guidelines and avoid screw-ups like the one I did in commit
> 1ee879e96705e6381c056358b7f426f2cf99c1df. I believe more automation is
> essential and would help us scale better if/when we have more people
> with commit access.
>
Hear, hear.

>
> I have taken the following approach with my script: I have devised a
> grammar (shown below) to parse commit messages. Once the parser outputs
> a parse tree for the commit message, we can apply any number of checks
> on it.
>
> Do you think this is a good approach? If so, I shall proceed with the
> work, and complete the script. If not, what other approach would be
> good?
>

Nice. What parts of the commit message guidelines do you expect to be
verifiable, and which parts do you think will be too hard/restrictive to
automatically verify?

>
> Note that the grammar shown below is incomplete and buggy. Do ignore
> that for now.
>
Alright. If you haven't done so already, adding test cases (from
existing proper and improper commit messages) would ease understanding.

>
> (define-peg-string-patterns
>   "commit <-- module S module S summary NL NL (description NL NL)?
> changelog signature?
> module <-- (!C !S .)* C
> summary <-- (!NL .)*
> description <-- (!(NL NL) .)*
> changelog <-- entry*
> entry <-- bullet S file S section C S change
> file <-- word
> section <-- LR (!RR .)* RR
> change <-- (!(bullet / (NL NL)) .)*
> signature <-- signedoffby signatory S email
> signedoffby < 'Signed-off-by: '
> signatory <-- (!' <' .)*
> email <-- LA (!RA .)* RA
> word <- (!S .)*
> S < ' '
> C < ':'
> NL < '\n'
> bullet < '*'
> LR < '('
> RR < ')'
> LA < '<'
> RA < '>'")


If this works, I would love for it to be a commit hook. Thanks for looking
into this!

- Jelle


Re: collaboration from students of a technical school

2017-07-20 Thread Jelle Licht
It could also be interesting to have the more software-engineering/devops
focused students look at improving the Guix QA process.

They could work on technical solutions for making things more robust, so
as to make sure `master' is broken less often ;-). I have no specific
ideas on how to approach this though.

- Jelle



2017-07-20 16:59 GMT+02:00 Ludovic Courtès :

> Hello,
>
> (Sorry for the late reply, Quiliro!)
>
> Alex Vong  skribis:
>
> > I cannot speak for others on what are most needed in Guix. But here is a
> > page summarizing project ideas for Google Summer of Code 2017:
> > 
>
> Yes, that’s a good start for developers.
>
> For translators, there are pointers on
> .  Basically, translation is handled
> by the Translation Project (TP).  On the Guix side we simply grab
> updated translations (PO files) when the TP sends them to us.
>
> HTH!
>
> Ludo’.
>
>


Re: npm (mitigation)

2017-07-14 Thread Jelle Licht
2017-07-15 5:34 GMT+02:00 Mike Gerwitz <m...@gnu.org>:

> On Fri, Jul 14, 2017 at 13:57:30 +0200, Jelle Licht wrote:
> > Regardless, the biggest issue that remains is still that npm-land is
> > mired
> > in cyclical dependencies and a fun-but-not-actually unique dependency
> > resolving scheme.
>
> I still think the largest issue is trying to determine if a given
> package and its entire [cyclic cluster] subgraph is Free.  That's a lot
> of manual verification to be had (to verify any automated
> checks).  npm's package.json does include a `license' field, but that is
> metadata with no legal significance, and afaik _defaults_ to "MIT"
> (implying Expat), even if there's actually no license information in the
> repository.


And that is exactly why this probably won't end up in Guix proper, at
least for the foreseeable future. It is also the reason that the entire
npm situation is so sad.

The default MIT/Expat only applies to people who generate their package
metadata via npm init by just pressing enter; IANAL, but directly
referring to a valid and common SPDX identifier is not that different
from including some file under the name of LICENSE/COPYING.

It is true that lots of npm projects do not include copyright and/or
license headers in each source file, but this is also true for lots of
other free software.

- Jelle


Re: npm (mitigation)

2017-07-14 Thread Jelle Licht
Hi Catonano,

I would be be happy to help you with this, but tbh, I am not comfortable
discussing this in-depth on guix-devel, as this seems antithetical to Guix'
goals.
All I will say here is that you need to adapt the npm importer to use the
sources from the npm registry instead of resolving to any 'upstream' urls.
I believe Jan's importer was already able to do this last time I checked,
so you might really only need to check out their branch and rebase on
current master.

Regardless, the biggest issue that remains is still that npm-land is mired
in cyclical dependencies and a fun-but-not-actually-unique dependency
resolving scheme. I am currently working on a guile version of what Sander
did for Nix for importing entire npm dependency trees, but this will
likely lead to lots of programmatically defined packages instead of the
guix approach of mostly-manually defining each package. It might therefore
be a good candidate for a guix channel, if that is still being worked on.

Good luck!

- Jelle

2017-07-14 10:52 GMT+02:00 Catonano :

> I read that Jelle and Jan used their own branch in order to have npm based
> software to be installed in their GuixSD environments, as binary blobs
>
> Can I ask you for instructions about how to do that exactly ?
>
> I might need to work a litle on some web siites in the future
>
> Which branch exactly did you use, and how, exactly ?
>
> Thanks
>


Re: On merging the npm importer

2017-03-30 Thread Jelle Licht
2017-03-29 19:39 GMT+02:00 Christopher Allan Webber 
:

> Jan Nieuwenhuizen writes:
>
> > Hi,
>
> Hi Jan!
>

Hello Jan and Christopher!

>
> > We have a working importer for npm packages written by Jelle that I have
> > been using for about half a year.  It can use some improvements and
> > that's why I think we should merge it.
> >
> > Have alook at my npm branch here, rebased on master
> >
> > https://gitlab.com/janneke/guix
>
> Would like to review soon, though I'll say that I think unless there are
> serious problems, we should probably merge it.  Avoiding bitrot is prety
> important, and at the very least I don't think it will hurt to have it
> merged.
>

> > I added a patch with several fixes for the importer and and build
> > system.  So far, so good.
> >
> > There's a problem however with the --recursive option and the build
> > system.  To quote Jelle[1]
> >
> >To start of with something that did not work out as well as I had
> >hoped, getting a popular build system (e.g. Gulp, Grunt, Broccoli and
> >others) packaged.  As mentioned in my earlier mails, the list of
> >transitive dependencies of any of these suffer from at least the
> >following:
> >
> >- It is a list with more than 4000 packages on it
> >- It is a list with at some point the package itself on it
> >
> > Most nontrivial npm packages use a build system, and all build systems
> > have circular development dependencies.  Not all development
> > dependencies are always required to build a package, but some certainly
> > are and there's no way to tell which is which, afaik.
> >
> > That's why I added a --binary option to the importer: it will not
> > try to use the build system and instead mimic `what npm does.'  This
> > does provide, however, an amazing reproducibility feature to the
> > dependency woes that npm hackers are familiar with.
> >
> > I suggest to not add any npm package to Guix that is the result of using
> > the --binary option and to build a base of full-source/sanitized npm
> > packages.
>
> Cool... makes sense to me to have this as something we don't use for
> Guix packages, but which might make Guix more useful for people who have
> to use npm in the awkward "real world" that is the current state of npm.
>

As one of these people living in the "real world", this is exactly how I
have been using the importer up till now.
I like and agree with most of your changes as they make the code much more
robust in the face of inevitable failure.

Nonetheless, one could say that we should not make it too easy to
inadvertently create package specifications for 'binaries'.

One tiny improvement might be to use `spdx-string->license` from (guix
import utils), instead of duplicating this effort in the npm importer.

How would you propose we get to reviewing your code? Would you care to send
some patches, or should we bother you via gitlab a bit more?

>
> > Greetings, janneke
> >
> > [1] https://lists.gnu.org/archive/html/guix-devel/2016-08/msg01567.html
>
>
Regards,

jlicht


[PATCH] gnu: node: Update to 7.8.0.

2017-03-30 Thread Jelle Licht
Hi,

Attached you will find a (simple) patch to update Node.js to the latest
released version.

Regards,
jlicht
From fe46d754e61c776e46d59f72f5fc2bed5a0a177e Mon Sep 17 00:00:00 2001
From: Jelle Licht <jli...@fsfe.org>
Date: Thu, 30 Mar 2017 15:57:59 +0200
Subject: [PATCH] gnu: node: Update to 7.8.0.

* gnu/packages/node.scm (node): Update to 7.8.0.
---
 gnu/packages/node.scm | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/gnu/packages/node.scm b/gnu/packages/node.scm
index 2df7816b59..ed8e7338bf 100644
--- a/gnu/packages/node.scm
+++ b/gnu/packages/node.scm
@@ -38,14 +38,14 @@
 (define-public node
   (package
 (name "node")
-(version "6.8.0")
+(version "7.8.0")
 (source (origin
   (method url-fetch)
   (uri (string-append "http://nodejs.org/dist/v" version
   "/node-v" version ".tar.gz"))
   (sha256
(base32
-"0lj3250hglz4w5ic4svd7wlg2r3qc49hnasvbva1v69l8yvx98m8"))
+"1nkngdjbsm81nn3v0w0c2aqx9nb7mwy3z49ynq4wwcrzfr9ap8ka"))
   ;; https://github.com/nodejs/node/pull/9077
   (patches (search-patches "node-9077.patch"))))
 (build-system gnu-build-system)
@@ -62,6 +62,7 @@
  ;; Fix hardcoded /bin/sh references.
  (substitute* '("lib/child_process.js"
 "lib/internal/v8_prof_polyfill.js"
+"test/parallel/test-child-process-spawnsync-shell.js"
 "test/parallel/test-stdio-closed.js")
(("'/bin/sh'")
 (string-append "'" (which "sh") "'")))
-- 
2.11.1



[PATCH] gnu: Add libtorrent-rasterbar.

2017-01-22 Thread Jelle Licht
* gnu/packages/bittorrent.scm (libtorrent-rasterbar): New variable.
---
 gnu/packages/bittorrent.scm | 40 
 1 file changed, 40 insertions(+)

diff --git a/gnu/packages/bittorrent.scm b/gnu/packages/bittorrent.scm
index 43ec087bf5..0340b1c874 100644
--- a/gnu/packages/bittorrent.scm
+++ b/gnu/packages/bittorrent.scm
@@ -5,6 +5,7 @@
 ;;; Copyright © 2016, 2017 Efraim Flashner <efr...@flashner.co.il>
 ;;; Copyright © 2016 Tomáš Čech <sleep_wal...@gnu.org>
 ;;; Copyright © 2016 Tobias Geerinckx-Rice <m...@tobias.gr>
+;;; Copyright © 2017 Jelle Licht <jli...@fsfe.org>
 ;;;
 ;;; This file is part of GNU Guix.
 ;;;
@@ -29,6 +30,7 @@
   #:use-module (guix build-system glib-or-gtk)
   #:use-module ((guix licenses) #:prefix l:)
   #:use-module (gnu packages adns)
+  #:use-module (gnu packages boost)
   #:use-module (gnu packages check)
   #:use-module (gnu packages compression)
   #:use-module (gnu packages crypto)
@@ -326,3 +328,41 @@ the distributed hash table (DHT) and Peer Exchange.  Hashing is multi-threaded
 and will take advantage of multiple processor cores where possible.")
 (license (list l:public-domain  ; sha1.*, used to build without OpenSSL
l:gpl2+  ; with permission to link with OpenSSL
+
+(define-public libtorrent-rasterbar
+  (package
+   (name "libtorrent-rasterbar")
+   (version "1.0.10")
+   (source (origin
+(method url-fetch)
+(uri
+ (string-append
+  "https://github.com/arvidn/libtorrent/releases/download/libtorrent-"
+  "1_0_10" "/libtorrent-rasterbar-" version ".tar.gz"))
+(sha256
+ (base32
+  "0gjcr892hzmcngvpw5bycjci4dk49v763lsnpvbwsjmim2ncwrd8"
+   (build-system gnu-build-system)
+   (arguments
+`(#:configure-flags
+  (list (string-append "--with-boost-libdir="
+   (assoc-ref %build-inputs "boost")
+   "/lib")
+"--enable-python-binding"
+"--enable-tests")
+  #:make-flags (list
+(string-append "LDFLAGS=-Wl,-rpath="
+   (assoc-ref %outputs "out") "/lib"
+(inputs `(("boost" ,boost)
+  ("openssl" ,openssl)))
+(native-inputs `(("python" ,python-2)
+ ("pkg-config" ,pkg-config)))
(home-page "http://www.rasterbar.com/products/libtorrent/")
+(synopsis "Feature complete BitTorrent implementation")
+(description
+ "libtorrent-rasterbar is a feature complete C++ BitTorrent implementation
+focusing on efficiency and scalability.  It runs on embedded devices as well as
+desktops.")
+(license l:bsd-2)))
+
+
-- 
2.11.0




Re: jquery 3.1.1

2017-01-19 Thread Jelle Licht
Hello Catonano,

2017-01-19 21:48 GMT+01:00 Catonano <caton...@gmail.com>:

> I made a crawler and I let it loose on the jquery 3.1.1 dependencies on
> registry.npmjs.com
> It recursively fetched the dependencies of jquery 3.1.1, then the
> dependencies of the dependencies, then the dependencies of the dependencies
> of the dependencies... and so on
>
> Until there were no more dependencies to fetch.
>
> It stored all such dependencies in a graph database made by the amazing
> amz3 for their project, Culturia. They wrote about such project several
> times on the guile-users mailing list.
>
> It took days to download. And it took me months to produce a working
> version of this code. Because I don't know Guile very well and maybe I'm
> not that smart overall.
>
> Anyway, now I have a COMPLETE graph of the dependencies of jquery 3.1.1
>

These pictures are very informative indeed. I will try to be brief, but I
quickly wanted to share
that I find your efforts (and results!) amazing.


>
> It's made of
> 47311 vertices and
> 324569 edges
>
> I made a graph of a subset (graphviz chocked a bit on this ;-) )
> It's here
> http://catonano.altervista.org/grafo.svg
>
> There are 448 "broken" packages. Probably I should explain how I
> classified packages as "broken". Maybe not in this email.
>
> Anyway, these broken packages pose a challenge to the mission of porting
> Jquery into Guix, in my opinion, and they should be considered with some
> attention.
> Here's the list
> http://catonano.altervista.org/broken_packages.txt
>

Something of note: a big chunk of the packages you classified as
'broken' brings back some (unpleasant) memories; in my own crawling
experiments, which were not nearly as complete as yours, I ran into a
lot of the same show-stoppers. In a very real sense, resolving node
dependencies quickly devolves into resolving dependencies for most of
the popular build systems, as well as for plugins like broccoli-funnel.
What I am trying to get at here is that fixing and packaging these 448
packages will likely contribute a lot towards making an npm-enriched
guix possible.


>
>
> There are 1314 packages with NO dependencies that could be used as
> starting points in porting Jquery into Guix.
> Here's the list
> http://catonano.altervista.org/broken_packages.txt
>

These could probably make use of the npm importer I worked on earlier.
Do you make use of your own? Otherwise I'll get to rebasing my version
on the current guix master branch.


>
>
> If there's anyone interested, I can give you the data folder so you can
> try all the queries you want on these data without having to to run this
> thing for a bunch of hours
>

If possible, yes please. What would be the most convenient way for you
to share this data?


>
> In the future, I'd like to run this thing on some other package and merge
> the graphs so I will be able to investigate which are the common
> fundamental dependencies for SEVERAL important packages in Nodejs.
>
> So if someone wants to dedicate time to porting Nodejs stuff in Guix they
> will be able to select most urgent packages to start from.
>
> The same could be said of broken packages that affect several important
> packages.
>
> The porting of Nodejs in Guix cannot be done with brute strength. A data
> oriented approach can help, in my opinion.
>
> Indeed.


> The ideal would be to have something that, like bitcoin, coordinates a
> swarm in such a way that every node can contribute a tiniy bit of data to a
> common data structure, so all the nodes would have a complete copy of the
> database.
>
> Collecting a mantaining of datasets should be freed of the client server
> model too. Not only the social media.
>
> I have no idea what you are referring to. Could you please elaborate a bit
at a later point in time?


> But that's more than I can handle, anyway.
>

I am already thankful

>
> I'd like to talk about the stumbling blocks I run into to discuss Guile
> and my knowledge of it.
>
> For example, I can't use that thing in the autotools that processes
> configure.am files so I just forked amz3's project and added my files in
> there. As guests. Thanks amz3 !
>
> I'd also like to describe te screw ups in the format I put the data into.
> I realized my mistakes when hours of crunching had already been done.
>
> They can be migrated to a better format though
>
> If you don't mind, I will discuss these issue in the future, not now.
>
> The code is here
> https://gitlab.com/humanitiesNerd/Culturia
>
> One last fun fact: while I was watching the output flowing in my terminal,
> I saw a package called
>
> "broccoli-funnel"
>
> No, really. It's here
> https://registry.npmjs.org/broccoli-funnel
>
> Ok, that's all for now.
>

Thanks again for your efforts on this. I am looking forward to working with
your data.

Regards,
Jelle Licht


Re: NPM and trusted binaries

2016-09-08 Thread Jelle Licht

Just a quick note from me;

AFAIK, the http module is a built-in of node, so you can probably save
yourselves the effort of packaging it ;-).

Furthermore, lots of development dependencies are not strictly
necessary; e.g. a minifier/uglifier is not required for most
functionality of a package, and ditto for linters and to a certain
extent test frameworks, at least for our initial set of node packages.
This initial set of packages can then (hopefully) be used to package the
rest of npm properly, including tests etc.
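A minimal sketch of that triage, in Python for illustration: given a
package.json manifest, keep the runtime dependencies for a first
bootstrap pass and set the devDependencies (linters, minifiers, test
frameworks) aside. The manifest below is invented.

```python
import json

# Split a package.json into runtime deps (needed for the bootstrap set)
# and devDependencies (dropped initially, revisited once the bootstrap
# set exists).  Package names here are made up for the example.
manifest = json.loads("""
{"name": "demo",
 "dependencies": {"once": "^1.4.0"},
 "devDependencies": {"eslint": "^8.0.0", "uglify-js": "^3.0.0"}}
""")

runtime = sorted(manifest.get("dependencies", {}))
dev = sorted(manifest.get("devDependencies", {}))
print(runtime)  # ['once']
print(dev)      # ['eslint', 'uglify-js']
```

In practice the cut is not always this clean, which is exactly the
problem described below: some devDependencies really are needed to
produce the source archive.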

The biggest issue here is that an importer cannot decide for you which
devDependency is actually needed to properly build a source archive,
and which just provides convenience functions. The importer should
become more useful once we have a solid set of npm packages in guix;
before that, it will probably be of limited use for any packages
besides the most trivial.

Regarding feasibility and its weight, I would say that a simple
transformation such as concatenating files should not be an issue,
whereas more involved transformations such as tree shaking,
uglification, or transpilation take away much of our freedom to modify
the software, at least in practice.

- Jelle

Pjotr Prins  writes:

> On Wed, Sep 07, 2016 at 07:51:46PM +0200, Jan Nieuwenhuizen wrote:
>> Ludovic Courtès writes:
>> 
>> >> Still, I think Guix would benefit from a somewhat more relaxed stance
>> >> in this.
>> >
>> > It’s part of Guix’s mission to build from source whenever that is
>> > possible, which is the case here, AIUI.
>
> Mission is fine and I agree with that (in principle).
>
>> WDYT, do we have enough information to decide if building from `source'
>> the right metaphor?  Is it pracically feasible and does feasibilty have
>> any weight?  What's the next step I could take to help to bring `q' and
>> `http' (and the other 316 packages I need) into Guix?
>
> I think we are clear we do not want binaries in the main project
> unless there is no way to do it from source.
>
> Personally I think we should be easier on ourselves which implies that
> we get multiple flavours of Guix. 
>
> Another reason to make 'guix channels' work.
>
> Pj.




Re: GSoC NPM

2016-09-02 Thread Jelle Licht
Hi Jan,

Thanks for your interest and work. I am currently quite occupied with
getting ready
for my next year of studies, so I will only shortly address your points;

The short of it is that the dist tarball does not always contain the actual
source code.
Examples of this include generated code, minified code etc.

The devDependencies are, in these cases, the things we need to be able to
actually
build the package. Examples of this include gulp, grunt, and several
testing frameworks.

For simple packages, the difference between an npm tarball and a GH
tarball/repo is non-existent. I made the choice to skip the npm tarball
because I'd rather err on the side of caution, and not let people
download and run these non-source packages by accident ;-).

I will have more time to see this through next week.

- Jelle


2016-09-02 16:24 GMT+02:00 Jan Nieuwenhuizen <jann...@gnu.org>:

> Jelle Licht writes:
>
> Hi Jelle!
>
> > - The ability to parse npm version data
> > - An npm backend for ~guix import~
> > - Npm modules in guix
> > - An actual build system for npm packages
>
> That's amazing.  I played with it today and noticed that it always
> downloads devDependencies.  Why is that...I disabled that because
> I think I don't need those?
>
> Also, I found that you prefer going through the repository/github
> instead of using the dist tarball.  Why is that?  Some packages do not
> have a repository field, such as `http'.  I changed that to prefer using
> the dist tarball and use repository as fallback.  You probably want to
> change that order?
>
> I made some other small changes, see attached patch, to be able to
> download all packages that I need, notably: cjson, http and xmldom.
>
> Thanks again for your amazing work, hoping to have this in master soon.
>
> Greetings,
> Jan
>
>
>
> --
> Jan Nieuwenhuizen <jann...@gnu.org> | GNU LilyPond http://lilypond.org
> Freelance IT http://JoyofSource.com | Avatar®  http://AvatarAcademy.nl
>
>


Re: [PATCH 3/4] gnu: node: Do not use bundled dependencies.

2016-08-28 Thread Jelle Licht
2016-08-27 22:38 GMT+02:00 Alex Kost <alez...@gmail.com>:

> Jelle Licht (2016-08-27 14:23 +0300) wrote:
>
> > The Node build system was previously building its own copies of
> > C-ares and http-parser.
> >
> > * gnu/packages/node.scm (node)[inputs]: Add c-ares and http-parser.
> > [arguments]: Add configure flags for using system libraries.
> > ---
> >  gnu/packages/node.scm | 7 ++-
> >  1 file changed, 6 insertions(+), 1 deletion(-)
> >
> >
> > diff --git a/gnu/packages/node.scm b/gnu/packages/node.scm
> > index d1c5e1b..7c020e6 100644
> > --- a/gnu/packages/node.scm
> > +++ b/gnu/packages/node.scm
> > @@ -25,6 +25,7 @@
> >#:use-module (guix derivations)
> >#:use-module (guix download)
> >#:use-module (guix build-system gnu)
> > +  #:use-module (gnu packages adns)
> >#:use-module (gnu packages base)
> >#:use-module (gnu packages compression)
> >#:use-module (gnu packages gcc)
> > @@ -86,6 +87,8 @@ it does not buffer data, it can be interrupted at
> anytime.")
> >   '(#:configure-flags '("--shared-openssl"
> > "--shared-zlib"
> > "--shared-libuv"
> > +   "--shared-cares"
> > +   "--shared-http-parser"
> > "--without-snapshot")
> > #:phases
> > (modify-phases %standard-phases
> > @@ -158,7 +161,9 @@ it does not buffer data, it can be interrupted at
> anytime.")
> >  (inputs
> >   `(("libuv" ,libuv)
> > ("openssl" ,tls:openssl)
> > -   ("zlib" ,zlib)))
> > +   ("zlib" ,compression:zlib)
>
> This change shouldn't belong this patch: here you use 'compression'
> prefix which is introduced by the next patch.  This would leave the git
> repo in a broken state on this commit.
>
> > +   ("http-parser" ,http-parser)
> > +   ("c-ares" ,c-ares)))
> >  (synopsis "Evented I/O for V8 JavaScript")
> >  (description "Node.js is a platform built on Chrome's JavaScript
> runtime
> >  for easily building fast, scalable network applications.  Node.js uses
> an
>
> --
> Alex
>

I probably put the wrong things together when rebasing. Should I provide
updated
patches in these threads, or should I just send in a new patch series?

- Jelle


Re: GSoC NPM

2016-08-27 Thread Jelle Licht

Hi

Ricardo Wurmus  writes:

> Hi
>
>> I also took both Ludovic', as well as Catonano's detailed feedback on the
>> initial draft of the recursive importer into account when rewriting it. It
>> should now only visit each node in the dependency graph once, and be a
>> whole lot
>> more efficient as well. It is still based on the multi-valued return values
>> that drove Ricardo's initial work on the CRAN recursive importer.
>
> [...]
>
>> After rewriting the recursive importer to be
>> more
>> sane, I scrawled some notes on my notepad that basically boil down to the
>> following:
>> 1. We should only look up each npm package once, if possible
>> 2. We should have a list of all npm package names.
>> 3. We should be able to specify the maximum traversal depth
>
> I’m not sure I understand.  The CRAN recursive importer visits
> packages only once because it keeps track of previously imported
> packages (in addition to those that are already in Guix).


With the recursive fold approach, I had the issue that
sometimes packages that were imported in a 'leaf fold' had to be
imported again in a different 'leaf fold'. This might very well also be
a mistake I made in my original adaptation of the CRAN importer ;-).

If the general consensus on the ML is that using higher order functions
+ recursion is more elegant than a big, bold loop, I can still
reimplement it as such.
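The "visit each package only once" bookkeeping under discussion can be
sketched by threading a set of already-imported packages through the
recursion. This is a hypothetical Python sketch with an invented
dependency graph, not the actual importer code:

```python
# Invented dependency graph: qunit is reachable via two paths.
deps = {
    "jquery": ["sizzle", "qunit"],
    "sizzle": ["qunit"],
    "qunit": [],
}

visited = []  # records each lookup, to show none is repeated

def import_package(name, seen):
    if name in seen:
        return seen          # already imported in an earlier branch
    visited.append(name)     # the single lookup for this package
    seen = seen | {name}
    for dep in deps[name]:
        seen = import_package(dep, seen)
    return seen

print(sorted(import_package("jquery", set())))  # ['jquery', 'qunit', 'sizzle']
print(len(visited))                             # 3 lookups, not 4
```

Without the `seen` set, qunit would be looked up twice (once via
jquery, once via sizzle), which is the 'leaf fold' duplication
described above.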

>
>> An easy-yet-inelegant solution would be to include the package name as used
>> within the npm registry as metadata via an argument to the
>> node-build-system.
>
> That’s not so inelegant; or at least we have precedent in Guix.  For
> CRAN and Bioconductor packages we often add something like this:
>
> (properties `((upstream-name . "ACSNMineR")))
>
> This is already used by the updater.

Well, that is most likely what I will be implementing then. An
advantage of this is that it would also make my life easier when making
runtime npm module loading more robust :-).
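The `upstream-name` property mechanism amounts to a small lookup with a
fallback. A hedged Python sketch of the idea (the table entries and the
`node-` prefix convention are invented for illustration; this is not
Guix's actual resolver):

```python
# Per-package metadata, mirroring Guix's (properties ((upstream-name . "...")))
properties = {
    "node-babel-core": {"upstream-name": "@babel/core"},
    "node-left-pad": {},
}

def upstream_name(pkg):
    # Use the explicit property when present; otherwise fall back to
    # stripping the distribution prefix from the package name.
    props = properties.get(pkg, {})
    return props.get("upstream-name", pkg.removeprefix("node-"))

print(upstream_name("node-babel-core"))  # @babel/core
print(upstream_name("node-left-pad"))    # left-pad
```

The explicit property matters for registry names that cannot be derived
mechanically, such as scoped npm packages.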

>
> ~~ Ricardo

Thanks for your feedback

- Jelle




[PATCH 4/4] gnu: node: Use compression: prefix.

2016-08-27 Thread Jelle Licht
* gnu/packages/node.scm (define-module): Import gnu packages compression
  with a prefix
(node): Likewise.
---
 gnu/packages/node.scm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/gnu/packages/node.scm b/gnu/packages/node.scm
index 7c020e6..351e988 100644
--- a/gnu/packages/node.scm
+++ b/gnu/packages/node.scm
@@ -27,7 +27,8 @@
   #:use-module (guix build-system gnu)
   #:use-module (gnu packages adns)
   #:use-module (gnu packages base)
-  #:use-module (gnu packages compression)
+  #:use-module ((gnu packages compression) #:prefix compression:)
+  #:use-module (gnu packages curl)
   #:use-module (gnu packages gcc)
   #:use-module (gnu packages libevent)
   #:use-module (gnu packages linux)
-- 
2.9.3




[PATCH 3/4] gnu: node: Do not use bundled dependencies.

2016-08-27 Thread Jelle Licht
The Node build system was previously building its own copies of
C-ares and http-parser.

* gnu/packages/node.scm (node)[inputs]: Add c-ares and http-parser.
[arguments]: Add configure flags for using system libraries.
---
 gnu/packages/node.scm | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/gnu/packages/node.scm b/gnu/packages/node.scm
index d1c5e1b..7c020e6 100644
--- a/gnu/packages/node.scm
+++ b/gnu/packages/node.scm
@@ -25,6 +25,7 @@
   #:use-module (guix derivations)
   #:use-module (guix download)
   #:use-module (guix build-system gnu)
+  #:use-module (gnu packages adns)
   #:use-module (gnu packages base)
   #:use-module (gnu packages compression)
   #:use-module (gnu packages gcc)
@@ -86,6 +87,8 @@ it does not buffer data, it can be interrupted at anytime.")
  '(#:configure-flags '("--shared-openssl"
"--shared-zlib"
"--shared-libuv"
+   "--shared-cares"
+   "--shared-http-parser"
"--without-snapshot")
#:phases
(modify-phases %standard-phases
@@ -158,7 +161,9 @@ it does not buffer data, it can be interrupted at anytime.")
 (inputs
  `(("libuv" ,libuv)
("openssl" ,tls:openssl)
-   ("zlib" ,zlib)))
+   ("zlib" ,compression:zlib)
+   ("http-parser" ,http-parser)
+   ("c-ares" ,c-ares)))
 (synopsis "Evented I/O for V8 JavaScript")
 (description "Node.js is a platform built on Chrome's JavaScript runtime
 for easily building fast, scalable network applications.  Node.js uses an
-- 
2.9.3




[PATCH 2/4] gnu: node: Add search path specification for 'NODE_PATH'.

2016-08-27 Thread Jelle Licht
* gnu/packages/node.scm (node)[native-search-paths]: New field.
---
 gnu/packages/node.scm | 4 
 1 file changed, 4 insertions(+)

diff --git a/gnu/packages/node.scm b/gnu/packages/node.scm
index 545..d1c5e1b 100644
--- a/gnu/packages/node.scm
+++ b/gnu/packages/node.scm
@@ -151,6 +151,10 @@ it does not buffer data, it can be interrupted at anytime.")
("procps" ,procps)
("util-linux" ,util-linux)
("which" ,which)))
+(native-search-paths
+ (list (search-path-specification
+(variable "NODE_PATH")
+(files '("lib/node_modules")))))
 (inputs
  `(("libuv" ,libuv)
("openssl" ,tls:openssl)
-- 
2.9.3
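The `NODE_PATH` search-path specification in the patch above tells Node
where to find globally installed modules. Its effect can be emulated
with a short sketch (real Node resolution also consults package.json
"main" entries and index files; the directory layout here is invented):

```python
import os
import tempfile

def resolve(module, node_path):
    # Walk a NODE_PATH-style list of directories and return the first
    # one that contains a directory named after the module.
    for directory in node_path.split(os.pathsep):
        candidate = os.path.join(directory, module)
        if os.path.isdir(candidate):
            return candidate
    return None

root = tempfile.mkdtemp()
modules = os.path.join(root, "lib", "node_modules")
os.makedirs(os.path.join(modules, "hello"))

print(resolve("hello", modules) is not None)   # True
print(resolve("missing", modules))             # None
```

In Guix, each profile contributes its `lib/node_modules` directory to
`NODE_PATH` via exactly this kind of search-path mechanism.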




[PATCH 1/4] gnu: node: Add http-parser.

2016-08-27 Thread Jelle Licht
* gnu/packages/node.scm (http-parser): New variable.
* gnu/packages/node.scm (define-module): Import gnu packages tls with
  tls: prefix
---
 gnu/packages/node.scm | 39 +--
 1 file changed, 37 insertions(+), 2 deletions(-)

diff --git a/gnu/packages/node.scm b/gnu/packages/node.scm
index 2b27774..545 100644
--- a/gnu/packages/node.scm
+++ b/gnu/packages/node.scm
@@ -32,7 +32,42 @@
   #:use-module (gnu packages linux)
   #:use-module (gnu packages perl)
   #:use-module (gnu packages python)
-  #:use-module (gnu packages tls))
+  #:use-module ((gnu packages tls) #:prefix tls:)
+  #:use-module (gnu packages valgrind))
+
+(define-public http-parser
+  (package
+(name "http-parser")
+(version "2.7.1")
+(source (origin
+  (method url-fetch)
  (uri (string-append "https://github.com/nodejs/http-parser/archive/v"
+  version ".tar.gz"))
  (sha256 (base32 "1cw6nf8xy4jhib1w0jd2y0gpqjbdasg8b7pkl2k2vpp54k9rlh3h"))))
+(build-system gnu-build-system)
+(arguments
 '(#:make-flags (list "CC=gcc" (string-append "DESTDIR=" (assoc-ref %outputs "out")))
+   #:test-target "test-valgrind"
+   #:phases
+   (modify-phases %standard-phases
+ (delete 'configure)
+ (add-before 'build 'patch-makefile
+   (lambda* (#:key inputs outputs #:allow-other-keys)
+  (substitute* '("Makefile")
+    (("/usr/local") ""))))
+ (replace 'build
+   (lambda* (#:key make-flags #:allow-other-keys)
+         (zero? (apply system* "make" "library" make-flags)))))))
+(inputs '())
+(native-inputs `(("valgrind" ,valgrind)))
+(home-page "https://github.com/nodejs/http-parser")
+(synopsis "HTTP request/response parser for C")
+(description "HTTP parser is a parser for HTTP messages written in C.  It
+parses both requests and responses.  The parser is designed to be used in
+performance HTTP applications.  It does not make any syscalls nor allocations,
+it does not buffer data, it can be interrupted at anytime.")
+(license expat)))
 
 (define-public node
   (package
@@ -118,7 +153,7 @@
("which" ,which)))
 (inputs
  `(("libuv" ,libuv)
-   ("openssl" ,openssl)
+   ("openssl" ,tls:openssl)
("zlib" ,zlib)))
 (synopsis "Evented I/O for V8 JavaScript")
 (description "Node.js is a platform built on Chrome's JavaScript runtime
-- 
2.9.3




[PATCH 0/4] Unbundle node dependencies patch series

2016-08-27 Thread Jelle Licht
These patches allow us to make use of the existing c-ares package, as well as
an unbundled version of http parser.

Jelle Licht (4):
  gnu: node: Add http-parser.
  gnu: node: Add search path specification for 'NODE_PATH'.
  gnu: node: Do not use bundled dependencies.
  gnu: node: Use compression: prefix.

 gnu/packages/node.scm | 53 +++
 1 file changed, 49 insertions(+), 4 deletions(-)

-- 
2.9.3




[PATCH] gnu: node: Update to 6.4.0.

2016-08-26 Thread Jelle Licht

This patch builds reproducible, although that was also the case for me
with the previous Node 6.3.1. patch. It would be great if someone could
verify this.

This patch supercedes the 'gnu: node: Update to 6.3.1.' patch at [0].

Thanks,
Jelle

[0]: https://lists.gnu.org/archive/html/guix-devel/2016-08/msg00816.html

From 9765c88b70f03fdee8a1ac5c55de3b7a34af7fad Mon Sep 17 00:00:00 2001
From: Jelle Licht <jli...@fsfe.org>
Date: Fri, 5 Aug 2016 12:51:15 +0200
Subject: [PATCH] gnu: node: Update to 6.4.0.
To: guix-devel@gnu.org

Remove <https://debbugs.gnu.org/23744> and
<https://debbugs.gnu.org/23723> workaround.

* gnu/packages/node.scm (node): Update to 6.4.0.
  (node)[arguments]: Disable more tests. Remove custom 'patch-shebangs'
  phase. Manually patch npm script shebang in new 'patch-npm-shebang'
  phase.
---
 gnu/packages/node.scm | 31 ---
 1 file changed, 12 insertions(+), 19 deletions(-)

diff --git a/gnu/packages/node.scm b/gnu/packages/node.scm
index 887ef93..2b27774 100644
--- a/gnu/packages/node.scm
+++ b/gnu/packages/node.scm
@@ -37,14 +37,14 @@
 (define-public node
   (package
 (name "node")
-(version "6.0.0")
+(version "6.4.0")
 (source (origin
   (method url-fetch)
   (uri (string-append "http://nodejs.org/dist/v" version
   "/node-v" version ".tar.gz"))
   (sha256
(base32
-"0cpw7ng193jgfbw2g1fd0kcglmjjkbj4xb89g00z8zz0lj0nvdbd"))))
+"1l4p2zgld68c061njx6drxm06685hmp656ijm9i0hnyg30397355"))))
 (build-system gnu-build-system)
 (arguments
  ;; TODO: Package http_parser and add --shared-http-parser.
@@ -78,10 +78,10 @@
  ;; FIXME: These tests fail in the build container, but they don't
  ;; seem to be indicative of real problems in practice.
  (for-each delete-file
-   '("test/parallel/test-cluster-master-error.js"
+   '("test/parallel/test-dgram-membership.js"
+ "test/parallel/test-cluster-master-error.js"
  "test/parallel/test-cluster-master-kill.js"
  "test/parallel/test-npm-install.js"
- "test/parallel/test-stdout-close-unref.js"
  "test/sequential/test-child-process-emfile.js"))
  #t))
  (replace 'configure
@@ -101,22 +101,15 @@
  (string-append (assoc-ref inputs "python")
 "/bin/python")
  "configure" flags)
- (replace 'patch-shebangs
-   (lambda* (#:key outputs #:allow-other-keys #:rest all)
- ;; Work around <http://bugs.gnu.org/23723>.
- (let* ((patch  (assoc-ref %standard-phases 'patch-shebangs))
-(npm(string-append (assoc-ref outputs "out")
-   "/bin/npm"))
+ (add-after 'patch-shebangs 'patch-npm-shebang
+   (lambda* (#:key outputs #:allow-other-keys)
+ (let* ((bindir (string-append (assoc-ref outputs "out")
+   "/bin"))
+(npm(string-append bindir "/npm"))
 (target (readlink npm)))
-   (and (apply patch all)
-(with-directory-excursion (dirname npm)
-  ;; Turn NPM into a symlink to TARGET again, which 'npm'
-  ;; relies on for the resolution of relative file names
-  ;; in JS files.
-  (delete-file target)
-  (rename-file npm target)
-  (symlink target npm)
-  #t
+   (with-directory-excursion bindir
+ (patch-shebang target (list bindir))
+ #t)))
 (native-inputs
  `(("python" ,python-2)
("perl" ,perl)
-- 
2.9.3



GSoC NPM

2016-08-23 Thread Jelle Licht

Ricardo's idea of a recursive importer is pretty nice, imho. It should
be doable to implement some more of them in a similar fashion to what
has been done for CRAN and npm.


While I hope nobody (including myself) has to package so many variants
of the same package again, it would be nice to somehow download _only_
the revision you are interested in. AFAIK, there is no proper way for
git to do this in the general 'give me this commit' case. Something I
eventually did in order to alleviate the ~3 minute checkout times for
each iteration of CS was the following hack[2]: it basically puts a
recent-enough copy of the CS git repo in my store, and then makes a
shallow copy from that when using git-fetch. This took my build times
down to less than 10 seconds per iteration.
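The hack can be sketched roughly as follows (assumes `git` is on PATH;
all paths and repository content are throwaway, created only for the
example):

```python
import os
import subprocess
import tempfile

def git(*args):
    # Thin wrapper so failures surface as exceptions.
    subprocess.run(["git", *args], check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

base = tempfile.mkdtemp()
upstream = os.path.join(base, "upstream")
git("init", upstream)
git("-C", upstream, "-c", "user.name=demo", "-c", "user.email=demo@example.com",
    "commit", "--allow-empty", "-m", "initial commit")

# One-time, expensive step: a full mirror, playing the role of the
# recent-enough copy kept in the store.
mirror = os.path.join(base, "mirror.git")
git("clone", "--mirror", upstream, mirror)

# Per-iteration, cheap step: a depth-1 clone from the local mirror;
# nothing is fetched over the network.
checkout = os.path.join(base, "checkout")
git("clone", "--depth", "1", "file://" + mirror, checkout)
print(os.path.isdir(os.path.join(checkout, ".git")))
```

The speedup comes from paying the full-history transfer cost once and
amortizing it over every subsequent shallow checkout.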

If you are interested in my work, have a look at:
https://github.com/wordempire/guix/commits/gsoc-final
, or just
`git clone https://github.com/wordempire/guix.git`
`git checkout gsoc-final`.

I will be trickling in a patch series onto the ML the next few days.

I guess that is enough text from me again. I would still like to
express my gratitude to my mentors David Thompson and Christopher Allan
Webber, as well as the rest of #guix and guix-devel (and some folks at
GHM) for dealing with my ramblings and questions, and for helping me
keep this project fun. Special thanks to Catonano for having a close
look at my code.

With just some tweaks to the importer, we should be able to at least
package a huge subset of all the packages that require zero to few
dependencies, once we are able to identify them.


I probably forgot quite a few important and unimportant details, so if
you have any questions or tips, or just want to blame me for getting
more messy JavaScript into guix-land, send me a mail ;-).

- Jelle Licht

[0] https://lists.gnu.org/archive/html/guix-devel/2016-07/msg01726.html
[1] https://www.npmjs.com/
[2] http://paste.lisp.org/display/323999 <- beware, here be dragons etc
[3] http://paste.lisp.org/display/324007


Re: node FTBFS

2016-08-11 Thread Jelle Licht
Hi Alex,

The patch I supplied at [0] seems to not have these issues:
- It builds (yay)
- That particular test seems to run successfully

Some other tests had to be disabled, but this was mostly due
to constraints of the build environment.

Thanks,
Jelle


[0]: https://lists.gnu.org/archive/html/guix-devel/2016-08/msg00351.html

2016-08-11 14:15 GMT+02:00 Alex Vong :

> Just to be complete, this is the error obtained when building on local
> machine:
>
>
> === release test-tls-alpn-server-client ===
> Path: parallel/test-tls-alpn-server-client
> assert.js:90
>   throw new assert.AssertionError({
>   ^
> AssertionError: 'first-priority-unsupported' === false
> at checkResults (/tmp/guix-build-node-6.0.0.drv-0/node-v6.0.0/test/
> parallel/test-tls-alpn-server-client.js:32:10)
> at /tmp/guix-build-node-6.0.0.drv-0/node-v6.0.0/test/
> parallel/test-tls-alpn-server-client.js:101:5
at TLSSocket.<anonymous> (/tmp/guix-build-node-6.0.0.drv-0/node-v6.0.0/test/parallel/test-tls-alpn-server-client.js:66:9)
> at TLSSocket.g (events.js:286:16)
> at emitNone (events.js:86:13)
> at TLSSocket.emit (events.js:185:7)
at TLSSocket.<anonymous> (_tls_wrap.js:1072:16)
> at emitNone (events.js:86:13)
> at TLSSocket.emit (events.js:185:7)
> at TLSSocket._finishInit (_tls_wrap.js:580:8)
> Command: out/Release/node /tmp/guix-build-node-6.0.0.
> drv-0/node-v6.0.0/test/parallel/test-tls-alpn-server-client.js
>
>
> Alex Vong  writes:
>
> > Hi guixers,
> >
> > Node now fails 1 test more, in addition to some of the already disabled
> > tests. Perhaps someone interested can have a look. Node fails to build
> > in hydra since 2/8.
> >
> > Thanks,
> > Alex
>
>


[PATCH] gnu: jq: Fix CVE-2015-8863.

2016-08-11 Thread Jelle Licht
Hello,

Attached patch backports the commit[0] for jq that fixed the vulnerability
referred to as CVE-2015-8863[1]. Some feedback would be welcome.

- Jelle

From cbd181ae84003bf3cf4a2d15f44b5242dcc97860 Mon Sep 17 00:00:00 2001
From: Jelle Licht <jli...@fsfe.org>
Date: Thu, 11 Aug 2016 17:02:41 +0200
Subject: [PATCH] gnu: jq: Fix CVE-2015-8863.
To: guix-devel@gnu.org

* gnu/packages/patches/jq-CVE-2015-8863.patch: New file.
* gnu/local.mk (dist_patch_DATA): Add it.
* gnu/packages/web.scm (jq): Add it.
---
 gnu/local.mk|  1 +
 gnu/packages/patches/jq-CVE-2015-8863.patch | 34 +
 gnu/packages/web.scm|  6 -
 3 files changed, 40 insertions(+), 1 deletion(-)
 create mode 100644 gnu/packages/patches/jq-CVE-2015-8863.patch

diff --git a/gnu/local.mk b/gnu/local.mk
index af47311..44ace61 100644
--- a/gnu/local.mk
+++ b/gnu/local.mk
@@ -595,6 +595,7 @@ dist_patch_DATA =		\
   %D%/packages/patches/jasper-CVE-2016-2089.patch		\
   %D%/packages/patches/jasper-CVE-2016-2116.patch		\
   %D%/packages/patches/jbig2dec-ignore-testtest.patch		\
+  %D%/packages/patches/jq-CVE-2015-8863.patch			\
   %D%/packages/patches/khmer-use-libraries.patch\
   %D%/packages/patches/kmod-module-directory.patch		\
   %D%/packages/patches/laby-make-install.patch			\
diff --git a/gnu/packages/patches/jq-CVE-2015-8863.patch b/gnu/packages/patches/jq-CVE-2015-8863.patch
new file mode 100644
index 0000000..b182626
--- /dev/null
+++ b/gnu/packages/patches/jq-CVE-2015-8863.patch
@@ -0,0 +1,34 @@
+From 8eb1367ca44e772963e704a700ef72ae2e12babd Mon Sep 17 00:00:00 2001
+From: Nicolas Williams <n...@cryptonector.com>
+Date: Sat, 24 Oct 2015 17:24:57 -0500
+Subject: [PATCH] Heap buffer overflow in tokenadd() (fix #105)
+
+This was an off-by one: the NUL terminator byte was not allocated on
+resize.  This was triggered by JSON-encoded numbers longer than 256
+bytes.
+---
+ jv_parse.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/jv_parse.c b/jv_parse.c
+index 3102ed4..84245b8 100644
+--- a/jv_parse.c
++++ b/jv_parse.c
+@@ -383,7 +383,7 @@ static pfunc stream_token(struct jv_parser* p, char ch) {
+ 
+ static void tokenadd(struct jv_parser* p, char c) {
+   assert(p->tokenpos <= p->tokenlen);
+-  if (p->tokenpos == p->tokenlen) {
++  if (p->tokenpos >= (p->tokenlen - 1)) {
+ p->tokenlen = p->tokenlen*2 + 256;
+ p->tokenbuf = jv_mem_realloc(p->tokenbuf, p->tokenlen);
+   }
+@@ -485,7 +485,7 @@ static pfunc check_literal(struct jv_parser* p) {
+ TRY(value(p, v));
+   } else {
+ // FIXME: better parser
+-p->tokenbuf[p->tokenpos] = 0; // FIXME: invalid
++p->tokenbuf[p->tokenpos] = 0;
+ char* end = 0;
+ double d = jvp_strtod(&p->dtoa, p->tokenbuf, &end);
+ if (end == 0 || *end != 0)
diff --git a/gnu/packages/web.scm b/gnu/packages/web.scm
index fa791ff..9106295 100644
--- a/gnu/packages/web.scm
+++ b/gnu/packages/web.scm
@@ -3293,7 +3293,11 @@ It uses the uwsgi protocol for all the networking/interprocess communications.")
   "/" name "-" version ".tar.gz"))
   (sha256
(base32
-"0g29kyz4ykasdcrb0zmbrp2jqs9kv1wz9swx849i2d1ncknbzln4"))))
+"0g29kyz4ykasdcrb0zmbrp2jqs9kv1wz9swx849i2d1ncknbzln4"))
+  ;; This patch has been pushed and the vulnerability will be
+  ;; fixed in the next release after 1.5.
+  ;; https://github.com/stedolan/jq/issues/995
+  (patches (search-patches "jq-CVE-2015-8863.patch"))))
 (inputs
  `(("oniguruma" ,oniguruma)))
 (native-inputs
-- 
2.9.2


[0]: 
https://github.com/stedolan/jq/commit/8eb1367ca44e772963e704a700ef72ae2e12babd
[1]: https://access.redhat.com/security/cve/CVE-2015-8863




Re: [PATCH v2] gnu: node: Update to 6.3.1.

2016-08-09 Thread Jelle Licht

Leo Famulari <l...@famulari.name> writes:

> On Sun, Aug 07, 2016 at 02:45:20PM +0200, Jelle Licht wrote:
>> Leo Famulari <l...@famulari.name> writes:
>> >> - (replace 'patch-shebangs
>> >> -   (lambda* (#:key outputs #:allow-other-keys #:rest all)
>> >> - ;; Work around <http://bugs.gnu.org/23723>.
>> >> - (let* ((patch  (assoc-ref %standard-phases 'patch-shebangs))
>> >> -(npm(string-append (assoc-ref outputs "out")
>> >> -   "/bin/npm"))
>> >> + (add-after 'patch-shebangs 'patch-npm-shebang
>> >> +   (lambda* (#:key outputs #:allow-other-keys)
>> >> + (let* ((bindir (string-append (assoc-ref outputs "out")
>> >> +   "/bin"))
>> >> +(npm(string-append bindir "/npm"))
>> >>  (target (readlink npm)))
>> >> -   (and (apply patch all)
>> >> -(with-directory-excursion (dirname npm)
>> >> -  ;; Turn NPM into a symlink to TARGET again, which 
>> >> 'npm'
>> >> -  ;; relies on for the resolution of relative file 
>> >> names
>> >> -  ;; in JS files.
>> >> -  (delete-file target)
>> >> -  (rename-file npm target)
>> >> -  (symlink target npm)
>> >> -  #t
>> >> +   (with-directory-excursion bindir
>> >> + (patch-shebang target (list bindir))
>> >> + #t)))
>> >
>> > Will you mention these changes in the commit message?
>> What do you mean by this exactly? The short of it is that a change to
>> the patch-shebangs phase was merged by way of the core-updates merge,
>> which no longer necessitated this workaround.
>
> The commit log should mention all changes made in the commit. So, I
> think the commit message should have a line like this:
>
> [arguments]: Disable more tests. Update code that does foo.
>
> ... where foo is the diff quoted above.

Attached you will find the updated version of this patch. Please let me
know what you think.

- Jelle
From 798d0888cc57a18ab31fa546a94932476e39088e Mon Sep 17 00:00:00 2001
From: Jelle Licht <jli...@fsfe.org>
Date: Fri, 5 Aug 2016 12:51:15 +0200
Subject: [PATCH] gnu: node: Update to 6.3.1.
To: guix-devel@gnu.org

Remove <https://debbugs.gnu.org/23744> and
<https://debbugs.gnu.org/23723> workaround.

* gnu/packages/node.scm (node): Update to 6.3.1.
  (node)[arguments]: Disable more tests. Remove custom 'patch-shebangs'
  phase. Manually patch npm script shebang in new 'patch-npm-shebang'
  phase.
---
 gnu/packages/node.scm | 32 ++--
 1 file changed, 14 insertions(+), 18 deletions(-)

diff --git a/gnu/packages/node.scm b/gnu/packages/node.scm
index 887ef93..4c98799 100644
--- a/gnu/packages/node.scm
+++ b/gnu/packages/node.scm
@@ -37,14 +37,14 @@
 (define-public node
   (package
 (name "node")
-(version "6.0.0")
+(version "6.3.1")
 (source (origin
   (method url-fetch)
   (uri (string-append "http://nodejs.org/dist/v" version
   "/node-v" version ".tar.gz"))
   (sha256
(base32
-"0cpw7ng193jgfbw2g1fd0kcglmjjkbj4xb89g00z8zz0lj0nvdbd"
+"1xh883fbhyhgna1vi8xmd6klg4r186lb1h1xr08hn89wy7f48q9z"
 (build-system gnu-build-system)
 (arguments
  ;; TODO: Package http_parser and add --shared-http-parser.
@@ -78,7 +78,10 @@
  ;; FIXME: These tests fail in the build container, but they don't
  ;; seem to be indicative of real problems in practice.
  (for-each delete-file
-   '("test/parallel/test-cluster-master-error.js"
+   '("test/parallel/test-https-connect-address-family.js"
+ "test/parallel/test-tls-connect-address-family.js"
+ "test/parallel/test-dgram-membership.js"
+ "test/parallel/test-cluster-master-error.js"
  "test/parallel/test-cluster-master-kill.js"
  "test/parallel/test-npm-install.js"
  "test/parallel/test-stdout-close-unref.js"
@@ 

Re: [PATCH] python-kivy and Adobe source-code-pro font

2016-08-07 Thread Jelle Licht

Dylan Jeffers  writes:

> Hi all,
>
Hello Dylan,

Thanks for this patch! I love the source-code-pro font :).

> First patch to guix-devel; tried my best to conform to the guidelines.

Guix has a one-commit-per-change policy, and contributions usually
follow a specific format. You might want to review the guidelines in
the manual, which you can find online or by executing
  `info -f doc/guix.info "(guix) Contributing"`
in a Guix git checkout.
> Couldn't fix some "abiguous package" issues with
> font-adobe-source-code-pro package. Let me know if someone finds a
> solution.

I am not quite sure what "ambiguous package" issues you are talking
about. Are you referring to "collision encountered" warnings?


I added your patch inline with some observations;
> From db8338214f57249f7f513d9b27b6f31ae6cea345 Mon Sep 17 00:00:00 2001
> From: Dylan Jeffers 
> Date: Fri, 5 Aug 2016 18:37:34 -0700
> Subject: [PATCH] Add python-kivy and adobe source-code-pro font

Guix follows the CHANGELOG[0] format for commit messages. You can check the
commit history for examples of how to phrase certain common changes.

> 
> ---
>  gnu/packages/fonts.scm  | 51 
> +
>  gnu/packages/python.scm | 51 
> +
>  2 files changed, 102 insertions(+)
> 
> diff --git a/gnu/packages/fonts.scm b/gnu/packages/fonts.scm
> index 9b2281a..fa8420e 100644
> --- a/gnu/packages/fonts.scm
> +++ b/gnu/packages/fonts.scm
> @@ -414,6 +414,55 @@ The Liberation Fonts are sponsored by Red Hat.")
>  for long (8 and more hours per day) work with computers.")
>  (license license:silofl1.1)))
>  
> +(define-public font-adobe-source-code-pro
> +  (package
> +(name "font-adobe-source-code-pro")
> +(version "2.030")
> +(source (origin
> +  (method url-fetch)
> +  (uri (string-append
> +"https://github.com/adobe-fonts/source-code-pro/archive/"
> +version "R-ro/1.050R-it.tar.gz"))
> +  (sha256
> +   (base32
> +"0arhhsf3i7ss39ykn73d1j8k4n8vx7115xph6jwkd970p1cxvr54"
> +(build-system trivial-build-system)
> +(arguments
> + `(#:modules ((guix build utils))
> +   #:builder
> + (begin
> +   (use-modules (guix build utils))
> +   (let ((tar  (string-append (assoc-ref %build-inputs
> + "tar")
> +  "/bin/tar"))
> + (PATH (string-append (assoc-ref %build-inputs
> + "gzip")
> +  "/bin"))
> + (font-dir (string-append
> +%output "/share/fonts/truetype")))
> + (setenv "PATH" PATH)
> + (system* tar "xvf" (assoc-ref %build-inputs "source"))
> + (mkdir-p font-dir)
> + (chdir (string-append "source-code-pro-" ,version
> +   "R-ro-1.050R-it/TTF/"))
> + (for-each (lambda (ttf)
> + (copy-file ttf
> +(string-append font-dir "/"
> +   (basename ttf
> +   (find-files "." "\\.ttf$"))
Here you install all the ttf files ...
> +(native-inputs
> + `(("gzip" ,gzip)
> +   ("tar" ,tar)))
> +(home-page "https://adobe-fonts.github.io/source-code-pro/")
> +(synopsis "Source-Code-Pro fonts")
> +(description
> + "Source Code Pro is a set of OpenType fonts that have been designed
> +to work well in user interface (UI) environments.  In addition to a
> +functional OpenType font, this open source project provides all of the
> +source files that were used to build this OpenType font by using the
> +AFDKO makeotf tool.")
... and here you mention the source of these ttf files also being
available in the package (which you do not seem to install). Best case
scenario would be if we can generate the ttf files ourselves from these
sources. I had a brief look at AFDKO, and IMHO the build procedure seems
quite complicated. Right now, I would remove the part about the source
files and AFDKO from the description.

> +(license license:silofl1.1)))
> +
>  (define-public font-adobe-source-han-sans
>(package
>  (name "font-adobe-source-han-sans")
> @@ -865,3 +914,5 @@ powerline support.")
>  (license (license:x11-style
>"https://github.com/chrissimpkins/Hack/blob/master/LICENSE.md"
>"Hack Open Font License v2.0"
> +
> +
These added newlines seem superfluous

> diff --git a/gnu/packages/python.scm b/gnu/packages/python.scm
> index 470bad8..2cdc398 100644
> --- a/gnu/packages/python.scm
> +++ b/gnu/packages/python.scm
> @@ -88,10 +88,15 @@
>#:use-module (gnu 

Re: [PATCH v2] gnu: node: Update to 6.3.1.

2016-08-07 Thread Jelle Licht

Leo Famulari <l...@famulari.name> writes:

> On Fri, Aug 05, 2016 at 01:02:45PM +0200, Jelle Licht wrote:
>>   ;; FIXME: These tests fail in the build container, but they 
>> don't
>>   ;; seem to be indicative of real problems in practice.
>>   (for-each delete-file
>> -   '("test/parallel/test-cluster-master-error.js"
>> +   '("test/parallel/test-https-connect-address-family.js"
>> + "test/parallel/test-tls-connect-address-family.js"
>
> The file names suggest they require a network interface, which is
> unavailable in the build environment.
>
>> + "test/parallel/test-dgram-membership.js"
>> + "test/parallel/test-cluster-master-error.js"
>
> I assume the above comment about not being real problems in practice
> holds for these tests?
>
Yes, this was my conclusion. The tests either required access to a DNS
service, or wanted to do some things with sockets.

>> - (replace 'patch-shebangs
>> -   (lambda* (#:key outputs #:allow-other-keys #:rest all)
>> - ;; Work around <http://bugs.gnu.org/23723>.
>> - (let* ((patch  (assoc-ref %standard-phases 'patch-shebangs))
>> -(npm(string-append (assoc-ref outputs "out")
>> -   "/bin/npm"))
>> + (add-after 'patch-shebangs 'patch-npm-shebang
>> +   (lambda* (#:key outputs #:allow-other-keys)
>> + (let* ((bindir (string-append (assoc-ref outputs "out")
>> +   "/bin"))
>> +(npm(string-append bindir "/npm"))
>>  (target (readlink npm)))
>> -   (and (apply patch all)
>> -(with-directory-excursion (dirname npm)
>> -  ;; Turn NPM into a symlink to TARGET again, which 'npm'
>> -  ;; relies on for the resolution of relative file names
>> -  ;; in JS files.
>> -  (delete-file target)
>> -  (rename-file npm target)
>> -  (symlink target npm)
>> -  #t
>> +   (with-directory-excursion bindir
>> + (patch-shebang target (list bindir))
>> + #t)))
>
> Will you mention these changes in the commit message?
What do you mean by this exactly? The short of it is that a change to
the patch-shebangs phase was merged by way of the core-updates merge,
which no longer necessitated this workaround.

Thanks
- Jelle



[PATCH v2] gnu: node: Update to 6.3.1.

2016-08-05 Thread Jelle Licht
* gnu/packages/node.scm (node): Update to 6.3.1.
---
 gnu/packages/node.scm | 32 ++--
 1 file changed, 14 insertions(+), 18 deletions(-)

diff --git a/gnu/packages/node.scm b/gnu/packages/node.scm
index 887ef93..4c98799 100644
--- a/gnu/packages/node.scm
+++ b/gnu/packages/node.scm
@@ -37,14 +37,14 @@
 (define-public node
   (package
 (name "node")
-(version "6.0.0")
+(version "6.3.1")
 (source (origin
   (method url-fetch)
   (uri (string-append "http://nodejs.org/dist/v" version
   "/node-v" version ".tar.gz"))
   (sha256
(base32
-"0cpw7ng193jgfbw2g1fd0kcglmjjkbj4xb89g00z8zz0lj0nvdbd"
+"1xh883fbhyhgna1vi8xmd6klg4r186lb1h1xr08hn89wy7f48q9z"
 (build-system gnu-build-system)
 (arguments
  ;; TODO: Package http_parser and add --shared-http-parser.
@@ -78,7 +78,10 @@
  ;; FIXME: These tests fail in the build container, but they don't
  ;; seem to be indicative of real problems in practice.
  (for-each delete-file
-   '("test/parallel/test-cluster-master-error.js"
+   '("test/parallel/test-https-connect-address-family.js"
+ "test/parallel/test-tls-connect-address-family.js"
+ "test/parallel/test-dgram-membership.js"
+ "test/parallel/test-cluster-master-error.js"
  "test/parallel/test-cluster-master-kill.js"
  "test/parallel/test-npm-install.js"
  "test/parallel/test-stdout-close-unref.js"
@@ -101,22 +104,15 @@
  (string-append (assoc-ref inputs "python")
 "/bin/python")
  "configure" flags)
- (replace 'patch-shebangs
-   (lambda* (#:key outputs #:allow-other-keys #:rest all)
- ;; Work around <http://bugs.gnu.org/23723>.
- (let* ((patch  (assoc-ref %standard-phases 'patch-shebangs))
-(npm(string-append (assoc-ref outputs "out")
-   "/bin/npm"))
+ (add-after 'patch-shebangs 'patch-npm-shebang
+   (lambda* (#:key outputs #:allow-other-keys)
+ (let* ((bindir (string-append (assoc-ref outputs "out")
+   "/bin"))
+(npm(string-append bindir "/npm"))
 (target (readlink npm)))
-   (and (apply patch all)
-(with-directory-excursion (dirname npm)
-  ;; Turn NPM into a symlink to TARGET again, which 'npm'
-  ;; relies on for the resolution of relative file names
-  ;; in JS files.
-  (delete-file target)
-  (rename-file npm target)
-  (symlink target npm)
-  #t
+   (with-directory-excursion bindir
+ (patch-shebang target (list bindir))
+ #t)))
 (native-inputs
  `(("python" ,python-2)
("perl" ,perl)
-- 
2.9.2




Re: [PATCH] gnu: node: Update to 6.3.1

2016-08-05 Thread Jelle Licht
Please disregard this patch,

I was a wee bit impatient with trying out git send-mail

- Jelle

2016-08-05 12:28 GMT+02:00 Jelle Licht <jli...@fsfe.org>:

> * gnu/packages/node.scm (node): Update to 6.3.1.
> ---
>  gnu/packages/node.scm | 35 ---
>  1 file changed, 16 insertions(+), 19 deletions(-)
>
> diff --git a/gnu/packages/node.scm b/gnu/packages/node.scm
> index 887ef93..f62555e 100644
> --- a/gnu/packages/node.scm
> +++ b/gnu/packages/node.scm
> @@ -25,6 +25,7 @@
>#:use-module (guix derivations)
>#:use-module (guix download)
>#:use-module (guix build-system gnu)
> +  #:use-module (guix build utils)
>#:use-module (gnu packages base)
>#:use-module (gnu packages compression)
>#:use-module (gnu packages gcc)
> @@ -37,14 +38,14 @@
>  (define-public node
>(package
>  (name "node")
> -(version "6.0.0")
> +(version "6.3.1")
>  (source (origin
>(method url-fetch)
>(uri (string-append "http://nodejs.org/dist/v" version
>"/node-v" version ".tar.gz"))
>(sha256
> (base32
> -"0cpw7ng193jgfbw2g1fd0kcglmjjkbj4xb89g00z8zz0lj0nvdbd"
> +"1xh883fbhyhgna1vi8xmd6klg4r186lb1h1xr08hn89wy7f48q9z"
>  (build-system gnu-build-system)
>  (arguments
>   ;; TODO: Package http_parser and add --shared-http-parser.
> @@ -78,7 +79,10 @@
>   ;; FIXME: These tests fail in the build container, but they
> don't
>   ;; seem to be indicative of real problems in practice.
>   (for-each delete-file
> -   '("test/parallel/test-cluster-master-error.js"
> +   '("test/parallel/test-https-connect-address-family.js"
> + "test/parallel/test-tls-connect-address-family.js"
> + "test/parallel/test-dgram-membership.js"
> + "test/parallel/test-cluster-master-error.js"
>   "test/parallel/test-cluster-master-kill.js"
>   "test/parallel/test-npm-install.js"
>   "test/parallel/test-stdout-close-unref.js"
> @@ -101,22 +105,15 @@
>   (string-append (assoc-ref inputs "python")
>  "/bin/python")
>   "configure" flags)
> - (replace 'patch-shebangs
> -   (lambda* (#:key outputs #:allow-other-keys #:rest all)
> - ;; Work around <http://bugs.gnu.org/23723>.
> - (let* ((patch  (assoc-ref %standard-phases 'patch-shebangs))
> -(npm(string-append (assoc-ref outputs "out")
> -   "/bin/npm"))
> -(target (readlink npm)))
> -   (and (apply patch all)
> -(with-directory-excursion (dirname npm)
> -  ;; Turn NPM into a symlink to TARGET again, which
> 'npm'
> -  ;; relies on for the resolution of relative file
> names
> -  ;; in JS files.
> -  (delete-file target)
> -  (rename-file npm target)
> -  (symlink target npm)
> -  #t
> + (add-after 'patch-shebangs 'patch-npm-shebang
> +(lambda* (#:key outputs #:allow-other-keys)
> +  (let* ((bindir (string-append (assoc-ref outputs "out")
> +"/bin"))
> + (npm(string-append bindir "/npm"))
> + (target (readlink npm)))
> +(with-directory-excursion bindir
> +  (patch-shebang target (list bindir))
> +  #t)))
>  (native-inputs
>   `(("python" ,python-2)
> ("perl" ,perl)
> --
> 2.9.2
>
>


[PATCH] gnu: node: Update to 6.3.1

2016-08-05 Thread Jelle Licht
* gnu/packages/node.scm (node): Update to 6.3.1.
---
 gnu/packages/node.scm | 35 ---
 1 file changed, 16 insertions(+), 19 deletions(-)

diff --git a/gnu/packages/node.scm b/gnu/packages/node.scm
index 887ef93..f62555e 100644
--- a/gnu/packages/node.scm
+++ b/gnu/packages/node.scm
@@ -25,6 +25,7 @@
   #:use-module (guix derivations)
   #:use-module (guix download)
   #:use-module (guix build-system gnu)
+  #:use-module (guix build utils)
   #:use-module (gnu packages base)
   #:use-module (gnu packages compression)
   #:use-module (gnu packages gcc)
@@ -37,14 +38,14 @@
 (define-public node
   (package
 (name "node")
-(version "6.0.0")
+(version "6.3.1")
 (source (origin
   (method url-fetch)
   (uri (string-append "http://nodejs.org/dist/v" version
   "/node-v" version ".tar.gz"))
   (sha256
(base32
-"0cpw7ng193jgfbw2g1fd0kcglmjjkbj4xb89g00z8zz0lj0nvdbd"
+"1xh883fbhyhgna1vi8xmd6klg4r186lb1h1xr08hn89wy7f48q9z"
 (build-system gnu-build-system)
 (arguments
  ;; TODO: Package http_parser and add --shared-http-parser.
@@ -78,7 +79,10 @@
  ;; FIXME: These tests fail in the build container, but they don't
  ;; seem to be indicative of real problems in practice.
  (for-each delete-file
-   '("test/parallel/test-cluster-master-error.js"
+   '("test/parallel/test-https-connect-address-family.js"
+ "test/parallel/test-tls-connect-address-family.js"
+ "test/parallel/test-dgram-membership.js"
+ "test/parallel/test-cluster-master-error.js"
  "test/parallel/test-cluster-master-kill.js"
  "test/parallel/test-npm-install.js"
  "test/parallel/test-stdout-close-unref.js"
@@ -101,22 +105,15 @@
  (string-append (assoc-ref inputs "python")
 "/bin/python")
  "configure" flags)
- (replace 'patch-shebangs
-   (lambda* (#:key outputs #:allow-other-keys #:rest all)
- ;; Work around <http://bugs.gnu.org/23723>.
- (let* ((patch  (assoc-ref %standard-phases 'patch-shebangs))
-(npm(string-append (assoc-ref outputs "out")
-   "/bin/npm"))
-(target (readlink npm)))
-   (and (apply patch all)
-(with-directory-excursion (dirname npm)
-  ;; Turn NPM into a symlink to TARGET again, which 'npm'
-  ;; relies on for the resolution of relative file names
-  ;; in JS files.
-  (delete-file target)
-  (rename-file npm target)
-  (symlink target npm)
-  #t
+ (add-after 'patch-shebangs 'patch-npm-shebang
+(lambda* (#:key outputs #:allow-other-keys)
+  (let* ((bindir (string-append (assoc-ref outputs "out")
+"/bin"))
+ (npm(string-append bindir "/npm"))
+ (target (readlink npm)))
+(with-directory-excursion bindir
+  (patch-shebang target (list bindir))
+  #t)))
 (native-inputs
  `(("python" ,python-2)
("perl" ,perl)
-- 
2.9.2




Re: License auditing

2016-08-03 Thread Jelle Licht
Something like this could be quite convenient.

The following spdx->guix license symbol converter
might save you some time:
http://paste.lisp.org/display/322105


- Jelle



2016-08-03 19:55 GMT+02:00 Danny Milosavljevic :

> On Wed, 3 Aug 2016 18:28:38 +0200
> David Craven  wrote:
>
> > How can I tell the difference between a lgpl2.1 and lgpl2.1+ license?
>
> "or later"
>
> > Is this a job that an automated tool could do? Detecting licenses
> > included in a tarball?
>
> I also wonder about that. Usually, the license text is just copied &
> pasted anyway, so it should be quite regular.
>
> If there isn't one, I could write one which would basically, per source
> file,
> - try to find SPDX identifier, if that doesn't work:
> - ignore newline, "#" or ";" or "*" or "//" at the beginning of the line
> - lex that into words, where "word" is either [a-zA-Z0-9-]+ or [.,;]
> - try to 1:1 match with all the licenses similarily mapped
> - if that didn't work, try to find signal words and guess the license and
> print the difference in a short form.
>
> I could do that program in maybe 2 hours and find and extract all the
> official license texts in a few more hours. But does such a thing already
> exist? [Seems like something obvious to have and I'm writing many other
> things already.]
>
> A human would still have to review the non-1:1 things - there could always
> be strange exceptions in the README or whatever - but the majority of cases
> should work just fine.
>
> See also  (especially <
> https://github.com/triplecheck/>), <
> http://www.sciencedirect.com/science/article/pii/S0164121216300905> (also
> lists several license checkers; Fossology seems to be a whole webservice
> which does that).
>
>


Re: Rust

2016-07-29 Thread Jelle Licht
I looked into this once;
I found a Rust compiler written in OCaml on the web.
There should be a path from that compiler to the current version of Rust.
The problem lies in the fact that the entire "1.9 builds 1.10, 1.10
builds 1.11" spiel only became the official policy of the Rust project
recently.

disclaimer: the following is how I interpreted my communications with
the rust community. My apologies to anyone involved for any inaccuracies
and/or misrepresentations.

This means that you could be looking at individual commits, as before this,
I believe nightlies (snapshots) were used to compile the Rust compiler.

So, to add to Alex's point, we would also need to expend (one-time)
effort to find this 'bootstrap path' from the OCaml-based compiler to a
more recent rustc.

In my admittedly limited experience, the people working on Rust did not
really see a problem with the current state of affairs, so good luck
getting support with some arcane issues one might encounter packaging
these ancient versions of rustc.

To quote kibwen from [1]:
"We can determine exactly how many builds you'd need by looking at the
master snapshot file:
https://github.com/rust-lang/rust/blob/master/src/snapshots


According to this, there have been 290 snapshots in total. And keep in mind
that you would also need to rebuild LLVM quite a few times as well during
this process, as Rust has continually upgraded its custom LLVM fork over
the years."

Not sure if all this is worth the effort...

- Jelle


[1]: https://news.ycombinator.com/item?id=8732669


2016-07-29 17:34 GMT+02:00 Alex Griffin :

> On Fri, Jul 29, 2016, at 10:16 AM, Ludovic Courtès wrote:
> > Do you know what’s Rust’s bootstrapping story is?  Can we reasonably
> > expect to bootstrap it from source, using a series of previous Rust
> > versions, or using an alternative implementation?
>
> Yes, Rust 1.10 builds with the previous stable release (1.9) for the
> first time. So we will only need one binary to bootstrap Rust. Although
> this will quickly require a long chain of builds because Rust releases
> every 6 weeks and 1.11 is only guaranteed to build with 1.10, etc.
>
> So after only two years we may need to compile like 17 different
> releases to get the current version.
> --
> Alex Griffin
>
>


Re: [GSoC update] Npm & guix

2016-07-29 Thread Jelle Licht
Quick reply from my phone, but thanks for the feedback.

On Jul 29, 2016 16:53, "Catonano" <caton...@gmail.com> wrote:

> 2016-07-25 23:26 GMT+02:00 Ludovic Courtès <l...@gnu.org>:
>
>> Hello!
>>
>> Jelle Licht <jli...@fsfe.org> skribis:
>>
>> > On Ludo's advice, I snarfed Ricardo's recursive importer and bolted it
>> > on my npm importer. After leaving the importer running for quite
>> > some hours (and making it more robust in the face of inconsistent npm
>> > information), it turns out that jQuery has a direct or indirect
>> > dependency on about everything. We are talking pretty much all of the
>> > build systems, all of the testing frameworks and all of the test
>> > runners. Literally thousands of packages, and multiple (conflicting)
>> > versions of most.
>>
>> I’m really impressed that your importer can already grovel this much!
>> In itself, that’s already a significant achievement, despite the
>> frustration of not getting “guix package -i jquery” right away.
>>
>> Do you have figures on the number of vertices and edges on this graph?
>> Could it be that the recursive importer keeps revisiting the same nodes
>> over and over again?  :-)
>>
>
>
>
>> I would suggest publishing the code somewhere so others can try to
>> import their favorite JS library and give feedback.
>>
>>
> I'd like to indicate that in the Guix code base there's a function
> visiting the graph of the dependencies of a package with the concern of
> covering it all AND of not considering the same node more than one time
>
> It's in guix/packages.scm on line 552, it's called "transitive-inputs" and
> takes as an arguments the inputs of a single package.
>
> It could be used as a blueprint for such task of dependencies graphs
> covering.
>
> The main observation that I can do is that both versions do check whether
> a package being considered is already been seen in a previous passage.
>
> So, it doesn't seem to me that Jelle's version could revisit the same
> package (node) over and over
>
> Also, the "official" version clearly distinguishes between the current
> depth level in the graph and the next one, using 2 different variables for
> containing the packages (nodes) at the current depth and those in the next
> level ("inputs" and "propagated") as it does a breadth first visit.
>
> Instead the Jelle's version uses the same variable for the current AND the
> next level and it's a list and it conses the nodes for the next level on
> the current list
>
> So it seems to me that it does a depth first visit (I might be wrong, I
> didn't do a complete trace)
>
> If anything, the "official" version uses a VHash to check if a package has
> already been "seen" while Jelle's uses a plain list (again)
>
> So the cost of this check is constant in the official version and linear
> in Jelle's version.
>
> Further, in Jelle's version, every package gets checked against the list
> of already imported packages (as a plain list) AND against the packages
> already present in the store.
>
> Both checks at every iteration. It seems to me that there's space for
> improving here ;-)
>
> In fact, in Jelle's guix/import/npm.scm, on line 462 the "and" could be
> switched to an "or". It would work anyway and it could save a handful of
> checks on a complete run of the algorithm
>

You are totally right here! The quick and dirty way of doing it, which
I am currently doing, is to wrap these checks in a memoized function.
I'll take some of your points into account to properly rewrite this part.


>
> Anyway, I tried to "import" a random package and it has a ton of
> dependencies.
>

Story of my life these past few weeks :-)


>
> It seems to me that a more systematic approach (like that of a real data
> scientist ;-) ) could help here
>
>

> The whole graph should be imported in some database and then graph
> algorithms should be run on it
>
> For example: which are the packages with less or no dependencies (and a
> lot of dependants) ?
> Because those should be imported first, in my opinion.
>
> Jelle came to the notion that testing frameworks are a dependencies for
> almost all the packages but as far as I understand that's a quite empirical
> knowledge.
>

^ This, I like. Does anyone have any suggestions on tools that could
help me do this in Guile? I know of projects like [1] that already do
something similar, although I do not have any experience with
constructing and querying graphs using free software.

>
>
>> > I am currently hovering bet

[GSoC update] Npm & guix

2016-07-23 Thread Jelle Licht
Hello Guix!

After hopefully enough contemplation and a lot of elbow grease, I would
like to give you an overview of what I have been up to these past weeks.

To start off with something that might make some people less than
happy: jQuery and its dependencies will most likely not be packaged
this summer. By me, at least.

On Ludo's advice, I snarfed Ricardo's recursive importer and bolted it
on my npm importer. After leaving the importer running for quite some
hours (and making it more robust in the face of inconsistent npm
information), it turns out that jQuery has a direct or indirect
dependency on about everything. We are talking pretty much all of the
build systems, all of the testing frameworks and all of the test
runners. Literally thousands of packages, and multiple (conflicting)
versions of most.

While this is a sad realization indeed, it just makes it easier to
focus on other important packages of npm. Running the recursive
importer on a handful of packages leads me to the following possibly
redundant observations:

* Quite some packages have tests (Yay!)
* Running tests requires test frameworks and test runners

This makes it IMHO a worthwhile goal to bootstrap a working test
framework, with tests of course disabled at first for the dependencies
of this test framework. Test frameworks all have an (indirect)
dependency on the CoffeeScript compiler, of which the first version was
written in Ruby. Using this initial (alpha) compiler, and the awesome
git-bisect command, I was able to subsequently compile and use the more
modern (but still old) CoffeeScript-in-CoffeeScript compilers.

I am currently hovering between versions 0.6 and 0.7, which can
properly recompile themselves and a slightly more contemporary version.
Getting to version 1.6 from June 2013 should be doable using this exact
same approach. This will allow us to package a 2014 version of the
Mocha testing framework.

For the people more knowledgeable in these matters, how would you deal
with deprecated functionality in languages such as Python, Ruby, etc.?
Because npm packages are so interdependent, I simply need to start
somewhere, by packaging things back when they did not have as many
dependencies. Currently, I have a file containing implementations of
old Node (pre-1.0) functionality on Node 6.0. I was thinking of
releasing this 'hack' as an npm package and then packaging it in Guix.

The alternative would be to package each version of Node that was used
to build these ancient packages. For bootstrapping CoffeeScript, this
already forces us to have 3 or 4 versions of Node, although it is
conceptually a lot cleaner.

So my current view of our options:
* Backport ancient node features to a contemporary node version
* Package a significant variety of node versions
Please let me know if anyone has some thoughts, critiques or silver bullets
for this
problem.

A goal of this project is still to have a working jQuery by the end of
this summer, just not via the procedures defined by npm. My current
plan is to partially reimplement the build procedures used for jQuery,
albeit in a much simpler manner: just a single, some might say bloated,
JavaScript file created from the sources, instead of the a-la-carte
build afforded by Grunt.

In this approach, I also see something that might make packaging npm
packages more Guix-friendly: as long as we have _functional_
equivalents, we should be able to build a huge subset of packages. This
can be either a blessed version of an npm package, or a simplistic yet
correct reimplementation, depending on the complexity of the specific
package. This would only work for build-time dependencies, or
native-inputs.

Thanks for reading this WoT and looking forward to hearing your feedback.

- Jelle


Re: [PATCH] gnu: wxwidgets-2: Update to upstream's re-release of 2.8.12.

2016-07-22 Thread Jelle Licht
Hello guix,

Shamelessly stole most of this from Efraim, as wxwidgets had the same
problem.

Let's hope that it's just a couple of packages on SourceForge that have
been changed in place.


Jelle

2016-07-22 15:59 GMT+02:00 Jelle Licht <jli...@fsfe.org>:

> * gnu/packages/wxwidgets.scm (wxwidgets-2): Add a guix revision number
>   to the version scheme of wxwidgets-2 to force an update.
> ---
>  gnu/packages/wxwidgets.scm | 54
> +-
>  1 file changed, 29 insertions(+), 25 deletions(-)
>
> diff --git a/gnu/packages/wxwidgets.scm b/gnu/packages/wxwidgets.scm
> index c9eb178..f4866e1 100644
> --- a/gnu/packages/wxwidgets.scm
> +++ b/gnu/packages/wxwidgets.scm
> @@ -81,29 +81,33 @@ a graphical user interface.  It has language bindings
> for Python, Perl, Ruby
>  and many other languages.")
>  (license (list l:lgpl2.0+ (l:fsf-free "file://doc/license.txt")
>
> +;; wxwidgets version 2.8.12 was updated in-place, resulting in a hash
> +;; mismatch. This can be removed at the next version update.
>  (define-public wxwidgets-2
> -  (package
> -(inherit wxwidgets)
> -(version "2.8.12")
> -(source
> - (origin
> -   (method url-fetch)
> -   (uri (string-append "mirror://sourceforge/wxwindows/" version
> -   "/wxWidgets-" version ".tar.bz2"))
> -   (sha256
> -(base32 "1gjs9vfga60mk4j4ngiwsk9h6c7j22pw26m3asxr1jwvqbr8kkqk"
> -(inputs
> - `(("gtk" ,gtk+-2)
> -   ("libjpeg" ,libjpeg)
> -   ("libtiff" ,libtiff)
> -   ("libmspack" ,libmspack)
> -   ("sdl" ,sdl)
> -   ("unixodbc" ,unixodbc)))
> -(arguments
> - `(#:configure-flags
> -   '("--enable-unicode" "--with-regex=sys" "--with-sdl")
> -   #:make-flags
> -   (list (string-append "LDFLAGS=-Wl,-rpath="
> -(assoc-ref %outputs "out") "/lib"))
> -   ;; No 'check' target.
> -   #:tests? #f
> +  (let ((upstream-version "2.8.12")
> +(guix-revision "1"))
> +(package
> +  (inherit wxwidgets)
> +  (version (string-append upstream-version "-" guix-revision))
> +  (source
> +   (origin
> + (method url-fetch)
> + (uri (string-append "mirror://sourceforge/wxwindows/"
> upstream-version
> + "/wxWidgets-" upstream-version ".tar.bz2"))
> + (sha256
> +  (base32
> "01zp0h2rp031xn6nd8c4sr175fa4nzhwh08mhi8khs0ps39c22iv"
> +  (inputs
> +   `(("gtk" ,gtk+-2)
> + ("libjpeg" ,libjpeg)
> + ("libtiff" ,libtiff)
> + ("libmspack" ,libmspack)
> + ("sdl" ,sdl)
> + ("unixodbc" ,unixodbc)))
> +  (arguments
> +   `(#:configure-flags
> + '("--enable-unicode" "--with-regex=sys" "--with-sdl")
> + #:make-flags
> + (list (string-append "LDFLAGS=-Wl,-rpath="
> +  (assoc-ref %outputs "out") "/lib"))
> + ;; No 'check' target.
> + #:tests? #f)
> --
> 2.9.1
>
>


[PATCH] gnu: wxwidgets-2: Update to upstream's re-release of 2.8.12.

2016-07-22 Thread Jelle Licht
* gnu/packages/wxwidgets.scm (wxwidgets-2): Add a guix revision number
  to the version scheme of wxwidgets-2 to force an update.
---
 gnu/packages/wxwidgets.scm | 54 +-
 1 file changed, 29 insertions(+), 25 deletions(-)

diff --git a/gnu/packages/wxwidgets.scm b/gnu/packages/wxwidgets.scm
index c9eb178..f4866e1 100644
--- a/gnu/packages/wxwidgets.scm
+++ b/gnu/packages/wxwidgets.scm
@@ -81,29 +81,33 @@ a graphical user interface.  It has language bindings for 
Python, Perl, Ruby
 and many other languages.")
 (license (list l:lgpl2.0+ (l:fsf-free "file://doc/license.txt")
 
+;; wxwidgets version 2.8.12 was updated in-place, resulting in a hash
+;; mismatch. This can be removed at the next version update.
 (define-public wxwidgets-2
-  (package
-(inherit wxwidgets)
-(version "2.8.12")
-(source
- (origin
-   (method url-fetch)
-   (uri (string-append "mirror://sourceforge/wxwindows/" version
-   "/wxWidgets-" version ".tar.bz2"))
-   (sha256
-(base32 "1gjs9vfga60mk4j4ngiwsk9h6c7j22pw26m3asxr1jwvqbr8kkqk"
-(inputs
- `(("gtk" ,gtk+-2)
-   ("libjpeg" ,libjpeg)
-   ("libtiff" ,libtiff)
-   ("libmspack" ,libmspack)
-   ("sdl" ,sdl)
-   ("unixodbc" ,unixodbc)))
-(arguments
- `(#:configure-flags
-   '("--enable-unicode" "--with-regex=sys" "--with-sdl")
-   #:make-flags
-   (list (string-append "LDFLAGS=-Wl,-rpath="
-(assoc-ref %outputs "out") "/lib"))
-   ;; No 'check' target.
-   #:tests? #f
+  (let ((upstream-version "2.8.12")
+(guix-revision "1"))
+(package
+  (inherit wxwidgets)
+  (version (string-append upstream-version "-" guix-revision))
+  (source
+   (origin
+ (method url-fetch)
+ (uri (string-append "mirror://sourceforge/wxwindows/" upstream-version
+ "/wxWidgets-" upstream-version ".tar.bz2"))
+ (sha256
+  (base32 "01zp0h2rp031xn6nd8c4sr175fa4nzhwh08mhi8khs0ps39c22iv"
+  (inputs
+   `(("gtk" ,gtk+-2)
+ ("libjpeg" ,libjpeg)
+ ("libtiff" ,libtiff)
+ ("libmspack" ,libmspack)
+ ("sdl" ,sdl)
+ ("unixodbc" ,unixodbc)))
+  (arguments
+   `(#:configure-flags
+ '("--enable-unicode" "--with-regex=sys" "--with-sdl")
+ #:make-flags
+ (list (string-append "LDFLAGS=-Wl,-rpath="
+  (assoc-ref %outputs "out") "/lib"))
+ ;; No 'check' target.
+ #:tests? #f)
-- 
2.9.1




Re: [PATCH] gnu: node: Correctly patch npm shebang.

2016-06-15 Thread Jelle Licht

Jelle Licht <jli...@fsfe.org> writes:

> * gnu/packages/node.scm (node): Correctly patch npm shebang.
> ---
>  gnu/packages/node.scm | 6 ++
>  1 file changed, 6 insertions(+)
>
> diff --git a/gnu/packages/node.scm b/gnu/packages/node.scm
> index 2f269d0..2cedda8 100644
> --- a/gnu/packages/node.scm
> +++ b/gnu/packages/node.scm
> @@ -51,8 +51,14 @@

This patch should be applied to core-updates, as it depends on the
correct handling of symlinks in the `patch-shebangs' phase introduced by
Ludo's recent patch [1].

Thanks,
- Jelle

[1]https://lists.gnu.org/archive/html/bug-guix/2016-06/msg00043.html



[PATCH] gnu: node: Correctly patch npm shebang.

2016-06-15 Thread Jelle Licht
* gnu/packages/node.scm (node): Correctly patch npm shebang.
---
 gnu/packages/node.scm | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/gnu/packages/node.scm b/gnu/packages/node.scm
index 2f269d0..2cedda8 100644
--- a/gnu/packages/node.scm
+++ b/gnu/packages/node.scm
@@ -51,8 +51,14 @@



Re: CFP: GNU Hacker Meeting 2016

2016-06-07 Thread Jelle Licht

Efraim Flashner  writes:

> On Sun, Jun 05, 2016 at 02:59:56PM +0200, Ludovic Courtès wrote:
>> Hello Guix!
>> 
>> Now is the time to register for the GNU Hackers Meeting!  The more, the
>> merrier!
>> 
>>   https://gnunet.org/ghm2016
>>   https://www.gnu.org/ghm/upcoming.html
>> 
>> I think we can give a few talks/demos, such as a status update, an
>> overview of Guix, an overview of GuixSD, something about reproducible
>> builds, the foo bar importer, pyrubygonpm packages, Cuirass, Bournish,
>> GNU/Hurd, etc.  Let’s coordinate!  Who’s in?  :-)
>> 
>> Maybe we could have a Guix hacking session, or hard-core technical
>> discussions among Guix hackers, if there’s interest.
>> 
>> Thoughts?  Suggestions?
>> 
>> Ludo’.
>
> I'll be there! Just booked my plane ticket. Come and be one of the first
> to sign my gpg key!

That looks very interesting. Getting there is feasible for me, so I
have reserved some bus tickets. This is probably a good opportunity to
get into the whole GPG key deal as well :-).

- Jelle



[GSoC] Integrating npm into the Guix ecosystem

2016-06-06 Thread Jelle Licht

Greetings Guix hackers,

It has been some time since my last mail to this list, so I wanted to
share what I have been up to. For those who might want to watch along
after today, I will be posting the changes that should not break
everything immediately to [1]. (Apologies for it being web-only; my
sysadmin-fu is still lacking. If you know how to easily and, above all,
securely expose a git repo for read-only anonymous checkouts, please do
tell.)

These first two weeks, I have mostly kept busy writing some of the
features that will quickly let me identify problems with the import and
build procedures as I envision them. I was in luck: there were many of
these problems, as I found out ;-).

One of the things that worried me for a while was that npm was broken
after my first attempts at writing _anything_. Still somewhat
intimidated by the Guix code base at that point, I had quite some
trouble finding out what was happening. It turns out that the way Node
is built changed somewhere along the way to version 6 (in Guix, at
least), making the `npm' in the PATH of your Guix profile a _copy_ of
the actual executable `npm-cli.js' script instead of a symlink to it.

A patch to fix this is currently still brewing, so expect to see it
soon. In the meantime (and to implement something that actually
works), I have so far worked with the 5.10 release of Node.

After toiling a bit to find a nice way of getting Node to find
dependencies, I started out with the NODE_PATH variable. Some alarming
messages I read some time ago indicated that this method of letting
Node load modules is discouraged, though not deprecated.

At first I wanted to make use of a variant of the npm command to
actually take care of installing node modules to the store outputs; some
of the advantages include:

- The 'correct' files are ignored/always included
- The 'correct' symlinks are generated to executable scripts, man-pages
  and other documentation.
- Automatically up to date with new conventions in npm-land

While that all seemed (and still seems) quite nice, npm is quite
particular about the dependencies it will accept. Patching the actual
dependencies and versions in the
`package.json' file was another option, but after some consideration and
advice from my mentors I decided this would not work nicely in the long
term. Instead, I want to implement the relevant set of behaviors in
guix, of course allowing for our particular kind of dependency
management.

As such, the current installation phase mostly involves copying files to
their expected locations. I added support (as an exercise in build
system arguments) for 'global' installs[3], which as far as my current
bare-bones implementation is concerned means properly symlinking
executable scripts. In the (near-)future, I am looking into properly
wrapping these scripts to keep the number of propagated inputs to a
minimum. As a sanity test, I packaged [4] in my own little
scratchpad. If you want to follow along, it only takes 4 `guix import
npm' invocations ;).

The current importer functions as expected for the "90%" of packages
that I tried. One problem I ran into, which does not show up as readily
in other importers, is that the npm registry only distributes the
artifacts you need to _run_ npm modules, not the ones you need to
_build_ them. In most trivial cases there is literally no difference,
but for more complicated packages involving transpilation steps, this
poses a problem.

As such, the importer cannot actually 'know' the location of the
source. Right now it uses some limited heuristics to probe GitHub
repositories for a tarball release, and if these are not found or the
sources are hosted at non-GH sites, it tries to check out a tag
according to the npm packaging conventions (SemVer).
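The probing could look roughly like this. The function name and URL
shapes below are my own illustration, not the importer's actual code:
derive candidate release-tarball URLs from SemVer-style tags, with and
without the conventional leading "v".

```javascript
// Hypothetical sketch of the source-location heuristic (the real
// importer differs): given a repository URL and an npm version, list
// candidate tarball URLs for SemVer-style tags.
function candidateSourceUris(repo, version) {
  const base = repo.replace(/\/+$/, '');    // drop trailing slashes
  return [
    `${base}/archive/v${version}.tar.gz`,   // tag "v1.2.3"
    `${base}/archive/${version}.tar.gz`,    // tag "1.2.3"
  ];
}

const uris = candidateSourceUris('https://github.com/example/pkg/', '1.2.3');
console.log(uris.join('\n'));
```

Each candidate would then be probed, falling back to checking out the
tag directly when no tarball is found.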

The most important thing that needs to happen right now is to extend
the range of packages that the build system can handle. Working towards
'large' packages and test frameworks should help me quickly identify
problems. This will be my main focus for the next week.

I am aware that the current importer is quite brittle, so this needs
to change as well. As I run into problems (or someone brings them to my
attention), I will keep improving the quality of this subsystem.

If you have some advice, questions or problems, please do not hesitate
to reply. The same goes if you have some npm packages you would love to
see packaged.

Thanks a bunch for reading this WoT :)
- Jelle

[1]https://gitweb.jlicht.org/?p=guix.git;a=shortlog;h=refs/heads/gsoc-npm
[2]https://github.com/nodejs/node/issues/1627
[3]https://docs.npmjs.com/getting-started/installing-npm-packages-globally
[4]https://www.npmjs.com/package/package-json-validator


