bug#47283: Performance regression in narinfo fetching

2021-03-20 Thread Christopher Baines

Christopher Baines  writes:

> I haven't looked into this yet, but maybe it would be possible to
> adjust the code so that it doesn't perform so badly, but still tries to
> handle possible exceptions.
>
> The two ideas I have are to rewrite the (let ...) bit in terms of a fold,
> which might perform better, or to stop using let for iteration: set up the
> exception handling once, then process each request, using set! to
> update the state. I haven't tested either of these.

I tried something else, neither of these things: simply not calling (loop
...) within the catch block. I don't know exactly why this works, but it
seems to make guix weather much faster.

Here's the patch [1]. I've just realised it's broken, as it will lose the
result value (and use an old one) when the connection is closed. I'll
send an updated patch without this issue in a moment.

1: https://issues.guix.gnu.org/47288


signature.asc
Description: PGP signature


bug#46849: ELPA packages are fetched from unstable url -> not reproducible

2021-03-20 Thread zimoun


Follow-up 2.

 Start of forwarded message 
Date: Fri, 05 Mar 2021 13:08:05 +0100
From: Johannes Rosenberger 
Subject: Re: bug#46849: ELPA packages are fetched from unstable url -> not
 reproducible
To: zimoun 

Excerpts from Johannes Rosenberger's message of March 5, 2021 12:56 pm:

> Excerpts from zimoun's message of March 5, 2021 2:32 am:
> 
>> There are two solutions:
>> 
>>  1- trust the future Tarball Heritage [1]
>>  2- switch to git-fetch all the ELPA packages.
>   3- trust archive.org

and maybe a fourth one:

4- https://www.softwareheritage.org/
   (Blog entry about Nix & this by Tweag: https://www.softwareheritage.org/)

Best,

Johannes

 End of forwarded message 





bug#46849: ELPA packages are fetched from unstable url -> not reproducible

2021-03-20 Thread zimoun
Follow-up 3.

 Start of forwarded message 
From: zimoun 
To: Johannes Rosenberger 
Subject: Re: bug#46849: ELPA packages are fetched from unstable url -> not
 reproducible
Date: Fri, 05 Mar 2021 13:31:09 +0100

Hi Johannes,

On Fri, 05 Mar 2021 at 13:08, Johannes Rosenberger  wrote:

>>> There are two solutions:
>>>
>>>  1- trust the future Tarball Heritage [1]
>>>  2- switch to git-fetch all the ELPA packages.
>>   3- trust archive.org

About archive.org, I do not know.  Currently, there is no fallback to it
in Guix that I am aware of, and nothing planned AFAIK.

> and maybe a fourth one:
>
> 4- https://www.softwareheritage.org/
>(Blog entry about Nix & this by Tweag: 
> https://www.softwareheritage.org/)

Yeah, this is what I called #1. :-) Currently, via the ’nixguix’ SWH
loader [1], packages using url-fetch are archived via the file [2].
However, work remains to have a fully robust end-to-end solution:

  a) not all the extensions handled by ’url-fetch’ are archived (and I do
not remember the status of the .el files)

  b) the fallback is not robust because of inconsistent addressing
between SWH (SWHIDs) and the rest of the world (checksum hashes), to put
it quickly.

The aim of the Disarchive project [3] is to address b) by creating a
bridge: it stores the structure and metadata of an archive in a separate
database [4] and can then rebuild the archive from a checksum, using the
files addressed by SWHID.
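
To make the bridge idea concrete, here is a rough sketch of what such a
round-trip could look like on the command line.  The subcommand names and
file layout are my guesses about the tool's interface, not verified against
Disarchive itself:

```shell
# Hypothetical round-trip: extract an archive's metadata, store it,
# and later rebuild the bit-identical tarball from content + metadata.
disarchive disassemble hello-2.10.tar.gz > hello-2.10.dis  # metadata only
# ...fetch the raw source files from SWH by their SWHID, then:
disarchive assemble hello-2.10.dis hello-2.10/             # rebuild tarball
sha256sum hello-2.10.tar.gz    # should match the hash Guix recorded
```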


1: 

2: 
3: 
4: 


Cheers,
simon
 End of forwarded message 





bug#46849: ELPA packages are fetched from unstable url -> not reproducible

2021-03-20 Thread zimoun
Hi,

I made a mistake and the bug report had not been CC'd.

Cheers,
simon

 Start of forwarded message 
From: zimoun 
To: Johannes Rosenberger 
Subject: Re: bug#46849: ELPA packages are fetched from unstable url -> not
 reproducible
Date: Fri, 05 Mar 2021 02:32:11 +0100

Hi,

Thanks for the notification.

On Mon, 01 Mar 2021 at 14:15, Johannes Rosenberger  wrote:

> These are only available for the newest version of a package.
> ELPA keeps compressed archives of only around 20 hand-selected versions.
> All package versions are kept in their git repo, which is a complete archive,
> but there you must somehow extract the commit hash of a version.

So it would break the “guix time-machine”, right?

There are two solutions:

 1- trust the future Tarball Heritage [1]
 2- switch to git-fetch all the ELPA packages.
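
For reference, option 2 would mean pinning each package to a commit of
elpa.git, roughly like this.  It is only a sketch; the package name, commit,
and hash below are placeholders, not a real package:

```scheme
;; Hypothetical origin for an ELPA package fetched from elpa.git at a
;; pinned commit, so that “guix time-machine” always gets the same bytes.
(origin
  (method git-fetch)
  (uri (git-reference
        (url "https://git.savannah.gnu.org/git/emacs/elpa.git")
        ;; Placeholder: the commit corresponding to the release.
        (commit "0123456789abcdef0123456789abcdef01234567")))
  (file-name (git-file-name "emacs-foo" "1.0"))
  (sha256
   (base32 "0000000000000000000000000000000000000000000000000000")))
```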

> - https://debbugs.gnu.org/cgi/bugreport.cgi?bug=46441

About #2, I am confused by this quote:

If you can work from the elpa.git instead, then you'll avoid
those problems (but the content is slightly different, so it
might be less convenient).


1: 


All the best,
simon
 End of forwarded message 





bug#46849: ELPA packages are fetched from unstable url -> not reproducible

2021-03-20 Thread zimoun


Follow-up.

 Start of forwarded message 
Date: Fri, 05 Mar 2021 12:56:27 +0100
From: Johannes Rosenberger 
Subject: Re: bug#46849: ELPA packages are fetched from unstable url -> not
 reproducible
To: zimoun 

Hi Simon,


Excerpts from zimoun's message of March 5, 2021 2:32 am:

> On Mon, 01 Mar 2021 at 14:15, Johannes Rosenberger  wrote:
> 
>> These are only available for the newest version of a package.
>> ELPA keeps compressed archives of only around 20 hand-selected versions.
>> All package versions are kept in their git repo, which is a complete archive,
>> but there you must somehow extract the commit hash of a version.
> 
> So it would break the “guix time-machine”, right?

Not only this.  In Nixpkgs it broke the release of auctex in the
stable branch, because it wasn't at the newest version.
The old version was still available lz-compressed, but there is no
guarantee of this.

> There are two solutions:
> 
>  1- trust the future Tarball Heritage [1]
>  2- switch to git-fetch all the ELPA packages.

I documented (2) there:

https://github.com/NixOS/nixpkgs/issues/110796#issuecomment-779297144

There is also a third solution:

   3- trust archive.org

In Nixpkgs we also add archive.org URLs as secondary source URLs for
proprietary printer drivers.

>> - https://debbugs.gnu.org/cgi/bugreport.cgi?bug=46441
> 
> About #2, I am confused by this quote:
> 
> If you can work from the elpa.git instead, then you'll avoid
> those problems (but the content is slightly different, so it
> might be less convenient).

I don't understand this sentence either, because the file

http://git.savannah.gnu.org/cgit/emacs/elpa.git/tree/elpa-admin.el?h=elpa-admin

seems to create the packages, so every package in the ELPA archive built
from the git should be the same.  One could check whether all packages on
ELPA are also in the git repo and vice versa.  Also, some packages might
not be `external` in the language of ELPA, i.e., not residing in an
`external/*` branch.


Best,

Johannes
 End of forwarded message 





bug#47221: [PATCH v2]: Correct some inputs / native-inputs issues with guile

2021-03-20 Thread Maxime Devos
Hi Guix,

The first patch ‘gnu: Explicitly pass the guile binary to wrap-script.’
does what its title says.  This is important for picking up the guile
binary for the architecture in --target instead of the guile from native-inputs.

(The unpatched & patched package definitions do not have a guile native-input,
so before this patch they wouldn't pick up a guile at all -- packages in inputs
do not contribute to $PATH when cross-compiling.)

The second patch ‘gnu: Make guile packages cross-compilable when the fix is
trivial.’ only touches guile libraries.  It adds guile to the native-inputs
when required; sometimes it adds guile to inputs, and sometimes it copies
propagated-inputs & inputs to native-inputs when cross-compiling.  It also
fixes some other cross-compilation issues, like autoconf being in inputs
instead of native-inputs.

The second patch only touches gnu/packages/guile-xyz.scm; other packages
are ignored.  Also ignored are: emacsy-minimal (there have been patches
and/or bug reports lately), guile-bash (retired project) and guile-studio
(it is an Emacs package).

Suggested testing method:

(start shell script)
# BAD_PACKAGES: fails to compile
BAD_PACKAGES="guile2.0-commonmark guile3.0-ncurses-with-gpm guile-dbi 
guile2.2-pfds"

# OK_PACKAGES: compiles & cross-compiles
OK_PACKAGES="guildhall guile-daemon guile-dsv guile-filesystem
guile2.0-filesystem guile2.2-filesystem guile-syntax-highlight
guile2.2-syntax-highlight guile-sjson guile-sparql guile-email guile-mastodon
guile-parted guile2.2-parted guile-config guile-hall guile-wisp
guile-commonmark guile-stis-parser guile-persist guile-file-names
guile-srfi-158 guile-semver jupyter-guile-kernel guile-ics srfi-64-driver
guile-websocket g-wrap guile-webutils guile-srfi-159 guile-torrent guile-irc
guile-machine-code guile2.2-sjson guile2.2-dsv guile2.2-email guile2.2-config
guile2.2-hall guile2.2-ics guile2.2-wisp guile2.2-commonmark guile2.2-semver
guile2.2-webutils guile-redis guile2.2-redis guile2.0-redis guile-irregex
guile2.0-irregex guile2.2-irregex mcron guile2.2-mcron guile-srfi-89
guile-srfi-145 guile-srfi-180 guile-jpeg guile-hashing guile2.2-hashing
guile-packrat guile-ac-d-bus guile-lens guile2.2-lens guile-rdf guile-jsonld
guile-struct-pack guile-laesare guile-mkdir-p guile-jwt guile-r6rs-protobuf
guile-shapefile schmutz guile-cbor guile-8sync guile-squee guile2.2-squee
guile-colorized guile2.2-colorized guile-pfds guile-prometheus guile-aa-tree
guile-simple-zmq guile2.2-simple-zmq guile-debbugs guile-email-latest
guile-miniadapton guile-lib guile2.0-lib guile2.2-lib guile-minikanren
guile2.0-minikanren guile2.2-minikanren python-on-guile"

# NOT_CROSS_COMPILABLE: self-describing (compiles natively)
NOT_CROSS_COMPILABLE="guile-cv guile-gi guile-ncurses g-golf 
guile-picture-language guile-sly guile-aspell guile-fibers guile-sodium
guile-reader guile-udev haunt guile2.2-haunt guile2.0-haunt guile2.2-ncurses 
guile-ncurses-with-gpm guile-xosd artanis guile-xapian
guile2.2-xapian guile-newt guile2.2-newt guile2.2-reader guile-avahi 
guile2.0-pg guile-libyaml guile-eris guile-ffi-fftw "

make
# replace aarch64 with architecture of choice and maybe adjust -M and -c
./pre-inst-env guix build $OK_PACKAGES $NOT_CROSS_COMPILABLE -M6 -c1 --fallback
./pre-inst-env guix build $OK_PACKAGES -M6 -c1 --target=aarch64-linux-gnu 
--fallback
make as-derivation
(end shell script)

Greetings,
Maxime
From 039d1526600971e90a3ff5183ee7a2fe3055af5b Mon Sep 17 00:00:00 2001
From: Maxime Devos 
Date: Thu, 18 Mar 2021 14:40:20 +0100
Subject: [PATCH 1/2] gnu: Explicitly pass the guile binary to wrap-script.

If the #:guile argument of wrap-script is not set,
then a guile binary will be searched for in the PATH.

When cross-compiling, this would result in a guile package
compiled for a wrong architecture being used (if guile is in
native-inputs) or a broken wrapper script that tries to use
"#f" as interpreter.

Note that there are more cross-compilation issues
lurking in the affected packages, e.g. gess uses a
python of the incorrect architecture.

Partially fixes: .

* gnu/packages/xdisorg.scm (clipmenu)[arguments]: Use the
  guile of the target platform in wrap-script.
* gnu/packages/vpn.scm (vpnc-scripts)[arguments]: Likewise.
* gnu/packages/mail.scm (sieve-connect)[arguments]: Likewise.
* gnu/packages/bioinformatics.scm
  (proteinortho)[arguments]: Likewise.
  (prinseq)[arguments]: Likewise.
  (gess)[arguments]: Likewise.
  (nanopolish)[arguments]: Likewise.
---
 gnu/packages/bioinformatics.scm | 29 -
 gnu/packages/mail.scm   |  4 +++-
 gnu/packages/vpn.scm|  2 ++
 gnu/packages/xdisorg.scm|  4 +++-
 4 files changed, 28 insertions(+), 11 deletions(-)

diff --git a/gnu/packages/bioinformatics.scm b/gnu/packages/bioinformatics.scm
index eb466868d1..d62a6f8643 100644
--- a/gnu/packages/bioinformatics.scm
+++ b/gnu/packages/bioinformatics.scm
@@ 

bug#47256: generic-html updater does not work for mediainfo package

2021-03-20 Thread Ludovic Courtès
Hi,

Léo Le Bouter  skribis:

> $ ./pre-inst-env guix refresh mediainfo
> gnu/packages/video.scm:3852:2: warning: 'generic-html' updater failed
> to determine available releases for mediainfo

Fixed in 1710e8cb59e2ef6f1e0eb19235b6a4f1b8158bc4, thanks!

Ludo’.





bug#47239: Test failure in tests/publish.scm with commit 1955ef93b76e51cab5bed4c90f7eb9df7035355a

2021-03-20 Thread Ludovic Courtès
Hi Konrad,

Konrad Hinsen  skribis:

> test-name: with cache
> location: /home/hinsen/src/guix/tests/publish.scm:417
> source:
> + (test-equal
> +   "with cache"
> +   (list #t
> + `(("StorePath" unquote %item)
> +   ("URL"
> +unquote
> +(string-append "nar/gzip/" (basename %item)))
> +   ("Compression" . "gzip"))
> + 200
> + #t
> + #t
> + 404)
> +   (call-with-temporary-directory
> + (lambda (cache)
> +   (let ((thread
> +   (with-separate-output-ports
> + (call-with-new-thread
> +   (lambda ()
> + (guix-publish
> +   "--port=6797"
> +   "-C2"
> +   (string-append "--cache=" cache)
> +   "--cache-bypass-threshold=0"))
> + (wait-until-ready 6797)
> + (let* ((base "http://localhost:6797/")
> +(part (store-path-hash-part %item))
> +(url (string-append base part ".narinfo"))
> +(nar-url
> +  (string-append base "nar/gzip/" (basename %item)))
> +(cached
> +  (string-append
> +cache
> +"/gzip/"
> +(basename %item)
> +".narinfo"))
> +(nar (string-append
> +   cache
> +   "/gzip/"
> +   (basename %item)
> +   ".nar"))
> +(response (http-get url)))
> +   (and (= 404 (response-code response))
> +(match (assq-ref
> + (response-headers response)
> + 'cache-control)
> +   ((((quote max-age) . ttl)) (< ttl 3600)))
> +(wait-for-file cached)
> +(= 420 (stat:perms (lstat cached)))
> +(= 420 (stat:perms (lstat nar)))
> +(let* ((body (http-get-port url))
> +   (compressed (http-get nar-url))
> +   (uncompressed
> + (http-get
> +   (string-append base "nar/" (basename %item
> +   (narinfo (recutils->alist body)))
> +  (list (file-exists? nar)
> +(filter
> +  (lambda (item)
> +(match item
> +   (("Compression" . _) #t)
> +   (("StorePath" . _) #t)
> +   (("URL" . _) #t)
> +   (_ #f)))
> +  narinfo)
> +(response-code compressed)
> +(= (response-content-length compressed)
> +   (stat:size (stat nar)))
> +(= (string->number (assoc-ref narinfo "FileSize"))
> +   (stat:size (stat nar)))
> +(response-code uncompressed)
> expected-value: (#t (("StorePath" . 
> "/home/hinsen/src/guix/test-tmp/store/892j9b0gqgbj4a7sv40jif3yyv25sm90-item") 
> ("URL" . "nar/gzip/892j9b0gqgbj4a7sv40jif3yyv25sm90-item") ("Compression" . 
> "gzip")) 200 #t #t 404)
> actual-value: #f
> result: FAIL

Is it reproducible?  (You can run “make check TESTS=tests/publish.scm”.)

If it is, could you add ‘pk’ calls here and there to see which of the
sub-expressions in (and …) returns false?

For example, replace:

  (= 404 (response-code response))

by:

  (pk 'four-oh-four (= 404 (response-code response)))

That’ll print a line in the test log with the value of that (= …)
expression.

TIA,
Ludo’.





bug#47283: Performance regression in narinfo fetching

2021-03-20 Thread Christopher Baines

Ludovic Courtès  writes:

> As reported on guix-devel, ‘guix weather’ has become extremely slow.
> Specifically, in the narinfo-fetching phase, it runs at 100% CPU, even
> though that part should be network-bound (pipelined HTTP GETs).
>
> A profile of the ‘report-server-coverage’ call would show this:
>
> --8<---cut here---start->8---
> % cumulative   self 
> time   seconds seconds  procedure
>  62.50  1.06  1.06  fluid-ref*
>   6.25  0.11  0.11  regexp-exec
>   3.13  0.05  0.05  ice-9/boot-9.scm:1738:4:throw
>   2.08  0.04  0.04  string-index
>   2.08  0.04  0.04  write
>   1.04  568.08  0.02  ice-9/boot-9.scm:1673:4:with-exception-handler
>   1.04  0.02  0.02  %read-line
>   1.04  0.02  0.02  guix/ci.scm:78:0:json->build
>   1.04  0.02  0.02  string-append
> --8<---cut here---end--->8---
>
> More than half of the time spent in ‘fluid-ref*’—sounds fishy.
>
> Where does that call come from?  There seems to be a single caller,
> in boot-9.scm:
>
> (define* (raise-exception exn #:key (continuable? #f))
>   (define (capture-current-exception-handlers)
> ;; FIXME: This is quadratic.
> (let lp ((depth 0))
>   (let ((h (fluid-ref* %exception-handler depth)))
> (if h
> (cons h (lp (1+ depth)))
> (list fallback-exception-handler)
>   ;; …
>   )
>
> We must be abusing exceptions somewhere…
>
> Indeed, there’s one place on the hot path where we install exception
> handlers: in ‘http-multiple-get’ (from commit
> 205833b72c5517915a47a50dbe28e7024dc74e57).  I don’t think it’s needed,
> is it?  (But if it is, let’s find another approach, this one is
> prohibitively expensive.)

I think the exception handling has moved around, but I guess the
exceptions that could be caught in http-multiple-get could happen,
right? I am really just guessing here, as Guile doesn't tell you
about possible exceptions, and I haven't spent enough time reading all
the code involved to find out whether they are definitely possible.

> A simple performance test is:
>
>   rm -rf ~/.cache/guix/substitute/
>   time ./pre-inst-env guix weather $(guix package -A|head -500| cut -f1)
>
> After removing this ‘catch’ in ‘http-multiple-get’, the profile is
> flatter:
>
> --8<---cut here---start->8---
> % cumulative   self
> time   seconds seconds  procedure
>   8.33  0.07  0.07  string-index
>   8.33  0.07  0.07  regexp-exec
>   5.56  0.05  0.05  anon #x154af88
>   5.56  0.05  0.05  write
>   5.56  0.05  0.05  string-tokenize
>   5.56  0.05  0.05  read-char
>   5.56  0.05  0.05  set-certificate-credentials-x509-trust-data!
>   5.56  0.05  0.05  %read-line
> --8<---cut here---end--->8---
>
> There’s also this ‘call-with-connection-error-handling’ call in (guix
> substitute), around an ‘http-multiple-get’ call, that may not be
> justified.
>
> Attached is a diff of the tweaks I made to test this.
>
> WDYT, Chris?

I haven't looked into this yet, but maybe it would be possible to
adjust the code so that it doesn't perform so badly, but still tries to
handle possible exceptions.

The two ideas I have are to rewrite the (let ...) bit in terms of a fold,
which might perform better, or to stop using let for iteration: set up the
exception handling once, then process each request, using set! to update
the state. I haven't tested either of these.
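
One way to keep the exception handling without paying for it on every
iteration is to make the handler return a decision and continue the loop
outside the catch block.  A skeleton of that shape (this is not the actual
http-multiple-get code; process-one and transient-network-error? are
made-up helpers):

```scheme
;; Simplified skeleton: the handler classifies the error instead of
;; looping, and the loop continues outside 'catch', so each iteration
;; does not install yet another exception handler around the next one.
(let loop ((requests requests) (result result))
  (match requests
    (() result)
    ((head . tail)
     (match (catch #t
              (lambda () (cons 'ok (process-one head result)))
              (lambda (key . args)
                (if (transient-network-error? key args)
                    'retry
                    (apply throw key args))))
       (('ok . result) (loop tail result))   ;continue, outside the catch
       ('retry (loop requests result))))))   ;reconnect and try again
```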

It's good to know that Guile exception handling can be excessively
expensive, though; I wouldn't have expected it to beat anything over
the network in terms of performance penalty.


signature.asc
Description: PGP signature


bug#47266: guix pull: error (substituter)

2021-03-20 Thread Christoph Schumacher
Hi,

repeating "guix pull && systemctl restart guix-daemon.service"
again and again finally worked.  :-)

Thank you very much!
--Christoph






bug#47283: Performance regression in narinfo fetching

2021-03-20 Thread Ludovic Courtès
Hello!

As reported on guix-devel, ‘guix weather’ has become extremely slow.
Specifically, in the narinfo-fetching phase, it runs at 100% CPU, even
though that part should be network-bound (pipelined HTTP GETs).

A profile of the ‘report-server-coverage’ call would show this:

--8<---cut here---start->8---
% cumulative   self 
time   seconds seconds  procedure
 62.50  1.06  1.06  fluid-ref*
  6.25  0.11  0.11  regexp-exec
  3.13  0.05  0.05  ice-9/boot-9.scm:1738:4:throw
  2.08  0.04  0.04  string-index
  2.08  0.04  0.04  write
  1.04  568.08  0.02  ice-9/boot-9.scm:1673:4:with-exception-handler
  1.04  0.02  0.02  %read-line
  1.04  0.02  0.02  guix/ci.scm:78:0:json->build
  1.04  0.02  0.02  string-append
--8<---cut here---end--->8---

More than half of the time spent in ‘fluid-ref*’—sounds fishy.

Where does that call come from?  There seems to be a single caller,
in boot-9.scm:

(define* (raise-exception exn #:key (continuable? #f))
  (define (capture-current-exception-handlers)
;; FIXME: This is quadratic.
(let lp ((depth 0))
  (let ((h (fluid-ref* %exception-handler depth)))
(if h
(cons h (lp (1+ depth)))
(list fallback-exception-handler)
  ;; …
  )

We must be abusing exceptions somewhere…

Indeed, there’s one place on the hot path where we install exception
handlers: in ‘http-multiple-get’ (from commit
205833b72c5517915a47a50dbe28e7024dc74e57).  I don’t think it’s needed,
is it?  (But if it is, let’s find another approach, this one is
prohibitively expensive.)

A simple performance test is:

  rm -rf ~/.cache/guix/substitute/
  time ./pre-inst-env guix weather $(guix package -A|head -500| cut -f1)

After removing this ‘catch’ in ‘http-multiple-get’, the profile is
flatter:

--8<---cut here---start->8---
% cumulative   self 
time   seconds seconds  procedure
  8.33  0.07  0.07  string-index
  8.33  0.07  0.07  regexp-exec
  5.56  0.05  0.05  anon #x154af88
  5.56  0.05  0.05  write
  5.56  0.05  0.05  string-tokenize
  5.56  0.05  0.05  read-char
  5.56  0.05  0.05  set-certificate-credentials-x509-trust-data!
  5.56  0.05  0.05  %read-line
--8<---cut here---end--->8---

There’s also this ‘call-with-connection-error-handling’ call in (guix
substitute), around an ‘http-multiple-get’ call, that may not be
justified.

Attached is a diff of the tweaks I made to test this.

WDYT, Chris?

Ludo’.

diff --git a/guix/http-client.scm b/guix/http-client.scm
index 4b4c14ed0b..a28523201e 100644
--- a/guix/http-client.scm
+++ b/guix/http-client.scm
@@ -219,42 +219,21 @@ returning."
  (remainder
   (connect p remainder result
   ((head tail ...)
-   (catch #t
- (lambda ()
-   (let* ((resp   (read-response p))
-  (body   (response-body-port resp))
-  (result (proc head resp body result)))
- ;; The server can choose to stop responding at any time,
- ;; in which case we have to try again.  Check whether
- ;; that is the case.  Note that even upon "Connection:
- ;; close", we can read from BODY.
- (match (assq 'connection (response-headers resp))
-   (('connection 'close)
-(close-port p)
-(connect #f   ;try again
- (drop requests (+ 1 processed))
- result))
-   (_
-(loop tail (+ 1 processed) result) ;keep going
- (lambda (key . args)
-   ;; If PORT was cached and the server closed the connection
-   ;; in the meantime, we get EPIPE.  In that case, open a
-   ;; fresh connection and retry.  We might also get
-   ;; 'bad-response or a similar exception from (web response)
-   ;; later on, once we've sent the request, or a
-   ;; ERROR/INVALID-SESSION from GnuTLS.
-   (if (or (and (eq? key 'system-error)
-(= EPIPE (system-error-errno `(,key ,@args
-   (and (eq? key 'gnutls-error)
-(eq? (first args) error/invalid-session))
-   (memq key
- '(bad-response bad-header bad-header-component)))
-   (begin
- (close-port p)
- (connect #f  ; try again
-  (drop requests (+ 1 processed))
-  result))
-   (apply throw key 

bug#47258: guix pull bug: the program '/gnu/store/...-compute-guix-derivation' failed to compute the derivation for Guix

2021-03-20 Thread Pierre Neidhardt
Ludovic Courtès  writes:

> This is fixed in ef2b9322fae1d03bf639924d12214b0f58c11054 (it was
> introduced hours before).

Thanks!

-- 
Pierre Neidhardt
https://ambrevar.xyz/


signature.asc
Description: PGP signature


bug#47271: guix graph --path results in backtrace

2021-03-20 Thread Mark H Weaver
Julien Lepiller  writes:
> Sounds like you might have stale .go files somewhere maybe?

Indeed, make clean-go fixed it.

Thanks,
  Mark





bug#47154: ungoogled-chromium@88.0.4324.182 package vulnerable to various severe CVEs

2021-03-20 Thread Marius Bakke
Hello!

Sorry for not seeing this earlier.

Léo Le Bouter  skriver:

> I am not sure how to undertake this upgrade, I tried a little bit but
> it failed at failing to delete some bundled third_party directories.
>
> Would love to know in more detail what is the process for upgrading
> ungoogled-chromium, license checking and patch rebasing if necessary.

For major upgrades such as 88->89, I usually comment out the pruning
script from the snippet, and add a phase such as...

  (add-after 'unpack 'prune
(lambda _
  (apply invoke "python"
 "build/linux/unbundle/remove_bundled_libraries.py"
 "--do-remove" (list ,@%preserved-third-party-files

...to avoid having to repack for every change to
%preserved-third-party-files.

Then just run './pre-inst-env guix build ...' as usual, see what the
configure phase reports, and adjust %preserved-third-party-files
accordingly.

Each "third_party" directory contains a README.chromium with license
information.  That file is not always correct (i.e. listing a single
license when multiple are involved), so I typically check the source
files too.

For patch rebasing, sometimes I make the necessary adjustments manually
and use plain old "diff"; other times I'll create a git repository from
the vanilla Chromium source, apply patches, branch out and try to
cherry-pick the patches to the new version in order to benefit from
git's conflict markers.
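
The cherry-pick variant can be demonstrated on a throwaway repository; the
file names below are of course stand-ins for the Chromium tree and patches:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q chromium && cd chromium
git config user.email demo@example.com
git config user.name demo
echo 'release 88' > VERSION            # stand-in for the vanilla 88 source
git add VERSION && git commit -qm 'vanilla 88'
git tag v88
git checkout -qb patched-88            # branch carrying the distro patch
echo 'distro tweak' > tweak.txt
git add tweak.txt && git commit -qm 'distro patch'
git checkout -qb vanilla-89 v88        # import the new upstream release
echo 'release 89' > VERSION
git commit -qam 'vanilla 89'
git checkout -qb patched-89            # replay the patch onto 89; any
git cherry-pick patched-88 >/dev/null  # conflict would show git's markers
test -f tweak.txt && echo 'patch carried over'
```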

I also keep an eye on the Arch and Gentoo Chromium packages for
"inspiration" (that's how I found the recent Opus patch).

Hope this helps, and thanks for the interest in helping out with
maintaining this package.  :-)


signature.asc
Description: PGP signature


bug#37289: XDG Portals missing

2021-03-20 Thread Anon via web
Portals are now available:
https://git.savannah.gnu.org/cgit/guix.git/commit/gnu/packages/freedesktop.scm?id=e4b8feaf5c503ae5e13efa565a2b6c7b4bb2f946






bug#47115: Redundant library grafts leads to breakage

2021-03-20 Thread Ludovic Courtès
Hi Mark,

Mark H Weaver  skribis:

> I think that my last hypothesis was on the right track, but not quite
> right:
>
> * Instead of 'libcairo' being loaded twice, I now suspect that
>   "libguile-cairo.so" is being loaded twice.
>
> * Instead of the original and replacement libraries being loaded, I now
>   suspect that two different variants of the replacement "guile-cairo"
>   are being loaded.
>
> * Instead of libcairo type tags being duplicated, I now suspect that
>   duplicated smob tags are being allocated.
>
> However, *if* deduplication is enabled, two redundant replacements
> created by grafting _should_ occupy the same inodes, assuming that the
> replacement mappings are the same (modulo ordering), and assuming that
> /gnu/store/.links doesn't hit a directory size limit (which can happen
> on ext3/4, leading to missed deduplication opportunities).

Woow, thanks for the investigation!  You wouldn’t think that
deduplication can have an effect on this kind of bug.

> I've known about these redundant replacements in Guix for many years,
> but was not aware of any significant practical problems arising from
> them until now.

Do you know why the two guile-cairo grafts differ in this case?

I’m aware of one case that can lead to that: the grafting code can
create grafts for just one output of the original derivation, or for all
of them (commit 482fda2729c3e76999892cb8f9a0391a7bd37119).  Maybe that’s
what’s happening here?

Thank you!

Ludo’.





bug#47258: guix pull bug: the program '/gnu/store/...-compute-guix-derivation' failed to compute the derivation for Guix

2021-03-20 Thread Ludovic Courtès
Pierre Neidhardt  skribis:

> I just noticed these curious warnings:
>
> building 
> /gnu/store/q5p09vryxv0r2hp0a9caaz9kmfcflsh5-compute-guix-derivation.drv...
> Computing Guix derivation for 'x86_64-linux'...  ;;; Failed to autoload 
> repository-discover in (git repository):
> ;;; no code for module (git repository)
> ;;; Failed to autoload repository-discover in (git repository):
> ;;; no code for module (git repository)
> ;;; Failed to autoload repository-open in (git repository):
> ;;; no code for module (git repository)
> ;;; Failed to autoload repository-open in (git repository):
> ;;; no code for module (git repository)

This is fixed in ef2b9322fae1d03bf639924d12214b0f58c11054 (it was
introduced hours before).

Ludo’.





bug#47258: guix pull bug: the program '/gnu/store/...-compute-guix-derivation' failed to compute the derivation for Guix

2021-03-20 Thread Ludovic Courtès
Hi Pierre,

Pierre Neidhardt  skribis:

> @ substituter-started 
> /gnu/store/d2c09h0xxlmql98yd989hibi347arwqv-libtiff-4.2.0-doc substitute
> guix substitute: error: host name lookup error: Name or service not known
> @ substituter-failed 
> /gnu/store/d2c09h0xxlmql98yd989hibi347arwqv-libtiff-4.2.0-doc  fetching path 
> `/gnu/store/d2c09h0xxlmql98yd989hibi347arwqv-libtiff-4.2.0-doc' (empty 
> status: '')

Looks like you experienced a transient networking error.

Now, error reporting is terrible.  :-/

Ludo’.





bug#47266: guix pull: error (substituter)

2021-03-20 Thread Ludovic Courtès
Hi,

Christoph Schumacher 
skribis:

>> I suspect the bug is the same as .

[...]

> $ guix describe -p /var/guix/profiles/per-user/root/current-guix
> Generation 136   Mar 12 2021 20:25:23   (current)
>   guix 35b3ab8
> repository URL: https://git.savannah.gnu.org/git/guix.git
> branch: master
> commit: 35b3ab8e5748d9911ae7a0189065d0c25392895b
>
> $ /var/guix/profiles/per-user/root/current-guix/bin/guix-daemon --version
> guix-daemon (GNU Guix) 1.2.0-15.f8953be

OK, that confirms my guess.  To get the fix for
, you need to pull (concretely, it
was fixed by the ‘guix’ package upgrade in
94f03125463ee0dba2f7916fcd43fd19d4b6c892).

Could you upgrade and restart your daemon?

  https://guix.gnu.org/manual/en/html_node/Upgrading-Guix.html

(If you get a bad-response crash while attempting to upgrade, just retry
the operation; it’s transient so every retry makes progress.)

Let us know if the problem still appears after that.  I’m closing the
bug meanwhile.

Thanks for your feedback!

Ludo’.





bug#47253: network-manager shepherd services does not wait to be online

2021-03-20 Thread raid5atemyhomework via Bug reports for GNU Guix
Hello Mark,

> [*] I'll note, however, that merely waiting up to 30 seconds (or whatever
> timeout you choose) is not, in itself, a robust solution.  What
> happens if the network is down for more than 30 seconds?  What if it
> goes down after 'nm-online' checks, but before the dependent service has
> finished starting?

The sysadmin has to go look at what is wrong and fix it, then restart
services manually as needed.  Presumably the sysadmin is competent enough to
care for the hardware so this doesn't occur (too often).

What this avoids is the case where everything in the hardware setup (cables,
NIC, router, hub, router config, etc.) is 100% fine but a reboot of the
system for any reason causes services starting at boot to fail to start
properly.  Competent sysadmins will set up alarm bells for when an important
daemon is not running.  But if such alarm bells keep getting set off during a
server restart, it gets annoying and makes the sysadmin pay less attention to
the alarm bells that *are* important enough to warrant checking the hardware
setup.

So the common 30-second timeout used in systemd is a fairly good compromise
anyway.  Probably your alarm bells check things hourly or so, and exiting
after 30 seconds allows other services (e.g. a direct X server on the server,
perhaps?) to start as well, so a sysadmin can sit at the console and work the
issue directly.  It's not perfect, but it's good enough for most things.
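
For reference, that arrangement could be approximated in Guix with a
one-shot Shepherd service along these lines.  This is an untested
illustration; the ‘network-online’ service name and the exact nm-online
invocation are my assumptions, not an existing Guix service:

```scheme
;; Illustrative one-shot service: block for at most 30 seconds waiting
;; for NetworkManager to report connectivity ('nm-online -q -t 30'),
;; and treat a timeout as non-fatal so boot is never held up forever.
(shepherd-service
 (provision '(network-online))
 (requirement '(networking))
 (one-shot? #t)
 (start #~(lambda _
            (system* #$(file-append network-manager "/bin/nm-online")
                     "-q" "-t" "30")
            ;; Start dependent services regardless of the outcome.
            #t)))
```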

> Also, if a service fails to handle lack of network
> when it starts, it makes me wonder whether it properly handles a
> prolonged network failure while it's running. It seems to me that the
> only fully satisfactory solution is for each service to robustly handle
> network failures at any time, although I acknowledge that workarounds
> are needed in the meantime.

Indeed, and the Guix substituter, for example, is fairly brittle against
internet connectivity problems, not just at the local networking level but
anywhere from the local network connection all the way to ci.guix.gnu.org.

Thanks
raid5atemyhomework





bug#47253: network-manager shepherd services does not wait to be online

2021-03-20 Thread Mark H Weaver
Hi,

Earlier, I wrote:
>> How about leaving "networking" as it is now, and instead adding a new
>> service called "network-online" or similar, that requires "networking"
>> and then waits until a network connection is established?

I withdraw my proposal for a separate "network-online" service.  It was
a half-baked idea made in haste.  Now that I've looked, I see that
almost every service in Guix that requires 'networking' should
arguably[*] wait until the network comes up before starting up.
Moreover, now that I think about it, I'm not sure what the use case
would be for requiring 'networking', if not to wait for the network to
come up.

My immediate concern here is to avoid blocking the startup of a typical
Guix desktop or laptop system for 30 seconds if there's no network
connection, and more generally to keep Guix working well for users like
myself who are not "always online".

I haven't yet looked into the details, but at first glance, I'm inclined
to agree with you that the right place to fix this is in Shepherd.
Somehow, it ought to be possible to delay the startup of services that
require 'networking', without delaying anything else.

   Mark


[*] I'll note, however, that merely waiting up to 30 seconds (or
whatever timeout you choose) is not, in itself, a robust solution.  What
happens if the network is down for more than 30 seconds?  What if it
goes down after 'nm-online' checks, but before the dependent service has
finished starting?  Also, if a service fails to handle lack of network
when it starts, it makes me wonder whether it properly handles a
prolonged network failure while it's running.  It seems to me that the
only fully satisfactory solution is for each service to robustly handle
network failures at any time, although I acknowledge that workarounds
are needed in the meantime.