Re: Using G-Expressions for public keys (substitutes and possibly more)

2021-10-21 Thread Liliana Marie Prikler
Hi Ludo,

Am Donnerstag, den 21.10.2021, 22:13 +0200 schrieb Ludovic Courtès:
> Hi!
> 
> Liliana Marie Prikler  skribis:
> 
> > let's say I wanted to add my own substitute server to my
> > config.scm. 
> > At the time of writing, I would have to add said server's public
> > key to
> > the authorized-keys of my guix-configuration like so:
> >   (cons* (local-file "my-key.pub") %default-authorized-guix-keys)
> > or similarly with append.  This local-file incantation is however
> > pretty weak.  It changes based on the current working directory and
> > even if I were to use an absolute path, I'd have to copy both that
> > file
> > and the config.scm to a new machine were I to use the same
> > configuration there as well.
> 
> Note that you could use ‘plain-file’ instead of ‘local-file’ and
> inline the key canonical sexp in there.
Yes, but for that I'd have to either write a (multi-line) string
directly, which visibly "breaks" indentation of the rest of the file,
or somehow generate a string which adds at least one layer of
indentation.  The former is imo unacceptable, the latter merely
inconvenient.
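For concreteness, the ‘plain-file’ variant being discussed might look like the following sketch; the key material below is a placeholder, not a real key:

```scheme
;; Sketch of the ‘plain-file’ approach.  The embedded canonical sexp is
;; placeholder material; note how the multi-line string visibly breaks
;; the indentation of the surrounding configuration.
(guix-configuration
  (authorized-keys
   (cons (plain-file "my-key.pub"
                     "(public-key
 (ecc
  (curve Ed25519)
  (q #ABCD0123...#)))")
         %default-authorized-guix-keys)))
```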

> > However, it turns out that the format for said key files is some
> > actually pretty readable Lisp-esque stuff.  For instance, an ECC
> > key reads like
> >   (public-key (ecc (curve CURVE) (q #Q#)))
> > with spaces omitted for simplicity.
> > Were it not for the (q #Q#) bit, we could construct it using
> > scheme-file.  In fact, it is so simple that in my local config I
> > now do exactly that.
> 
> Yeah it’s frustrating that canonical sexps are almost, but not quite,
> Scheme sexps.  :-)
> 
> (gcrypt pk-crypto) has a ‘canonical-sexp->sexp’ procedure:
> 
> --8<---------------cut here---------------start------------->8---
> scheme@(guile-user)> ,use(gcrypt pk-crypto)
> scheme@(guile-user)> ,use(rnrs io ports)
> scheme@(guile-user)> (string->canonical-sexp
>                        (call-with-input-file "etc/substitutes/ci.guix.info.pub"
>                          get-string-all))
> $18 = #
> scheme@(guile-user)> ,pp (canonical-sexp->sexp $18)
> $19 = (public-key
>         (ecc (curve Ed25519)
>              (q #vu8(141 21 111 41 93 36 176 217 168 111 165 116 26 132 15
>                      242 210 79 96 247 182 196 19 72 20 173 85 98 89 113 179 148
> --8<---------------cut here---------------end--------------->8---
> 
> > (define-record-type* <ecc-key> ...)
> > (define-gexp-compiler (ecc-key-compiler (ecc-key <ecc-key>) ...)
> > ...)
> > 
> > (ecc-key
> >   (name "my-key.pub")
> >   (curve 'Ed25519)
> >   (q "ABCDE..."))
> > 
> > Could/should we support such formats out of the box?  WDYT?
> 
> With this approach, we’d end up mirroring all the canonical sexps
> used by libgcrypt, which doesn’t sound great from a maintenance POV.
Given that we can use canonical sexps, what about a single
canonical-sexp compiler, then?  I'd have to think about this a bit more
when I have the time, but having a way of writing the canonical sexp
"directly" would imo be advantageous.
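To make the idea concrete, here is one possible shape for such a single compiler (an untested sketch: the record and field names are invented, and it assumes guile-gcrypt's ‘sexp->canonical-sexp’ and ‘canonical-sexp->string’ procedures):

```scheme
;; Untested sketch of a generic canonical-sexp compiler.  Record and
;; field names are invented; serialization relies on guile-gcrypt.
(define-record-type* <canonical-sexp-file>
  canonical-sexp-file make-canonical-sexp-file
  canonical-sexp-file?
  (name canonical-sexp-file-name)              ;string
  (sexp canonical-sexp-file-sexp))             ;Scheme sexp

(define-gexp-compiler (canonical-sexp-file-compiler
                       (file <canonical-sexp-file>) system target)
  (match-record file <canonical-sexp-file> (name sexp)
    (gexp->derivation name
                      (with-extensions (list guile-gcrypt)
                        #~(begin
                            (use-modules (gcrypt pk-crypto)
                                         (ice-9 textual-ports))
                            ;; Serialize the Scheme sexp to canonical
                            ;; sexp syntax at build time.
                            (call-with-output-file #$output
                              (lambda (port)
                                (put-string
                                 port
                                 (canonical-sexp->string
                                  (sexp->canonical-sexp '#$sexp))))))))))

;; Hypothetical usage, e.g. in authorized-keys:
;; (canonical-sexp-file
;;  (name "my-key.pub")
;;  (sexp '(public-key (ecc (curve Ed25519) (q ...)))))
```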

> Would providing an example in the doc that uses ‘canonical-sexp->sexp’
> and its dual help?
I'm not sure whether it'd be in the doc or as a cookbook entry, but
providing an example would in my opinion definitely help.

I'll take a closer look at guile-gcrypt later.  Hopefully they have
scheme-ified constructors for everything, which would make this quite
simple.

Thanks,
Liliana




Re: Incentives for review

2021-10-21 Thread Artem Chernyak
> On Wed, Oct 20, 2021 at 4:37 PM Thiago Jung Bauermann wrote:

[...]

> One thing that would help me would be some way to “subscribe” to changes in
> certain areas of Guix. That way, when a patch is submitted which touches
> those areas I would be automatically copied on the emails that go to the
> guix-patches mailing list. “areas of Guix” could be defined by paths in the
> repo, guile modules or regexps matching package names, for example.

You could handle it by filtering the email in your email client.

One thing that could help this is to include a little more info in
the subject line for patches.  Right now we include the broad area
(e.g. "gnu: ...").  But we could go one level deeper and include the
specific file (e.g. "gnu: docker: ...").  This becomes important
because gnu, as an area of Guix, spans a lot of different packages and
languages.  With the extra file-level information, we can easily filter
it down to only the areas we know about.
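As a toy illustration of that filtering (the subject lines below are made up), a file-level prefix makes a plain regexp sufficient:

```shell
# Hypothetical guix-patches subject lines; with a file-level prefix,
# filtering down to the packages we know about is a one-line regexp.
printf '%s\n' \
  '[PATCH] gnu: docker: Update to 20.10.9.' \
  '[PATCH] gnu: python-requests: Update to 2.26.0.' \
  '[PATCH] services: dhcp: Fix default value.' |
  grep -E '^\[PATCH\] gnu: docker:'
```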



Re: Public guix offload server

2021-10-21 Thread jbranso
October 21, 2021 12:44 PM, "Tobias Geerinckx-Rice"  wrote:

> Joshua Branson writes:
> 
>> I've got an old Dell Optiplex 7020 with 30 gigs of RAM and a 3TB
>> hard-drive just sitting around.  My landlord and ISP are ok with me
>> running a server.  I just set everything up.  Would this be
>> powerful/interesting to some?
> 
> Well, not going to lie: yes.
> 
> I've heard that US power is relatively cheap, but are you sure?
> :-)
 
I guess we'll find out how expensive it gets...if I can't get enough 
donations to keep it running, I could always shut it off.

> Kind regards,
> 
> T G-R



Re: Public guix offload server

2021-10-21 Thread Tobias Geerinckx-Rice

All,

zimoun writes:

>>> Do you mean that trusted users would try VM-escape exploits?
>>
>> The world has been formed by werewolves inside communities purposely
>> causing harm.  Looking further back, Oliver the Spy is a classic
>> exemplar of trust networks being hollowed out.


So…

> I cannot assume that on one hand one trusted person pushes to the
> main Git repo in good faith and on the other hand this very same
> trusted person behaves as a werewolf using a shared resource.


…li'l' sleepy here, be warned, but before this gets out of hand: I
never implied direct abuse of trust by committers.  I don't consider
it the main threat[0].


There are the people you meet at FOSDEM and the users who log into 
machines.  Both can be compromised, but the latter are much easier 
and more likely to be.


Such compromise is not laughable or hypothetical: it happens 
*constantly*.  It's how kernel.org was utterly owned[1].


Trusting people not to be evil is not the same as having to trust 
the opsec habits of every single one of them.  Trust isn't 
transitive.  Personally, I don't think a rogue zimoun will 
suddenly decide to abuse us.  I think rogues will abuse zimoun the 
very first chance they get.


That's not a matter of degree: it's a whole different threat 
model, as is injecting arbitrary binaries vs. pushing malicious 
code commits.  Both are bad news, but there's an order of 
magnitude difference between the two.


> For sure, one can always abuse the trust.  Based on this principle,
> we could stop any collaborative work right now.  The real question
> is the evaluation of the risk of such abuse by trusted people after
> a long period of collaboration (that’s what committer usually means).


Isn't that the kind of hands-up-in-the-air why-bother 
nothing's-perfect fatalism I thought your Python quote (thanks!) 
was trying to warn me about?  ;-)


Zzz,

T G-R

[0]: That's probably no more than an optimistic human flaw on my
part, and ‘disgruntled ex-whatevers’ are probably more of a threat
than most orgs dare to admit.


[1]: I know, ancient history, but I'm working from memory here. 
I'm sure it would be trivial to find a more topical example.




Re: Public guix offload server

2021-10-21 Thread zimoun
Hi,

On Thu, 21 Oct 2021 at 21:15, "Jonathan McHugh" wrote:
> October 21, 2021 8:10 PM, "zimoun"  wrote:

>>> Now, we could spin up a separate VM for each user, and just take
>>> the efficiency hit… Users would be safe from anything but
>>> VM-escape exploits (which exist but are rare).
>> 
>> Do you mean that trusted users would try VM-escape exploits?
>
> The world has been formed by werewolves inside communities purposely
> causing harm. Looking further back, Oliver the Spy is a classic
> exemplar of trust networks being hollowed out.

I cannot assume that on one hand one trusted person pushes to the main
Git repo in good faith and on the other hand this very same trusted
person behaves as a werewolf using a shared resource.

For sure, one can always abuse the trust.  Based on this principle, we
could stop any collaborative work right now.  The real question is the
evaluation of the risk of such abuse by trusted people after a long
period of collaboration (that’s what committer usually means).

Various examples of this kind of abused trust exist.  Oliver the Spy
is one; Mark Kennedy/Stone is a more recent one.

Anyway! :-)

All the best,
simon



Re: Incentives for review

2021-10-21 Thread zimoun
Hi,

I mainly agree with the message I am replying to; my intent here is to
put numbers on what we are discussing.

>> It’s not about urgency but rather about not contributing to the growth
>> of our patch backlog, which is a real problem.

While I disagree regarding new package submissions – I do not
understand why or how it is a problem to submit to guix-patches, wait
15 days, and then push – I would like to put numbers on this backlog…

> I have often seen folks on various projects worried about the size of
> various backlogs: bugs, issues, etc. I think it is human to want to
> try and contain something that appears to be growing, unbounded. 

…about patches only.  Bugs are another story. :-)


Patch 49993 is from Aug. 2021.  Between this patch and now (patch
51319), there are 164 patches still open.  In other words, 164
submissions still open after 2 months – I have not counted how many
were closed.

Patch 48999 is from Jun 2021.  Because the Debbugs numbering is shared
by many GNU projects, it is hard to know how many patches in this
thousand are Guix only.  However, today 83 patches are still open in
this thousand range (49093–49993) after ~2 months.  And of these 83
open submissions, 17 have not received any reply.  I bet they never
will; they are falling through the cracks.

Patch 47997 is from Apr 2021.  Still 75 patches open in this thousand
range (48006–48999) after ~2 months.

Therefore, from Apr 2021 to now (~6 months), ~320 patches are still
open.  From Dec. 1st, 2020 (patch 45000) back to Mar 2017 (patch
25849), there are 282 still-open patches.  And I have not counted how
many are without any reply.

Just pick a random patch, say 47932, proposing the addition of the
package ’xqilla’.  First, there is no reply.  And second, the patch
does not apply, so it requires manual work.

Ok, so many are “just” triage.  For instance, last year over one month,
we did a bug squashing [1,2].  And I closed one per day over the month;
something like 5 minutes to half an hour per report.  Bugs were easier
than patches.  Considering that many of these 282 still-open submissions
require: checking compliance, applying (manual work), building, etc. –
say half an hour on average – it means 141 hours, which is basically a
full month of full-time work for one person – and not fun work – only
for triaging old submissions.

Here I speak about 282 old submissions (before Dec 2020); for recent
ones (after Jan 2021), there are 438 still-open submissions.  In other
words, there are ~720 submissions to deal with.  Considering 50 active
people, that means 14 submissions per person; assuming half an hour
per submission, that is a full working day per person spent only on
that.

On top of that, there are bugs, systems and new features. :-)

Do not get me wrong, it is great to have these numbers!  It means Guix
is used and people contribute.  So no complaints! :-)

Just numbers to give a concrete idea of the backlog.


1: 
2: 


> I think the thing that bothers us is a sense that the backlog is
> becoming unmanageable, or too large to triage. I submit that this is
> actually a tooling and organizational issue, and not an intrinsic
> issue to be solved. Bugs may still be valid; patches may still have
> valuable bones to modify.

This is the point.  What do we do?  What could we improve about tooling
and organisation to better scale and deal with this “becoming
unmanageable” backlog?

From my point of view, it is good to have this issue.  It means that
Guix is becoming more popular.  And we – regular users, contributors,
committers – have to adapt to this increasing workload, IMHO.

The question is how.  And how to invite people to complete review. :-)


> I think the real issue is that as a backlog grows, the tools we're
> used to using cannot answer the questions we want to ask: what is most
> relevant to me or the project right now?

If it is relevant to the project then it is also relevant to me as a
user.  And vice-versa. ;-)

When something relevant to me is not making progress, it often means
people are busy elsewhere, so I try to comment on (review?) patches
or bugs.  It is a Sisyphean task because the workload never
decreases. :-)  Or maybe structured procrastination. ;-)


Cheers,
simon



Re: Incentives for review

2021-10-21 Thread Ricardo Wurmus



Hi Arun,

>> Thiago’s idea to allow people to subscribe to certain *kinds* of
>> issues when they are reported is also good.


> I agree this is a great idea.  Recently, I unsubscribed from
> guix-patches.  It's just too high volume.  These days, I prefer to
> just search for issues using emacs-debbugs and mumi.

> Here's another idea for mumi: mumi should have a JSON API.  Debbugs'
> SOAP API is quite terrible, and doesn't even expose such things as
> the number of emails in an issue.  Mumi can offer its own API which
> does these things properly.  That way, we can write new clients (say,
> a CLI client) for mumi, that can filter more intelligently.  If we
> had a good CLI client, our contributors wouldn't have to set up an
> email client or emacs just to participate.


These are all excellent ideas, and seeing you articulate them 
makes me happy, because in my experience there’s always good code 
around the corner whenever you have good ideas :)


> The way I see it, we are outgrowing general purpose bug trackers
> like debbugs.  We need a special purpose bug tracker specifically
> for Guix with its special requirements.  We are a big enough
> community for this to be important.
>
> I might be able to find some time to implement a simple JSON API for
> mumi.  Would there be interest in such a contribution?


Absolutely, yes, please!

> Regarding hacking on mumi, I understand that issues.guix.gnu.org is
> on an IP whitelist with the GNU debbugs server.  How do I hack on
> mumi if simply running it on my local machine, and pulling data from
> GNU debbugs would alarm the debbugs admins?


That’s correct, but mumi itself doesn’t directly talk to debbugs 
any longer.  We just periodically sync all debbugs data from the 
GNU debbugs server and have mumi work on these files locally.  So 
to hack on mumi I’d suggest downloading a copy of the raw debbugs 
data from issues.guix.gnu.org.  I could put an archive somewhere 
where you can fetch it.


--
Ricardo



Re: --with-source version not honored?

2021-10-21 Thread zimoun
Hi,

On Thu, 21 Oct 2021 at 22:22, Ludovic Courtès  wrote:

> For historical reasons, ‘--with-source’ only applies to leaf packages,
> unlike most (all?) other transformation inputs.

Oh, good to know! :-)

Therefore, what I wrote before is partially wrong.

Cheers,
simon



Re: Disarchive and SHA

2021-10-21 Thread zimoun
Hi,

On Thu, 21 Oct 2021 at 22:28, Ludovic Courtès  wrote:

>> That’s why «Disarchive entry refers to non-existent SWH directory».
>
> However, some time ago, the zabbix.com URL was 200-OK, and at that point
> SWH would have ingested it, no?

Timothy pointed to [1]; then click on «Show all visits».

1: 



> Or were we unlucky and the URL was already 404 by the time SWH read its
> first ‘sources.json’?

Well, I do not know how to check.  Somehow, the ’sources.json’ should
contain the fixed-output substitutes in addition to the upstream
content.  And probably from both build farms.

It is something I wanted to do but I never took the time to complete it
because it was not clear at the time how to compute the hash in the
address.  Now fixed-output hashing is clear to me. :-)

>> Help welcome for improving this ’sources.json’. :-)  Especially, turning
>> the current way using the website builder into derivation-style usable
>> by the CI.
>
> Yes, it’s probably a good idea to eventually turn ‘sources.json’ into a
> ‘computed-file’, and make a one-element manifest under etc/.

It is not clear how to deal with packages in a ’computed-file’.  For
instance, etc/disarchive-manifest.scm creates one store item per
package (somehow) and then builds a directory union.  Here the union
would be a single file.  Well, it is not clear to me how to proceed.
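For what it's worth, the rough shape of the ‘computed-file’ variant might be something like this sketch; the JSON layout and the builder body are invented here (a real job would traverse package origins), and the ‘guile-json’ extension name is an assumption:

```scheme
;; Rough sketch only: ‘sources.json’ as a computed-file, suitable for a
;; one-element manifest under etc/.  The JSON shape and builder body
;; are invented; a real implementation would walk package origins to
;; fill in the "sources" array.
(define sources-json
  (computed-file "sources.json"
                 (with-extensions (list guile-json)
                   #~(begin
                       (use-modules (json))
                       (call-with-output-file #$output
                         (lambda (port)
                           (scm->json
                            '((sources . #())   ;one entry per origin
                              (version . "1"))
                            port)))))))
```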


Cheers,
simon



Re: Public guix offload server

2021-10-21 Thread Jonathan McHugh
October 21, 2021 8:10 PM, "zimoun"  wrote:

> 
>> Now, we could spin up a separate VM for each user, and just take
>> the efficiency hit… Users would be safe from anything but
>> VM-escape exploits (which exist but are rare).
> 
> Do you mean that trusted users would try VM-escape exploits?
> 

The world has been formed by werewolves inside communities purposely
causing harm. Looking further back, Oliver the Spy is a classic
exemplar of trust networks being hollowed out.


Jonathan



Re: Incentives for review

2021-10-21 Thread Jonathan McHugh
If I recall, you can request Debbugs content if you email them.


Jonathan McHugh
indieterminacy@libre.brussels

October 21, 2021 8:22 PM, "Arun Isaac"  wrote:

> Hi,
> 
>> Thiago’s idea to allow people to subscribe to certain *kinds* of
>> issues when they are reported is also good.
> 
> I agree this is a great idea. Recently, I unsubscribed from
> guix-patches. It's just too high volume. These days, I prefer to just
> search for issues using emacs-debbugs and mumi.
> 
> Here's another idea for mumi: mumi should have a JSON API. Debbugs' SOAP
> API is quite terrible, and doesn't even expose such things as the number
> of emails in an issue. Mumi can offer its own API which does these
> things properly. That way, we can write new clients (say, a CLI client)
> for mumi, that can filter more intelligently. If we had a good CLI
> client, our contributors wouldn't have to set up an email client or
> emacs just to participate.
> 
> The way I see it, we are outgrowing general purpose bug trackers like
> debbugs. We need a special purpose bug tracker specifically for Guix
> with its special requirements. We are a big enough community for this to
> be important.
> 
> I might be able to find some time to implement a simple JSON API for
> mumi. Would there be interest in such a contribution?
> 
> Regarding hacking on mumi, I understand that issues.guix.gnu.org is on
> an IP whitelist with the GNU debbugs server. How do I hack on mumi if
> simply running it on my local machine, and pulling data from GNU debbugs
> would alarm the debbugs admins?
> 
> Regards,
> Arun



Re: Preservation of Guix Report

2021-10-21 Thread Ludovic Courtès
Hi Timothy!

Timothy Sample  skribis:

> Early this summer I did a bunch of work trying to figure out which Guix
> sources are preserved by the SWH archive.  I’m finally ready to share
> some preliminary results!
>
> https://ngyro.com/pog-reports/2021-10-20/
>
> This report is already quite outdated, though.  It only covers commits
> up to the end of May, and sometime in June is when the sources were
> checked against the SWH archive.  I’m sharing it now to avoid any
> further delays.

This is truly awesome!  (Did you manage to grab all that info with the
default rate limit?!)

I can’t wait for the updated report now that Simon and yourself have
identified that SWHID computation bug!

Some of our <origin>s refer to tags, not commits.  How do you
determine whether they’re saved?

‘guix lint -c archival’ uses ‘lookup-origin-revision’, which is a good
approximation, but it’s not 100% reliable because tags can be modified
and that procedure only tells you that a same-named tag was found, not
that it’s the commit you were expecting.  (And really, we should stop
referring to tags.)

Thank you! 

Ludo’.



Re: Public guix offload server

2021-10-21 Thread Arun Isaac

Hi,

>> Currently, guix offload requires mutual trust between the master and
>> the build machines. If we could make the trust only one-way, security
>> might be less of an issue.
>
> It might!  It's easy to imagine a second, less powerful offload
> protocol where clients can submit only derivations to be built by the
> remote daemon, plus fixed-output derivations.  None of the ‘let me
> send the entire binary toolchain so you don't have to build it from
> scratch’ of the current protocol.  This at least removes their control
> over the source hash.

I just realized we might already have something close to this second,
less powerful offload protocol that needs only one-way trust. According
to the NEWS file, since Guix 0.13.0, the GUIX_DAEMON_SOCKET environment
variable lets us specify remote daemons. See "(guix) The Store" in the
manual for detailed documentation. The only thing missing is some way to
retrieve the built output from the remote store.
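The GUIX_DAEMON_SOCKET mechanism mentioned above might be exercised like this (the hostname is made up; see "(guix) The Store" for the supported URI schemes):

```
# Talk to a remote daemon over SSH; only the client needs to trust
# the server, not the other way around.
export GUIX_DAEMON_SOCKET=ssh://build.example.org
guix build hello    # built by the remote daemon, in its store
```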

Regards,
Arun




Re: --with-source version not honored?

2021-10-21 Thread Ludovic Courtès
Hi,

Phil  skribis:

> Any ideas if I can create a new package with --with-source and then
> substitute it in the same command for an input of another package? 

For historical reasons, ‘--with-source’ only applies to leaf packages,
unlike most (all?) other transformation inputs.  Concretely, this works:

  guix build guile --with-source=guile=…

but this has no effect (Guix depends on Guile, but that Guile is left
unchanged):

  guix build guix --with-source=guile=…

Could it be that it’s what you’re stumbling upon?

Jesse Gibbons provided a patch last year (!) to address that:

  https://issues.guix.gnu.org/43193

Maybe it’s time to pick it up where we left off.

Thanks,
Ludo’.



Re: Incentives for review

2021-10-21 Thread Ludovic Courtès
Hi,

Ricardo Wurmus  skribis:

> I was thinking in the opposite direction: not incentives to recognize
> reviewers but a closer relationship to the patch submitters,
> i.e. “patch buddies” or mentorship.  If I made myself officially
> responsible for reviewing commits by Simon and all commits touching R
> then I’m more likely to actually do the review for these patches.

Yeah.  Apparently several people made similar comments: that identifying
who’s knowledgeable about a certain area of the code would help.

I’ve seen GitHub bots that automatically ping people who’ve touched the
same area of code that you’re submitting a pull request for; that’s a
similar idea (I learned about it because for years I’d receive an
occasional notification for Nixpkgs pull requests :-)).

A notification/subscription mechanism along these lines for mumi would
be a step forward in that direction.

Ludo’.



Re: Incentives for review

2021-10-21 Thread Ludovic Courtès
Hi!

Arun Isaac  skribis:

>> Thiago’s idea to allow people to subscribe to certain *kinds* of
>> issues when they are reported is also good.
>
> I agree this is a great idea. Recently, I unsubscribed from
> guix-patches. It's just too high volume. These days, I prefer to just
> search for issues using emacs-debbugs and mumi.
>
> Here's another idea for mumi: mumi should have a JSON API. Debbugs' SOAP
> API is quite terrible, and doesn't even expose such things as the number
> of emails in an issue. Mumi can offer its own API which does these
> things properly. That way, we can write new clients (say, a CLI client)
> for mumi, that can filter more intelligently. If we had a good CLI
> client, our contributors wouldn't have to set up an email client or
> emacs just to participate.
>
> The way I see it, we are outgrowing general purpose bug trackers like
> debbugs. We need a special purpose bug tracker specifically for Guix
> with its special requirements. We are a big enough community for this to
> be important.
>
> I might be able to find some time to implement a simple JSON API for
> mumi. Would there be interest in such a contribution?

Definitely, but I think the JSON API is a means, not an end, so what
matters is what we’ll build with it.

Example that comes to mind: debbugs.el could use it for features Debbugs
lacks; we could have a client Scheme module and a command-line tool to
perform certain query, with the goal of being able to do things like:

  guix review apply 1234
  guix review search bioinformatics
  …

That could be a game-changer.

Of course we should start small and focus on specific features such as
searching, listing, and retrieving.
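Purely as a sketch of what “starting small” could mean, a first read-only endpoint might look like this; the path and response fields are invented for illustration, not an existing mumi interface:

```
# Hypothetical: list matching issues as JSON.  Neither the /api/issues
# path nor the response fields exist today.
curl -s 'https://issues.guix.gnu.org/api/issues?search=bioinformatics' \
  | jq '.[] | {number, title, open}'
```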

> Regarding, hacking on mumi, I understand that issues.guix.gnu.org is on
> an IP whitelist with the GNU debbugs server. How do I hack on mumi if
> simply running it on my local machine, and pulling data from GNU debbugs
> would alarm the debbugs admins?

Ricardo may know the answer.  :-)

Thanks,
Ludo’.



Re: Disarchive update

2021-10-21 Thread zimoun
Hey,

On Thu, 21 Oct 2021 at 21:41, Ludovic Courtès  wrote:

> Really cool of the SWH folks to give you a higher rate limit.

It is not me in particular. :-)
Anyone can create an account via the Software Heritage Authentication
service.  Then anyone can enjoy the 1200 rate limit.

Cheers,
simon

PS: I am in the process of asking for a special rate limit higher than
1200, but that's another story. :-)



Re: Disarchive and SHA

2021-10-21 Thread Ludovic Courtès
Hi,

zimoun  skribis:

> Along the process, I also notice,
>
> $ guix download 
> https://cdn.zabbix.com/zabbix/sources/stable/5.2/zabbix-5.2.6.tar.gz
>
> Starting download of /tmp/guix-file.rcYxyF
> From https://cdn.zabbix.com/zabbix/sources/stable/5.2/zabbix-5.2.6.tar.gz...
> download failed
> "https://cdn.zabbix.com/zabbix/sources/stable/5.2/zabbix-5.2.6.tar.gz"
> 404 "Not Found"
>
> Starting download of /tmp/guix-file.rcYxyF
> From 
> https://web.archive.org/web/20211020115955/https://cdn.zabbix.com/zabbix/sources/stable/5.2/zabbix-5.2.6.tar.gz...
> following redirection to 
> `https://web.archive.org/web/20210410075108/https://cdn.zabbix.com/zabbix/sources/stable/5.2/zabbix-5.2.6.tar.gz'...
>  …2.6.tar.gz  350KiB/s 00:57 | 19.6MiB transferred
> /gnu/store/3w8mp25gyrkq3dngaw27kvqaggrx9qp0-zabbix-5.2.6.tar.gz
> 100n1rv7r4pqagxxifzpcza5bhrr2fklzx7gndxwiyq4597p1jvn
>
>
> And this fallback is not done by the file ’sources.json’.
>
> $ wget https://guix.gnu.org/sources.json
> $ cat sources.json | jq | grep zabbix
>   "git_url": "https://github.com/lukecyca/pyzabbix",
> "https://cdn.zabbix.com/zabbix/sources/stable/5.2/zabbix-5.2.6.tar.gz",
>   "git_url": "https://github.com/unioslo/zabbix-cli",
> "https://cdn.zabbix.com/zabbix/sources/stable/5.2/zabbix-5.2.6.tar.gz",
>
> That’s why «Disarchive entry refers to non-existent SWH directory».

However, some time ago, the zabbix.com URL was 200-OK, and at that point
SWH would have ingested it, no?

Or were we unlucky and the URL was already 404 by the time SWH read its
first ‘sources.json’?

> Help welcome for improving this ’sources.json’. :-)  Especially, turning
> the current way using the website builder into derivation-style usable
> by the CI.

Yes, it’s probably a good idea to eventually turn ‘sources.json’ into a
‘computed-file’, and make a one-element manifest under etc/.

Thanks,
Ludo’.



Re: Using G-Expressions for public keys (substitutes and possibly more)

2021-10-21 Thread Ludovic Courtès
Hi!

Liliana Marie Prikler  skribis:

> let's say I wanted to add my own substitute server to my config.scm. 
> At the time of writing, I would have to add said server's public key to
> the authorized-keys of my guix-configuration like so:
>   (cons* (local-file "my-key.pub") %default-authorized-guix-keys)
> or similarly with append.  This local-file incantation is however
> pretty weak.  It changes based on the current working directory and
> even if I were to use an absolute path, I'd have to copy both that file
> and the config.scm to a new machine were I to use the same
> configuration there as well.

Note that you could use ‘plain-file’ instead of ‘local-file’ and inline
the key canonical sexp in there.

> However, it turns out that the format for said key files is some
> actually pretty readable Lisp-esque stuff.  For instance, an ECC key
> reads like
>   (public-key (ecc (curve CURVE) (q #Q#)))
> with spaces omitted for simplicity.
> Were it not for the (q #Q#) bit, we could construct it using scheme-
> file.  In fact, it is so simple that in my local config I now do
> exactly that.

Yeah it’s frustrating that canonical sexps are almost, but not quite,
Scheme sexps.  :-)

(gcrypt pk-crypto) has a ‘canonical-sexp->sexp’ procedure:

--8<---cut here---start->8---
scheme@(guile-user)> ,use(gcrypt pk-crypto)
scheme@(guile-user)> ,use(rnrs io ports)
scheme@(guile-user)> (string->canonical-sexp
  (call-with-input-file "etc/substitutes/ci.guix.info.pub"
get-string-all))
$18 = #
scheme@(guile-user)> ,pp (canonical-sexp->sexp $18)
$19 = (public-key
  (ecc (curve Ed25519)
   (q #vu8(141 21 111 41 93 36 176 217 168 111 165 116 26 132 15 242 210 79 
96 247 182 196 19 72 20 173 85 98 89 113 179 148
--8<---cut here---end--->8---

> (define-record-type* <ecc-key> ...)
> (define-gexp-compiler (ecc-key-compiler (ecc-key <ecc-key>) ...) ...)
>
> (ecc-key
>   (name "my-key.pub")
>   (curve 'Ed25519)
>   (q "ABCDE..."))
>
> Could/should we support such formats out of the box?  WDYT?

With this approach, we’d end up mirroring all the canonical sexps used
by libgcrypt, which doesn’t sound great from a maintenance POV.

Would providing an example in the doc that uses ‘canonical-sexp->sexp’
and its dual help?

Thanks,
Ludo’.



Re: Disarchive update

2021-10-21 Thread Ludovic Courtès
Hi!

Ludovic Courtès  skribis:

> Then there’s the mcron job that runs it once a day on berlin:
>
>   
> https://git.savannah.gnu.org/cgit/guix/maintenance.git/commit/?id=27dc74fbe33a9d929b37994e825dc202385f87c0
>
> We could run it as well on bayfront so we have a backup.

I did that without thinking much but it won’t work: as written,
sync-disarchive-db.scm assumes ci.guix.gnu.org substitutes are
authorized, which is not the case on bayfront.

So I suppose we need to do things differently there, such as
fetching/unpacking substitutes straight from sync-disarchive-db.scm
instead of going through the daemon.

I’ll take a look sometime, but it’d be great if someone else did.  :-)

Thanks,
Ludo’.



Re: Disarchive update

2021-10-21 Thread Ludovic Courtès
Hi,

zimoun  skribis:

> Using, the Authentication mode from SWH [1] and this trivial patch, the
> rate limit is at 1200 which allows to check and archive some packages.
> For instance, now,
>
> for p in $(guix package -A | cut -f1 | grep "julia-"); do
>   ./pre-inst-env guix lint -c archival $p
> done
>
> passes.  The remaining work is to check with SWH folks for a higher
> value than this 1200 limit and have a token associated with an account
> on the Software Heritage Authentication service.  And set a cron task
> “somewhere” running:
>
>./pre-inst-env guix lint -c archival
>
> WDYT?

I think you made progress on this in the meantime: this is great!
Really cool of the SWH folks to give you a higher rate limit.

Thanks,
Ludo’.



Re: Test parallelism with CMake

2021-10-21 Thread Ludovic Courtès
Hi Greg,

Greg Hogan  skribis:

> As I read the source, cmake-build-system should by default both build and
> check with parallelism enabled. When I locally build a package only the
> build phase runs with parallelism and tests are being run in serial.
>
> When I run a manual build (stopping an in-process build run with '-K', then
> removing all files under the build directory, then copying and running the
> commands from the stopped build) I do not see a parallel build, nor do I
> see any parallelism passed by command or environment arguments (no '-j' or
> CMAKE_BUILD_PARALLEL_LEVEL).

Does CMake generate makefile targets that would allow tests to run in
parallel?  How does that even work in CMake?
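For context on how CMake handles this: test parallelism comes from CTest rather than from generated makefile targets; each test is scheduled as a unit across N jobs.  A typical invocation looks like the following (this assumes CMake ≥ 3.20 for ‘--test-dir’):

```
cmake -S . -B build
cmake --build build -j "$(nproc)"        # parallel build
ctest --test-dir build -j "$(nproc)"     # parallel test run

# Equivalently, via environment variables:
CMAKE_BUILD_PARALLEL_LEVEL=8 cmake --build build
CTEST_PARALLEL_LEVEL=8 ctest --test-dir build
```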

Thanks,
Ludo’.



Re: Authenticated Boot and Disk Encryption

2021-10-21 Thread Ludovic Courtès
Hi Reza,

Reza Housseini  skribis:

> I came across this blog post
> 
> and was wondering what is the state of authenticated boot and encryption in
> Guix System?

Nothing’s been done wrt. “authenticated boot” AFAIK (I have
reservations about the concept).

Full disk encryption works but it’s done like in other distros, as
described in the article.  One big failure IMO is the fact that
nothing’s done upon suspend (when closing the laptop lid).  I believe
systemd-homed addresses that properly.

There’s a lot in this article, I’d suggest identifying specific bits to
see whether/how we can implement them in Guix!

Thanks,
Ludo’.



Re: Guix+Jenkins slides/video

2021-10-21 Thread zimoun
Hi Reza,

On Fri, 15 Oct 2021 at 08:59, Reza Housseini  wrote:

> @zimoun I would also prefer to use Cuirass, could you sketch a similar setup 
> with Cuirass?

Well, it is really low on my TODO list.  Do not hold your breath. :-)
Maybe you could give it a try and report your progress to
help-g...@gnu.org or guix-devel.  I am sure you will get valuable
feedback to reach the goal.

All the best,
simon



Re: Guix+Jenkins slides/video

2021-10-21 Thread Ludovic Courtès
Hello!

Phil  skribis:

> A few of you might know me from guix-help or the Guix reddit channel.
>
> I did a talk last week and there's been some interest so thought I'd
> post the link here:
> https://www.devopsworld.com/agenda/session/617842

I discovered this pretty late; the slides are insightful!

> Slides are available at the bottom without registration.  The video
> requires registration (and needs full screen).
>
> This is still a work in progress, so any suggestions, comments,
> questions very welcome.

You mention functional overlap between Jenkins and Guix somewhere, and I
can see that, mostly when it comes to scheduling (the same sort of
problem shows up in Cuirass and in the Build Coordinator, mostly because
guix-daemon makes opaque scheduling/offloading decisions.)

The motivating example is interesting.  Do you find that ‘guix pull’ and
‘guix time-machine’ serve you well for CI, or are they too slow?
Perhaps you pin the ‘guix’ channel?

> Current work is focusing on generating AWS Lambda functions in Jenkins
> using Guix profiles, and making Guix-Juptyer available on our Jenkins
> deployments - depending on progress we could offer to do talks at Guix
> Days 2021 on these.

Oh so you also use Guix-Jupyter?  I’m interested in whatever feedback
you may have.  :-)

Thanks for sharing!

Ludo’.



Re: Incentives for review

2021-10-21 Thread Arun Isaac

Hi,

> Thiago’s idea to allow people to subscribe to certain *kinds* of
> issues when they are reported is also good.

I agree this is a great idea. Recently, I unsubscribed from
guix-patches. It's just too high volume. These days, I prefer to just
search for issues using emacs-debbugs and mumi.

Here's another idea for mumi: mumi should have a JSON API. Debbugs' SOAP
API is quite terrible, and doesn't even expose such things as the number
of emails in an issue. Mumi can offer its own API which does these
things properly. That way, we can write new clients (say, a CLI client)
for mumi, that can filter more intelligently. If we had a good CLI
client, our contributors wouldn't have to set up an email client or
emacs just to participate.
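
To make this concrete, here is a minimal sketch of what such an API and
a client-side filter could look like.  The endpoint path and the field
names below are hypothetical, purely illustrative — mumi has no such
API today:

```python
import json

# Hypothetical payload from a GET /api/issues?search=... request — the
# schema below is invented for illustration, not an existing mumi API.
sample_response = json.dumps({
    "issues": [
        {"number": 51234,
         "title": "[PATCH] gnu: Add some-package.",
         "open": True,
         "num_messages": 3,
         "tags": ["patch"]}
    ]
})

def open_patches(raw):
    """Return the numbers of open issues tagged as patches."""
    issues = json.loads(raw)["issues"]
    return [i["number"] for i in issues
            if i["open"] and "patch" in i["tags"]]

print(open_patches(sample_response))  # → [51234]
```

A CLI client could then filter and render such results however it
likes — exactly what the SOAP interface makes hard.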

The way I see it, we are outgrowing general purpose bug trackers like
debbugs. We need a special purpose bug tracker specifically for Guix
with its special requirements. We are a big enough community for this to
be important.

I might be able to find some time to implement a simple JSON API for
mumi. Would there be interest in such a contribution?

Regarding hacking on mumi, I understand that issues.guix.gnu.org is on
an IP whitelist with the GNU debbugs server. How do I hack on mumi if
simply running it on my local machine, and pulling data from GNU debbugs
would alarm the debbugs admins?

Regards,
Arun


signature.asc
Description: PGP signature


Re: Public guix offload server

2021-10-21 Thread zimoun
Hi Tobias,

On Thu, 21 Oct 2021 at 18:31, Tobias Geerinckx-Rice  wrote:
> zimoun 写道:

>> If I understand correctly, if a committer offloads to say Berlin or
>> Bayfront, your concern is that the output will be in the publicly
>> exposed store.  Right?
>
> No, that would be far worse.  I'm considering only a ‘private’ 
> offload server shared by several trusted users, where one 
> compromised (whether technically or mentally :-) user can easily 
> ‘infect’ other contributors in a way that's very hard to detect. 
> ‘Trusting trust’ comes to mind.

Thanks for explaining.

Unseriously, I do not see the difference when several trusted users
push to a Git repo, where one compromised user can easily ’infect’ other
contributors in a way that’s very hard to detect. ;-)

If a compromised user offloads something, how other users of the same
server would get this compromised stuff?  Maybe I miss something.

Considering trusted users (i.e., not deliberately malicious), the
attack surface is reproducible builds; similar to the current
situation of substitutes by CI.  What do I miss?

Well, I do not see the difference between a remote offload server and a
shared store on a cluster (although the latter is probably worse because
many cluster users – at least some of those I know – do not really
understand what they do when using Guix ;-)).


> Now, we could spin up a separate VM for each user, and just take 
> the efficiency hit…  Users would be safe from anything but 
> VM-escape exploits (which exist but are rare).

Do you mean that trusted users would try VM-escape exploits?


>> A minimal job submission API with token would be ideal, IMHO.  But
>> it falls into:
>>
>> Now is better than never.
>> Although never is often better than *right* now.
>>
>> – python -c 'import this' –
>
> What does this mean?

It is The Zen of Python. :-) These sentences express how hard it is to
strike the right balance, IMHO.  Sorry if it was unclear.  Otherwise, the
complete Zen reads:

--8<---cut here---start->8---
$ python -c 'import this'
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
--8<---cut here---end--->8---


Cheers,
simon



Re: Incentives for review

2021-10-21 Thread Vagrant Cascadian
On 2021-10-19, Ludovic Courtès wrote:
> zimoun  skribis:
>
>> On Tue, 19 Oct 2021 at 14:56, Ludovic Courtès  
>> wrote:

>> One question is “encouragement” for reviewing, somehow.  Asking for new
> package additions to go via guix-patches is a call for a kind of
> equality between contributors.  As someone without commit access, I can
>> tell you that it is often demotivating to send a trivial addition, wait
>> forever, ping people (aside I know who I have to ping :-)).  Usually, it
>> means people are busy elsewhere, so I try to help to reduce the workload
> by reviewing stuff or by doing bug triage.  However, at the same time, I
>> see committers push their own trivial additions.  It appears to me
>> “unfair”.
>
> I understand and sympathize (I also see us slipping off-topic :-)).
>
>> Why are committer’s trivial additions more “urgent” than mine?
>
> Yeah, I see what you mean.
>
> I would like to see us committers do more review work.  But I also view
> things from a different angle: everyone contributes in their own way,
> and each contribution is a gift.  We can insist on community
> expectations (reviewing other people’s work), but we should also welcome
> contributions as they come.

I must admit, I don't review patches unless they're in an area of
expertise (e.g. u-boot, arm-trusted-firmware, reproducible builds
tooling, etc.); I just don't have sufficient skill with guile to review
arbitrary packages in a meaningful way, other than the most trivial of
packages...

Before I was granted commit access, I *really* appreciated getting
review... but was also frustrated by how long it took to get a
contribution in; having limited time available for guix, spending that
energy checking if something I'd already "finished" was actually merged
was a bit demotivating.

I have added a small number of trivial packages without review; maybe I
shouldn't have... but it was also a bit of a sigh of relief once I could
push directly and not have to get caught up in the waiting game; I had
more time to actually contribute other improvements to guix.


> There’s a balance to be found between no formal commitment on behalf of
> committers, and a strict and codified commitment similar to what is
> required for participation in the distros list¹.

So yeah, it is quite a balancing act!


Would a workflow of pushing to a "wip-pending" branch in guix.git that
then gets merged and/or cherry-picked into master/staging/core-updates
help at all?

A cursory review could commit to "wip-pending", with the
plan/hope/expectation that it would get some other review and/or a
timeout before it gets merged.

I guess it would be hard to avoid having to constantly rebase with the
latest updates... "wip-pending" might just add more work to an already
needs-more-resources process...


live well,
  vagrant




Re: Incentives for review

2021-10-21 Thread Katherine Cox-Buday
Ricardo Wurmus  writes:

> Katherine Cox-Buday  writes:
>
>>> It’s not about urgency but rather about not contributing to the growth
>>> of our patch backlog, which is a real problem.
>>
>> I have often seen folks on various projects worried about the size of
>> various backlogs: bugs, issues, etc. I think it is human to want to
>> try and contain something that appears to be growing, unbounded.
>>
>> I think the thing that bothers us is a sense that the backlog is
>> becoming unmanageable, or too large to triage. I submit that this is
>> actually a tooling and organizational issue, and not an intrinsic
>> issue to be solved. Bugs may still be valid; patches may still have
>> valuable bones to modify.
>>
>> I think the real issue is that as a backlog grows, the tools we're used to
>> using cannot answer the questions we want to ask: what is most relevant to
>> me or the project right now?
>>
>> To me, this sounds like a search and display problem.

> I would be happy if people used this opportunity to change mumi (the tool
> behind issues.guix.gnu.org) to present the backlog in more helpful ways.

I don't have time to work on this, but here are some ideas. Some of these 
capabilities are present, but maybe not discoverable or available as a 
pre-built clickable link while viewing a patch/issue.

- Contextual search based on a path.
  - Show me issues/patches for this file/directory
  - Show me the rate of change of this file/directory

- Contextual search based on a patch
  - Show me bugs which mention any top-level public symbols changing in this
patch, or if packages, the package name.
  - Show me patches which conflict with this one.
  
- Contextual search based on author.
  - Show me other patches by this author
  - Show me the median time-to-commit for this author's patches

- Show me patches/issues, grouped by author, sorted by median time-to-commit,
  descending.
- Show me the paths/files with the highest number of bugs reported.

A lot of this requires static analysis which may not be trivial to implement. 
Still, I think being able to say "we don't have time to build what would fix 
this" is a helpful progression from "we don't know how to manage this backlog".

Thanks for pointing out the source code to mumi!
  
-- 
Katherine



Re: Public guix offload server

2021-10-21 Thread Vagrant Cascadian
On 2021-10-21, Joshua Branson wrote:
> Leo Famulari  writes:
>
>> On Thu, Oct 21, 2021 at 02:23:49AM +0530, Arun Isaac wrote:
>>> WDYT? How does everyone else handle big builds? Do you have access to
>>> powerful workstations?
>>
>> Now I have access to a very powerful system on which I can test builds.
>>
>> I agree that the Guix project should offer some powerful compute
>> resources to Guix developers. Efficiency is important for staying
>> motivated.
>
> I've got an old Dell Optiplex 7020 with 30 gigs of RAM with a 3TB
> hard-drive just sitting around.  My landlord and ISP is ok with me
> running a server.  I just set everything up.  Would this be
> powerful/interesting to some?

I wonder if OSUOSL (or maybe other organizations) would be willing to
provide a nice big server with access restricted to guix committers or
something?

  https://osuosl.org/services/hosting/

I know they provide some very capable machines for reproducible builds
and numerous other projects... (machines capable of running many virtual
machines, FWIW)

(And some POWER9 machines for guix, if I recall...)

Then it could run "guix publish" and people could log in and run builds
as they wished, and download the results via typical methods... still a
little danger of poisoning the store if people use the binaries on
production hardware... but... can only do so much hand-holding, right?
:)


live well,
  vagrant




Re: Public guix offload server

2021-10-21 Thread Tobias Geerinckx-Rice

Leo,

Leo Famulari 写道:
Interesting... I'm not at all familiar with how `guix offload` works,
because I've never used it. But it's surprising to me that this would
be possible. Although after one minute of thought, I'm not sure why it
wouldn't be.


Very quickly:

- You send an offload request to the offload server, but you also
  get to send any remotely missing store items that you already
  have, as opaque binaries (icecat could be tetris instead).

This is why the offload server has to trust your key.  It's
valuable and shouldn't be removed, but making it optional[0]
shouldn't be ‘too hard’.


- The offload server sends back one or more store items, which is
  why you trust it.  This part is just substitution in a different
  form (SSH vs. HTTPS etc.)


However, the Guix security model trusts committers implicitly. So, if
the committers' shared offload server had proper access control, one
might consider it "good enough" in terms of security.


The two are *SO* different as to be incomparable IMO.

You do point out the difference, so I guess we just assess it very 
differently:



Although the possibility of spreading malicious binaries is much
scarier than what could be achieved by committing to guix.git,
because of the relative lack of transparency.


Gotta run,

T G-R

[0]: Which would create the

 “second, less powerful offload protocol where clients can submit
  only derivations to be built by the remote daemon, plus
  fixed-output derivations”

I imagined.  But this is still hand-waving at this point.  :-)




Re: Public guix offload server

2021-10-21 Thread Tobias Geerinckx-Rice

Joshua Branson 写道:
I've got an old Dell Optiplex 7020 with 30 gigs of RAM with a 3TB
hard-drive just sitting around.  My landlord and ISP is ok with me
running a server.  I just set everything up.  Would this be
powerful/interesting to some?


Well, not going to lie: yes.

I've heard that US power is relatively cheap, but are you sure? :-)


Kind regards,

T G-R




Re: Public guix offload server

2021-10-21 Thread Tobias Geerinckx-Rice

Hi Simon,

zimoun 写道:
If I understand correctly, if a committer offloads to say Berlin or
Bayfront, your concern is that the output will be in the publicly
exposed store.  Right?


No, that would be far worse.  I'm considering only a ‘private’ 
offload server shared by several trusted users, where one 
compromised (whether technically or mentally :-) user can easily 
‘infect’ other contributors in a way that's very hard to detect. 
‘Trusting trust’ comes to mind.


For instance, one could imagine a dedicated VM for all the committers
who require some CPU power.


Right, that's what I'm describing in my previous mail.

Now, we could spin up a separate VM for each user, and just take 
the efficiency hit…  Users would be safe from anything but 
VM-escape exploits (which exist but are rare).


A minimal job submission API with token would be ideal, IMHO.  But it
falls into:

Now is better than never.
Although never is often better than *right* now.

– python -c 'import this' –


What does this mean?

Kind regards,

T G-R




Re: Incentives for review

2021-10-21 Thread zimoun
Hi,

On Thu, 21 Oct 2021 at 16:06, Ricardo Wurmus  wrote:

> Perhaps issues.guix.gnu.org could offer atom feeds for certain 
> keywords (e.g. the name of the module touched by the commits?).

For instance, let have some number over the last year and half:

File:  gnu/packages/bioconductor.scm
   1103 Ricardo Wurmus
 24 Roel Janssen
  7 Tobias Geerinckx-Rice

File:  gnu/packages/bioinformatics.scm
450 Ricardo Wurmus
 58 Efraim Flashner
 28 Roel Janssen

File:  gnu/packages/cran.scm
   1687 Ricardo Wurmus
 35 Lars-Dominik Braun
 34 Roel Janssen

Well, it means, Ricardo, that you are the one who pushes modifications
of these files.  Therefore, it could be nice to automatically keep you
in the loop at your convenience (instead of X-Debbugs-CC or annoying
you via IRC ;-)).

For instance with a feed (Atom or RSS).  Because these days the
traffic on guix-patches is rather high, I am not convinced all
committers subscribe. :-)

I mean, such a feed would help “specialized” reviewers filter the
volume and would probably be a better incentive.  (Well, for sure a
good email client with the right filter already does the same. :-))


Cheers,
simon



Re: Preservation of Guix Report

2021-10-21 Thread Timothy Sample
Hi zimoun,

zimoun  writes:

>  2. For still unknown reasons, the bridge between SWH and Disarchive has
> some holes.  For instance,
>
> $ guix lint -c archive znc
> gnu/packages/messaging.scm:996:12: znc@1.8.2: Disarchive entry refers 
> to non-existent SWH directory '33a3b509b5ff8e9039626d11b7a800281884cf2a'
>
> [...]
>
> Therefore, something is wrong somewhere.  Because of #1, I detect
> many of such examples.  I do not know if SWH-ID computed by
> Disarchive is incorrect [...].

Bingo!

According to SWH (emphasis mine):

SWHIDs for contents, directories, revisions, and releases are, *at
present*, compatible with the Git way of computing identifiers for
its objects.

This is not true anymore.  As they go on to say:

Note that Git compatibility is incidental and is not guaranteed to
be maintained in future versions of this scheme (or Git).

Disarchive does it the Git way, and SWH does something slightly
different.  The SWH hash is 4e58dc09b8362caf1265102130a593b070562a68,
but the Git hash is 33a3b509b5ff8e9039626d11b7a800281884cf2a.  The
difference is that Disarchive, like Git, ignores empty directories.  It
makes sense that an archival project like SWH would not do that, and
they indeed don’t.
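
For readers unfamiliar with the scheme: the “Git way” both tools start
from hashes a typed header plus the raw content with SHA-1.  A minimal
Python sketch for blob objects (tree objects are built analogously from
their entries, which is exactly where the empty-directory question
arises):

```python
import hashlib

def git_blob_id(data: bytes) -> str:
    """Git object id of a blob: SHA-1 over b'blob <size>\\0' + data."""
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()

# Well-known id of the empty blob (same as `git hash-object /dev/null`):
print(git_blob_id(b""))  # → e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```

Whether an empty directory contributes an entry to its parent tree
changes the tree hash all the way up, hence the two different directory
ids above.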

Fixing this in Disarchive is going to make a *huge* difference, so that
is now high priority for me (it’s a one line change, but I want to fix
it, release it, update Guix, and recompute the report).

> And answering to your question [3] about “sources.json”, I think the
> ingestion started after this commit
> 35bb77108fc7f2339da0b5be139043a5f3f21493 from guix-artwork.  In other words,
> SWH started to ingest from “sources.json” after July 2020; probably
> around September 2020.
>
> 3: 

Thanks!  While investigating the above problem, I found a page that
lists what SWH is getting from us [1] and another showing when they are
scanning “sources.json” [2].  I don’t know if you’ve seen them before,
but they will be invaluable for figuring this stuff out.

[1] 
https://archive.softwareheritage.org/browse/origin/branches/?origin_url=https://guix.gnu.org/sources.json
[2] 
https://archive.softwareheritage.org/browse/origin/visits/?origin_url=https://guix.gnu.org/sources.json

> For the Missing and Unknown fields, could you distinguish the kind of
> origin?  Is it mainly git-fetch or url-fetch or others?

Good idea.  I think I can do this easily enough.  I might shelve it for
a bit, because I’m too excited to update the report with the Disarchive
hash fix.  :)


-- Tim



Re: Incentives for review

2021-10-21 Thread Ricardo Wurmus



Katherine Cox-Buday  writes:

It’s not about urgency but rather about not contributing to the growth
of our patch backlog, which is a real problem.


I have often seen folks on various projects worried about the size of
various backlogs: bugs, issues, etc. I think it is human to want to try
and contain something that appears to be growing, unbounded.

I think the thing that bothers us is a sense that the backlog is
becoming unmanageable, or too large to triage. I submit that this is
actually a tooling and organizational issue, and not an intrinsic
issue to be solved. Bugs may still be valid; patches may still have
valuable bones to modify.

I think the real issue is that as a backlog grows, the tools we're used
to using cannot answer the questions we want to ask: what is most
relevant to me or the project right now?


To me, this sounds like a search and display problem.


I agree with your analysis.  And with your earlier comments about 
the vagueness of what review means and how it can lead to a 
failure to communicate.


At least on the side of presenting submitted issues I’m sure we 
can do better.  One attempt in the past was to automatically bring 
up “forgotten” issues to the fore; Thiago’s idea to allow people 
to subscribe to certain *kinds* of issues when they are reported 
is also good.


I would be happy if people used this opportunity to change mumi 
(the tool behind issues.guix.gnu.org) to present the backlog in 
more helpful ways.


Here’s the code for reference: 
https://git.elephly.net/gitweb.cgi?p=software/mumi.git


--
Ricardo



Re: Incentives for review

2021-10-21 Thread Ricardo Wurmus



Thiago Jung Bauermann  writes:

2. Going through the guix-patches mailing list looking for submissions
that touch the few areas of Guix where I have at least some experience.
I don’t think I found an effective method yet (in part the problem is
on my side because the search function of the email client I use isn’t
very reliable).


If a browser fits into your workflow you may want to search for
submissions on https://issues.guix.gnu.org.  It also shows “forgotten”
issues that would likely benefit from some poking.


One thing that would help me would be some way to “subscribe” to
changes in certain areas of Guix. That way, when a patch is submitted
which touches those areas I would be automatically copied on the
emails that go to the guix-patches mailing list. “Areas of Guix” could
be defined by paths in the repo, guile modules or regexps matching
package names, for example.


Perhaps issues.guix.gnu.org could offer atom feeds for certain
keywords (e.g. the name of the module touched by the commits?).


--
Ricardo



Re: Public guix offload server

2021-10-21 Thread Joshua Branson
Leo Famulari  writes:

> On Thu, Oct 21, 2021 at 02:23:49AM +0530, Arun Isaac wrote:
>> WDYT? How does everyone else handle big builds? Do you have access to
>> powerful workstations?
>
> Now I have access to a very powerful system on which I can test builds.
>
> I agree that the Guix project should offer some powerful compute
> resources to Guix developers. Efficiency is important for staying
> motivated.

I've got an old Dell Optiplex 7020 with 30 gigs of RAM with a 3TB
hard-drive just sitting around.  My landlord and ISP is ok with me
running a server.  I just set everything up.  Would this be
powerful/interesting to some?

-- 
Joshua Branson (jab in #guix)
Sent from Emacs and Gnus
  https://gnucode.me
  https://video.hardlimit.com/accounts/joshua_branson/video-channels
  https://propernaming.org
  "You can have whatever you want, as long as you help
enough other people get what they want." - Zig Ziglar
  



Re: Incentives for review

2021-10-21 Thread Katherine Cox-Buday
Ludovic Courtès  writes:

>> On Tue, 19 Oct 2021 at 14:56, Ludovic Courtès 

> But I also view things from a different angle: everyone contributes in their
> own way, and each contribution is a gift.

Maybe selfishly, but I really agree with this. I think this is just the nature 
of community-based projects: people are going to scratch their own itch, and 
when time allows, do altruistic things for other people.

Some people (e.g. me) don't have very much time at all to do the altruistic 
things (which gnaws at me). I do what I can, when I can, and hope that someone 
else benefits.

> A good middle ground may be to provide incentives for review.  How?  I’m
> not sure exactly, but first by making it clear that review is what
> makes the project move forward and is invaluable.
>
>>> I think it’s about finding the right balance to be reasonably efficient
>>> while not compromising on quality.
>>
>> I totally agree.  And I do not see nor understand where is the
>> inefficiency here when asking to go via guix-patches and wait two weeks
>> for adding a new package.

Often I find that people on projects/teams have fundamentally different 
understandings of what reviews are for. Are they quality control? Mentoring 
opportunities? Opportunities to teach others how something new works? A way to 
encourage cohesiveness in the project?

It can help to publicly state the intent somewhere. I think the word "review" 
is mentioned in the manual 11 times, but nowhere does it say what the review's 
purpose is.

Large, public projects like Guix are a bit different, so I'm not sure this 
applies, but reviews meant to be gates on quality are my least favorite:

(Please note: these are general observations about the industry and not 
necessarily specific to Guix)

- The reviewers are human too, and there are various circumstances where they
  will miss things. Some of the most insidious forms of this are: tragedy
  of the commons, i.e.

  > Submitter: They always do such a good job catching things. I think this is
  > good, but I know they'll catch any issues.

  > Reviewer: I feel bad this has sat for so long, this person usually does a
  > good job. +1 without a detailed review.

  > Submitter: A +1! It must not have had any issues.

- Unavoidably, because of human nature, groups form, and certain people
  experience less friction getting patches in. See the last point.

- There is a feedback loop present: those who have committed earlier have
  control and are more likely to reject later commits which don't do things as
  they would have. Sometimes "quality" is abused as a cover for opinion. Very
  few people are doing this maliciously, but it still happens.

- As I mentioned in another thread[1], trying to chase the ideal of quality
  may actually be worse in the end, or be a local maximum for quality or
  utility. Focusing on velocity may help achieve the global maximum for both.
  As always, there is a balance.

> It’s not about urgency but rather about not contributing to the growth
> of our patch backlog, which is a real problem.

I have often seen folks on various projects worried about the size of various 
backlogs: bugs, issues, etc. I think it is human to want to try and contain 
something that appears to be growing, unbounded.

I think the thing that bothers us is a sense that the backlog is becoming 
unmanageable, or too large to triage. I submit that this is actually a tooling 
and organizational issue, and not an intrinsic issue to be solved. Bugs may 
still be valid; patches may still have valuable bones to modify.

I think the real issue is that as a backlog grows, the tools we're used to 
using cannot answer the questions we want to ask: what is most relevant to me 
or the project right now?

To me, this sounds like a search and display problem.

[1] - https://lists.gnu.org/archive/html/guix-devel/2021-10/msg00081.html

-- 
Katherine



Re: --with-source version not honored?

2021-10-21 Thread zimoun
Hi Phil,

On Wed, 20 Oct 2021 at 20:46, Phil  wrote:

> guix environment --with-source=foobar@9.5.0=/path/to/package
> some-package-that-depends-on-foobar --ad-hoc foobar

Well, I do not know what you are trying to achieve.


> This gives me the warning that with-source will have no effect on
> some-package-that-depends-on-foobar, but that's not strictly true here.
>
> In this case the depending package's dependencies are installed into the
> environment including the original version of foobar (9.0.1) first, but this
> is then overwritten by foobar@9.0.5 - I can see files from
> both packages under the site-packages for foobar.

Usually, the transformation reads:

  guix build foobar --with-source=foobar=/path/to/foobar/v9.5.0

or

  guix build barbaz --with-inputs=foobar=foobar@9.5.0


The transformation ’with-source’ overwrites the origin of the target
package.  Target package means the one specified after ’--with-source=’.

If you have 2 packages ’foobar’, one at version 9.0.1 and one at 9.5.0,
and you want to overwrite the source of 9.5.0, from my understanding,
you should write:

 guix build foobar@9.5.0 --with-source=foobar@9.5.0=/path/to/foobar/v9.5.0-modif

Does it not work?


The transformation ’with-inputs’ overwrites the inputs.  The package
’barbaz’ depends on the package ’foobar’ at one specific version; in the
input lists, there is ("foobar" ,foobar-9.0.1) and here ,foobar-9.0.1
refers to a symbol defining a package at a specific version.  This
symbol is probably just ’foobar’, but when everything is named foobar,
the explanation gets harder. ;-)

If you have another version of foobar, you have probably another symbol,
say foobar-9.5.0.  Then ’with-inputs=foobar=foobar@9.5.0’ replaces the
symbol listed as inputs of ’barbaz’ by the symbol referring to
“foobar@9.5.0“.  In the example, somehow, the symbol ’foobar-9.0.1’ will
be replaced by the symbol ’foobar-9.5.0’.

Now, be careful with ("identifier" ,symbol).  I do not remember if
’with-inputs’ uses ‘identifier’ or the “name” that ’symbol’ defines.  I
guess it is “name”.  I agree that it can appear confusing; a change in
core-updates should remove this confusing ’identifier’, I guess. :-)

> Any ideas if I can create a new package with --with-source and then
> substitute it in the same command for an input of another package? 

Is this command line not working

  guix build barbaz --with-source=foobar=/path/to/replaced/foobar

?  Assuming the package ’barbaz’ depends on the package “foobar”.


Otherwise, maybe you can provide a concrete example defining the
packages ’barbaz’ and ’foobar’. :-)


Cheers,
simon



Re: Tricking peer review

2021-10-21 Thread zimoun
Hi,

On Wed, 20 Oct 2021 at 19:03, Leo Famulari  wrote:
> On Tue, Oct 19, 2021 at 10:39:12AM +0200, zimoun wrote:
>> Drifting from the initial comment.  One could name “tragic” commits are
>> commits which break “guix pull”.  It is rare these days but there are
>> some reachable ones via “guix time-machine”.
>
> That's a good point. Is it a good idea to teach Guix to help (not force)
> users to avoid these commits?

Somehow, it means extend the mechanism behind
’channel-with-substitutes-available’ [1], I guess.

1: 


Cheers,
simon





Re: Public guix offload server

2021-10-21 Thread zimoun
Hi Tobias,

On Wed, 20 Oct 2021 at 23:06, Tobias Geerinckx-Rice  wrote:

> Giving access only to people with commit access is a given, but 
> any shared offload server is a huge shared security risk.
>
> Guix is not content-addressed.  Any [compromised] user can upload 
> arbitrary malicious binaries with store hashes identical to the 
> legitimate build.  These malicious binaries can then be downloaded 
> by other clients, which presumably all have commit access.
>
> Now the attacker almost certainly has covert access to one or more 
> user accounts that can push signed commits to Guix upstream.

If I understand correctly, if a committer offloads to say Berlin or
Bayfront, your concern is that the output will be in the publicly
exposed store.  Right?

If yes, one could imagine two stores: one populated by CI as is
currently done, and another one mounted elsewhere, treated as a sandbox
and regularly garbage-collected.

For instance, one could imagine a dedicated VM for all the committers
who require some CPU power.

I mean, it is some system administration work, but is it not technically
feasible?


> At that point, one might consider dropping SSH account-based 
> access in favour of a minimal job submission API, and just return 
> the results through guix publish or so…?  OTOH, that's yet another 
> code path.

A minimal job submission API with token would be ideal, IMHO.  But it
falls into:

Now is better than never.
Although never is often better than *right* now.

– python -c 'import this' –


> By waiting, and planning.  I'm lucky to have a ridiculously 
> overpowered ThinkPad for its age and a newer headless tower at 
> home that can run builds 24/7, but nothing close to a ‘powerful 
> workstation’ by industry standards.

I sympathize with Arun’s requests.  For instance, it is impossible to
review Julia packages using my old laptop.  Even just compiling Guix
from source takes ages, and it is getting worse and worse.
Fortunately, I am lucky enough to have remote access to some
workstations at work.

Yes, we can wait and plan for a better solution to help committers do
their reviews. ;-)


Cheers,
simon



Re: Preservation of Guix Report

2021-10-21 Thread zimoun
Hi Timothy,

On Wed, 20 Oct 2021 at 15:48, Timothy Sample  wrote:

> Early this summer I did a bunch of work trying to figure out which Guix
> sources are preserved by the SWH archive.  I’m finally ready to share
> some preliminary results!
>
> https://ngyro.com/pog-reports/2021-10-20/

Cool!  Really interesting.


> What’s cool is that the report is automated.  Next on my list is to
> update the database and generate a new report.  Then, we can compare the
> results and see if we are improving.  (My read on the results so far is
> that improving “sources.json” will yield big improvements, but we might
> not be able to get to that before the next report.)

Here are two minor comments:

 1. For a couple of days now, I have been running:

$ GUIX_SWH_TOKEN=$TOKEN guix lint -c archival

where $TOKEN is provided by the SWH authentication service [1].
Instead of a rate limit of 120 requests, it is 1200.  Therefore, more
’git-fetch’ packages are being added.  I am in the process of
automating that, but do not hold your breath. :-)

 2. For still unknown reasons, the bridge between SWH and Disarchive
has some holes.  For instance,

$ guix lint -c archive znc
gnu/packages/messaging.scm:996:12: znc@1.8.2: Disarchive entry refers to non-existent SWH directory '33a3b509b5ff8e9039626d11b7a800281884cf2a'

$ wget https://guix.gnu.org/sources.json
$ cat sources.json | jq | grep znc
 "integrity": "sha256-IwbxlQzncsWlmlf1SG1Zu5yrmEl8RfxJy8RawN7BGbs="
 "integrity": "sha256-q0jatpd+j0PW//szIo0ViGX2jd5wJtEjxpPXcznc8rs="
   "https://znc.in/releases/archive/znc-1.8.2.tar.gz;

$ guix download https://znc.in/releases/archive/znc-1.8.2.tar.gz
Starting download of /tmp/guix-file.hnjWTE
From https://znc.in/releases/archive/znc-1.8.2.tar.gz...
 znc-1.8.2.tar.gz  2.0MiB 599KiB/s 
00:03 [##] 100.0%
/gnu/store/58khbiwp2ghhzg00gnzdy2jlfv49vajm-znc-1.8.2.tar.gz
03fyi0j44zcanj1rsdx93hkdskwfvhbywjiwd17f9q1a7yp8l8zz

Therefore, something is wrong somewhere.  Because of #1, I detect
many such examples.  I do not know whether the SWHID computed by
Disarchive is incorrect or whether SWH simply has not ingested the
source.  Investigation required. :-)
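
One quick way to investigate is to ask the SWH archive directly
whether it knows the directory that Disarchive refers to (this uses
the public REST API; an HTTP 404 would confirm that the directory was
never ingested):

$ curl -s -o /dev/null -w "%{http_code}\n" \
    https://archive.softwareheritage.org/api/1/directory/33a3b509b5ff8e9039626d11b7a800281884cf2a/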


1: 


> It’s surprising to me that SWH is not already getting these from
> “sources.json”.  I picked an arbitrary one, “rust-quote-0.6”, and it’s
> simply not in “sources.json”.  On the other hand, I bet SWH would like a
> crates.io (and CRAN, etc.) loader, too.

From the SWH doc, there is a CRAN lister [2], but I have not checked
what they concretely ingest.  Because on our side we use ’url-fetch’,
it seems possible to have a tiny mismatch between what is inside the
release tarball (what we concretely use) and what SWH ingests directly
from CRAN.

2: 



And answering your question [3] about “sources.json”: I think the
ingestion started after commit
35bb77108fc7f2339da0b5be139043a5f3f21493 from guix-artwork.  In other
words, SWH started to ingest from “sources.json” after July 2020,
probably around September 2020.

3: 

> One other way to help would be to suggest improvements to the report.  I
> don’t want to fiddle with it too much, but if there is some simple graph
> or table or list that should be there, I’m happy to give it a go.

For the Missing and Unknown fields, could you distinguish the kind of
origin?  Is it mainly git-fetch, url-fetch, or others?

It would help to spot which issues to work on (sources.json, the SWH
side, Disarchive, etc.).


Cheers,
simon



Re: Tricking peer review

2021-10-21 Thread Ludovic Courtès
Hi,

Leo Famulari  skribis:

> On Fri, Oct 15, 2021 at 08:54:09PM +0200, Ludovic Courtès wrote:
>> The trick is easy: we give a URL that’s actually 404, with the hash of a
>> file that can be found on Software Heritage (in this case, that of
>> ‘grep-3.4.tar.xz’).  When downloading the source, the automatic
>> content-addressed fallback kicks in, and voilà:
> [...]
>> Thoughts?
>
> It's a real risk... another illustration that our security model trusts
> committers implicitly (not saying that's a bad thing or even avoidable).
>
> In years past I mentioned a similar technique but based on using
> old/vulnerable versions of security-critical packages like OpenSSL. The
> same approach would have worked since we started using Nix's
> content-addressed mirror.

Right.  Like zimoun wrote, the SWH fallback makes this even more
stealthily exploitable.

>> It’s nothing new, it’s what I do when I want to test the download
>> fallbacks (see also ‘GUIX_DOWNLOAD_FALLBACK_TEST’ in commit
>> c4a7aa82e25503133a1bd33148d17968c899a5f5).  Still, I wonder if it could
>> somehow be abused to have malicious packages pass review.
>
> Nice feature! Sorry if this was already suggested, but is it possible to
> create an argument to this variable that disallows use of the fallback
> mechanisms? I would certainly use that while reviewing and testing my
> own patches.

Yes, you can do “GUIX_DOWNLOAD_FALLBACK_TEST=none” (added in
bd61d62182bfda4a695757ec66810b28e8e1a6d0).
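
For the curious, usage would look something like this (a sketch; any
package name works, and --check forces the fixed-output derivation to
be re-run):

  $ GUIX_DOWNLOAD_FALLBACK_TEST=none guix build --source --check hello

With “none”, the download fails outright if the upstream URL is dead,
instead of silently falling back to content-addressed mirrors or SWH.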

Thanks,
Ludo’.