Re: “Building a Secure Software Supply Chain with GNU Guix”

2022-06-30 Thread bokr
On +2022-06-30 16:13:10 +0200, Ludovic Courtès wrote:
> Hello Guix!
> 
> I’m happy to announce the publication of a refereed paper in the
> Programming journal:
> 
>   https://doi.org/10.22152/programming-journal.org/2023/7/1
> 
> It talks about the “secure update” mechanism used for channels and how
> it fits together with functional deployment, reproducible builds, and
> bootstrapping.  Comments from reviewers showed that explaining the whole
> context was important to allow people not familiar with Guix or Nix to
> understand why The Update Framework (TUF) isn’t a good match, why
> Git{Hub,Lab} “verified” badges aren’t any good, and so on.
> 
> What’s presented there is not new if you’ve been following along, but
> hopefully it puts things in perspective for outsiders.
> 
> I also think that one battle here is to insist on verifiability when a
> lot of work about supply chain security goes into “attestation” (with
> in-toto, sigstore, Google’s SLSA, and the likes.)
> 
> Enjoy!
> 
> Ludo’.

Congratulations!

And thank you! I needed that assurance that guix really takes trust
seriously, and has a convincing internals story to back it up.

The "artifact" at [1] has a README.md [2] that's IMO definitely
also worth a read even if you aren't going to execute the image.

[1] 
[2] 

About this example (I like documentation that provides
things you can try):
--8<---cut here---start->8---
  5. Going back to our target revision, we can see that `gpg` can indeed
 verify signatures now: `git checkout
 20303c0b1c75bc4770cdfa8b8c6b33fd6e77c168 && git log --oneline
 --show-signature`.  `gpg` warns about expired keys but, as the
 paper notes, OpenPGP key expiration dates are ignored for our
 purposes (and the authentication code in Guix does *not* use
 `gpg`).
--8<---cut here---end--->8---
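
Tangentially, the check that Guix itself performs is exposed as
`guix git authenticate`.  If I read the manual right, for the official
repository it is invoked (from a guix checkout) roughly as:

  guix git authenticate 9edb3f66fd807b096b48283debdcddccfea34bad \
    "BBB0 2DDF 2CEA F6A8 0D1D  E643 A2A0 6DF2 A33A 54FA"

i.e. you hand it the channel introduction (a commit plus the fingerprint
of the key that signed it) and it verifies every commit made since.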

I think IWBN to have some kind of trust code come with that git output,
like gpg's 1-5 but indicating how well the committer/signer trusts
that using the code will *not* cause a problem.

I would like it if every commit had to have a code like that.

Even if it was "0," indicating that the committer judged
security to be irrelevant, I'd feel better knowing it was
part of committer workflow to be nudged into thinking
about the security aspect of the commit.

(Code alternative: an answer to the old real-opinion-extractor:
"How much money, at what odds, will you bet me that this
commit will not cause me problems?" ;-)


Hm, actually I think a 3-digit LTS code is required for reviewing:
with L indicating trust that the contribution is Legally ok,
and  T indicating trust in the Technical competence of the contributors,
and  S indicating trust in the Social aspect of the contribution (criminal vs. saint).

OTTOMH encoding: digits 0-9, where 0 = no info and 1-9 map (subtract 5)
to -4..+4, with negatives meaning the un-good opposites of the positives.
So code 191 would decode to -4,+4,-4, e.g.:
  L-4: certain to have patent problems,
  T+4: contributed by a professional hacker,
  S-4: known criminal in the supply chain.
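
To make that concrete, a tiny Guile sketch of the decoding (the whole
scheme is hypothetical, of course):

  ;; Decode a 3-digit LTS code such as "191": digit 0 means "no info",
  ;; digits 1-9 map to the range -4..+4.
  (define (decode-lts-digit d)
    (if (zero? d) 'no-info (- d 5)))

  (define (decode-lts code)
    (map (lambda (c)
           (decode-lts-digit (- (char->integer c) (char->integer #\0))))
         (string->list code)))

  ;; (decode-lts "191") => (-4 4 -4)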


--
Regards,
Bengt Richter



Re: Experimental nar-herder support for serving fixed output files by hash

2022-06-30 Thread Christopher Baines

Ludovic Courtès  writes:

>>> IWBN to share as much code as possible with ‘guix publish’, which has
>>> great test suite coverage and is being hammered every day.  Clearly the
>>> bit about extracting nars is specific to the nar-herder though, so that
>>> may prove difficult.
>>
>> I'm going to look at the Guile Fibers web server, hopefully that can be
>> improved to support streaming responses, which would allow removing a
>> lot of custom code from guix publish.
>
> By “streaming responses”, do you mean pipelining?

Streaming might not be the best word, but I'm referring to not having to
have the entire response body in memory before sending the first byte.

I've now done an initial implementation of this:

  https://github.com/wingo/fibers/pull/63
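
To illustrate the idea: a handler would return a procedure as the
response body, and the server would call it with the client port, so
nothing bigger than a chunk sits in memory at once.  Only a rough
sketch, with a made-up file name -- the actual API is whatever ends up
in the pull request above:

  (use-modules (ice-9 binary-ports))

  (define (handler request body)
    (values '((content-type . (application/octet-stream)))
            ;; Body as a procedure: write to PORT as data becomes
            ;; available instead of building one big bytevector.
            (lambda (port)
              (call-with-input-file "/var/cache/nar-herder/big.nar"
                (lambda (in)
                  (let loop ((chunk (get-bytevector-n in 65536)))
                    (unless (eof-object? chunk)
                      (put-bytevector port chunk)
                      (loop (get-bytevector-n in 65536)))))
                #:binary #t))))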

> How would that affect ‘guix publish’?

My reading of the concurrent-http-server in the publish script suggests
it could be replaced by the fibers web server, if it supports
"streaming" response bodies, as I've described above.

That might introduce some fibers-related issues as I'm guessing there
might be some native code used for compression, but it would be a way to
remove the workarounds related to the web server part.

>> There isn't all that much code to the nar-herder though, and most of
>> what is there is doing different things to guix publish, so I'm not sure
>> there's all that much to share.
>>
>> What I was getting at here though, ignoring the implementation, was
>> whether this is worthwhile to do? As in, is there benefit to having this
>> and being able to extend the content addressed mirrors that Guix uses?
>
> Having more content-addressed mirrors is worthwhile IMO, yes.

Cool :)

> Having two different implementations of the same interfaces may not be
> ideal, though, in terms of long-term maintenance cost.

Indeed, and I'm not set on keeping the nar-herder separate from guix
publish, but the nar-herder does seem to be doing a good job, and I
haven't seen a way of unifying the tooling yet.

Thanks,

Chris


signature.asc
Description: PGP signature


Re: Rust in the kernel

2022-06-30 Thread Leo Famulari
On Thu, Jun 30, 2022 at 12:37:54PM -0400, Leo Famulari wrote:
> Within Guix, we'll need to adapt our kernel build processes in order to
> support this.

An update on GCC support for Rust:

https://lwn.net/Articles/899385/


signature.asc
Description: PGP signature


Rust in the kernel

2022-06-30 Thread Leo Famulari
The effort to use the Rust programming language within the Linux kernel
is progressing and may be realized in the next few months:

https://lwn.net/SubscriberLink/899182/6c831b90eaee015e/
https://www.memorysafety.org/blog/memory-safety-in-linux-kernel/

Within Guix, we'll need to adapt our kernel build processes in order to
support this.

Although I help with updating and configuring the kernel builds, I won't
be able to participate in the "Rust in the kernel" effort for Guix.

So, interested volunteers should begin organizing :)


signature.asc
Description: PGP signature


Re: r-mathjaxr

2022-06-30 Thread Guillaume Le Vaillant
Ricardo Wurmus  skribis:

> Ricardo Wurmus  writes:
>
>> unfortunately I had to revert commits
>> 9078c651e8d50e08b46e3b2da1c532c15af5ddb6 (Add r-mathjaxr) and
>> 00056eafaefed0af8535f219760fbbe01dd6f240 (updating r-metafor).
> […]
>> The good news is that we can soon build a
>> slightly degraded version of mathjaxr completely from source.
>
> I have restored the commits, changing r-mathjaxr to use the files we’ve
> built in the js-mathjax-for-r-mathjaxr package.
>
> I decided not to delete the a11y directory from r-mathjaxr (which we
> couldn’t build in js-mathjax-for-r-mathjaxr), because these files are
> not minified and rather short, so it doesn’t hurt to keep them.

Ok. Thanks for fixing it.


signature.asc
Description: PGP signature


“Building a Secure Software Supply Chain with GNU Guix”

2022-06-30 Thread Ludovic Courtès
Hello Guix!

I’m happy to announce the publication of a refereed paper in the
Programming journal:

  https://doi.org/10.22152/programming-journal.org/2023/7/1

It talks about the “secure update” mechanism used for channels and how
it fits together with functional deployment, reproducible builds, and
bootstrapping.  Comments from reviewers showed that explaining the whole
context was important to allow people not familiar with Guix or Nix to
understand why The Update Framework (TUF) isn’t a good match, why
Git{Hub,Lab} “verified” badges aren’t any good, and so on.

What’s presented there is not new if you’ve been following along, but
hopefully it puts things in perspective for outsiders.

I also think that one battle here is to insist on verifiability when a
lot of work about supply chain security goes into “attestation” (with
in-toto, sigstore, Google’s SLSA, and the likes.)

Enjoy!

Ludo’.


signature.asc
Description: PGP signature


Re: r-mathjaxr

2022-06-30 Thread Ricardo Wurmus


Ricardo Wurmus  writes:

> unfortunately I had to revert commits
> 9078c651e8d50e08b46e3b2da1c532c15af5ddb6 (Add r-mathjaxr) and
> 00056eafaefed0af8535f219760fbbe01dd6f240 (updating r-metafor).
[…]
> The good news is that we can soon build a
> slightly degraded version of mathjaxr completely from source.

I have restored the commits, changing r-mathjaxr to use the files we’ve
built in the js-mathjax-for-r-mathjaxr package.

I decided not to delete the a11y directory from r-mathjaxr (which we
couldn’t build in js-mathjax-for-r-mathjaxr), because these files are
not minified and rather short, so it doesn’t hurt to keep them.

-- 
Ricardo



Re: Experimental nar-herder support for serving fixed output files by hash

2022-06-30 Thread Ludovic Courtès
Hi,

Christopher Baines  skribis:

> Ludovic Courtès  writes:
>
>>> The code isn't great, there's some difficulty in extracting the single
>>> file from the nar, but the biggest problem is a limitation in the guile
>>> fibers web server. Currently, responses have to be read into memory,
>>> which is fine for web pages, but not great if you're trying to serve
>>> files which can be multiple gigabytes in size. This also means that the
>>> first byte of the response is only available once all the bytes are
>>> available, so the download is slow to start.
>>
>> That, and in practice a cache (with some eviction mechanism) would be
>> necessary so nars don’t need to be extracted every time and so we can
>> use sendfile(2).
>
> I'd actually imagined that this would be used infrequently, but yeah, if
> the decompression does become a bottleneck, then some caching reverse
> proxy could help reduce that.

Relying on a proxy may be insufficient, because you still have incoming
requests that can trigger unbounded peaks of I/O and CPU usage, and
these requests may not be satisfied in time (the client may hang up
before the server is done processing the nar.)
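
For the record, part of the point of caching the extracted files is
that the copy can then be handed to the kernel: Guile's 'sendfile'
wraps sendfile(2).  A sketch, assuming IN-FILE is the cached file and
SOCK the client socket port:

  (define (serve-cached-file sock in-file)
    ;; Push the cached file to the client socket without copying it
    ;; through user-space buffers.
    (call-with-input-file in-file
      (lambda (in)
        (sendfile sock in (stat:size (stat in-file))))
      #:binary #t))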

>> IWBN to share as much code as possible with ‘guix publish’, which has
>> great test suite coverage and is being hammered every day.  Clearly the
>> bit about extracting nars is specific to the nar-herder though, so that
>> may prove difficult.
>
> I'm going to look at the Guile Fibers web server, hopefully that can be
> improved to support streaming responses, which would allow removing a
> lot of custom code from guix publish.

By “streaming responses”, do you mean pipelining?

How would that affect ‘guix publish’?

> There isn't all that much code to the nar-herder though, and most of
> what is there is doing different things to guix publish, so I'm not sure
> there's all that much to share.
>
> What I was getting at here though, ignoring the implementation, was
> whether this is worthwhile to do? As in, is there benefit to having this
> and being able to extend the content addressed mirrors that Guix uses?

Having more content-addressed mirrors is worthwhile IMO, yes.

Having two different implementations of the same interfaces may not be
ideal, though, in terms of long-term maintenance cost.

Thanks,
Ludo’.



Re: Rust reproducibility -- .rmeta and shadow-rs

2022-06-30 Thread Ludovic Courtès
Hello!

Maxime Devos  skribis:

> There was some mail about irreproducibility in Rust, but I couldn't
> find it anymore.  Anyway, I found a potential cause: rust-shadow-rs
> embeds timestamps (even though it nominally respects
> SOURCE_DATE_EPOCH???) and the ordering of definitions it generates is
> based on a hash map (and hence, irreproducible).

I found these:

  https://issues.guix.gnu.org/50015
  https://issues.guix.gnu.org/55928

> The crate id is based on a hash over the source code, so this
> irreproducibility can cause build failures if substitutes are used.
>
> By removing the time stamp and sorting the definitions, 'nushell'
> successfully built on ci.guix.gnu.org whereas it previously failed to
> build on ci.guix.gnu.org but built successfully locally (with
> antioxidant), and IIRC the (antioxidated) 'rust-nu-command' is now
> reproducible:
>
> Anyway, here is the patch I used:
>
> ("rust-shadow-rs"
>  ,#~((add-after 'unpack 'fixup-source-date-epoch
>  (lambda _
>;; TODO: it nominally supports SOURCE_DATE_EPOCH, yet something 
> things go wrong,
>;; as the shadow.rs still contains the unnormalised time stamp ...
>;; For now, do a work-around.
>(substitute* '("src/lib.rs" "src/env.rs")
>  (("BuildTime::Local\\(Local::now\\(\\)\\)\\.human_format\\(\\)")
>   (object->string "[timestamp expunged for reproducibility]"))
>  (("time\\.human_format\\(\\)")
>   "\"[timestamp expunged for reproducibility]\".to_string()")
>  (("time\\.to_rfc3339_opts\\(SecondsFormat::Secs, true)")
>   "\"[timestamp expunged for reproducibility]\".to_string()")
>  (("time\\.to_rfc2822\\(\\)")
>   "\"[timestamp expunged for reproducibility]\".to_string()"
>(add-after 'unpack 'more-reproducibility ;; by default, it uses a 
> hashmap, leading to an irreproducible ordering in shadow.rs and hence an 
> irreproducible .rmeta (TODO: upstream?)
>  (lambda _
>(substitute* "src/lib.rs" ; sort
>  (("\\(k, v\\) in self\\.map\\.clone\\(\\)")
>   "(k, v) in 
> std::collections::BTreeMap::from_iter(self.map.clone().iter())")
>  (("self\\.write_const\\(k, v\\)") "self.write_const(k, 
> v.clone())")
>  (("self\\.map\\.keys\\(\\)") 
> "std::collections::BTreeSet::from_iter(self.map.keys())"))
>
> Maybe that was the cause?

You mean this issue you identified could have been the cause of
reproducibility issues found in other Rust packages?

Anyway, it looks like the snippet above should be applied to
‘rust-shadow-rs’ in current ‘master’, no?

Thanks,
Ludo’.



Re: RISCV porting effort

2022-06-30 Thread Ludovic Courtès
Howdy,

Efraim Flashner  skribis:

> (ins)efraim@3900XT ~/workspace/guix$ time ./pre-inst-env guix weather -s 
> riscv64-linux --substitute-urls="http://localhost:3000" -c100
> computing 15,205 package derivations for riscv64-linux...
> looking for 15,948 store items on http://localhost:3000...
> http://localhost:3000
>   14.3% substitutes available (2,274 out of 15,948)

Not bad!

Was it all built on a HiFive, or through emulated builds?

> Some notes:
> * rust is definitely TODO
> * GHC shouldn't be there on the list.
> * gccgo should replace go@1.4. Currently I can't use gccgo@10 to build
>   go@1.16.15, 1.17.9 or 1.17.11 on riscv64. gccgo@10 works for
>   go@1.16.15 and 1.17.11.
> * postgresql@13.6 I think is missing a patch currently
> * libunwind isn't supported until 1.6.*
> * valgrind isn't supported
> * classpath@0.93 is the java bootstrap path
> * openlibm, tbb and libunwind-julia are for julia
> * node@10 doesn't (yet) recognize riscv64
>
> After that I don't remember offhand. I'm not sure I've tried yet to
> build anything after ~170 so those can be ignored.

My guess is that upstream doesn’t go much further than you did, so
thumbs up!

Ludo’.



Re: guix refresh to a specific version?

2022-06-30 Thread Ludovic Courtès
Hartmut Goebel  skribis:

> FYI: I'm working on this. Given the number of importers and updaters
> is just takes some time.

Excellent.  If you want, you can ping me for early comments on the
API for this.

Ludo’.



Re: Rust reproducibility -- .rmeta and shadow-rs

2022-06-30 Thread Maxime Devos
> https://issues.guix.gnu.org/50015

I don't think that shadow-rs changes Cargo.toml, so I think that one is
a separate issue ...

> https://issues.guix.gnu.org/55928

Maybe related, though .rmeta isn't mentioned there so maybe not.
It seems related to debugging information, so the report at (and fix at)
https://github.com/rust-lang/rust/issues/34902#issuecomment-565557076
may be important there.

Ludovic Courtès schreef op do 30-06-2022 om 13:35 [+0200]:
> > Anyway, here is the patch I used:
> > 
> > [...]
> > 
> > Maybe that was the cause?
> 
> You mean this issue you identified could have been the cause of
> reproducibility issues found in other Rust packages?

I don't know if it applies to leaf packages or only to dependencies,
but perhaps! 

> Anyway, it looks like the snippet above should be applied to
> ‘rust-shadow-rs’ in current ‘master’, no?

Yes -- also, upstream has a patch now!

That's only for the ordering, not the timestamps, though.
The timestamp issue might be caused by how antioxidant replaces some
inputs and fiddles with ‘features’, so I haven't reported that yet.


Greetings,
Maxime.


signature.asc
Description: This is a digitally signed message part


Re: Dealing with upstream issues

2022-06-30 Thread Ludovic Courtès
Hi!

Just to be clear, I started this thread as discussion on the kind of
interaction we reviewers should offer to submitters.  I am not
suggesting changes in our “quality standards”, nor am I suggesting that
one group of people do more work.

Maxime Devos  skribis:

> Ludovic Courtès schreef op ma 27-06-2022 om 12:10 [+0200]:
>> Regarding the review process, I think we should strive for a predictable
>> process—not requesting work from the submitter beyond what they can
>> expect.  Submitters can be expected to follow the written rules¹ and
>> perhaps a few more rules (for example, I don’t think we’ve documented
>> the fact that #:tests? #f is a last resort and should come with a
>> comment). 
>> 
>> However, following that principle, we reviewers cannot
>> reasonably ask for work beyond that. [...]
>
> We can ask to send the notice upstream, if it's in the rules (I
> consider this to be part of the unwritten rules).

Yes, that’s a reasonable thing to ask for.  However, we can only ask for
it if the submitter identified a problem themselves; if I’m the one
finding a problem, it’s inappropriate to ask someone else to report it
on my behalf.

> And I don't often have time for addressing the noticed issues and I
> have other things to do as well -- I usually limit myself to just
> reviewing.  I do not intend to take up work beyond that.

Of course, and I don’t mean reviewers should do more work!  I think the
few people that dedicate time to patch review already have more than
enough on their plate; the last thing I’d want is to put more pressure
on them.  We have to care for one another, and that starts by making
sure none of us feels a pressure to always do more.

> I also consider them to not be rules as in ‘if they are followed, it
> WILL be accepted’ but more guidelines like ‘these things are important,
> they usually need to be followed, but it's not exhaustive, at times it
> might be discovered the list is incomplete’.

Agreed.

> I don't think that patch submitters can reasonably expect reviewers to
> do all the work.

Agreed.

> Also, previously in discussions about the review process, weren't there
> points about a reviewer not having to do everything all at once, they
> could choose to review parts they know how to review and have time for
> and leave the rest for others?

I don’t remember discussions along these lines.  I think it can be
helpful sometimes, and tricky other times.

For example, for a package, I find it hard to split review work.  But
for a patch series that touches many different things (documentation,
build system, importer, whatever), splitting review work among different
people may work better.

>> Related to that, I think it’s important to have a consistent review
>> process.  In other words, we should be equally demanding for all
>> patches to avoid bad surprises or a feeling of double standard.
>
> I think I've been consistent in asking to inform upstream of the issues
> (*), with no distinction of whether it's a new submitter or an
> established one or whatever.

I think our standards should be the same whether the submitter is a
newcomer or not.

The difference is in how we reviewers reply.  To a newcomer, reviewers
should IMO do the “last kilometer” themselves on behalf of submitters:
tweaking the commit log, synopsis/description, indentation, that kind of
thing.  It’s important because that gives submitters a good experience,
it helps them learn, and it’s also low-friction for the reviewer.
(That’s also the whole point of guix-mentors.)

Naturally, one can be more demanding of seasoned contributors, and I
think it’s OK to leave it up to them to fix such things.

Thanks,
Ludo’.

PS: Sorry for the wall of text!



Re: emacs-jedi package bug missing crucial python server file

2022-06-30 Thread Munyoki Kilyungi
jgart  writes:

> On Wed, 22 Jun 2022 12:14:10 +0300 Munyoki Kilyungi  
> wrote:
>> but do you have all your Emacs configs in Guix?  
>
> I get my emacs plugin dependencies with guix but I write my config in elisp:
>
> https://git.sr.ht/~whereiseveryone/jgart-dots/tree/master/item/dot_emacs
>

Thanks for this!

> Not sure how I feel yet about quoting elisp code in guile code. On first
> thought it seems like a nice party trick but not as nice to work with
> in the day to day when you just need to get the elisp code to work and
> to extend emacs. If I could write guile directly to configure emacs that
> would be simpler, I think. Also, one less level of abstraction if you
> need to debug the elisp code quoted in your guile code but maybe there
> is a way around that?...
>
> The above emacs config is bound to change. It's just what I'm using on this
> particular thinkpad that I'm writing this email from at the moment. Mind
> the funny spacing of the code. That said, I'm hoping to keep my .emacs
> to a minimum number of lines for now. Just exploring that for now and
> really milking `execute-extended-command` for all it's worth, which I
> have bound to space d, btw.
>
> Emacs packages to note in the above config:
>
> https://codeberg.org/akib/emacs-corfu-terminal
> https://codeberg.org/akib/emacs-corfu-doc-terminal
>
> ```
> $ guix install emacs-corfu-terminal emacs-corfu-doc-terminal
> ```
>
> I'm mostly using emacs-no-x (terminal only).
>
> I'm not using guix-home yet. It fails to build my config on void linux
> with some arcane error I haven't had the time to properly debug yet/get
> help with ;)
>

Hmmm... I think this is something I'll eventually
do, i.e. move to guix-home for all my dot files.
It has a nice premise of immutability; and I
reckon I could safely migrate away from
"use-package."

>> Also, how does your python set-up look like in Guix/Emacs?
>
> I mostly have been using eglot without any configuration. I just run
> M-x eglot and use it with python-lsp-server or jedi-language-server.
>

Is there a way to configure it to use a specific
guix-python-path?  A possible use-case for me
would be to have different python-paths for
different projects that use different
guix-profiles.  I don't want to pollute my global
space with python packages ;)

> I haven't explored the emacs-jedi package much that I sent an update for in
> this thread ;) It's on my TODO list.
>

:)

> If you want to read my favourite guix-home managed config out in the wild
> that I've found, I highly recommend unmatched-parens' config:
>
> https://git.sr.ht/~unmatched-paren/conf
>

Thanks!  I'll have a look at this later.

> I appreciate the minimalism and simplicity of the way paren has set up
> their dots with guix-home. It's very clear how to copy it and take it
> for your own purposes.
>

\m/\m/

-- 
Munyoki Kilyungi
D4F09EB110177E03C28E2FE1F5BBAE1E0392253F (hkp://keys.gnupg.net)
Free Software Activist
Humble GNU Emacs User | Bearer of scheme-y parens


signature.asc
Description: PGP signature


r-mathjaxr

2022-06-30 Thread Ricardo Wurmus
Hi,

unfortunately I had to revert commits
9078c651e8d50e08b46e3b2da1c532c15af5ddb6 (Add r-mathjaxr) and
00056eafaefed0af8535f219760fbbe01dd6f240 (updating r-metafor).

mathjaxr contains vast amounts of minified JavaScript that cannot be
built from source.  The Guix R team has been working on packaging this
properly for a very long time, but it’s still work in progress.  We
cannot include packages that merely wrap megabytes of minified
JavaScript — we need to actually build this from source.

There are a couple more R package updates that I’ve had to hold back
because of issues like that.  The good news is that we can soon build a
slightly degraded version of mathjaxr completely from source.

-- 
Ricardo



Re: Shall updaters fall back to other updaters?

2022-06-30 Thread Maxime Devos
Hartmut Goebel schreef op do 30-06-2022 om 10:58 [+0200]:
> Hi,
> 
> while working on refreshing to a specific version (see 
> https://lists.gnu.org/archive/html/guix-devel/2022-06/msg00222.html) I 
> discovered that the updaters fall back to another updater. Is this intended?
> 
> Concrete example (using refresh to a specific version): Package "xlsxio" 
> has no version 0.2.30. When trying to refresh to this version, the 
> github updater comes first and of course fails to get this version. Then 
> the generic-git updater is triggered and tries to get the version.
> 
> IMHO each package should be handled by a single updater.

I think that, in the absence of pre-existing knowledge of which updater is
the right one, trying out several until we get the ‘right’ one is
reasonable.

However, to avoid non-determinism, some CPU time, some I/O and
messiness, I think that somehow annotating packages to write down
_which_ updater applies would be reasonable (maybe with some defaults,
e.g. for minetest mods, ContentDB would be considered authoritative).
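
Something along these lines is what I have in mind -- the 'properties'
field already exists on <package>; the 'updater' key below is
hypothetical, just the annotation idea spelled out (assuming xlsxio is
in scope):

  (define xlsxio-with-updater
    (package
      (inherit xlsxio)
      ;; Hypothetical: tell 'guix refresh' which updater is authoritative.
      (properties `((updater . generic-git)
                    ,@(package-properties xlsxio)))))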

Anyway, this idea of ‘authoritative updaters’ has been raised before
(I think by Liliana, in the context of the Minetest updater and
generic-git), but I couldn't find the mail again.

Greetings,
Maxime.


signature.asc
Description: This is a digitally signed message part


Shall updaters fall back to other updaters?

2022-06-30 Thread Hartmut Goebel

Hi,

while working on refreshing to a specific version (see 
https://lists.gnu.org/archive/html/guix-devel/2022-06/msg00222.html) I 
discovered that the updaters fall back to another updater. Is this intended?


Concrete example (using refresh to a specific version): Package "xlsxio" 
has no version 0.2.30. When trying to refresh to this version, the 
github updater comes first and of course fails to get this version. Then 
the generic-git updater is triggered and tries to get the version.


IMHO each package should be handled by a single updater.

What do you think?

BTW 1: There are other packages which are handled by several updaters: 
if you sum up the percent values of "guix refresh --list-updaters" you 
will get about 140%. The generic-git updater alone contributes about 
30% to this amount.


BTW 2: Which updater is used for each package is non-deterministic.

--
Regards
Hartmut Goebel

| Hartmut Goebel  | h.goe...@crazy-compilers.com   |
| www.crazy-compilers.com | compilers which you thought are impossible |




Re: Why is greetd greeter user in so many groups?

2022-06-30 Thread Lars-Dominik Braun
Hi,

> Sounds good, thanks for the fix!
d921516f50a946e92f9d5dc6d3bd49aca9788ac2 services: greetd: Remove unnecessary 
user groups.

Cheers,
Lars