Re: State of core-updates

2023-03-15 Thread Felix Lechner
Hi Andreas,

On Wed, Mar 15, 2023 at 7:57 AM Andreas Enge  wrote:
>
> Somehow the sending of store items when offloading poses problems

Somehow I think I see a similar problem with 'guix deploy,' which I
have to restart manually several times in order to complete the store
transfer. In my case, though, the error presents as an SSH error ("parent
process is not connected", I think) rather than a Cuirass timeout.

Kind regards
Felix



Re: Notes from the Guix Days

2023-03-15 Thread Pjotr Prins
On Wed, Mar 15, 2023 at 05:34:24PM +0100, Ludovic Courtès wrote:
> Pjotr, can you add Andreas’ notes linked above?  Does anyone else have
> notes to share?

Added.

https://gitlab.com/pjotrp/guix-days-fosdem-2023



Re: State of core-updates

2023-03-15 Thread Kaelyn
Hi,

On the topic of the state of core-updates, I wanted to mention two things 
affecting i686-linux builds (and by extension some x86_64 packages like wine64):

1) glib-networking has a 32-bit-only patch left over from the upgrade from 
2.70.0 to 2.72.2, which does not apply against the newer version, and which 
seems unneeded. I just sent in https://issues.guix.gnu.org/62209 to fix the 
package.

2) libaio 0.3.113 does not build on core-updates, though the previous version 
0.3.112 does. I'm not sure how to handle this one, as the failure is a compile 
error from one of the test cases:

gcc -Wall -Werror -I../src -g -O2 -DTEST_NAME=\"cases/23.t\" -o cases/23.p 
main.c ../src/libaio.a -lpthread
mkdir testdir
rm -f testdir/rofile
echo "test" >testdir/rofile
chmod 400 testdir/rofile
rm -f testdir/rwfile
rm -f testdir/wofile
echo "test" >testdir/rwfile
echo "test" >testdir/wofile
chmod 600 testdir/rwfile
chmod 200 testdir/wofile
In file included from main.c:24:
cases/23.t: In function ‘thrproc2’:
cases/23.t:82:35: error: passing argument 2 of ‘splice’ from incompatible 
pointer type [-Werror=incompatible-pointer-types]
   82 | if (splice(tmpfd, , pipefds[1], NULL, 1, 0) != 1)
  |   ^~~
  |   |
  |   off_t * {aka long int *}
In file included from 
/gnu/store/0hr9jpczkcgpgqkhf4q4868xd57h5a62-glibc-2.35/include/bits/fcntl.h:61,
 from 
/gnu/store/0hr9jpczkcgpgqkhf4q4868xd57h5a62-glibc-2.35/include/fcntl.h:35,
 from main.c:9:
/gnu/store/0hr9jpczkcgpgqkhf4q4868xd57h5a62-glibc-2.35/include/bits/fcntl-linux.h:398:49:
 note: expected ‘__off64_t *’ {aka ‘long long int *’} but argument is of type 
‘off_t *’ {aka ‘long int *’}
  398 | extern __ssize_t splice (int __fdin, __off64_t *__offin, int __fdout,
  |  ~~~^~~
In file included from main.c:24:
cases/23.t: In function ‘thrproc3’:
cases/23.t:106:35: error: passing argument 2 of ‘splice’ from incompatible 
pointer type [-Werror=incompatible-pointer-types]
  106 | if (splice(tmpfd, , pipefds[1], NULL, 1, 0) != 1)
  |   ^~~
  |   |
  |   off_t * {aka long int *}
In file included from 
/gnu/store/0hr9jpczkcgpgqkhf4q4868xd57h5a62-glibc-2.35/include/bits/fcntl.h:61,
 from 
/gnu/store/0hr9jpczkcgpgqkhf4q4868xd57h5a62-glibc-2.35/include/fcntl.h:35,
 from main.c:9:
/gnu/store/0hr9jpczkcgpgqkhf4q4868xd57h5a62-glibc-2.35/include/bits/fcntl-linux.h:398:49:
 note: expected ‘__off64_t *’ {aka ‘long long int *’} but argument is of type 
‘off_t *’ {aka ‘long int *’}
  398 | extern __ssize_t splice (int __fdin, __off64_t *__offin, int __fdout,
  |  ~~~^~~
cc1: all warnings being treated as errors
make[1]: *** [Makefile:24: cases/23.p] Error 1
make[1]: Leaving directory 
'/tmp/guix-build-libaio-0.3.113.drv-0/libaio-0.3.113/harness'
make: *** [Makefile:23: partcheck] Error 2

Test suite failed, dumping logs.
error: in phase 'check': uncaught exception:
%exception #< program: "make" arguments: ("partcheck" "-j" "12" 
"prefix=/gnu/store/xr6s773c3d62g9aynydp1h6231p42ixn-libaio-0.3.113" "CC=gcc") 
exit-status: 2 term-signal: #f stop-signal: #f> 
phase `check' failed after 0.3 seconds


Cheers,
Kaelyn

P.S. For context, I hit the libaio error trying to build icecat (x86_64) on 
core-updates the other day; I hit the glib-networking error this morning trying 
to build wine, and then hit the libaio error again when retrying the wine 
(i686) build with my glib-networking change applied. The same build error 
affects both x86_64 and i686 builds of libaio.



Building more of ‘core-updates’ on ci.guix

2023-03-15 Thread Ludovic Courtès
Hello!

Andreas Enge  skribis:

> So it would be nice if someone could set up a more complete job for
> core-updates on cuirass or QA, and maybe write up a how-to to see which
> packages work and which ones need more love, preferably by architecture.

I’ve just changed the ‘core-updates’ job to build
‘etc/release-manifests.scm’ (you can check what’s in there).  If
everything goes well (a big “if” :-)), we’ll soon have substitutes for
Emacs, GTK, and whatnot.

For the record, anyone with (1) SSH access to berlin, or (2) a “TLS user
certificate” for use by Cuirass¹ can do it.  For method #1, set up a
tunnel to the Cuirass web server, like so:

  ssh -L 8081:localhost:8081 berlin.guix.gnu.org

Then visit , click on “Edit” in the vegan-burger
menu on the ‘core-updates’ line, adjust accordingly, and save.  (You can
see that form at
, but you can’t
submit changes.)

HTH!

Ludo’.

¹ 
https://git.savannah.gnu.org/cgit/guix/maintenance.git/tree/doc/release.org#n205



March update on qa.guix.gnu.org

2023-03-15 Thread Christopher Baines
Hey,

I think it's been quite a while since I last sent out an update on the
QA stuff I'm working on; the last update to guix-devel looks to be back
in December [1].

1: https://lists.gnu.org/archive/html/guix-devel/2022-12/msg00093.html

Since then quite a lot has happened, including meeting people in
Brussels again, which was great.

# Infra stuff

There's now a proper system service [2] for the qa-frontpage running on
bayfront.

2: 
https://git.savannah.gnu.org/cgit/guix/maintenance.git/commit/?id=8c17ac564447aa5448fc6eca40001c5b68c17d61

The Git repository is now on Savannah [3] allowing other committers to
push changes.

3: https://git.savannah.gnu.org/cgit/guix/qa-frontpage.git/

I've also spent some time fixing and improving the data.qa.guix.gnu.org
upkeep stuff, as it needs to regularly clear out the unused data.

At some point in the near future, I'd like to not be solely responsible
for the financial and administrative part of keeping
data.qa.guix.gnu.org (plus Patchwork and Gitolite) running. There's a
thread about this on the guix-sysadmin mailing list, but if anyone has
any thoughts, please let me know.

# Data service

The data service provides much of the data to make qa.guix.gnu.org work,
and I've been trying to improve it in a few areas recently.

I've added something [4] which seems to make computing the system
test derivations faster. I'm not quite sure of the impact, but
the main benefit at the moment is that revisions are being processed
more quickly, which reduces the time to give feedback on patches and
branches.

4: 
https://git.savannah.gnu.org/cgit/guix/data-service.git/commit/?id=bf41c6ebb1c12ec15ee77e727a1ae0d7a1466aef

I've also done another deep dive into PostgreSQL query performance to
get the blocking-builds page working. This page still needs a bit more
work, as I'm unsure about some of the data it's giving back, and
currently it's difficult to see what builds are blocked and why, but I'm
hoping that this'll be helpful when it comes to branches like
core-updates. Here's the page for armhf-linux builds for a recent
core-updates revision [5].

5: 
https://data.qa.guix.gnu.org/revision/62ff0b90864c8a4484aa2f14856ff33d05e00b0c/blocking-builds?system=armhf-linux=none_results=50

One issue I made surprisingly fast progress on after the Guix Days was
the interaction of grafting and the channel instance derivations;
there's a patch series needing a second round of review here
[6]. Currently data.qa.guix.gnu.org often can't compute the channel
instances for all systems, and these changes should resolve that.

6: https://issues.guix.gnu.org/61363

# QA Frontpage

I've made various small changes to try to make things faster and use
less memory. Memory usage is still an issue, but things like the
branch pages now work, which is good.

Substitute availability information is also now shown on the pages for
branches.

# Next steps

Since starting to build core-updates and increasing the number of patch
series tested (as well as raising the per-series build limit), the
bordeaux build farm has been running flat out. That's good, but it's
now even more important to tune how builds are prioritised to try to
make better use of the build resources. I think the missing data to do
that is a good picture of what has recently been built, what is
currently being built, and what is queued up. This information is in the
build coordinator, so it just needs exposing somewhere so people can
easily see it.

Another key area for improvement is trying to make any feedback given
clearer and more actionable. For example, for patches and branches the
failing builds for a particular system are highlighted, but there's no
easy way of seeing just the things that are failing to build now that
weren't failing before.

It would be good to know what other people submitting or reviewing
patches would benefit from, though: what would be most helpful?

As always, if you have any comments or questions, just let me know!

Thanks,

Chris




Notes from the Guix Days

2023-03-15 Thread Ludovic Courtès
Maxim Cournoyer  skribis:

> Thanks for the explanations.  I'm out of the loop, not having been able
> to attend physically the last Guix Days event.  Was there a recap of the
> discussions posted somewhere?  Was the freeze announce somewhere?  I
> wholly missed that, and I try to pay attention.

Notes on the release and branching discussions are here:

  https://lists.gnu.org/archive/html/guix-devel/2023-02/msg00066.html

Pjotr stored additional notes here (thanks!):

  https://gitlab.com/pjotrp/guix-days-fosdem-2023/-/tree/main/

Pjotr, can you add Andreas’ notes linked above?  Does anyone else have
notes to share?

Ludo’.



Re: Brainstorming ideas for define-configuration

2023-03-15 Thread Ludovic Courtès
Hi,

Bruno Victal  skribis:

> User-specified sanitizer support

Yay!

> ;; Suggestion #2
> ;; A user-supplied procedure ('procname' below) would work just like the
> ;; procedure in option #1.
> ;; There is some similarity to the Guix record-type*.
> ;; This could be extended more easily in the future should it be required.
> (define-type typename; maybe call this 'define-configuration-type' ?
>   (sanitizer procname)
>   (maybe-type? #t)
>   ;; The properties below are service specific.
>   ;; If this is implemented with Guix record-type* then we could have a
>   ;; module containing generic types and do something along the lines of:
>   ;; (define-type foo-ip-address
>   ;;(inherit generic-ip-address)
>   ;;(serializer ...))
>   (serializer procname)  ; define-type/no-serialization = sets this field 
> to #f ?
>   (prefix ...))

I think we should implement contracts at this point, and have per-field
contracts.  We need to look at what Racket is doing.

> Record Validator
> ===
>
> There is also a need to validate records. Matching fields alone do not 
> actually
> ensure that the configuration is coherent and usable. For example, some fields
> may be mutually incompatible with others.

Does that require special support though?  Currently that can be done at
serialization time, for example.

> Coalesced documentation
> ===
>
> Currently, we manually edit the texinfo output from 
> configuration->documentation
> if we're unsatisfied with the generated result.
> For instance, substituting @item with an @itemx marker for fields whose
> documentation is similar.

Good idea.

> Serializer access to other fields
> ===
>
> Serialization procedures generally only have access to the value of their
> own field. That
> may be insufficient in some cases as whether a field can be serialized
> or how that is done, for example, can depend on the value of other fields.

Overall, I find it nice that serializers have access to nothing but
their own field value; that makes it easier to reason about the whole
process.

> mympd-service-type is one example where each serialized field depends on the 
> value of
> another field. Our standard serializer procedures were useless for that case.

It’d be interesting to look more closely at this example and see if this
can be solved in some other way or if we really need
‘define-configuration’ support.  Would be nice to see if similar
situations arise with other records.

> Inheritable record-type definition

I’d like to support type inheritance in ‘define-record-type*’, and
‘define-configuration’ could build on top of that.

> Generic serialize-configuration
> ===
>
> The procedure serialize-configuration inherently assumes that the serialized
> configuration must be a single string. This assumption needn't always hold, 
> especially
> if the service in question is not a shepherd service.

Hmm, no opinion on that one.

Thanks for the grocery list!  :-)

Ludo’.



Re: Feedback on indentation rules

2023-03-15 Thread Ludovic Courtès
Hi,

Simon Tournier  skribis:

> On Mon, 06 Mar 2023 at 17:56, Ludovic Courtès  wrote:
>> Maxim Cournoyer  skribis:
>>
>>> Thanks for the feedback.  I wonder if some are of the opinion that since
>>> gexp->derivation is a plain function rather than a syntax having a
>>> special form for its 2nd argument, we should leave the default
>>> indentation rules untouched for it?
>>
>> Yes, that’s my take and current practice so far: special rules for
>> special forms (macros), not for procedures.
>
> What is the rationale?  Being able to know directly at the location when
> it is a plain function or a special form?

Yes.

Now, it’s aesthetics so there’s no “rationale” per se but rather
established practice: in the project, but also from what I can see in
Guile and more generally Scheme (info "(guix) Formatting Code").

Ludo’.



Re: bug#61894: [PATCH RFC] Team approval for patches

2023-03-15 Thread Ludovic Courtès
Hello!

Maxim Cournoyer  skribis:

[...]

>> “Pacify” in the sense that, by being explicit, we avoid
>> misunderstandings that could turn into unpleasant experiences.
>>
>> Like you I’m glad collaboration is nice and friendly; yet, over the past
>> few months I’ve experienced misunderstandings that seemingly broke the
>> consensus-based process that has always prevailed.
>
> I'm sorry that you feel that way.  I don't think consensus was willfully
> broken,

That’s my point: by being explicit about approval, we would avoid such
misunderstandings.

> and perhaps by studying some actual examples of these occurrences we
> can better understand what went wrong and how the new suggested policy
> would have helped or could be modified to help avoid such problems in
> the future.

I don’t want to rehash past occurrences of this problem.  It boils down
to: changes were pushed despite consensus evidently not being met, at
least not in the mind of every involved party.

To some extent, that’s bound to happen due to an increase of the number
of contributors, scope of the project, and diversity of backgrounds.  By
making it clear that lack of “LGTM” from another team member equates
with lack of consensus, we would avoid those misunderstandings.

A good reference on consensus-based decision making is
.

> It's also worth noting that this consensus-based process has always
> been implicit; for example, it is not defined/mentioned anywhere in
> our documentation.  Perhaps it should?

Those who’ve followed the project long enough, such as part of the
current maintainer collective, are certainly aware of that; it’s also
spelled out in
.

That said, again in the spirit of improving legibility, writing it down
would be much welcome.

>> In a way, that’s probably bound to happen as the group grows, and I
>> think that’s why we must be explicit about what the process is and about
>> whether one is expressing consent or dissent.
>>
>> With so many things happening in Guix (yay!), it’s also easy to overlook
>> a change and realize when it’s too late.  By having a rule that at least
>> one other person on the team must approve (consent to) a change, we
>> reduce that risk.
>>
>> Being on a team, then, is a way to express interest on a topic and to be
>> “in the loop”.
>
> That's already what teams can do!

Yes and no.  With the amount of activity going on, it’s easy to overlook
something.  The explicit synchronization point could mitigate that.

> I'd argue giving them the extra powers that would be conferred to
> teams in this is not needed/desirable.  Some committer not a regular
> member of X team may still be confident enough to push a patch sitting
> on the tracker, and I think they should be able to.

Self-assessment becomes tricky at this scale; I might be confident and
yet someone will point out a problem (that literally happened to me two
days ago in ).  That’s when review
really helps.

For “core” work, I insist that explicit approval (and thus peer review)
is necessary.  I doubt anyone would seriously challenge that.

Now, I agree, as I wrote before, that this may be overkill for “random
packages”.

Thus we need to find the right balance.

What about team/scope-specific rules?  As in: “Changes covered by teams
X, Y, and Z need to be explicitly approved by at least one other member
of the team.”

>> It is not about asserting power or building a hierarchy;
>> it’s about formalizing existing relations and processes.
>
> OK; I think in practice it would amount to that though (building a
> hierarchy which has some form of power).

I disagree: just because power relations are not spelled out doesn’t
mean they don’t exist.  I don’t know where you’re talking from; one
thing that to me shed light on these matters is “The Tyranny of
Structurelessness” (I’m sure I mentioned it before; I certainly did
during the Q&A on this very topic at the Ten Years event; apologies if I
sound like a broken record!).

Thanks,
Ludo’.



Re: State of core-updates

2023-03-15 Thread Andreas Enge
On Wed, Mar 15, 2023 at 02:33:36PM +0100, Andreas Enge wrote:
> I more or less tried this by building things on berlin; however, big
> packages (?, precisely: openjdk checkout (not even building!), ghc@9.2.5
> and llvm-for-mesa) failed complaining about a 3600s timeout because of
> silence. All of them build locally; hopefully this problem will disappear
> once we pass through cuirass.

Actually it also helped to build a few packages one by one from the command
line, instead of many of them at once. So now there should be mesa on berlin.

ghc however still fails with a message like this:
building /gnu/store/i7b6ffy3jp8adanvkj7zbv3mb5cdsxnq-ghc-9.2.5.drv...
guix offload: sending 66 store items (1,884 MiB) to '141.80.167.166'...
exporting path 
`/gnu/store/8smlivxisga3y7fz3q5qkvrnpsbvr6c4-ghc-8.10.7-testsuite.tar.xz-builder'
...
exporting path `/gnu/store/fgr6i273g6wa88w6h95sgpb5c1yr751c-ghc-8.10.7'
exporting path 
`/gnu/store/grd39lms6g8pg3r1zlvz7a5qwlymybhy-ghc-9.2.5-src.tar.xz'

building of `/gnu/store/i7b6ffy3jp8adanvkj7zbv3mb5cdsxnq-ghc-9.2.5.drv' timed 
out after 3600 seconds of silence
build of /gnu/store/i7b6ffy3jp8adanvkj7zbv3mb5cdsxnq-ghc-9.2.5.drv failed
View build log at 
'/var/log/guix/drvs/i7/b6ffy3jp8adanvkj7zbv3mb5cdsxnq-ghc-9.2.5.drv.gz'.
guix build: error: build of 
`/gnu/store/i7b6ffy3jp8adanvkj7zbv3mb5cdsxnq-ghc-9.2.5.drv' failed

Somehow the sending of store items when offloading poses problems; I do not
understand where the problem lies. But the Guix Build Coordinator approach
of letting the compiling machine fetch the inputs from a substitute server
seems to be more robust. No idea what to do here!

Andreas




Re: KDE in core-updates

2023-03-15 Thread Andreas Enge
A quick update on KDE in core-updates:

kcodecs also fails

I tried to update extra-cmake-files and kwayland to the next version 5.99,
which also forced me to update plasma-wayland-protocols to 1.9.0.
But then kwayland shows three test failures instead of one...

The latest KDE version is 5.104, so we could try to update further,
but I did not have time to pursue the tests for the time being.
If someone would like to have a look, that would be most welcome.

Andreas




Re: Python

2023-03-15 Thread Andreas Enge
Hello,

On Sat, Feb 25, 2023 at 05:56:59PM +0100, Lars-Dominik Braun wrote:
> > Right now I am left with a number of test failures that look real and cannot
> > easily be solved by an upgrade (either because we are already on the latest
> > version or because the tests still fail): python-sgmllib3k, python-typeguard
> > and python-coveralls. See messages below.
> I don’t know for sure why any of these packages’ tests fail. typeguard
> looks like it expects specific strings from Python or one of its libraries
> – safe to ignore.

would you mind having a look at how we should proceed? Should we fix the
tests, or disable the specific ones that you deem safe to ignore?

> sgmllib3k looks pretty dead upstream. Perhaps it’s
> not even needed any more? Updates to Python packages (via `guix refresh`)
> do not update dependencies and thus the list of inputs/native-inputs
> are most likely outdated.

This one is used for python-feedparser, used for calibre and quodlibet.
The feedparser author is not inclined to work on it:
   https://github.com/kurtmckee/feedparser/issues/328
I would suggest trying to compile python-sgmllib3k (and potentially
python-feedparser) without the tests, and seeing whether calibre still works.
But for this, we first need to get all other inputs of calibre into working
shape.

Andreas




Zig on core-updates

2023-03-15 Thread Andreas Enge
Just a one-package-failure-report: zig fails on core-updates, and I do not
see why from the error message. If someone who knows the package could have
a look, that would be great.

Andreas




Re: Branch and release process

2023-03-15 Thread Andreas Enge
On Wed, Mar 15, 2023 at 09:32:44AM -0400, Maxim Cournoyer wrote:
> I see; so it'd be useful for the integration of package changes
> impacting multiple others; some kind of staging or core-updates
> topic-focused branches.  Simple leaf package updates could still be
> merged directly to master without going through the go-team branch,
> right?

Exactly!

Andreas




Re: gnu: inetutils: Update to 2.4.

2023-03-15 Thread Maxim Cournoyer
Hi Andreas,

Andreas Enge  writes:

> On Tue, Mar 14, 2023 at 09:10:33PM -0400, Maxim Cournoyer wrote:
>> Could you share the reference of that?  I'm not against it, but our
>> currently documented process still mention the good old staging and
>> core-updates branches.
>
> It has not been documented yet, we should do it.
> Here is the relevant excerpt from my notes, sent to guix-devel on
> February 9 under the title "Discussion notes on releases and
> branches":

I've now read the full notes, which were well written.  Thank you!
Hopefully I've now caught up with the latest branch process changes
proposals :-).

-- 
Thanks,
Maxim



Re: State of core-updates

2023-03-15 Thread Maxim Cournoyer
Hi Efraim,

Efraim Flashner  writes:

> On Wed, Mar 15, 2023 at 08:54:55AM +0100, Andreas Enge wrote:
>> On Tue, Mar 14, 2023 at 08:56:38PM -0400, Maxim Cournoyer wrote:
>> > OK!  We could probably merge staging into master and be done already.
>> 
>> We should build it first. The last time I tried, there was a showstopper bug.
>> 
>> Here it is:
>> I tried to build staging for my profile on x86_64, but it failed with
>> a dependency of ffmpeg, rust-rav1e-0.5.1. There is a newer version 0.6.3,
>> but I did not simply update it, since the package looks particularly
>> complicated, containing a phase:
>>  (add-after 'configure 'force-rust-edition-2018
>>(lambda* (#:key vendor-dir #:allow-other-keys)
>>  ;; Force all the dependencies to not be higher than edition 
>> 2018.
>>  (with-fluids ((%default-port-encoding #f))
>>(substitute* (find-files vendor-dir "Cargo.toml")
>>  (("edition = \\\"2021\\\"") "edition = \"2018\"")
>> and many other changes.
>> 
>> Leo suggested to remove the dependency as non-essential. On the other
>> hand, I think the problem does not occur in core-updates, so one should
>> have a look.
>> 
>> And there may be other problems, too.
>> 
>> And of course this implies merging staging (once merged into master) back
>> into core-updates, which may create problems we will have to deal with later.
>> But yes, I would be happy to merge staging first; this was my initial
>> suggestion, before we somehow collectively flocked to core-updates :)
>> 
>
> This and other bugs are fixed on the rust-team branch. Once I figure out
> how to compare it against master and force rust to build for aarch64
> we'll be nearly there.

OK, I'll refrain from trying to fix the above myself then, to avoid
duplicate work :-).

Thanks for the heads-up!

-- 
Thanks,
Maxim



Re: State of core-updates

2023-03-15 Thread Andreas Enge
On Wed, Mar 15, 2023 at 01:33:52PM +0200, Efraim Flashner wrote:
> This and other bugs are fixed on the rust-team branch. Once I figure out
> how to compare it against master and force rust to build for aarch64
> we'll be nearly there.

That sounds great! Then the order of merging might end up
rust-team -> staging -> core-updates (which might mean rebuilding more or
less all of core-updates, rust is everywhere nowadays...).

Andreas




Re: State of core-updates

2023-03-15 Thread Andreas Enge
On Tue, Mar 14, 2023 at 11:50:12AM -0400, Maxim Cournoyer wrote:
> Some things that may help that I use:
> - Offloading

I more or less tried this by building things on berlin; however, big
packages (?, precisely: openjdk checkout (not even building!), ghc@9.2.5
and llvm-for-mesa) failed complaining about a 3600s timeout because of
silence. All of them build locally; hopefully this problem will disappear
once we pass through cuirass.

Andreas




Re: Branch and release process

2023-03-15 Thread Maxim Cournoyer
Hi Efraim,

Efraim Flashner  writes:

> On Tue, Mar 14, 2023 at 11:30:52PM -0400, Maxim Cournoyer wrote:
>> Hi,
>> 
>> Leo Famulari  writes:
>> 
>> > On Tue, Mar 14, 2023 at 09:10:33PM -0400, Maxim Cournoyer wrote:
>> >> Felix Lechner  writes:
>> >> > With the core-updates process now abandoned, I retitled the issue to
>> >> 
>> >> Could you share the reference of that?  I'm not against it, but our
>> >> currently documented process still mention the good old staging and
>> >> core-updates branches.
>> >
>> > At the Guix Days in February, we discussed the branching workflow and
>> > reached a rough consensus that for non-core packages (defined in
>> > %core-packages), we should try to adopt a more targeted "feature branch"
>> > workflow. That's actually what we used to do, before we outgrew our old
>> > build farm, after which we were barely able to build one branch at a
>> > time (IIRC, we would stop building master in order to build core-updates
>> > or staging).
>> >
>> > The discussion was summarized by Andreas here:
>> >
>> > https://lists.gnu.org/archive/html/guix-devel/2023-02/msg00066.html
>> 
>> Thanks!  I had missed it.  It sounds promising!
>> 
>> > Currently we are demo-ing this workflow in the wip-go-updates branch and
>> > go-team Cuirass jobset.
>> 
>> So the review happens first on the ML, then the changes land to the team
>> branch, and then finally the feature branch gets merged to master?  If
>> the review has already happened and the package been tested (and built
>> by QA), why is a feature branch needed?
>
> So we can group a couple of larger related changes together.

I see; so it'd be useful for the integration of package changes
impacting multiple others; some kind of staging or core-updates
topic-focused branches.  Simple leaf package updates could still be
merged directly to master without going through the go-team branch,
right?

-- 
Thanks,
Maxim



Re: State of core-updates

2023-03-15 Thread Maxim Cournoyer
Hi Andreas,

Andreas Enge  writes:

> On Tue, Mar 14, 2023 at 08:56:38PM -0400, Maxim Cournoyer wrote:
>> OK!  We could probably merge staging into master and be done already.
>
> We should build it first. The last time I tried, there was a showstopper bug.
>
> Here it is:
> I tried to build staging for my profile on x86_64, but it failed with
> a dependency of ffmpeg, rust-rav1e-0.5.1. There is a newer version 0.6.3,
> but I did not simply update it, since the package looks particularly
> complicated, containing a phase:
>  (add-after 'configure 'force-rust-edition-2018
>(lambda* (#:key vendor-dir #:allow-other-keys)
>  ;; Force all the dependencies to not be higher than edition 2018.
>  (with-fluids ((%default-port-encoding #f))
>(substitute* (find-files vendor-dir "Cargo.toml")
>  (("edition = \\\"2021\\\"") "edition = \"2018\"")
> and many other changes.
>
> Leo suggested to remove the dependency as non-essential. On the other
> hand, I think the problem does not occur in core-updates, so one should
> have a look.

Fun, I think I may have a stashed WIP change trying to address that very
problem; last time I was stopped by libgit2 being too old, but that has
since been updated on master, IIRC.  I'll try taking a new look when I
have the chance.

-- 
Thanks,
Maxim



Re: Follow-up on julia import script

2023-03-15 Thread Development of GNU Guix and the GNU System distribution.


Hi all!

Took me quite a bit more time than I would've liked, but I have a usable
juliahub Scheme import script!

It seems there's still one edge case that isn't covered: when Julia
packagers don't properly tag their Git repos (I've only seen this with
SnoopPrecompile). One possibility is to rely on tree commit hashes from
the General repository (since this is a valid way to identify/store a
Git repo), but that needs some major changes in the way
latest-repository-commit works. Otherwise, it needs to be done by hand.
It might also not work for subpackages in directories that are
up-to-date on juliahub but not yet on GitHub; I haven't hit that case
yet.

I'm sending a patch series in the coming minutes.

-- 
Best regards,
Nicolas Graves



Re: Branch and release process (was: gnu: inetutils: Update to 2.4.)

2023-03-15 Thread Christopher Baines

"Leo Famulari"  writes:

> On Tue, Mar 14, 2023, at 23:30, Maxim Cournoyer wrote:
>> Hi,
>>
>> Leo Famulari  writes:
>>
>>> On Tue, Mar 14, 2023 at 09:10:33PM -0400, Maxim Cournoyer wrote:
 Felix Lechner  writes:
 > With the core-updates process now abandoned, I retitled the issue to
 
 Could you share the reference of that?  I'm not against it, but our
 currently documented process still mention the good old staging and
 core-updates branches.
>>>
>>> At the Guix Days in February, we discussed the branching workflow and
>>> reached a rough consensus that for non-core packages (defined in
>>> %core-packages), we should try to adopt a more targeted "feature branch"
>>> workflow. That's actually what we used to do, before we outgrew our old
>>> build farm, after which we were barely able to build one branch at a
>>> time (IIRC, we would stop building master in order to build core-updates
>>> or staging).
>>>
>>> The discussion was summarized by Andreas here:
>>>
>>> https://lists.gnu.org/archive/html/guix-devel/2023-02/msg00066.html
>>
>> Thanks!  I had missed it.  It sounds promising!
>>
>>> Currently we are demo-ing this workflow in the wip-go-updates branch and
>>> go-team Cuirass jobset.
>>
>> So the review happens first on the ML, then the changes land to the team
>> branch, and then finally the feature branch gets merged to master?  If
>> the review has already happened and the package been tested (and built
>> by QA), why is a feature branch needed?
>
> Because QA currently cannot process changes that cause more than 200 rebuilds.

Note that this has been changing recently, and it's currently 600 builds
per system [1].

1: 
https://git.savannah.gnu.org/cgit/guix/qa-frontpage.git/tree/guix-qa-frontpage/manage-builds.scm#n100

We can increase it further, but more work is needed on managing the
builds and expanding the available hardware.
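As an aside, the number of dependent packages a change would rebuild can
be estimated with `guix refresh --list-dependent`; here is a small
sketch that checks such a count against a per-system limit (the
summary-line format and the 600 threshold are assumptions based on this
thread, not a fixed interface):

```shell
# Extract the dependent-package count from a `guix refresh
# --list-dependent` summary line (format is an assumption; adjust to
# the actual output of your Guix version).
count_dependents() {
  printf '%s\n' "$1" | grep -oE '[0-9]+' | head -n 1
}

line="Building the following 621 packages would ensure 1096 dependent packages are rebuilt"
n=$(count_dependents "$line")
if [ "$n" -gt 600 ]; then
  echo "exceeds the per-system QA limit"   # → printed for this example
else
  echo "within the per-system QA limit"
fi
```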


signature.asc
Description: PGP signature


Re: Branch and release process (was: gnu: inetutils: Update to 2.4.)

2023-03-15 Thread Leo Famulari
On Tue, Mar 14, 2023, at 23:30, Maxim Cournoyer wrote:
> Hi,
>
> Leo Famulari  writes:
>
>> On Tue, Mar 14, 2023 at 09:10:33PM -0400, Maxim Cournoyer wrote:
>>> Felix Lechner  writes:
>>> > With the core-updates process now abandoned, I retitled the issue to
>>> 
>>> Could you share the reference of that?  I'm not against it, but our
>>> currently documented process still mention the good old staging and
>>> core-updates branches.
>>
>> At the Guix Days in February, we discussed the branching workflow and
>> reached a rough consensus that for non-core packages (defined in
>> %core-packages), we should try to adopt a more targeted "feature branch"
>> workflow. That's actually what we used to do, before we outgrew our old
>> build farm, after which we were barely able to build one branch at a
>> time (IIRC, we would stop building master in order to build core-updates
>> or staging).
>>
>> The discussion was summarized by Andreas here:
>>
>> https://lists.gnu.org/archive/html/guix-devel/2023-02/msg00066.html
>
> Thanks!  I had missed it.  It sounds promising!
>
>> Currently we are demo-ing this workflow in the wip-go-updates branch and
>> go-team Cuirass jobset.
>
> So the review happens first on the ML, then the changes land to the team
> branch, and then finally the feature branch gets merged to master?  If
> the review has already happened and the package been tested (and built
> by QA), why is a feature branch needed?

Because QA currently cannot process changes that cause more than 200 rebuilds.



Re: Branch and release process (was: gnu: inetutils: Update to 2.4.)

2023-03-15 Thread Efraim Flashner
On Tue, Mar 14, 2023 at 11:30:52PM -0400, Maxim Cournoyer wrote:
> Hi,
> 
> Leo Famulari  writes:
> 
> > On Tue, Mar 14, 2023 at 09:10:33PM -0400, Maxim Cournoyer wrote:
> >> Felix Lechner  writes:
> >> > With the core-updates process now abandoned, I retitled the issue to
> >> 
> >> Could you share the reference of that?  I'm not against it, but our
> >> currently documented process still mention the good old staging and
> >> core-updates branches.
> >
> > At the Guix Days in February, we discussed the branching workflow and
> > reached a rough consensus that for non-core packages (defined in
> > %core-packages), we should try to adopt a more targeted "feature branch"
> > workflow. That's actually what we used to do, before we outgrew our old
> > build farm, after which we were barely able to build one branch at a
> > time (IIRC, we would stop building master in order to build core-updates
> > or staging).
> >
> > The discussion was summarized by Andreas here:
> >
> > https://lists.gnu.org/archive/html/guix-devel/2023-02/msg00066.html
> 
> Thanks!  I had missed it.  It sounds promising!
> 
> > Currently we are demo-ing this workflow in the wip-go-updates branch and
> > go-team Cuirass jobset.
> 
> So the review happens first on the ML, then the changes land to the team
> branch, and then finally the feature branch gets merged to master?  If
> the review has already happened and the package been tested (and built
> by QA), why is a feature branch needed?

So we can group a couple of larger related changes together.

> > My hope is that we can rewrite the relevant documentation in the coming
> > months, as we learn from these early efforts.
> 
> OK!  Thanks for allowing me to catch up!
> 
> -- 
> Thanks,
> Maxim
> 

-- 
Efraim Flashner  אפרים פלשנר
GPG key = A28B F40C 3E55 1372 662D  14F7 41AA E7DC CA3D 8351
Confidentiality cannot be guaranteed on emails sent or received unencrypted




Re: State of core-updates

2023-03-15 Thread Efraim Flashner
On Wed, Mar 15, 2023 at 08:54:55AM +0100, Andreas Enge wrote:
> Am Tue, Mar 14, 2023 at 08:56:38PM -0400 schrieb Maxim Cournoyer:
> > OK!  We could probably merge staging into master and be done already.
> 
> We should build it first. The last time I tried, there was a showstopper bug.
> 
> Here it is:
> I tried to build staging for my profile on x86_64, but it failed with
> a dependency of ffmpeg, rust-rav1e-0.5.1. There is a newer version 0.6.3,
> but I did not simply update it, since the package looks particularly
> complicated, containing a phase:
>  (add-after 'configure 'force-rust-edition-2018
>(lambda* (#:key vendor-dir #:allow-other-keys)
>  ;; Force all the dependencies to not be higher than edition 2018.
>  (with-fluids ((%default-port-encoding #f))
>(substitute* (find-files vendor-dir "Cargo.toml")
>  (("edition = \\\"2021\\\"") "edition = \"2018\"")
> and many other changes.
> 
> Leo suggested to remove the dependency as non-essential. On the other
> hand, I think the problem does not occur in core-updates, so one should
> have a look.
> 
> And there may be other problems, too.
> 
> And of course this implies merging the merged into master staging back to
> core-updates, which may create problems we will have to deal with later.
> But yes, I would be happy to merge staging first; this was my initial
> suggestion, before we somehow collectively flocked to core-updates :)
> 

This and other bugs are fixed on the rust-team branch. Once I figure out
how to compare it against master and force rust to build for aarch64,
we'll be nearly there.

-- 
Efraim Flashner  אפרים פלשנר
GPG key = A28B F40C 3E55 1372 662D  14F7 41AA E7DC CA3D 8351
Confidentiality cannot be guaranteed on emails sent or received unencrypted




Re: State of core-updates

2023-03-15 Thread Andreas Enge
Am Tue, Mar 14, 2023 at 08:56:38PM -0400 schrieb Maxim Cournoyer:
> OK!  We could probably merge staging into master and be done already.

We should build it first. The last time I tried, there was a showstopper bug.

Here it is:
I tried to build staging for my profile on x86_64, but it failed on a
dependency of ffmpeg, rust-rav1e-0.5.1. There is a newer version, 0.6.3,
but I did not simply update it, since the package looks particularly
complicated, containing a phase:
 (add-after 'configure 'force-rust-edition-2018
   (lambda* (#:key vendor-dir #:allow-other-keys)
     ;; Force all the dependencies to not be higher than edition 2018.
     (with-fluids ((%default-port-encoding #f))
       (substitute* (find-files vendor-dir "Cargo.toml")
         (("edition = \\\"2021\\\"") "edition = \"2018\"")
and many other changes.
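For what it's worth, the effect of that substitute* call can be mimicked
in plain shell; a minimal sketch, assuming a vendor/ directory of
vendored crates standing in for the package's vendor-dir (the toy crate
name is made up for illustration):

```shell
# Set up a toy vendored crate (stand-in for the real vendor-dir).
mkdir -p vendor/some-crate
printf 'edition = "2021"\n' > vendor/some-crate/Cargo.toml

# Rewrite edition 2021 down to 2018 in every vendored Cargo.toml,
# analogous to the substitute* phase quoted above.
find vendor -name Cargo.toml \
  -exec sed -i 's/edition = "2021"/edition = "2018"/' {} +

cat vendor/some-crate/Cargo.toml   # → edition = "2018"
```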

Leo suggested removing the dependency as non-essential. On the other
hand, I think the problem does not occur in core-updates, so someone
should have a look.

And there may be other problems, too.

And of course this implies merging staging (once merged into master)
back into core-updates, which may create problems we will have to deal
with later. But yes, I would be happy to merge staging first; this was my
initial suggestion, before we somehow collectively flocked to core-updates :)

Andreas




Re: gnu: inetutils: Update to 2.4.

2023-03-15 Thread Andreas Enge
Am Tue, Mar 14, 2023 at 09:10:33PM -0400 schrieb Maxim Cournoyer:
> Could you share the reference of that?  I'm not against it, but our
> currently documented process still mention the good old staging and
> core-updates branches.

It has not been documented yet; we should do it.
Here is the relevant excerpt from my notes, sent to guix-devel on
February 9 under the title "Discussion notes on releases and branches":
- Create branches with a few patches or patchsets; in any case with
  a "semantic" description of the changes. The branches could be
  shortlived. Feature branches are one incarnation of the concept.
- The numerical criteria for staging and core-updates are outdated:
  Even non-core packages may create an enormous number of rebuilds.
- Negative point: There is a risk of churn from not regrouping
  world-rebuilding changes - but two unrelated world-rebuilding
  changes might be difficult to review.
...
- There is discussion whether we need a core-updates branch.
  Core updates concern the toolchain, build phase changes, everything
  that has big ramifications all over the system. It would be important
  to not have several "parallel" branches with related (for instance,
  but not exclusively, core-update) changes, which in fact should come
  one after the other. Either they could be collected in one branch,
  or would require coordination of branch creation (inside a team, say).
- "Merge trains" of gitlab are mentioned, as a way of merging several
  branches at the same time.

There was a consensus on advancing in this direction, but details need to
be fleshed out.
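In git terms, the short-lived feature branches sketched above might look
like the following (branch name, commit messages, and the throwaway
repository are hypothetical illustrations, not the actual Guix
procedure):

```shell
# Throwaway repository to illustrate the flow.
git init -q -b master demo && cd demo
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "initial commit"

# A short-lived feature branch carrying one reviewed patchset.
git checkout -q -b wip-go-updates
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "gnu: foo: Update to 1.2."

# Once the branch has been built (e.g. by a Cuirass jobset),
# fast-forward master onto it and drop the branch.
git checkout -q master
git merge -q --ff-only wip-go-updates
git branch -q -d wip-go-updates
git log --format=%s -n 1   # → gnu: foo: Update to 1.2.
```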

The core argument I see against core-updates (and staging) is that they
regroup a mixed bag of unrelated changes that no one is able to audit
after a while. By matching a focused set of changes with a dedicated team
of people competent in the area, we hope to advance faster and in a more
concentrated fashion. This comes in a context where the compute power of
the berlin build farm is much larger than in the past, and where, on the
other hand, the growing number of packages means that even non-core
packages can cause a number of rebuilds that used to warrant
core-updates (as we just saw with inetutils).

> Until everybody has a good grasp of the new process, I think sticking to
> the documented one may be best for now, as it makes it clear that this
> cause a mass rebuild and shouldn't land to master.

Definitely, in all procedures, old or new, mass rebuilds should not be
done on master!

Andreas