Re: Booth at FOSDEM (Brussels), 4-5 Feb 2023?

2022-10-21 Thread John Kehayias
On Fri, Oct 21, 2022 at 12:44 PM, Joshua Branson wrote:

> Julien Lepiller  writes:
>
>> I'll be happy to help!
>>
>> From my experience with LFS it's important to have enough to share with
>> people. We had stickers (obviously), and also bookmarks and even printed
>> versions of the book. Even with three people we could manage the stand,
>> but the more we are, the easier it becomes. Remember you have to attend
>> the devroom too!
>>
>> We had issues managing stocks of stickers. Having too many displayed at
>> once will incentivise people to take a huge pile to share with friends
>> at home, but it also means the stocks deplete quickly :)
>>
>> One nice thing to have is also a table cloth with our logo/graphics.
>> Some distros have live-CDs.
>>
>> I can take care of ordering that, stickers and anything else if I get
>> reimbursed. I can't come up with nice graphics by myself though, so
>> please share any ideas :)
>>
>
> Error in the finalization thread:
>
> Success.

There was talk on #guix about T-shirts... :-)




Get all closed tickets with kolam

2022-10-21 Thread jgart
Hi Arun,

Is there currently a way to get all closed tickets on mumi?

If so, is that endpoint currently deployed and accessible via mumi's
GraphQL API?

I'm asking this on the guix-devel mailing list because others might
find this info useful.
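For reference, the kind of query I have in mind looks like this - the
field and argument names below are guesses on my part, not necessarily
mumi's actual schema:

```
# Hypothetical query; mumi's real GraphQL schema may name these
# fields and arguments differently.
{
  issues(open: false, first: 100) {
    number
    title
    date
  }
}
```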

all best,

jgart




Re: Pinning package inputs using inferiors?

2022-10-21 Thread Phil
Thanks Simon - I've given an example below.

zimoun writes:

> For an example, see python-numpy and python-numpy-next in (gnu packages
> python-xyz).

This was my original way of handling this, but in what is perhaps a
niche use of Guix by my department, it ultimately doesn't scale well
for our use-case.

Originally the department was small enough that there was only a handful
of applications sharing one or two common in-house libraries.

As we've scaled-up we now have the situation where 3 or 4 common
libraries are being used by say 10 applications.

We have rapid release schedules - and want to be able to release the
common libraries on a weekly basis.  But signing off on a common
library takes a few days per application, so it's not practical for
every project to bump the version every week - they have other
priorities.

In an ideal world automated unit and regression testing would be
comprehensive enough that we could move aggressively each week, but at
least for now that's not practical given the complex nature of signing
off the libraries and the applications which use the libraries.

So, ideally, what we'd like to do is just have each common library
churn-out releases every week, and have the releases available in Guix,
but without having an obligation on dependent applications to adopt
these changes if they don't want to.

Note that all libraries and applications share the same channel - one
solution would be to have each library in its own channel, but this
feels ugly to me.

Our solution (somewhat tricky to explain without a whiteboard -
apologies!) is to co-locate the package definition of the common library
in the common library repo itself - we call it something like
.requirements.scm and it is naturally kept in lockstep with the code in
that repo that the definition will build.  This is very different from
traditional Guix, where channels keep package definitions in a repo
separate from the code those definitions build.

We then have a job in our CI/CD system that allows us to give a tag on the
common library repo, and the name of an application that uses the common
library.

The job will copy the .requirements.scm into our channel inside a
private module specific to the application that uses the common library.

The idea is that you can have many versions of .requirements.scm private
to every application package definition that references it.

You could even read .requirements.scm using a function that clones the
application repo on-the-fly rather than statically storing it in the
channel - we haven't gone this far yet, as it makes the system even more
complex to reason about.

This is basically the same idea as the python-numpy-next but allows for
many versions of python-numpy to co-exist by keeping them all in private
modules so they don't clash.
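As a rough illustration, the copied-in private module might look like
this - the module path, URL, and package names are all invented for
the example:

```scheme
;; channel/private/app-one/requirements.scm -- copied verbatim by the
;; CI job from the common library repo at the requested tag.
(define-module (private app-one requirements)
  #:use-module (guix packages)
  #:use-module (guix git-download)
  #:use-module (guix build-system python))

(define-public app-one/common-lib   ; private to app-one's definitions
  (package
    (name "common-lib")
    (version "1.4.0")
    (source (origin
              (method git-fetch)
              (uri (git-reference
                    (url "https://git.example.com/common-lib") ; placeholder
                    (commit (string-append "v" version))))
              (file-name (git-file-name name version))
              ;; hash filled in by the CI job when it copies the file
              (sha256
               (base32
                "0000000000000000000000000000000000000000000000000000"))))
    (build-system python-build-system)
    (synopsis "In-house common library, pinned for app-one")
    (description "Pinned copy of the common library's own package
definition.")
    (home-page #f)
    (license #f)))
```

Since each application gets its own module, several versions of the
same library can co-exist in one channel commit without name clashes.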

It's a cool idea and works pretty well, but requires us to augment Guix
with a set of extra tools to lift and shift these private definitions
around, which complicates our setup considerably.

It feels like wanting to make many versions of a library available at
once isn't an unreasonable way to work at-scale.  However, it also feels
like a departure from the philosophy of Guix to decentralise package
definitions and to allow for a potentially large range of versions to
co-exist in the same channel commit.

We could try to further integrate the idea into Guix by writing new guix
commands to support it - we're still working out the details ourselves,
but if it works well we'd love to present it at a future Guix Days or
similar!

In the meantime I was wondering if anyone else had a similar use-case
for Guix and if they had tried something similar or different to handle
many versions in an automated way in the same channel commit?

Apologies, that's more than I was intending to write - but hopefully
that makes some sense!  If it doesn't, I can try to flesh out a
specific example.



Re: Booth at FOSDEM (Brussels), 4-5 Feb 2023?

2022-10-21 Thread Joshua Branson
Julien Lepiller  writes:

> I'll be happy to help!
>
> From my experience with LFS it's important to have enough to share with 
> people. We had stickers (obviously), and also
> bookmarks and even printed versions of the book. Even with three people we 
> could manage the stand, but the more we are,
> the easier it becomes. Remember you have to attend the devroom too!
>
> We had issues managing stocks of stickers. Having too many displayed at once 
> will incentivise people to take a huge pile to
> share with friends at home, but it also means the stocks deplete quickly :)
>
> One nice thing to have is also a table cloth with our logo/graphics. Some 
> distros have live-CDs.
>
> I can take care of ordering that, stickers and anything else if I get 
> reimbursed. I can't come up with nice graphics by myself
> though, so please share any ideas :)
>

Error in the finalization thread:

Success.



Re: Rust on aarch64-linux

2022-10-21 Thread Efraim Flashner
On Fri, Oct 21, 2022 at 10:51:59AM +0200, Ludovic Courtès wrote:
> Hello,
> 
> Efraim Flashner  skribis:
> 
> > I'm not sure there is a bug report, I didn't see it either. It looks
> > like when I bumped rust-bootstrap from 1.39 to 1.54 we lost aarch64
> > support. I've bumped mrustc on staging and successfully performed a
> > qemu-binfmt build of rust-bootstrap for aarch64 on my x86_64 box. I was
> > then able to use that to build rust-1.55 on actual aarch64 hardware, so
> > I assume it's good, I just don't have the hardware to build
> > rust-bootstrap for aarch64 natively.
> 
> So the presumed fix involves bumping rust-bootstrap from 1.54 to 1.55,
> is that right?

Not quite. We keep rust-bootstrap at 1.54, but we bump the mrustc commit
we use from the v0.10 tag to its current master (about 50 commits).
Among the commits there is one that stops aarch64 from trying to spit
out (illegal) assembly, or something, and then we just have to build it
out again.

> That means we’ll have to rebuild on all architectures.  This is
> happening here:
> 
>   https://ci.guix.gnu.org/eval/739823
> 
> Could you monitor it?  If things go well, can we aim for a merge by next
> Thursday?
> 
> Thanks,
> Ludo’.

rust-1.60 built just fine on x86_64, and it looks like, of the ~6700
packages built so far, 567 failed, with another ~3400 waiting to be built.

I also haven't seen any movement on aarch64 in the build farm.


-- 
Efraim Flashner  אפרים פלשנר
GPG key = A28B F40C 3E55 1372 662D  14F7 41AA E7DC CA3D 8351
Confidentiality cannot be guaranteed on emails sent or received unencrypted




Re: Status of armhf-linux and powerpc64le-linux

2022-10-21 Thread Mathieu Othacehe


Hey,

> How frequently does that machine become unreachable?
>
> Its uptime right now is “only” 51 days, but it seems to have been
> reliably building things so far (surprisingly so!).

Oh so it must be available from the Cuirass point of view but not from
the guix offload point of view. I'll try to fix it today.

> In Cuirass, we should arrange to support partial evaluations or
> per-system evaluations so that a single missing offload machine doesn’t
> cause the whole evaluation to fail.

We can define a guix specification for x86_64-linux, i686-linux and
aarch64-linux and a different one for powerpc64le-linux.  That can be
done really quickly, though it can break some mechanisms relying on the
fact that the specification is named "guix".

> That’s radical, but maybe that’s the most reasonable option.
>
> How about a plan like this: until next Thursday, we try to address the
> infrastructure issues discussed above to estimate feasibility.  Then we
> decide on the way forward.  WDYT?

Alright, let's try that :). I agree that it is a pity not to release for
those architectures but on the other hand, we cannot offer fresh
substitutes reliably for them.

The recent outages of ci.guix.gnu.org have shown once again that the
infrastructure is maybe one of the most important aspects of Guix.
Without it, Guix is almost unusable, in particular on architectures for
which it's hard to find powerful hardware.

Given the limited number of people willing to help with
powerpc64le-linux and armhf-linux and the limited hardware resources
available for those, I think it could be reasonable to focus on a
smaller set of architectures and provide stronger guarantees on those
in terms of substitute availability.

But let's discuss that again next week depending on our progress.

Thanks,

Mathieu



Status of armhf-linux and powerpc64le-linux

2022-10-21 Thread Ludovic Courtès
Moin!

Mathieu Othacehe  skribis:

>>  - armhf-linux is disabled on ci.guix due to improper offloading
>>setup (probably along the lines of
>>).  Should we try and reenable
>>it, or should we drop it?
>>
>>  - powerpc64le-linux is disabled on ci.guix since today
>>(maintenance.git commit
>>d641115e20973731555b586985fa81fbe293aeca).  However it did work
>>until recently and we have one machine to offload to.  Should we
>>fix it or drop it?  Mathieu?
>
> Yeah, we only have a single machine to offload to and each time it is
> not reachable, the "guix" specification fails on Cuirass.

How frequently does that machine become unreachable?

Its uptime right now is “only” 51 days, but it seems to have been
reliably building things so far (surprisingly so!).

> That's because we need to offload to a powerpc64le-linux machine in
> order to evaluate the guix derivation for that specific architecture
> (that's true for all the other architectures).

Maybe we should arrange to be more resilient to transient build machine
outage.

For that we need redundancy; we have it for ARM, but not for POWER9.  A
simple way to get redundancy today would be to set up transparent
emulation for POWER9 on one of the x86_64 boxes.  That’ll be
inefficient, but that’ll let Cuirass survive transient failures of that
one POWER9 box.
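On Guix System that's a small addition to the build machine's
configuration; roughly like this (the platform name string is from
memory and worth double-checking):

```scheme
;; Transparent emulation of POWER9 user-land binaries via QEMU and
;; binfmt_misc, in the x86_64 box's operating-system declaration:
(service qemu-binfmt-service-type
         (qemu-binfmt-configuration
          (platforms (lookup-qemu-platforms "ppc64le"))))
```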

WDYT?

Longer-term, people interested in POWER9 should look into:

  • Purchasing, setting up, and hosting POWER9 hardware (funds held at
the FSF are probably sufficient for that!).

  • And/or: getting in touch with companies who could sponsor us by
providing hardware (the AArch64 port was started thanks to a
donation by ARM).

In Cuirass, we should arrange to support partial evaluations or
per-system evaluations so that a single missing offload machine doesn’t
cause the whole evaluation to fail.

> Given the lack of workers for powerpc64le-linux I think we should drop
> it.

We can do that, but I find it embarrassing to drop the architecture
after all the work people have put in, “just” because of infrastructure
issues.

> Regarding armhf-linux we can in theory rely on the overdrives but we
> are already struggling on aarch64-linux, so I think we should also
> drop it for now.

In theory, ci.guix has at least 3 Honeycombs (2 are currently offline)
and 2 Overdrives, so it’s not that bad, and they don’t seem to be all
that busy.

So you’re right in a way, but at the same time this seems to be an
infrastructure issue.

> Focusing on x86_64-linux, i686-linux and aarch64-linux for this release
> seems more pragmatic.

That’s radical, but maybe that’s the most reasonable option.

How about a plan like this: until next Thursday, we try to address the
infrastructure issues discussed above to estimate feasibility.  Then we
decide on the way forward.  WDYT?

If we end up dropping architectures, we’ll have to:

  1. Update the documentation (and eventually the web site).

  2. Offer a clear plan as to what it would take to reinstate those
 architectures, and probably define clear criteria for architecture
 support going forward.

Thanks,
Ludo’.



Re: Pinning package inputs using inferiors?

2022-10-21 Thread zimoun
Hi Phil,

On Thu, 20 Oct 2022 at 22:37, Phil  wrote:

> A change in a package ("dependency" in the below example) in a channel I
> own has caused a conflict in another package in the same channel that depends
> on it ("test-package" in the below).  Whilst fixing the "test-package"
> package is the right solution, this is too complicated to do in the
> short-term.  I need to pin "dependency" to v1.0 in test-package's
> propagated-inputs. Simultaneously, other packages need the new update to
> the "dependency" package to use this functionality to deliver new
> functionality that can't wait.
>
> This isn't a one-off situation; this happens frequently and I'm
> interested in how other Guixers resolve this with as little friction to
> users as possible?

Well, I will answer this question rather than your question about
inferiors…

> One brainwave I had was to use inferiors - but this doesn't seem to
> work.  Continuing from the above example we could define access to a
> historical v1.0 of the dependency package for the test-package like so:
>
> (define dependency-inferior
>   ;; An inferior containing dependency v1.0.
>   (inferior-for-channels dependency-channels))

…instead of defining a complete inferior, why not just define two
packages?  Something like:

(define-public foo
  (package
(name "foo")
(version "2.0")
[...]


(define-public foo-1.6
  (package
(inherit foo)
(name "foo")
(version "1.6")
[...]

For an example, see python-numpy and python-numpy-next in (gnu packages
python-xyz).

Note that most CLI commands, such as “guix install foo”, will install
the latest version.  Hence the ’-next’ trick in the package name. :-)
It depends on what you would like the “default” to be.


Cheers,
simon



Re: Questions about Cuirass

2022-10-21 Thread Maxime Devos

On 20-10-2022 23:19, James Hobson wrote:

Hello!

Currently evaluating guix for embedded systems at work. But I have a few 
questions that I can’t quite work out from the docs. Please feel no obligation 
to answer!

Please note that my guix journey is at its very beginning. I’ve not even had a 
go at packaging!

Question 1
We would need to host the guix substitute server in an airgapped environment. 
The server would contain plain guix packages, our in house packages, and maybe 
patched guix packages. Would that be possible without having to rebuild the 
entire guix package set? We don’t have so many build machines, especially not 
for armv7.


You can tell Cuirass to only build a selection of packages (and their 
dependencies), by using a manifest, then not all of Guix is compiled but 
only what's necessary for your particular purpose.
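For instance, a manifest along these lines (the package specs are just
examples) limits what Cuirass evaluates and builds:

```scheme
;; manifest.scm -- point the Cuirass specification at this file so
;; only these packages (and their dependencies) are built.
(specifications->manifest
 (list "hello"
       "guile"))
```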


Also, your Cuirass instance still needs access to the source code of
the packages somehow, which will need to be squared with your
'airgapped environment', though maybe 'copy over the result of "guix
build --sources=transitive"' would be acceptable (*).


(*) except that this is after application of snippet; some kind of 
"--sources=raw,transitive" may be needed.



Question 2 [...]


I don't know the answer to this.


Question 3
Our software is sadly proprietary. Is there a way for guix build to selectively 
unpack and patch all non-proprietary sources so that we can provide it to 
anyone who asks? I feel like if this isn’t a thing already, I guess I can write 
it in scheme?


I assume you meant 'patch all non-proprietary' -> 'patch out all 
proprietary', such that at least the free parts can be used?


In that case, this is done already in some package definitions in Guix, 
by a 'snippet' removing parts that are non-free, such that they are not 
built and are not part of "guix build --source". (See: ‘Snippets versus 
Phases’ in the documentation, though it doesn't mention non-free things 
directly).
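As a sketch (the names and paths here are invented), an origin with
such a snippet looks like:

```scheme
(origin
  (method url-fetch)
  (uri "https://example.org/foo-1.0.tar.gz")        ; placeholder
  (sha256
   (base32 "0000000000000000000000000000000000000000000000000000"))
  (modules '((guix build utils)))
  ;; Delete the non-free subdirectory before building; the output of
  ;; "guix build --source" is the tarball with the snippet applied.
  (snippet #~(delete-file-recursively "nonfree")))
```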


The Guix user can still access the unpatched source code though, by 
inspecting the package definition and removing the snippet, so it looks 
to me like that option is only good for 'you aren't allowed to modify 
this part of the source code + guix build --source must produce 
something free', not for 'you aren't allowed to see or distribute this' 
situations.


Alternatively, you could avoid all this complexity by making your 
software free.


Greetings,
Maxime.




Re: Packages depending on (guix build syscalls)

2022-10-21 Thread Ludovic Courtès
Ludovic Courtès  skribis:

> Quite a few packages depend on (guix build syscalls), starting from
> ‘ant-bootstrap’ (since commit cded3a759356ff66b7df668bcdbdfa0daf96f4c5
> in 2018) up to GNOME-related packages such as ‘mutter’ (commit
> d1c2fe248a7a326189fb7dcae64a59ece96251ba a few months ago).

An issue is that GNOME now depends on Java:

--8<---cut here---start->8---
$ guix graph gnome --path icedtea
gnome@42.4
font-abattis-cantarell@0.303
python-cffsubr@0.2.9.post1
python-afdko@3.9.1
antlr4@4.10.1
antlr3@3.5.2
antlr2@2.7.7
icedtea@3.19.0
--8<---cut here---end--->8---

It is great to have ‘font-abattis-cantarell’ built from source (since
commit 97766323bc6e2b4dcfba4d6b46749a4280bca709), but it’s costly.

There’s probably not much we can do, unless the python-afdko -> icedtea
dependency is optional.

Ideas?

Ludo’.



Rust on aarch64-linux

2022-10-21 Thread Ludovic Courtès
Hello,

Efraim Flashner  skribis:

> I'm not sure there is a bug report, I didn't see it either. It looks
> like when I bumped rust-bootstrap from 1.39 to 1.54 we lost aarch64
> support. I've bumped mrustc on staging and successfully performed a
> qemu-binfmt build of rust-bootstrap for aarch64 on my x86_64 box. I was
> then able to use that to build rust-1.55 on actual aarch64 hardware, so
> I assume it's good, I just don't have the hardware to build
> rust-bootstrap for aarch64 natively.

So the presumed fix involves bumping rust-bootstrap from 1.54 to 1.55,
is that right?

That means we’ll have to rebuild on all architectures.  This is
happening here:

  https://ci.guix.gnu.org/eval/739823

Could you monitor it?  If things go well, can we aim for a merge by next
Thursday?

Thanks,
Ludo’.



Packages depending on (guix build syscalls)

2022-10-21 Thread Ludovic Courtès
Hello Guix!

(Resending to the right mailing list, oops!)

Quite a few packages depend on (guix build syscalls), starting from
‘ant-bootstrap’ (since commit cded3a759356ff66b7df668bcdbdfa0daf96f4c5
in 2018) up to GNOME-related packages such as ‘mutter’ (commit
d1c2fe248a7a326189fb7dcae64a59ece96251ba a few months ago).

It’s great that we can reuse this module in different contexts!  The
downside is that the module evolves quite often, because it’s a
foundation for Guix System and other things.  As a result, all these
packages get rebuilt every time we change it.

Maybe the only recommendation I would have is that we should make sure
we really need it before having a package deep down the graph depend on
it.  I wouldn’t want us to do ‘staging’ cycles when we need a change in
(guix build syscalls).

Thoughts?

Ludo’.



Questions about Cuirass

2022-10-21 Thread James Hobson
Hello!

Currently evaluating guix for embedded systems at work. But I have a few 
questions that I can’t quite work out from the docs. Please feel no obligation 
to answer!

Please note that my guix journey is at its very beginning. I’ve not even had a 
go at packaging!

Question 1
We would need to host the guix substitute server in an airgapped environment. 
The server would contain plain guix packages, our in house packages, and maybe 
patched guix packages. Would that be possible without having to rebuild the 
entire guix package set? We don’t have so many build machines, especially not 
for armv7.

Question 2
Does cuirass garbage collect? Or is that done through guix gc? We would need to 
host the binary packages for maybe 3 revisions at a time in this air gapped 
server.

Question 3
Our software is sadly proprietary. Is there a way for guix build to selectively 
unpack and patch all non-proprietary sources so that we can provide it to 
anyone who asks? I feel like if this isn’t a thing already, I guess I can write 
it in scheme?

Thanks

James