Re: Upgrading Shepherd services
> I see some services starting but no errors on the console. Also, there
> is absolutely nothing in /var/log/messages. Would it help to diagnose
> it using your Shepherd branch?

yep, in two ways: my branch has extensive logging (and currently its
default level is set to debug), and i also reworked and extended the
error handling. my expectation is that your machine should both start
up, and also emit some useful log about why that specific service is
failing. if that is not the case, then i'd really love to see a
self-contained reproducer.

if you want to dig deeper towards a reproducer, then one option is to
try to write a guix system test that reproduces it (see gnu/tests/ for
examples, and `make check-system`).

to use my shepherd channel:

(channel
 (name 'shepherd)
 (url "https://codeberg.org/attila-lendvai-patches/shepherd.git")
 (branch "attila")
 (introduction
  (make-channel-introduction
   ;; note that this commit id changes whenever i rebase and
   ;; force-push my commits
   "13557ba988f4976f6581149ecdc06fce031258c7"
   (openpgp-fingerprint
    "69DA 8D74 F179 7AD6 7806 EE06 FEFA 9FE5 5CF6 E3CD"))))

and in your OS definition follow the instructions that are now in the
shepherd README.

HTH,

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“Gradualism in theory is perpetuity in practice.”
	— Jared Howe
Re: [shepherd] several patches that i deem ready
> i've rebased my commits on top of the devel branch, and in the process
> i've reordered them into a least controversial order for your
> cherry-picking convenience:
>
> https://codeberg.org/attila-lendvai-patches/shepherd/commits/branch/various
>
> i just started a wave of deeper testing after the rebase, so the more
> complex commits may change, but those need further work/negotiation
> anyway.

Ludo,

the first commit ('Replace stop with stop-service in power-off of the
root service.') used to serve to avoid a warning, but on the 'devel'
branch it is now essential:

# halt
halt: error: exception caught while executing 'power-off' on service 'root':
Unbound variable: stop

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“Tyranny is defined as that which is legal for the government but
illegal for the citizenry.”
	— Thomas Jefferson (1743–1826)
Re: Upgrading Shepherd services
hi Felix,

> Here is a small one for not booting, although the service activation
> during 'guix deploy' succeeds.
>
> Please try the Guix timer below with the Shepherd development branch.
> My equipment does not boot when the apparently erroneous (actions ...)
> field in the shepherd-service record is present.

i cannot reproduce this. maybe it fails for you due to some missing
modules that are available in my test env?

this below is with my shepherd branch, but later i double checked with
vanilla 'devel', and it works the same.

# herd trigger garbage-collector
Triggering timer.
# herd[210]: [debug] Got a reply, processing it
shepherd[1]: [debug] fork+exec-command for (guix gc --free-space=1G), user #f, group #f, supplementary-groups (), log-file #f
shepherd[1]: [debug] exec-command for (guix gc --free-space=1G), user #f, group #f, supplementary-groups (), log-file #f, log-port #
shepherd[1]: Timer 'garbage-collector' spawned process 212.
shepherd[1]: [debug] query-service-controller; message status, service #< provision: (garbage-collector) requirement: (guix
shepherd[1]: [debug] query-service-controller; message running, service #< provision: (garbage-collector) requirement: (gui
shepherd[1]: [guix] guix gc: already 30082.59 MiBs available on /gnu/store, nothing to do
shepherd[1]: Process 212 of timer 'garbage-collector' terminated with status 0 after 1 seconds.

HTH,

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“There are two ways to be fooled. One is to believe what isn't true;
the other is to refuse to believe what is true.”
	— Søren Kierkegaard (1813–1855)
Re: [shepherd] several patches that i deem ready
hi Ludo,

> nevertheless, i'll rebase my work on the devel branch eventually. it
> will be a lot of pain in itself, but if i need to reimplement/rebase
> stuff by hand anyway, then i'll try to further sort the commits in a
> least-controversial order.

i've rebased my commits on top of the devel branch, and in the process
i've reordered them into a least controversial order for your
cherry-picking convenience:

https://codeberg.org/attila-lendvai-patches/shepherd/commits/branch/various

i just started a wave of deeper testing after the rebase, so the more
complex commits may change, but those need further work/negotiation
anyway.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“We have been to the moon, we have charted the depths of the ocean and
the heart of the atom, but we have a fear of looking inward to
ourselves because we sense that is where all the contradictions flow
together.”
	— Terence McKenna (1946–2000)
Re: A different way to build GCC to overcome issues, especially with C++ for embedded systems
hi Stefan,

> > Does anyone know if this is available in a public repository? Or if
> > it has been moved forward?
>
> There is no public repository for it.

this is much too valuable a contribution to let it hang in the void
indefinitely!

if the only reason you have not made a channel for this yet is that
you've never done it before, then i'd be happy to walk you through it
off list.

or i can help you set up a guix fork where you can push your own signed
commits, and guix pull from it.

or, if you don't mind that the initial commit will be signed by me,
then i can even set up a guix channel on codeberg, give you commit
access, and then you can push any further changes there.

just let me know.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“Fear is not in the habit of speaking truth; when perfect sincerity is
expected, perfect freedom must be allowed; nor has anyone, who is apt
to be angry when he hears the truth, any cause to wonder that he does
not hear it.”
	— Cornelius Tacitus (ca. 56–117)
Re: Upgrading Shepherd services
> I have a lot of custom Shepherd services. Every so often I make a
> mistake that stalls the step in 'guix deploy' that upgrades Shepherd
> services, but without any error messages.
>
> Unfortunately, I can also no longer run 'herd status', which likewise
> hangs, or 'reboot'. How may I debug such issues in my operating-system
> declaration, please?

Ludo, this is the kind of issue for which extensive logging is needed.
i.e. there's no self-contained reproducer (or is there, Felix?), and it
requires a live environment to experience it.

and i suspect that i may even have fixed this in one of the commits
that clean up shepherd's error handling. one of the issues i remember
is that an exception from the start (or stop?) GEXP of a service
sometimes brought shepherd into a non-responsive state (without any
sign of it in its logs).

Felix, i'm planning to rebase my branch on Ludo's devel branch. it's
not trivial because Ludo continues hacking shepherd, but i'll hopefully
do it in the next few days. after that you may give it a try and see if
you experience this issue again, and if you do, then you'll have plenty
of logs to give you a clue why/how it happens.

if you do have a reproducer, then i'd be interested in adding it as a
test to the shepherd codebase.

https://codeberg.org/attila-lendvai-patches/shepherd/commits/branch/various

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“It is humiliating to realize that when you drive yourself underground,
when you fake who you are, often you do so for people you do not even
like or respect.”
	— Nathaniel Branden (1930–2014)
Re: branch master updated (2bea3f2562 -> 6745d692d4)
> I do not know what FIXUP commits are, but generally a Git history should

a fixup is just a regular commit that was meant to be squashed onto
another commit before pushing.

maybe the git hook could be extended to grep for 'fixup', 'squash me',
'KLUDGE', etc. in the commit message? i'm not sure whether to stick
only to the formal annotations added by programs, or to use a more
heuristic set of words and then ask for a y/n confirmation (if that is
feasible from the hook).

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“Uniquely in us, nature opens her eyes and sees that she exists.”
	— Raymond Tallis (1946–)
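to illustrate the idea, here is a minimal sketch of such a check; the
word list, the warning text, and whether it would live in a commit-msg
or a pre-receive hook are all open questions, so treat everything below
as hypothetical:

```shell
#!/bin/sh
# hypothetical helper for a git hook: flag commit messages that look
# like they were meant to be squashed away before pushing
looks_like_fixup() {
  # case-insensitive match against both the formal annotations
  # (fixup!/squash!) and a heuristic word list
  printf '%s\n' "$1" | grep -q -i -E 'fixup!|squash!|squash me|KLUDGE'
}

# example usage, as a commit-msg style check might call it:
if looks_like_fixup "fixup! service: tweak the log level"; then
  echo "warning: commit message looks like a fixup/squash leftover" >&2
fi
```

a pre-receive hook would run the same check over the messages of all
pushed commits, and could either reject the push or ask for the y/n
confirmation mentioned above.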
Re: Changing the defaults for --localstatedir and --sysconfdir?
> What do others think?

i don't really understand all the consequences of this choice... but as
a newcomer it surely was strange that i had to use a special
incantation -- one that i need to remember, that is added to the
documentation, and that is explained repeatedly on IRC... instead of it
being the default.

good defaults are important. and it's best when the default value
causes the least surprise to those who know the least.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“[…] the real self is dangerous: dangerous for the established church,
dangerous for the state, dangerous for the crowd, dangerous for the
tradition, because once a man knows his real self, he becomes an
individual.”
	— Osho (1931–1990)
Re: Value in adding Shepherd requirements to file-systems entries?
> P.S. The code above should read (requirements ...) in the plural.

inside shepherd there's a bit of an anomaly, but it's called
'requirement' in the public API, and also on the guix side of the
config; i.e. it's not plural.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“Moderation in temper is always a virtue; but moderation in principle
is always a vice.”
	— Thomas Paine (1737–1809)
Re: "Error: Could not set character size"
seems like a `guix pull` and a `guix home reconfigure` resolved it. sorry for the noise. -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “If you wind up with a boring, miserable life because you listened to your mom, your dad, your teacher, your priest, or some guy on TV telling you how to do your shit, then YOU DESERVE IT.” — Frank Zappa (1940–1993), 'The Real Frank Zappa Book' (1989)
"Error: Could not set character size"
...when trying to build a guix checkout. to reproduce:

$ make doc/images/service-graph.png
  DOT      doc/images/service-graph.png
Error: Could not set character size
[error message repeated 12 more times]

this seems to be an error coming from graphviz. this is all i could
find online, which is not very useful:

https://gitlab.com/graphviz/graphviz/-/issues/863

maybe this commit from a few days ago changed something? it looks
innocent, though:

https://git.savannah.gnu.org/cgit/guix.git/commit/?id=4099b12f9f4561d0494c7765b484b53d9073b394

any hints?

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“You cannot transmit wisdom and insight to another person. The seed is
already there. A good teacher touches the seed, allowing it to wake up,
to sprout, and to grow.”
	— Thich Nhat Hanh (1926–)
Re: Guix bios installation: Grub error: unknown filesystem
> This should allow grub to recognise your filesystem during the
> installation process. I think using a later version of grub would fix
> this, but that hasn't happened yet. I think there's a patch to upgrade
> it in `core-updates` somewhere, but I'm not sure.

grub seems to still be at v2.06 there:

https://git.savannah.gnu.org/cgit/guix.git/tree/gnu/packages/bootloaders.scm?h=core-updates#n103

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“In a democracy, mass opinion creates power. Power diverts funds to the
manufacturers of opinion, who manufacture more, etc. […] This feedback
loop generates a playing field on which the most competitive ideas are
not those which best correspond to reality, but those which produce the
strongest feedback.”
	— Mencius Moldbug
Re: [shepherd] several patches that i deem ready
hi Ludo,

> > i have prepared the rest of my commits that were needed to hunt down
> > the shepherd hanging bug. you can find them at:
> >
> > https://codeberg.org/attila-lendvai-patches/shepherd/commits/branch/attila
> >
> > there's some dependency among the commits, so sending them to debbugs
> > would mean either one big series of commits, or a hopeless labyrinth
> > of patches otherwise.
>
> Yes, but OTOH, piecemeal, focused changes sent to Debbugs are easier to
> review for me. (There are 34 commits in this branch touching different
> aspects.)

i understand that, but cutting some of the commits out of this branch
is a lot of work at best, and not possible at worst due to semantic
dependencies. e.g. how shall i implement proper error handling without
being able to inspect what's happening (i.e. proper logging)?

nevertheless, i'll rebase my work on the devel branch eventually. it
will be a lot of pain in itself, but if i need to reimplement/rebase
stuff by hand anyway, then i'll try to further sort the commits into a
least-controversial order.

> I cherry-picked a couple of patches.
>
> Some notes:
>
>   + 94c1143 shepherd: Add tests/startup-error.sh
>
> Redundant with ‘tests/startup-failure.sh’ I think?

one of them just returns #f from its start lambda, while the new one
throws an error. they exercise different code paths in shepherd.

>   + e802761 service: Add custom printer for records.
>
> Good idea, but the goal is to remove GOOPS, so put aside for now.

ok, i'll get rid of it (move it away into a local kludge branch). its
main purpose is to be able to simply FORMAT some service objects into
the log.

>   + af2ebec service: respawn-limit: make #f mean no limit.
>
> I’d rather not do that: one can use +inf.0 when needed.

i found the respawn-limit API somewhat confusing (it requires a cons
cell with two numbers). i thought #f could be a simple way to disable
the respawn limit; simple both in implementation and as an API. FWIW,
it's the first time i've ever met +inf.0, but as you wish, we can
manage without this commit.

>   + 095e930 shepherd: Do not respawn disabled services.
>
> That’s already the case (see commit
> 7c88d67076a0bb1d9014b3bc23ed9c68f1c702ab; maybe we hacked it
> independently in parallel).

err, hrm... i'm not sure anymore why i created that commit.
"Respawning ~a." is printed before calling START-SERVICE (which then
does honor the ENABLED? flag). maybe i recorded this commit without
actually checking whether the service is respawned (as opposed to
merely printing an inert log message). i'll get rid of this, but the
incorrect respawning message will remain a source of confusion.

>   + dbc9150 shepherd: Increase the time range for the default respawn limit.
>
> This is arbitrary and thus debatable, but I think the current setting
> works well, doesn’t it?

the current limit will not catch services whose start-to-fail time is
not in the ballpark of 1 sec (5 times in 7 seconds). the start-to-fail
time of the service i'm working with is way above 1 sec.

>   + e03b958 support: Add logging operators.
>   + 39c2e14 shepherd: add call-with-error-handling
>
> I like the idea: we really need those backtraces to be logged!
> There are mostly-stylistic issues that would need to be discussed
> though. I’d like logging to be less baroque; I’m not convinced by:

what do you mean by 'baroque' here? too verbose in the source code?

>   + 7183c9c shepherd: Populate the code with some log lines.
>
> This is exactly what I’d like to avoid—adding logging statements all
> around the code base, possibly redundant with existing logging
> statements that target users.
>
> What I do want though is to have “first-class logs”, pretty much like
> what we see with ‘herd log’ etc. To me, that is much more useful than
> writing the arguments passed to each and every ‘fork+exec-command’ call.

don't they serve two entirely different purposes?

 1) logs meant for the users of shepherd (aka herd log), vs
 2) logs that the shepherd and service developers need to understand
    shepherd's temporal behavior.

i added every piece of logging related code in the various pursuits of
hunting down specific bugs:

 1. bug gets triggered
 2. stare at the logs, have some questions
 3. add some more log statements
 4. goto 1.

i'm not aware of any way to efficiently inspect the temporal behavior
of a codebase other than adding explicit log statements. ideally using
multiple, hierarchical log categories that can be turned on and off
separately, both at runtime and at compile time. what i added to
shepherd is a super simplified, local, mock version of that (short of
porting/finding a proper logging library in scheme).

> I’ll have to look further that branch. I admit I have limited bandwidth
> available and,
Re: No default OpenJDK version?
> Currently, most java packages use the implicit jdk from the build
> system (ant- or maven-build-system), which is… icedtea@8. We still
> have quite a lot of old packages that don't build with openjdk9, so
> I'm not sure when we can update the default jdk…

does that prevent introducing a newer default JDK and explicitly
annotating the old JDK dependency on the (hopefully few) packages that
fail to build with the newer default?

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“Malthus was right. It's hard to see how the solar system could support
much more than 10^28 people or the universe more than 10^50.”
	— John McCarthy (1927–2011), father of Lisp
Re: backdoor injection via release tarballs combined with binary artifacts (was Re: Backdoor in upstream xz-utils)
> > I think we should gradually move to building everything from
> > source—i.e., fetching code from VCS and adding Autoconf & co. as
> > inputs.
>
> the big drawback of this approach is that we would lose maintainers'
> signatures, right?

it's possible to sign git commits and (annotated) tags, too. it's good
practice to enable signing by default. admittedly, though, few people
sign all their commits, and even fewer sign their tags.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“Never appeal to a man's "better nature". He may not have one. Invoking
his self-interest gives you more leverage.”
	— Robert Heinlein (1907–1988), 'Time Enough For Love' (1973)
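enabling signing by default is a one-time git configuration. a minimal
sketch, demonstrated on a throwaway repository with local config (the
key id is a placeholder; for your own setup you would use --global and
your real key):

```shell
# create a throwaway repo just to demonstrate the settings
repo=$(mktemp -d)
git -C "$repo" init -q

# sign every commit and every annotated tag by default
git -C "$repo" config user.signingkey "PLACEHOLDER-KEY-ID"
git -C "$repo" config commit.gpgsign true
git -C "$repo" config tag.gpgSign true

git -C "$repo" config commit.gpgsign   # prints: true
```

with these set, plain `git commit` and `git tag -a` produce signed
objects without any extra flags.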
policy for packaging insecure apps
the context: there's an app currently packaged in guix, namely
gnome-shell-extension-clipboard-indicator, that has a rather
questionable practice: by default it saves the clipboard history
(passwords included) in clear text, and the preference controlling this
is called something obscure. its author has actively defended this
situation for several years now, rejecting patches and bug reports. a
detailed discussion is available at:

https://github.com/Tudmotu/gnome-shell-extension-clipboard-indicator/issues/138

the fact that its name suggests that it is *the* standard gnome
clipboard app makes the situation that much worse.

my question: how shall we deal with a situation like this?

1) shall i create a guix patch that makes the necessary changes in this
app, and submit it to guix? this would be a non-trivial, and rather
hidden, divergence from upstream, potentially leading to confusion.

2) is there a way to attach a warning message to a package to explain
such situations to anyone who installs it? should there be a feature
like that? should there be a --force switch, or an interactive y/n, to
force installing such apps?

3) is there a point where packages refusing to address security issues
should be unpackaged? and also added to a blacklist until the security
issue is resolved? where is that point? would this one qualify?

4) is it the responsibility of a project like guix to address
situations like this?

5) do you know another forum where this dispute should be brought up
instead of guix-devel?

i'm looking forward to your thoughts, and/or any pointers or patches to
the documentation that i should read.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
The price of liberty is eternal vigilance.
Re: backdoor injection via release tarballs combined with binary artifacts (was Re: Backdoor in upstream xz-utils)
> Are there other issues (different from the "host cannot execute target
> binary") that make release tarballs indispensable for some upstream
> projects?

i didn't mean to say that tarballs are indispensable. i just wanted to
point out that it's not as simple as going through each package
definition and robotically changing the source origin from tarball to
git repo. it costs some effort, but i don't mean to suggest that it's
not worth doing.

> So, while "almost all the world" is applying wrong solutions to the
> source tarball reproducibility problem, what can Guix do?

AFAIU the plan is straightforward: change all package definitions to
point to the (git) repos of the upstreams, and ignore any generated
./configure script if it happens to be checked into the repo. it
involves quite some work, both in quantity, and also some thinking
around surprises.

i think a good first step would be to reword the packaging guidelines
in the doc to strongly prefer VCS sources instead of tarballs.

> Even if We™ (ehrm) find a solution to the source tarball
> reproducibility problem (potentially allowing us to patch all the
> upstream makefiles with specific phases in our package definitions),
> are we really going to start our own (or one managed by the
> reproducible build community) "reproducible source tarballs"
> repository? Is this feasible?

but why would that be any better than simply building from git? which,
i think, would even take less effort.

> > but these generated man files are part of the release tarball, so
> > cross compilation works fine using the tarball.
>
> AFAIU in this case there is an easy alternative: distribute the
> (generated) man files as code tracked in the DVCS (e.g. git) repo
> itself.

yes, that would work in this case (although that man page is guaranteed
to go stale). my proposal was to simply drop the generated man file. it
adds very little value (although it's not zero; web search, etc.).

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“It is easy to be conspicuously 'compassionate' if others are being
forced to pay the cost.”
	— Murray N. Rothbard (1926–1995)
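to make the proposed change concrete: switching a package's source from
a release tarball to the upstream git repo amounts to swapping the
origin's method. a hedged sketch using guix's standard origin API (the
url, tag, and hash below are placeholders):

```scheme
(origin
  (method git-fetch)                    ; instead of url-fetch + tarball
  (uri (git-reference
        (url "https://example.org/upstream/foo.git") ; placeholder url
        (commit "v1.2.3")))             ; ideally a signed release tag
  (file-name (git-file-name "foo" "1.2.3"))
  (sha256
   (base32
    ;; placeholder; the real value comes from hashing a checkout,
    ;; e.g. with `guix hash -rx .`
    "0000000000000000000000000000000000000000000000000000")))
```

the checkout is still content-addressed by its hash, so the
reproducibility guarantee is the same as with a tarball; what changes
is that there is no room for generated files that are absent from the
VCS.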
Re: backdoor injection via release tarballs combined with binary artifacts (was Re: Backdoor in upstream xz-utils)
> Are really "configure scripts containing hundreds of thousands of
> lines of code not present in the upstream VCS" the norm?

pretty much for all C and C++ projects that use autoconf... which is
numerous, especially among the core GNU components.

> If so, can we consider hundreds of thousands of lines of configure
> scripts and other (auto)generated files bundled in release tarballs
> "pragmatically impossible" to be peer reviewed?

yes.

> Can we consider those artifacts sort-of-binary and "force" our
> build systems to regenerate all of them?

that would be a good practice.

> ...or is it better to completely avoid release tarballs as our source
> uris?

yes, and this^ would guarantee the previous point, but it's not always
trivial. as an example see this:

https://issues.guix.gnu.org/61750

in short: when building shepherd from git, the man files need to be
generated using the program help2man. it invokes the binary with --help
and formats the output as a man page. the usefulness of this is
questionable, but the point is that it breaks cross-compilation,
because the host cannot execute the target binary. these generated man
files are part of the release tarball, though, so cross compilation
works fine using the tarball.

all in all, just by following my gut instincts, i was advocating for
building everything from git even before the exposure of this backdoor.
in fact, i found it surprising as a guix newbie that not everything is
built from git (or the upstream's VCS of choice).

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“For if you [the rulers] suffer your people to be ill-educated, and
their manners to be corrupted from their infancy, and then punish them
for those crimes to which their first education disposed them, what
else is to be concluded from this, but that you first make thieves [and
outlaws] and then punish them.”
	— Sir Thomas More (1478–1535), 'Utopia', Book 1
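forcing the build system to regenerate the bundled artifacts can be
expressed as an extra build phase in a package definition. a hedged
sketch (the phase name is made up, and the package would additionally
need autoconf, automake, etc. among its native-inputs):

```scheme
;; sketch of a build phase that discards any bundled configure script
;; and regenerates it from configure.ac, so that only VCS-tracked
;; sources ever influence the build
(add-before 'configure 'regenerate-configure
  (lambda _
    (when (file-exists? "configure")
      (delete-file "configure"))
    (invoke "autoreconf" "-vfi")))
```

with a phase like this, a tampered configure script in a tarball (as in
the xz case) would simply be thrown away, at the cost of depending on
the autotools at build time.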
Re: backdoor injection via release tarballs combined with binary artifacts (was Re: Backdoor in upstream xz-utils)
> Also, in (info "(guix) origin Reference") I see that Guix packages can
> have a list of uri(s) for the origin of source code, see xz as an
> example [7]: are they intended to be multiple independent sources to
> be compared in order to prevent possible tampering, or are they "just"
> alternatives to be used if the first listed uri is unavailable?

a source origin is identified by its cryptographic hash (stored in its
sha256 field); i.e. it doesn't matter *where* the source archive was
acquired from. if the hash matches the one in the package definition,
then it's the same archive that the guix packager saw while packaging.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“We’ll know our disinformation program is complete when everything the
American public believes is false.”
	— William Casey (1913–1987), the director of CIA 1981-1987
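in other words, the uri list is just a list of mirrors: whichever one
answers, the downloaded bytes must hash to the recorded value, or the
download is rejected. a hedged sketch (urls and hash are placeholders):

```scheme
(origin
  (method url-fetch)
  ;; several alternative download locations for the very same archive;
  ;; they are fallbacks, not independent sources to be cross-checked
  (uri (list "https://mirror-a.example.org/foo-1.0.tar.gz"
             "https://mirror-b.example.org/foo-1.0.tar.gz"))
  ;; the content hash is what actually identifies the source
  (sha256
   (base32
    ;; placeholder hash
    "0000000000000000000000000000000000000000000000000000")))
```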
Re: [shepherd] several patches that i deem ready
> i have prepared the rest of my commits that were needed to hunt down
> the shepherd hanging bug. you can find them at:
>
> https://codeberg.org/attila-lendvai-patches/shepherd/commits/branch/various

Ludo,

i would appreciate it if you could give some feedback on this.

these commits are extensive (line-diff-wise) due to the added log
statements and the error handling wrapper forms. the more you work on
shepherd, the more these commits rot away, and the more avoidable
work/frustration it is to keep rebasing them.

i believe these are valuable additions to shepherd, as was the
reconfigure hang fix that they were needed for.

as a first phase, maybe you could cherry-pick some of the commits that
you find agreeable.

i'm looking forward to your feedback on how i could/should improve
these to get them merged.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“When training beats education, civilization dies.”
	— C. S. Lewis (1898–1963)
Re: xz backdoor
> There's actually suspicious code by the xz attacker in one of our
> packages right now:
>
> https://issues.guix.gnu.org/issue/70113
>
> Please help review that patch!

as for gpaste (one of the dependees of libarchive): it hasn't built
since the recent gnome merge. i've filed a patch for the necessary
version bump:

https://issues.guix.gnu.org/70133

which also gets rid of the libarchive dependency. it would be nice to
get this fast-tracked. although, judging from the (lack of) complaints,
i might be the only user of it.

PS: and meanwhile we're packaging an alternative, namely
gnome-shell-extension-clipboard-indicator, with an enormous security
flaw: by default it saves the clipboard history in clear text, and
calls the feature "cache only favorites", so that even if you look for
it, you still don't realize it:

https://github.com/Tudmotu/gnome-shell-extension-clipboard-indicator/issues/138#issuecomment-904689439

...and its author actively defends this situation.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“The noble-minded are calm and steady. Little people are forever
fussing and fretting.”
	— Confucius (551–479 BC), 'Analects of Confucius'
Re: Emacs and Gnome branches are merged now
just a heads-up: after pulling the gnome changes i was unable to log in
(empty screen with a frozen pointer after entering the password).

`herd restart elogind` can be used to get back to the login screen.

i could fix it using the following steps:

1. move away my ~/.local/share/gnome-shell/extensions/
2. log in
3. start the extension config and disable all extensions
4. re-login, move back that dir
5. re-login, update some extensions (still disabled)
6. re-enable all extensions in the config

HTH,

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“The most potent weapon of the oppressor is the mind of the oppressed.”
	— Steve Biko (1946–1977)
Re: xz backdoor
> The quick summary is that Guix currently shouldn't be affected
> because a) Guix currently packages xz 5.2.8, which predates the
> backdoor, and b) the backdoor includes checks based on absolute
> paths e.g. under /usr and Guix executable paths generally don't
> match the patterns checked for.

and guix doesn't use systemd, whose presence led distros to patch sshd
-- a critical piece of security infrastructure -- in a way that made
the backdoor possible...

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“War is a moral contest that is won in the temples before it is ever
fought.”
	— Sun Tzu (c. 6th century BC), author of 'The Art of War' (as
	paraphrased by Jack Kennedy)
Re: Losing signing keys for custom Guix channel
> from reading about guix authentication I think the new signing key
> must be first added to the .guix-authorizations file and that commit
> must be signed with the current signing keys before the new signing
> key can be used.

yep. otherwise anyone with access to the origin git repo could override
the commit-signature based authentication framework.

if you think about it: if there were any option for you to sidestep
this situation of a lost key, then any attacker could do the same.

i'm afraid your only option is to re-record and re-sign every commit,
force-push them, and publish a new channel intro snippet that all your
users must copy into their config.

alternatively, you *may* be able to simply publish a new channel intro
snippet (and convince all your users that it's a genuine situation)
that points to the first new commit that is signed with the new key...
but i doubt the contract (or the implementation) of the authentication
code would just silently accept the non-authenticated commits that
precede your new channel intro commit.

all the best in fixing the situation!

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“’Tis better it be a year later before he can read, than that he should
this way get an aversion to learning.”
	— John Locke (1632–1704), 'Some Thoughts Concerning Education'
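for reference, the normal key-rotation flow (when the old key is still
available) is to extend the .guix-authorizations file in a commit that
is signed by an already-authorized key. a hedged sketch with placeholder
fingerprints and names:

```scheme
;; .guix-authorizations -- the commit introducing this change must
;; itself be signed by one of the keys authorized *before* the change
(authorizations
 (version 0)
 (("AAAA BBBB CCCC DDDD EEEE FFFF 0000 1111 2222 3333"  ; placeholder
   (name "old-key"))
  ("4444 5555 6666 7777 8888 9999 AAAA BBBB CCCC DDDD"  ; placeholder
   (name "new-key"))))
```

once that commit is in, subsequent commits can be signed with the new
key, and the old one can be dropped from the file in a later (new-key
signed) commit. this is exactly the chain that a lost key breaks.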
Re: the right to rewrite history to rectify the past (was Re: Concerns/questions around Software Heritage Archive)
> We are talking about social rules that we have here in the Guix
> community not legal/state rules.

ethics, i.e. the discussion of rights, is a branch of philosophy.
ideally, it should inform the people who are writing and enforcing
state laws, but these days -- sadly -- it has precious little to do
with state laws. and i think you're the one here who conflates the two.

> Specifically the social rules that we support trans people and we want
> to include them. Any person really that want to change their name at
> some point for some reason.
>
> To that end we listen to their concerns/wishes and we accommodate them.

i've asked you this before, and i'll keep asking it: sure, accommodate,
but to what extent? what is a reasonable cost i can incur on others?
(see the discussion of negative vs. positive rights in this context)

what if i declare that i only feel accommodated here if everyone
attaches the local weather forecast to each mail they send to
guix-devel?

the limit of your demands begins where it starts to constrain the
freedom of others. considering this is an essential part of respectful
behavior towards others.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“I am not what happened to me, I am what I choose to become.”
	— Carl Jung (1875–1961)
Re: rewriting history; Was: Concerns/questions around Software Heritage Archive
> not an expert in guix internals) the only reason we care about
> identity is that it's part of git commits.

identities are deeply intertwined with trust (our best predictor of
future behavior is past behavior). and how trust is facilitated by the
tools and processes (including the social "technology") can make or
break any group effort.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“The direct use of physical force is so poor a solution to the problem
of limited resources that it is commonly employed only by small
children and great nations.”
	— David D. Friedman (1945–), 'The Machinery of Freedom' (1973)
rewriting history; Was: Concerns/questions around Software Heritage Archive
> I was also distressed to see how poorly they treated a developer
> who wished to update their name:
> https://cohost.org/arborelia/post/4968198-the-software-heritag
> https://cohost.org/arborelia/post/5052044-the-software-heritag

let's put aside the trans aspect of this question for a moment, because
this question has broad implications, much broader than the regrettable
struggles of trans people.

the question here is whether person A has the right to demand that
others change their memory of A's past actions (i.e. rewrite history,
or else become a felon... or maybe just unwelcome in polite society?).

so, let's just assume that i have decided to prefer being called by a
new name (without disclosing my reasons). is it reasonable for me to
demand that somebody else change their memory of my past actions? e.g.
to demand that they rewrite their memory/instances of the books that i
published under my previous name in the past? or that they forget my
old name, and when the change happened? or that they do not link the
two names to the same individual?

if so, then where is the line? what's the principle here? and what are
its implications?

do i have the right to demand the replacement of a page in each copy
that exists out there? i.e. should it be criminal (or just a sin?) to
own old copies? do i have the right to demand that certain libraries
must sell/burn their copies of my books and never own them again?

what if i committed a fraud? e.g. i pushed a backdoor somewhere... do i
have the right to memory-hole my old identity? and who will enforce
such a right? the government? i.e. those people who already keep an
(extralegal) record of whenever i farted in the past decade? where can
i even file my GDPR request for that? would that really be a "right to
be forgotten", or merely a tool of even tighter monopolization of The
Central Database?

what if i'm a joker and i demand a new change every week for the rest
of my life? do i have the right to the resources of every library out
there? to keep their staff and computers busy for the next couple of
decades?

but let's put the technical aspects aside; wherever we draw the line...
what are the implications of that for broader society? because i sure
see some actors out there who can hardly wait to start erasing certain
records at the barrel of the law, including rewriting books of
significance... (and while we are at it, i suggest to start preserving
your offline/local copies, because we're up for a wild ride!)

humanity has reached an enormous challenge with the complete
marginalization of the costs of storing and transmitting information.
it's a completely new/different playing field, and how we proceed from
here has grave implications.

this question is nowhere near as obvious/trivial as presented in the
cited blog post.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“It is only when compassion is present that people allow themselves to
see the truth. […] Compassion is a kind of healing agent that helps us
tolerate the hurt of seeing the truth.”
	— A.H. Almaas (1944–), 'Elements of the Real in Man (Diamond
	Heart, Book 1)'
Re: Concerns/questions around Software Heritage Archive
> only a 35 yrs old white cis boy

you're judging a group of individuals, namely those who were handed the cis white male mix at the genetic lottery, as a uniform blob. and maybe even as somewhat deplorable, if i'm reading you right.

does it make sense to judge an individual based on some coincidental properties? or really, based on anything other than their actions?

does it make sense to discuss the actions/morality of a group of individuals that is formed based on some coincidental properties? e.g. what can we say about the morality of all the blond people?

and ultimately, is that an effective way of speaking up for human rights and welcoming environments -- of all things?

maybe it's time to take a thorough look at the book that you're preaching from? if i may, let me attempt to inspire you:

“The world is changed by your example, not by your opinion.” — Paulo Coelho (1947–)
%
“Yesterday I was clever, so I wanted to change the world. Today I am wise, so I am changing myself.” — Rumi (1207–1273)
%
“If there is to be peace in the world, There must be peace in the nations. If there is to be peace in the nations, There must be peace in the cities. If there is to be peace in the cities, There must be peace between neighbors. If there is to be peace between neighbors, There must be peace in the home. If there is to be peace in the home, There must be peace in the heart.” — Lao Tzu (sixth century BC)
%
“A man of humanity is one who, in seeking to establish himself, finds a foothold for others and who, in desiring to attain himself, helps others to attain.” — Confucius (551–479 BC)
%
“To put the world in order, we must first put the nation in order; to put the nation in order, we must first put the family in order; to put the family in order, we must first cultivate our personal life; to cultivate our personal life, we must first set our hearts right.” — Confucius (551–479 BC)
%
“Until we have met the monsters in ourselves, we keep trying to slay them in the outer world. And we find that we cannot. For all darkness in the world stems from darkness in the heart. And it is there that we must do our work.” — Marianne Williamson (1952–), 'Everyday Grace: Having Hope, Finding Forgiveness And Making Miracles' (2004)
%
“If things go wrong in the world, this is because something is wrong with the individual, because something is wrong with me. Therefore, if I am sensible, I shall put myself right first” — Carl Jung (1875–1961), 'The Meaning of Psychology for Modern Man'

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “If liberty means anything at all, it means the right to tell people what they do not want to hear.” — George Orwell (1903–1950)
Re: Contribute or create a channel?
> > channels are a step towards this, but they are not enough in their > > current form to successfully accommodate for such a setup. an obvious > > thing that is missing is a way to formally express inter-channel > > dependencies, including some form of versioning. > > > Do we not have this? The manual documents a mechanism for channel > dependencies in "(guix) Declaring Channel Dependencies". > > I haven't used it, but it looks like the dependencies are declared as > channels, which can have the usual branch/commit specifications to tie > them to specific versions. good point, thanks! i looked briefly at the code just now. it's not trivial, and it seems to treat the guix channel specially (because i don't need to specify it as a dependency in my channel's .guix-channel file), and i'm not sure how it behaves when e.g. two channels depend on the same channel, but pick two different commits... or all the other convoluted situations. the reason i assumed it doesn't exist is that i've never seen it used by any channels that i looked at. > What are we missing? i guess it's time to experiment to be able to answer your question. FTR, it's READ-CHANNEL-METADATA and friends in guix/channels.scm note that it's not the same thing as /etc/guix/channels.scm, even though they appear similar (https://issues.guix.gnu.org/53657). -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “People who have never gone to school have never developed negative attitudes toward exploring their world.” — Grace Llewellyn
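for reference, the dependency mechanism discussed above lives in a channel's top-level `.guix-channel` file. declaring a dependency looks roughly like the following (the channel name and URL below are made up for illustration; the shape follows the manual's "Declaring Channel Dependencies" section):

```scheme
;; .guix-channel at the root of the channel's repository
(channel
 (version 0)
 (dependencies
  (channel
   (name some-collection)
   (url "https://example.org/some-collection.git")
   ;; optionally pin a branch or commit, like any channel spec:
   (branch "main"))))
```

note that, as discussed above, the guix channel itself does not need to be declared here; it is an implicit dependency.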
Re: Contribute or create a channel?
> > the patch inflow to the guix repo is currently overwhelming the
> > available capacity for review and pushing.
>
> With an email like the one sent by Hartmut we can better arrange for
> shepherding this large submission. (Nothing is to be gained from
> repeatedly bemoaning well-known issues in the patch review processes
> here and in other threads on the mailing list.)

i was reflecting on why i wrote this, and what i wanted to express is that i think guix has reached a point where a monorepo is becoming a net negative, and i don't see this being discussed.

my gut feeling is that new abstractions are needed that would enable splitting the monorepo/community into less tightly coupled subgroups where they can have their own coding standards, repos, channels, etc, and a more federated way to maintain/integrate all the software that exists out there into a guix system.

in this hypothetical setup commit rights could be issued much more liberally for the non-core sub-repos, and more rigorous code reviews would only need to happen when a new version of a split-out part is incorporated back into a new revision of the core/bootstrap chain (e.g. if python is needed for the bootstrap of the core, then the python subgroup's stuff would only need core review when a new version of it is pointed to by the core).

or alternatively, simply try to split guix into a minimal core that is essential for the bootstrap, and everything else into multiple subchannels (gnome, gui stuff in general, random apps, etc). i have no sense of how much that alone could shrink the monorepo part, though.

channels are a step towards this, but they are not enough in their current form to successfully accommodate such a setup. an obvious thing that is missing is a way to formally express inter-channel dependencies, including some form of versioning.

sadly, i don't have any proposals beyond discussing the observable issue (i.e. the insufficient patch throughput).
-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “Values in a free society are accepted voluntarily, not through coercion, and certainly not by law… every time we write a law to control private behavior, we imply that somebody has to arrive with a gun [to enforce it].” — Ron Paul
Re: Mechanism for helping in multi-channels configuration
> Although I concur with this need, I do not see how it would be help for > detecting compatibility between channels. :-) maybe i'm overthinking this, and all we need is a way to point to git commit ranges that are compatible. more specifically, i'm maintaining the guix-crypto channel, and i often miss the ability to point to a guix commit, beyond which there is a change in guix that my channel is not yet compatible with. if my users issue a `guix pull`, then it would not pull the guix channel beyond that commit, and warn the users that it's being held back. -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “In the electronics industry, patents are of no value whatsoever in spurring research and development.” — vice-president of Intel Corporation, Business Week, 11 May 1981.
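to make the wish concrete, here is a purely hypothetical sketch of what such a declaration could look like in a channel's `.guix-channel` file. the `held-back-after` field does not exist in guix today; it only illustrates the idea of pinning a compatible commit range:

```scheme
;; .guix-channel -- hypothetical, for illustration only; the
;; 'held-back-after' field is NOT a real guix feature.
(channel
 (version 0)
 (dependencies
  (channel
   (name guix)
   (url "https://git.savannah.gnu.org/git/guix.git")
   ;; hypothetical: 'guix pull' would refuse to pull the guix channel
   ;; past this commit, and warn the user that it is being held back
   ;; until this channel catches up.
   (held-back-after "commit-hash-placeholder"))))
```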
Re: Guix Days: Patch flow discussion
> Somehow, the reader will judge if Message-ID is smoothly supported. :-) i regularly meet this most unfortunate attitude in the GNU circles, where oldtimers dismiss any discussion of friendlier defaults for newcomers with the "argument" that it's configurable, and therefore it's a non-issue. -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “The wise prince must provide in such a way that the citizens will always be in need of him and of his government.” — Niccolo Machiavelli (1469–1527), 'The Prince' (1513)
Re: Contribute or create a channel?
> WDYT? I'm eager to learn about your thoughts.

the patch inflow to the guix repo is currently overwhelming the available capacity for review and pushing.

if you want an agile experience, i.e. where you can quickly fix/update this and that, then i suggest your own channel (unless you have the commit bit for the guix repo... or there's a committed maintainer who is a regular user, and as such will fast-track your patches). otherwise you'll end up using a channel anyway (i.e. your fork of the guix repo while your patches are waiting in the queue to be reviewed and pushed).

PS: i don't mean to sound cynical here, just matter-of-fact.

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “Opportunity is missed by most people because it comes dressed in overalls and looks like work.” — Thomas A. Edison (1847–1931)
Re: A basic Shepherd bug?
hi Felix, > > you should follow the instructions in [1]; namely: > > > > https://lists.gnu.org/archive/html/guix-devel/2023-12/msg00018.html > > > > together with "Installing development snapshots with Guix" in > > shepherd's README to add shepherd's channel. > > > I did so on a production system which I do not reboot often. Two days > ago, I reconfigured and saw a new Shepherd version being deployed. It > used up a lot of CPU cycles until I rebooted. just to clarify: you `guix system reconfigure` into a new shepherd version, and after that the currently running shepherd init process went 100% CPU, i.e. it was busy looping in one thread? > It makes sense that upgrading the Shepherd requires a reboot, but maybe > a warning somewhere would be appropriate, if possible. Maybe an email to > root? unfortunately a lot of the infrastructure around guix is lacking explicit formal description of dependencies/requirements. e.g. there's nothing (that i know of) in the shepherd config files (which are generated by `guix system reconfigure`) about what shepherd version they require/assume. a quick and dirty solution here could be to manually emit an assert into the config file that there's an exact match between the shepherd that generated the config file, and the shepherd process trying to load it. a warning could be issued that the shepherd process is unable to load/process the generated config file until a reboot... which would probably be overkill in most cases. https://ngnghm.github.io/ this^ blog has interesting thoughts on migration and staged computation. it's a most interesting vision of how these abstractions could be formally captured, and what the resulting computing system would look like. -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “Love and do what you will.” — Augustine of Hippo (354–430), 'A sermon on love'
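the quick and dirty assert mentioned above could look something like the following at the head of the generated config file. both names below are made up: `%generated-for-shepherd-version` stands for a value baked in by `guix system reconfigure`, and `running-shepherd-version` stands in for however the running shepherd would expose its own version:

```scheme
;; hypothetical sketch -- neither name below exists in guix/shepherd
;; today. the value would be baked in by 'guix system reconfigure':
(define %generated-for-shepherd-version "0.10.4")

;; fail loudly on a version mismatch instead of misbehaving silently
;; after an upgrade that still awaits a reboot.
(unless (string=? (running-shepherd-version)
                  %generated-for-shepherd-version)
  (error "this config was generated for shepherd version"
         %generated-for-shepherd-version
         "but is being loaded by"
         (running-shepherd-version)))
```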
Re: Guile-Git 0.6.0 released; looking for maintainers!
> An idea might be to look into using nyacc’s ffi-helper to generate
> struct definitions.

over there in CL land i wrote an automatic FFI generator. it's now part of the main CL FFI lib: https://github.com/cffi/cffi/tree/master/src/c2ffi

it is based on c2ffi: https://github.com/rpav/c2ffi which is a piece of C++ code that uses CLANG as a library to parse any C header file and emit its contents into a json file. a thin layer of lisp code can then generate the actual sexp FFI definitions from the json files, which can be hooked into the usual guile way of doing FFI.

the json files can be checked into the repo, which eliminates the dependency on c2ffi on the user side (i.e. the project is only as heavy as any other hand-written FFI wrapper). that way only the maintainer needs to regenerate the json files every once in a while. or, short of a smarter build tool like ASDF, we can also check in the generated lisp files.

if there's interest, then i can help porting this over to guile. below are some example projects that are using it. they are rather thin and simple, yet provide a full FFI:

https://github.com/hu-dwim/hu.dwim.zlib
https://github.com/hu-dwim/hu.dwim.sdl

PS: clang now supports `-ast-dump=json`, which may or may not eliminate the need for c2ffi entirely: https://github.com/rpav/c2ffi/issues/112

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “Sometimes I wonder whether the world is being run by smart people who are putting us on or by imbeciles who really mean it.” — Mark Twain (1835-1910)
Re: Google Season of Docs 2024
> 3. Any for improving the documentation? just a general wish: much less prose and much more structure, please. -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “There is no coming to consciousness without pain. People will do anything, no matter how absurd, in order to avoid facing their own Soul. One does not become enlightened by imagining figures of light, but by making the darkness conscious. The latter procedure, however, is disagreeable and therefore not popular.” — Carl Jung (1875–1961), 'Alchemical Studies' (1967)
Re: Mechanism for helping in multi-channels configuration
> The wishlist is: provide a machine-readable description on guix-science
> channel side in order to help in finding the good overlap between
> commits of different channels.

i wrote about a missing abstraction here: https://lists.gnu.org/archive/html/guix-devel/2023-12/msg00104.html which is more or less related to this.

the git commit log is too fine-grained a granularity here. there should be something like a 'guix log' above the git log that could be used, among other things, to encode inter-channel dependencies.

maybe frequent semver releases for guix channels could work as reference points to be used to formally encode inter-channel dependencies? (and to guide substitute caching/building; mark "safe points" for the time-machine; etc)

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- Life is a tragedy to those who feel and a comedy to those who think.
Re: Mechanism for helping in multi-channels configuration
> Anything is better than an obscure failure/backtrace i disagree with this specific statement. in the long run, the (inconspicuous) cost of added complexity can easily move anything into net negative territory. IOW, feel encouraged to account for the cost of complexity. it's rarely done prior to setbacks. -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “Until we have met the monsters in ourselves, we keep trying to slay them in the outer world. And we find that we cannot. For all darkness in the world stems from darkness in the heart. And it is there that we must do our work.” — Marianne Williamson (1952–), 'Everyday Grace: Having Hope, Finding Forgiveness And Making Miracles' (2004)
Re: Introducing Guix "Features"! (Was: Syntactic Diabetes)
> In the systemd realm, there are different types of services, I think one > is called "one-shot" which is effectively quite similar to the types of > services guix has... they do something once, and there is no running > daemon. So, for better or worse, guix is not so far from one of the most > widespread and commonly used systems here... executed at each boot vs. executed when compiling the system (i.e. at different stages, as in 'staged computation'). it's a bit like using the same word to describe macros and functions in lisp: yes, deep down they are both just functions, but they are different enough to warrant a different name to facilitate a more efficient human comprehension. -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “The nation that will insist on drawing a broad line of demarcation between the fighting man and the thinking man is liable to find its fighting done by fools and its thinking done by cowards.” — Sir William Francis Butler (1838–1910)
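the staging distinction can be made concrete with a toy guile sketch (nothing guix-specific; just the same kind of side effect happening at two different stages):

```scheme
;; runs every time the compiled program runs -- analogous to a
;; shepherd service doing its work at each boot
(define (announce)
  (display "hello at run time\n"))

;; the macro body below runs when this code is expanded/compiled --
;; analogous to a guix service-type that merely produces files and
;; configuration at system build time
(define-syntax announce-at-expansion
  (lambda (stx)
    (syntax-case stx ()
      ((_)
       (begin
         (display "hello at expansion time\n")
         ;; expand into nothing observable at run time
         #'(if #f #f))))))

(announce-at-expansion) ; prints when the form is expanded, not when run
```

both are "just functions" deep down, but conflating the two stages under one name hinders comprehension, which is the point being made above.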
Re: Introducing Guix "Features"! (Was: Syntactic Diabetes)
> Am Donnerstag, dem 01.02.2024 um 20:30 + schrieb Attila Lendvai:
> > for an average unix user a service is a process that is running in
> > the background, doing stuff mostly without any user interaction. you
> > can try to argue this away, but i'm afraid that this is the state of
> > things.
>
> Which is exactly what etc-service-type does. It symlinks stuff to /etc
> without user interaction.

we can spend our lives homing in on a satisfying definition, but let it be enough that what is commonly understood as a service has an active component (see 'run' in my definition); i.e. it has a temporal dimension.

but honestly? it felt silly to even provide a definition in my mail. either we live in different universes, or you're just focused on justifying the status quo. whichever is the case, we have reached a dead end, because essentially, this is about aesthetics.

but anyway, i gave my feedback, and as i don't have the authority to lobby for renaming core guix abstractions, i'm out.

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “Until you figure out that [manipulation] is going on, we're all going to be run like rats through a maze. Culture is an intelligence test, and when you pass that test you don't give a shit about culture.” — Terence McKenna (1946–2000)
Re: Introducing Guix "Features"! (Was: Syntactic Diabetes)
> > for an average unix user a service is a process that is running in the
> > background, doing stuff mostly without any user interaction. you can
> > try to argue this away, but i'm afraid that this is the state of
> > things.
>
> I don’t think it’s a good idea to aim to satisfy some presumed “average
> unix user”, because such a user would not be familiar with many concepts
> introduced by Guix (e.g. “guix shell” or “guix system”).

the primary argument was that two very different abstractions share the same name, and in shared contexts. it's just icing on the cake that one of those abstractions is nothing like what most users understand by the name 'service'.

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “If you love somebody, let them go, for if they return, they were always yours. If they don’t, they never were.” — Khalil Gibran (1883–1931)
Re: Introducing Guix "Features"! (Was: Syntactic Diabetes)
> there for most of the time already. And if you think about it,
> symlinking stuff to /etc is a service.

i arrived at guix after 3+ decades of programming, most of that in opensource environments and unix-like OSes, and more than a decade of using linux as my primary OS and lisp as my go-to language. it could be me, of course, but it took me months of tinkering until i understood the guix service vs. shepherd service nomenclature. and i still need to focus when i'm dealing with foo-service-type and shepherd services at the same time.

this nomenclature was an obstacle to understanding, because the naming suggested something that was misleading me.

for an average unix user a service is a process that is running in the background, doing stuff mostly without any user interaction. you can try to argue this away, but i'm afraid that this is the state of things. and if you care whether your words (code) communicate what you want to be understood by your audience, then you must consider their model of reality.

which reminds me of:

“Programs must be written for people to read, and only incidentally for machines to execute.” — Abelson & Sussman, SICP, preface to the first edition

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “As a rule, whatever we don’t deal with in our lives, we pass on to our children. Our unfinished emotional business becomes theirs. As a therapist said to me, "Children swim in their parents’ unconscious like fish swim in the sea."” — Gábor Máté (1944–), 'In the Realm of Hungry Ghosts' (2008)
Re: [OT: s/Joshua/Josiah/ in sig ; -] Re: [shepherd] several patches that i deem ready
> > “But if you wish to remain slaves of bankers and pay the cost of your own
> > slavery, let them create money.”
> > — Joshua Stamp
> ^^
> Josiah
> https://en.wikipedia.org/wiki/Josiah_Stamp,_1st_Baron_Stamp
>
> Hi attila (and others who, like me, may enjoy the quotations
> at the bottom of your posts :)

your report is much appreciated, and thanks for your kind words, too! it's good to know that someone not only enjoys the quotes, but that they have even initiated further research.

it reminds me of how it all started: years ago i found that on one mailing list i was only reading the end-of-mail quotes of a great hacker (http://fare.tunes.org), from whom i have learned a lot on a wide range of topics. and then it struck me: i should have this too! (be the change you want to see in the world, and whatnot... :)

in that spirit, my scripts and my collection are available below (the collection often has quotes and references in comments, and it's grouped by topics): https://codeberg.org/attila.lendvai/dotfiles

> Should such misspellings be reported somewhere as a bug?

an email like this is perfect. you may consider keeping it off-list though, to respect the topic of the list.

thanks again and happy hacking,

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “Many people believe that evil is the presence of something. I think it’s the absence of something.” — Lisa Unger (1970–), 'Sliver of Truth'
Re: [shepherd] several patches that i deem ready
> i have prepared the rest of my commits that were needed to hunt down the > shepherd hanging bug. you can find them at: > > https://codeberg.org/attila-lendvai-patches/shepherd/commits/branch/attila FWIW, here i have the guix side of the patches (they are not required for the shepherd changes): https://codeberg.org/attila-lendvai-patches/guix/commits/branch/shepherd-guix-side the first commit touches hurd, which i have not tested. -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “But if you wish to remain slaves of bankers and pay the cost of your own slavery, let them create money.” — Joshua Stamp
Re: [shepherd] several patches that i deem ready
> About "cheaper code path when a log level is disabled at runtime",
> perhaps it can be improved in guile-lib, but otherwise that's a nice
> list. I just wish we had a good logging library in Guile and could stop
> reinventing the wheel left and right.

i made my judgement that the logger in guile-lib was never seriously used when i realized that it stores the enabled state in a hashtable (which must be looked up for every log statement). i made sure the log statements have a unique syntax, so that the underlying machinery can easily be replaced later, and then i moved on.

> OK. For levels greater than debug, they I see them as glorified
> comments (executable comments as yo wrote), so I don't see a strong
> reason to attempt to hide them or treat them specially. In Python
> (which strives to be readable), we typically break logging lines (which
> are concatenated for free inside the parens -- default Python behavior),
> and that doesn't hurt readability in my opinion, and means we can just
> follow the usual style rules, keeping things simple.

my experience is different. i found myself only ever looking at log statements when i'm debugging something, regardless of the level, and that includes other people's code. and then i just toggle line wrapping with the press of a button. this must be related to my habit of putting more effort into making the code self-documenting (readable) than into writing informal comments and documentation.

and rethinking my "executable comment" metaphor: these log statements serve much less as comments than as reports of the temporal state and the program flow.

but my primary aim is to color it all gray, and i don't immediately know how to do that in emacs for multiline sexps (i.e. balanced parens). this is the primary reason our team just kept them on one line, but the flexibility of toggling word wrap as needed is also nice: the essential part is always within a reasonable margin, and the rest can be read when word wrap is enabled.

if requested, then i'm willing to re-format the log statements, provided i can find a way to still color them all gray. it's important that logging stays out of sight while reading the code.

> Thanks for working on this, I'm sure it'll help many, myself included,
> following the execution of Shepherd more easily.

my pleasure! in my experience, when a project doesn't have proper logging, backtraces, error handling hygiene, and warning-free compilation, then inefficient debugging quickly eats up more time than it would take to implement these features properly. unfortunately, guix and guile are not very good on this front, so i found myself working on these, too. such an investment rarely pays off for the first bug, but it pays off very well in the long run.

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “A political situation is the manifestation of a parallel psychological problem in millions of individuals. This problem is largely unconscious (which makes it a particularly dangerous one!)” — Carl Jung (1875–1961), Letters, vol.1 pg. 535
Re: [shepherd] several patches that i deem ready
hi Maxim,

> > > - a lightweight logging infrastructure together with plenty of log
> > > lines throughout the codebase, and some hints in the README on how
> > > to turn log lines gray in emacs (i.e. easily ignorable).
>
> Are you using guile-lib's logging library for it? I've used it in
> guile-hall if you want to have an example. We should maximize its
> users, refine it and aim to have it builtin Guile at some point.

i looked at that lib first (IIRC by your recommendation), but i ended up rolling my own for the cost of two additional pages of code in shepherd. i think the main issue i had was the amount of unconditional computation that happens on the common code path, and its complexity in general, including its API. shepherd has some non-trivial machinery regarding logging output being captured and redirected through sockets to herd and whatnot; i.e. most of the handler machinery in guile-lib's logger would be just an impedance mismatch instead of being helpful.

for those two pages it's:

- one less external dependency
- less resource use
- more flexibility
- a cheaper code path when a log level is disabled at runtime
- a compile-time log level to drop entire log levels
- and, most importantly, much less code complexity.

you can find the relevant commit at: https://codeberg.org/attila-lendvai-patches/shepherd/commits/branch/attila

FWIW, it's a partial port of a CL lib of mine: https://github.com/hu-dwim/hu.dwim.logger

> > a quick note on the log statements: they are essentially noise when it
> > comes to reading the code, hence the gray coloring i suggest in
> > emacs. (although they may often serve also as "executable" comments).
> >
> > i'd also like to propose to relax the 80 column limit for log lines
> > for the same reason that i've explained above.
>
> I don't think an exception is deserved here; the logging library from
> guile-lib for example concatenates message strings by default, which
> makes it easy to brake long messages on multiple lines.

my ultimate goal here is to minimize the disruption to code readability. only some emacs (editor) magic and/or code formatting can help with that. log lines are only relevant when you're debugging something, or when you're trying to understand a codebase. at all other times they are essentially noise.

my proposal is what our CL team settled on: always one line per log statement, and setting the foreground color of the entire line to gray in emacs.

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “The pursuit of commerce reconciles nations, calms wars, strengthens peace, and commutes the private good of individuals into the common benefit of all.” — Hugh of Saint Victor (1096–1141)
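to illustrate the "cheaper code path" and "compile-time log level" points above, here is a minimal sketch (not shepherd's actual code) of a log macro where levels below a compile-time threshold expand to nothing at all, and the remaining ones cost a single comparison when disabled at runtime:

```scheme
(eval-when (expand load eval)
  ;; log statements below this threshold are removed at compile time
  (define %compile-time-log-level 1))   ; 0 = debug, 1 = info, 2 = error

;; the runtime threshold; cheap to read, no hashtable lookup involved
(define current-log-level (make-parameter 1))

(define-syntax log/level
  (lambda (stx)
    (syntax-case stx ()
      ((_ level fmt args ...)
       (if (< (syntax->datum #'level) %compile-time-log-level)
           #'(if #f #f)                 ; expands to nothing observable
           #'(when (>= level (current-log-level))
               (format (current-error-port) fmt args ...)))))))

;; usage:
(log/level 0 "entering ~a~%" 'start-service) ; compiled away entirely
(log/level 1 "service ~a started~%" 'sshd)   ; one comparison when disabled
```

the point of the sketch is the shape of the cost model, not the API; the arguments of a dropped or disabled log statement are never evaluated.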
Re: [shepherd] several patches that i deem ready
> - a lightweight logging infrastructure together with plenty of log > lines throughout the codebase, and some hints in the README on how > to turn log lines gray in emacs (i.e. easily ignorable). a quick note on the log statements: they are essentially noise when it comes to reading the code, hence the gray coloring i suggest in emacs. (although they may often serve also as "executable" comments). i'd also like to propose to relax the 80 column limit for log lines for the same reason that i've explained above. -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “Happiness, whether consisting in pleasure or virtue, or both, is more often found with those who are highly cultivated in their minds and in their character, and have only a moderate share of external goods.” — Aristotle (BC 384–322), 'Book VII, 1323.b1'
[shepherd] several patches that i deem ready
dear Guix, Ludo,

i have prepared the rest of my commits that were needed to hunt down the shepherd hanging bug. you can find them at: https://codeberg.org/attila-lendvai-patches/shepherd/commits/branch/attila

there are some dependencies among the commits, so sending them to debbugs would mean either one big series of commits, or a hopeless labyrinth of patches otherwise. therefore i recommend the following workflow instead (assuming that Ludo is pretty much the only one hacking on shepherd): Ludo, please take a look at my branch, and cherry-pick whatever you are happy with. then, based on your feedback and the new main branch, i'll rebase and refine my commits and give you a heads-up when it's ready for another merge/review.

the commits are more or less ordered in the least controversial order, modulo dependencies. the main additions are:

- a multi-layered error handler that is employed at various points in the codebase. this makes shepherd much more resilient, even in case of nested errors, and much more communicative in the log when errors do end up happening.

- a lightweight logging infrastructure together with plenty of log lines throughout the codebase, and some hints in the README on how to turn log lines gray in emacs (i.e. easily ignorable).

looking forward to your feedback,

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “What you do speaks so loud I cannot hear what you say.” — Ralph Waldo Emerson (1803–1882)
Re: Guix deploy --keep-going equivalent for machine connection failures
> Another option worth considered is adding a `'can-connection-fail?' (default: > #f)` to either `machine` or `machine-ssh-configuration`. i'd call it `ignore-connection-failure?`, or if we want to ignore all problems for a machine, then `ignore-failure?`. --keep-going could set the default value for the session, and the machine specific variable would override it. as for the implementation, i'd use continuable exceptions of a specific type, and then from scheme code users could install handlers that ignore the situation. avoiding shell scripts these days is a good idea after all. -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “Democracy and liberty are not the same. Democracy is little more than mob rule, while liberty refers to the sovereignty of the individual.” — Walter E. Williams (1936–)
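a minimal sketch of the continuable-exception approach in plain guile; `&connection-failure` and `deploy` below are made-up stand-ins for whatever guix deploy would actually use:

```scheme
(use-modules (ice-9 exceptions))

;; hypothetical condition type for an unreachable machine
(define-exception-type &connection-failure &error
  make-connection-failure
  connection-failure?
  (machine connection-failure-machine))

(define (deploy machine)
  ;; pretend 'beta is unreachable; signal it continuably so a handler
  ;; may decide to carry on with the remaining machines
  (if (eq? machine 'beta)
      (raise-continuable (make-connection-failure machine))
      (format #t "deployed ~a~%" machine)))

(with-exception-handler
    (lambda (c)
      (if (connection-failure? c)
          ;; returning from the handler resumes after raise-continuable,
          ;; i.e. the failed machine is skipped and the loop continues
          (format #t "skipping unreachable ~a~%"
                  (connection-failure-machine c))
          (raise-exception c)))
  (lambda ()
    (for-each deploy '(alpha beta gamma))))
```

with this shape, a `--keep-going`/`ignore-connection-failure?` flag boils down to whether such a handler is installed around the deployment loop, and scheme users can install their own.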
Re: Module unavailable on build side
this may help: https://github.com/attila-lendvai/guix-crypto/blob/main/src/guix-crypto/service-utils.scm#L56 and you should grep for its use in that repo.

with-imported-modules ensures that the gexps instantiated in its dynamic extent will "capture" the listed modules as their dependencies, and i think it also inserts the appropriate use-modules forms at the head of the gexps.

but take all this with a grain of salt, because what i understand i decoded on my own from the guix codebase, and the names of these abstractions are rather misleading. also, i'm using these in service code that gets compiled for shepherd to load. the environment surrounding the building of packages may behave differently.

HTH,

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “There is no difference between living and learning […] it is impossible and misleading and harmful to think of them as being separate.” — John Holt (1923–1985)
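a minimal example of the with-imported-modules pattern, to make the above concrete (the module list and the path are just placeholders):

```scheme
(use-modules (guix gexp))

;; the modules listed here are copied to the build side and made
;; available to the gexp in the dynamic extent below
(define script
  (with-imported-modules '((guix build utils))
    #~(begin
        ;; mkdir-p comes from (guix build utils), which is available
        ;; on the build side thanks to with-imported-modules above
        (use-modules (guix build utils))
        (mkdir-p "/var/lib/example"))))
```

without the with-imported-modules wrapper, the build-side use-modules form would fail, because the module would not be present in the build environment.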
Re: Guix wiki
> 1. People find the [data] service provides value (can someone restate what
> that value is exactly? Is it needed e.g. to power

if you allow hijacking the above into the wiki discussion: this is a good example where a wiki page (central, easily editable, capturing the current state) would tremendously help this discussion. who, where, why, what, etc...

such a wiki doesn't need to be completely open for self-registration, which is the source of most issues. kinda like the commit bit, but with more relaxed requirements. maybe invite-only, or a somewhat hidden static secret could be used for gatekeeping the registration.

it's an illusion that everything is captured by the mailing list archive when finding stuff is inefficient in a discussion log. also, a wiki displays the current state, not the entire bumpy road of getting there.

i know about https://libreplanet.org/wiki/Group:Guix but the search engines don't really. and i don't know how others feel about it, but i subconsciously don't take it seriously, because the search box is not focused on guix, and in this form it feels like just an afterthought, not a guix wiki.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“If you are ever tempted to look for outside approval, realize that you have compromised your integrity. If you need a witness, be your own.” — Epictetus (c. 55–135 AD)
Re: Are declarative app configs worth it?
> > - adding it to guix increases maintenance burden: new versions could
> > add or remove config options
>
> This is why there should be automated tests. There are too few of them.

early detection of the breakage is just one part of the story. then it also needs to be fixed -- before dropping the hammer and abandoning the worksite. writing and maintaining the tests has a cost, too.

> > - it requires documentation/translation, another hidden cost
>
> We should only accept configuration procedures that have proper
> documentation, yes.

in this context i recommend: What is Seen and What is Not Seen by Bastiat

https://oll.libertyfund.org/page/wswns

or specifically: "In the sphere of economics an action, a habit, an institution or a law engenders not just one effect but a series of effects. Of these effects only the first is immediate; it is revealed simultaneously with its cause, it is seen. The others merely occur successively, they are not seen; we are lucky if we foresee them."

if you demand that e.g. all services accepted into guix have a configuration entry for every possible config field, and that the documentation of these fields is duplicated into the guix codebase... then whatever is included into guix will have 100% coverage. this is what is seen. but what about the lost potential? because i can guarantee you that while you'll get 100% coverage, you'll also get only a fraction of the total number of services and fields. which one will yield a better guix experience?

what i'm doing with my own services, and what i also recommend, is to always have an 'extra-arguments or 'extra-fields that allows defining any config value, and serializes it as-is. that way the user can rely on the documentation of the daemon, and blindly apply it while writing the guix config. and only reify those couple of config fields into scheme code that can provide something useful beyond merely serializing the value as-is.
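as a minimal sketch of such a catch-all field (the record, field, and procedure names here are made up, not from an actual guix service):

```scheme
(use-modules (srfi srfi-9))

(define-record-type <foo-configuration>
  (make-foo-configuration port extra-lines)
  foo-configuration?
  (port        foo-port)          ; reified: validated and documented in guix
  (extra-lines foo-extra-lines))  ; free-form: serialized as-is

(define (serialize-foo-configuration config)
  (string-append
   (format #f "port = ~a\n" (foo-port config))
   ;; everything else is passed through verbatim; for these the user
   ;; consults the daemon's own documentation
   (string-join (foo-extra-lines config) "\n" 'suffix)))

(display (serialize-foo-configuration
          (make-foo-configuration 8080 '("log-level = debug"))))
```

the point being that new upstream config options work immediately via extra-lines, without touching the guix codebase at all.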
this way:

- the guix codebase remains smaller (OAOO principle)
- updating the app's package in guix is simpler
- guaranteed not to get out of sync with the app
- smaller threshold for new contributions
- which translates to more supported services

i find the free-form module type, as suggested by John Soo above, to be a good idea. so much so that i may even look into writing a prototype and try to use it to replace my two inline shepherd-service instances.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“War is a ritual, a deadly ritual, not the result of aggressive self-assertion, but of self-transcending identification. Without loyalty to tribe, church, flag or ideal, there would be no wars.” — Arthur Koestler (1905–1983)
Re: A basic Shepherd bug?
> I hope to test your hypothesis. Will the trick to enable logging [1]
> also pull in the bug fix?

yes, you should follow the instructions in [1], namely:

https://lists.gnu.org/archive/html/guix-devel/2023-12/msg00018.html

together with "Installing development snapshots with Guix" in shepherd's README to add shepherd's channel.

it will not enable logging, though; rather, it will configure your operating system to compile and use the latest commit from the shepherd git repo, which now (probably) contains the fix for your problem.

with the logging you're probably referring to my pending commits in:

https://codeberg.org/attila-lendvai-patches/shepherd/commits/branch/attila

but they are still being polished. i haven't even proposed them yet for inclusion, but i'm already running my servers with that branch.

> Alternatively, may I fix the fcgiwrap-service-type to work with the
> Shepherd version currently standard in Guix?

if you can't make progress with the above, then send me your config, and i'll run it with my shepherd branch, and hopefully the extra logging can help easily fix any issues.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“If a nation values anything more than freedom, it will lose its freedom; and the irony of it is that if it is comfort or money that it values more, it will lose that, too.” — Somerset Maugham (1874–1965)
Re: A basic Shepherd bug?
> I added the service below to my operating-system, and 'herd status'
> prints nothing but hangs. Is that a bug?
>
> Same on reboot. Thanks!

this has probably been fixed in:

https://git.savannah.gnu.org/cgit/shepherd.git/commit/?id=9be0b7e6fbe3c2e743b5626f4dff7c7bf9becc16

but it hasn't reached guix yet.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
An economist is a person who spends half his life telling us what will happen and the other half explaining why it didn't.
Re: A different way to build GCC to overcome issues, especially with C++ for embedded systems
TL;DR: it's probably due to some cmake mess, and i gave up on compiling this; if i really want to, i'll do it in a debian chroot.

> > i tried with your gcc … but it errors out with:
> >
> > $ make
> > [ 1%] Linking ASM executable bs2_default.elf
> > arm-none-eabi-gcc: error: nosys.specs: No such file or directory
>
> That file is located here:
> /gnu/store/…-newlib-arm-none-eabi-4.3.0/arm-none-eabi/lib/nosys.specs

that didn't help:

~/workspace/guix/guix/pre-inst-env guix shell gcc-toolchain cmake make pkg-config -e '(@ (gnu packages embedded2) gcc12-cross-newlib-arm-none-eabi-toolchain)'
cd ~/workspace/bios/pico-serprog
cmake .
COMPILER_PATH=/gnu/store/i9kpjzbagdlpm8bs10gxmm21b271s056-newlib-3.0.0-0.3ccfb40/arm-none-eabi/lib/ LDFLAGS=-B/gnu/store/i9kpjzbagdlpm8bs10gxmm21b271s056-newlib-3.0.0-0.3ccfb40/arm-none-eabi/lib/ make

it leads to the same problem. but then i tried to compile the pico-sdk subproject as a standalone project, and then it succeeds (but fails with a different error later). therefore i think it's due to some cmake mess that i don't want to get deeper into. so, the rest is just FYI:

git clone https://github.com/raspberrypi/pico-sdk
~/workspace/guix/guix/pre-inst-env guix shell gcc-toolchain cmake make pkg-config -e '(@ (gnu packages embedded2) gcc12-cross-newlib-arm-none-eabi-toolchain)'
cmake .
make

$ make
[  0%] Building ASM object src/rp2_common/boot_stage2/CMakeFiles/bs2_default.dir/compile_time_choice.S.obj
[  0%] Linking ASM executable bs2_default.elf
[  0%] Built target bs2_default
[  0%] Creating directories for 'ELF2UF2Build'
[  0%] No download step for 'ELF2UF2Build'
[  0%] No update step for 'ELF2UF2Build'
[  0%] No patch step for 'ELF2UF2Build'
[  0%] Performing configure step for 'ELF2UF2Build'
-- The C compiler identification is unknown
-- The CXX compiler identification is unknown
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - failed
-- Check for working C compiler: /gnu/store/3yqw7js9slid5q3d1f5bpbk92vann109-profile/bin/gcc
-- Check for working C compiler: /gnu/store/3yqw7js9slid5q3d1f5bpbk92vann109-profile/bin/gcc - broken
CMake Error at /gnu/store/4bn07jsqk6lxp0qdxv7kkc3krz3afnna-cmake-3.25.1/share/cmake-3.25/Modules/CMakeTestCCompiler.cmake:70 (message):
  The C compiler "/gnu/store/3yqw7js9slid5q3d1f5bpbk92vann109-profile/bin/gcc" is not able to compile a simple test program.
It fails with the following output:

Change Dir: /home/alendvai/workspace/bios/pico-sdk/elf2uf2/CMakeFiles/CMakeScratch/TryCompile-fg3QLY
Run Build Command(s): /gnu/store/3yqw7js9slid5q3d1f5bpbk92vann109-profile/bin/make -f Makefile cmTC_12bac/fast &&
make[3]: Entering directory '/home/alendvai/workspace/bios/pico-sdk/elf2uf2/CMakeFiles/CMakeScratch/TryCompile-fg3QLY'
/gnu/store/3yqw7js9slid5q3d1f5bpbk92vann109-profile/bin/make -f CMakeFiles/cmTC_12bac.dir/build.make CMakeFiles/cmTC_12bac.dir/build
make[4]: Entering directory '/home/alendvai/workspace/bios/pico-sdk/elf2uf2/CMakeFiles/CMakeScratch/TryCompile-fg3QLY'
Building C object CMakeFiles/cmTC_12bac.dir/testCCompiler.c.o
/gnu/store/3yqw7js9slid5q3d1f5bpbk92vann109-profile/bin/gcc -o CMakeFiles/cmTC_12bac.dir/testCCompiler.c.o -c /home/alendvai/workspace/bios/pico-sdk/elf2uf2/CMakeFiles/CMakeScratch/TryCompile-fg3QLY/testCCompiler.c
as: unrecognized option '--64'
make[4]: *** [CMakeFiles/cmTC_12bac.dir/build.make:78: CMakeFiles/cmTC_12bac.dir/testCCompiler.c.o] Error 1
make[4]: Leaving directory '/home/alendvai/workspace/bios/pico-sdk/elf2uf2/CMakeFiles/CMakeScratch/TryCompile-fg3QLY'
make[3]: *** [Makefile:127: cmTC_12bac/fast] Error 2
make[3]: Leaving directory '/home/alendvai/workspace/bios/pico-sdk/elf2uf2/CMakeFiles/CMakeScratch/TryCompile-fg3QLY'

the root cause of the messy error message above is the following:

as: unrecognized option '--64'

this happens with the host gcc (or, due to some misconfiguration, the host gcc is used wrongly).

$ file src/rp2_common/boot_stage2/bs2_default.elf
src/rp2_common/boot_stage2/bs2_default.elf: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), statically linked, not stripped

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.” — Antoine de Saint-Exupery (1900–1944)
Re: A different way to build GCC to overcome issues, especially with C++ for embedded systems
hi Stefan,

> In the end it worked nicely for my embedded C++ stuff and I also managed
> to compile a custom keyboard firmware based on ZMK using Zephyr,
> although that is just C code.

i've also encountered a problem with the guix cross-compiling tools, as i have described here:

https://lists.gnu.org/archive/html/guix-devel/2023-12/msg00179.html

i tried with your gcc (copied into guix as (gnu packages embedded2)):

~/workspace/guix/guix/pre-inst-env guix shell gcc-toolchain cmake make pkg-config -e '(@ (gnu packages embedded2) gcc12-cross-newlib-arm-none-eabi-toolchain)'
cd ~/workspace/bios/pico-serprog
cmake .
make

but it errors out with:

$ make
[  1%] Linking ASM executable bs2_default.elf
arm-none-eabi-gcc: error: nosys.specs: No such file or directory

IIRC, this happens with the vanilla guix packages when i try to use a gcc package instead of a gcc-toolchain package. any thoughts on this?

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“There can be no causeless love or any sort of causeless emotion. An emotion is a response to a fact of reality, an estimate dictated by your standards.” — Ayn Rand (1905–1982)
arm-none-eabi toolchain and compiling C++ stuff
dear Guix,

i'm trying to compile something for a raspberry rp2040 microcontroller (https://codeberg.org/Riku_V/pico-serprog).

$ guix shell gcc-toolchain cmake make pkg-config -e "((@ (gnu packages embedded) make-arm-none-eabi-toolchain-7-2018-q2-update))"
$ cd _deps
$ git clone https://github.com/raspberrypi/pico-sdk.git pico_sdk-src
$ cd ..
$ cmake .
$ make

so far, so good. it compiles for a while, but then it fails with:

[ 14%] Building CXX object CMakeFiles/pico_serprog.dir/_deps/pico_sdk-src/src/rp2_common/pico_standard_link/new_delete.cpp.obj
In file included from pico-serprog/_deps/pico_sdk-src/src/rp2_common/pico_standard_link/new_delete.cpp:11:0:
/gnu/store/7i9fw82x6hljy6sb4g10v2dl53l7pybl-profile/arm-none-eabi/include/c++/cstdlib:75:25: fatal error: stdlib.h: No such file or directory
 #include_next <stdlib.h>

if i read this right, it tries to cross-compile some C++ stuff, but a header file is missing.

$ guix locate stdlib.h
[...]
gcc-toolchain@13.2.0  /gnu/store/dpfxpfyghkc19wz8jwaw31llhnvn8ngx-gcc-toolchain-13.2.0/include/stdlib.h
gcc-toolchain@11.3.0  /gnu/store/5vn4pkf70ql7v1svrfknfkfsh4m3737h-gcc-toolchain-11.3.0/include/stdlib.h
clang-toolchain@15.0.7  /gnu/store/6m5gi7l7bi93gnzm2j422q9wawq3p6al-clang-toolchain-15.0.7/include/stdlib.h
[...]

i.e. it's usually part of the gcc-toolchain package... but it's not part of the cross-compiling ones? is that a bug in (gnu packages embedded)? shall i look into fixing it? or am i the one who has invalid expectations?

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“If money is your hope for independence you will never have it. The only real security that a man will have in this world is a reserve of knowledge, experience, and ability.” — Henry Ford (1863–1947)
Re: shepherd, fibers, and signals (asyncs)
@emixa-d kindly proposed something that turned out to be a fix:

https://github.com/wingo/fibers/issues/29#issuecomment-1858922276

i've sent it to:

shepherd: sometimes hangs on `guix system reconfigure`
https://issues.guix.gnu.org/67839#6

in essence: shepherd violates the fibers API by calling it from an async signal handler, and this is an issue indeed, but the bugs caused by this should manifest rarely and randomly; i.e. the frozen reconfigure behavior must be caused by something else. the actual root cause here was that the with-process-monitor parameterize was not covering some code that it should have.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“I am not what happened to me, I am what I choose to become.” — Carl Jung (1875–1961)
shepherd, fibers, and signals (asyncs)
dear Guix,

context: Shepherd stops responding during "guix system reconfigure"

https://issues.guix.gnu.org/67538
https://issues.guix.gnu.org/65178
https://issues.guix.gnu.org/67230

i've added a ton of logging and asserts in my fork:

https://codeberg.org/attila-lendvai-patches/shepherd

which resulted in this report:

https://github.com/wingo/fibers/issues/29#issuecomment-1858319291

to which @emixa-d kindly responded:

https://github.com/wingo/fibers/issues/29#issuecomment-1858497720

which essentially identifies the following: posix signal handlers are async, and shepherd uses the fibers API from inside signal handlers, specifically in at least handle-SIGCHLD. this violates the fibers API, and most probably leads to the root cause of the reconfigure hang: a match-error flying out from service-controller due to losing the value of the parameter called (current-process-monitor), which then makes that fiber exit.

i have little experience with posix signal handlers, so i probably won't come up with a fix for this, or at least not without someone's bird's-eye-view guidance. maybe the solution could be something like packaging up posix signals and delivering them to the fibers universe by some form of polling of an atomic variable? or is there some signal-safe semaphore facility in guile that could be used in accordance with the fibers API?

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“Virtue is never left to stand alone. He who has it will have neighbors.” — Confucius (551–479 BC)
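PS: one possible shape of such a fix, as a rough sketch only (handle-child-exits is a made-up placeholder, and i haven't verified the async-safety details of guile ports): the signal handler merely writes a byte to a pipe, and a dedicated fiber blocks on the read end, so every fibers API call happens inside a fiber.

```scheme
(use-modules (fibers) (ice-9 binary-ports))

(define signal-pipe (pipe))

;; make the read end non-blocking so fibers can suspend on it
;; instead of blocking the whole scheduler thread
(let ((flags (fcntl (car signal-pipe) F_GETFL)))
  (fcntl (car signal-pipe) F_SETFL (logior O_NONBLOCK flags)))

(sigaction SIGCHLD
  (lambda (signum)
    ;; no fibers API in here; just record that the signal arrived
    (put-u8 (cdr signal-pipe) 0)))

(spawn-fiber
 (lambda ()
   (let loop ()
     (get-u8 (car signal-pipe))  ; suspends only this fiber
     (handle-child-exits)        ; waitpid etc., now in fibers context
     (loop))))
```

this is essentially the classic "self-pipe trick", just with the consumer being a fiber rather than a select loop.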
Re: shepherd: hardening error handling
while i found this bug: https://issues.guix.gnu.org/67839 i was reading the discussion under its probable root cause: https://github.com/wingo/fibers/issues/29 and it suggests that Guile before 3.0.5 had important bugs WRT fluids, which are relied upon in shepherd. maybe Guile 2.2 can not be used reliably even for current Shepherd? -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “It is only when compassion is present that people allow themselves to see the truth. […] Compassion is a kind of healing agent that helps us tolerate the hurt of seeing the truth.” — A.H. Almaas (1944–), 'Elements of the Real in Man (Diamond Heart, Book 1)'
Re: Should commits rather be buildable or small
> Preparing a large set of updates like this is already a great deal of
> work. It does not seem to me like a good use of volunteers' time to ask
> them to break such an update into hundreds of tiny pieces, especially
> not if the result is hundreds of broken commits to Guix.

fair enough. in that paragraph i did not consider the costs, only the benefits of the two approaches.

i myself also had headaches multiple times when i fixed something that needed to touch several different packages, and the changes would only work when applied in one transaction: how many debbugs issues? multiple issues, and record the dependencies? little gain for much more effort on both sides... but if one issue, then what should be the name of the debbugs issue? etc...

the contribution process has quite some accidental complexity, and it most probably turns away valuable potential contributors... which is something that is both hard to notice, and has a strong impact. but this has already been discussed in a long thread recently.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“As long as habit and routine dictate the pattern of living, new dimensions of the soul will not emerge.” — Henry Van Dyke (1852–1933)
Re: Should commits rather be buildable or small
> > FWIW, this commit policy has always bothered me as a newcomer to
> > Guix. pretty much everywhere else it's a major offence against your
> > colleagues to commit something that breaks the build in any way.
>
> In the last few months I’ve repeatedly seen assertions in a similar
> style as this one. They always genuinely surprise me, and it’s probably
> not just because I’m oblivious and out of touch.

well, both points of view are reasonable. they just make different tradeoffs.

i think an abstraction is missing here; let's call it the guix log for this mail. it's something like the git log, but one that lists the buildable and substitutable states of the guix repo.

it's probably the same thing that causes the discrepancy between git commits and substitutes: the build servers are not building every commit of the git repo. they pick an unpredictable (?) series of commits, skipping some in between. if i guix pull, or guix time-machine to the "wrong" commit, then i'll need to build some stuff locally. sometimes these can be heavy packages.

this hypothetical 'guix log' is probably also what's missing between a hypothetical staging branch and master, whose role would be to make sure that commits don't reach the users prior to having substitutes for them.

does this make sense?

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“Sometimes I wonder whether the world is being run by smart people who are putting us on or by imbeciles who really mean it.” — Mark Twain (1835–1910)
Re: Should commits rather be buildable or small
> > Define "buildable" and "unbuildable".
>
> I used these definitions: a buildable commit does not have build
> failures (or at least no new ones). An unbuildable commit introduces
> new build failures (in this case a lot of them).
>
> Buildable commits are safe spots to land on with time-machine in the
> sense that the packages defined in them can be used. I expect it would
> be very painful to try jumping to past commits with time-machine if a
> large portion of the commits in Guix were unbuildable.

[...]

> I guess "required" here means that in some cases Guix's policy is to
> prefer small commits over buildable commits (with the previous
> definition). I at least don't see any technical reasons why it would be
> required. The question then becomes whether that policy applies in this
> case.

FWIW, this commit policy has always bothered me as a newcomer to Guix. pretty much everywhere else it's a major offence against your colleagues to commit something that breaks the build in any way.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“I will probably be asked why I don't cite the author's name? Because my philosophy teacher taught me that it sometimes jeopardizes the effects of the quote.” — Author's name withheld.
Re: Heisenbug
> What's a good way to debug this, please?

in Geiser i usually get the proper error message:

M-x geiser
,m (gnu tests reconfigure)
,reload

> Where is my error?

good question! silently swallowing errors and warnings should be something that is frowned upon, and only ever employed when deemed really necessary, and thought about... twice.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“One cannot redistribute wealth without first becoming master of all wealth; redistribution is first and foremost monopoly.” — Anselme Bellegarrigue (ca. 1820–1890)
shepherd: hardening error handling
dear Guix,

i'm working on hardening shepherd's error handling and logging to debug an issue that i'm facing. these changes escalated quickly, so i'm writing to clarify a few things before i shape the codebase into a direction that the maintainers will not accept.

the codebase seems to use catch/throw, at some places with comments like "for Guile 2.2". what is the minimum guile version that the shepherd codebase wants to support? the README says "GNU Guile 3.0.x or 2.2.x". is this still intended? or can i assume guile 3? i.e. use with-exception-handler, raise-exception, and guard, instead of catch/throw with key and args?

some WIP commits are available at:

https://codeberg.org/attila-lendvai-patches/shepherd/commits/branch/attila

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“It is only when the people become ignorant and corrupt, when they degenerate into a populace, that they are incapable of exercising the sovereignty. Usurpation is then an easy attainment, and an usurper soon found. The people themselves become the willing instruments of their own debasement and ruin.” — James Monroe (1758–1831), 5th president of the USA
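PS: to illustrate what i mean by the two styles (a generic sketch, not code from shepherd; risky-thing is a made-up placeholder):

```scheme
;; Guile 2.2-compatible style: catch/throw with a key and rest args
(catch #t
  (lambda () (risky-thing))
  (lambda (key . args)
    (format (current-error-port) "error: ~a ~s~%" key args)))

;; Guile 3 style: structured exception objects; #:unwind? #t makes the
;; stack unwind before the handler runs, like catch does
(with-exception-handler
    (lambda (exn)
      (format (current-error-port) "error: ~a~%" exn))
  (lambda () (risky-thing))
  #:unwind? #t)
```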
Re: shepherd GEXP module import mystery
> so, the only mystery left is that i still don't know where it is > imported into the unnamed package in which the GEXPs are > compiled/loaded, and whether that is intended. FTR, i've filed it as: https://issues.guix.gnu.org/67649 -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “What is history but the story of how politicians have squandered the blood and treasure of the human race?” — Thomas Sowell (1930–)
Re: Shepherd service logging
> Thanks for offering a logging facility! I run a custom Guix and would
> like to test your changes. Is it enough to switch to your 'wip-logging'
> branch in the package declaration? [1]

Thanks! AFAIU that will lead to quite some local recompiling that is not necessary. you can just set the shepherd package of the shepherd-root-service-type to a custom package. e.g. this will use the latest shepherd from the shepherd channel:

(operating-system
  ...
  (essential-services
   (modify-services (operating-system-default-essential-services
                     this-operating-system)
     (shepherd-root-service-type
      config => (shepherd-configuration
                 (inherit config)
                 (shepherd (@ (shepherd-package) shepherd)))))))

HTH,

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“[A] Computer [programming] language is inherently a pun — [it] needs to be interpreted by both men & machines.” — Henry Baker
Re: Syntactic Diabetes (was Re: A friendlier API for operating-system declarations)
> > the downside of generating countless macros is that they won't show up
> > in backtraces, and they cannot be found when grep'ping the codebase,
> > and as such make the codebase much less approachable.
>
> Reading your words really helped me feel that I'm not alone. You more or
> less summarized my feelings about the Guix codebase, which I have been
> reading now for over a year. Guile's syntax features make the code more
> symbolic and less approachable to newcomers.

just FTR, i don't think that the guix codebase is too bad in this regard. here i just wanted to remind people of the not-so-obvious cost of syntactic abstractions, which should be accounted for when making decisions. introducing macros that generate macros is rarely justified.

in general, it's *very* valuable when stuff can be grep'ped -- and not only for newcomers. after enough time has passed i can feel like a newcomer to my own codebase... :) modulo the protocols that i keep while writing code. e.g. define.*whatever is a grep that i regularly employ. the pattern here is that although there are countless ways to define countless different stuff, there's a convention to stick to the define.*[name] pattern.

intuitive, unique (i.e. grep'pable) names are also key to facilitating code approachability, especially for abstractions that are scattered around in many source files. in some situations i even copy-paste names of abstractions into comments for any future grep'ping to pick them up.

a negative example for this in the guix codebase is the use of 'service' to describe two rather different abstractions: a component of an OS vs. a daemon process run by shepherd. for a while it caused me quite some confusion.
-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “The State is that organization in society which attempts to maintain a monopoly of the use of force and violence in a given territorial area; in particular, it is the only organization in society that obtains its revenue not by voluntary contribution or payment for services rendered but by coercion.” — Murray N. Rothbard (1926–1995), 'Anatomy of the State' (1974)
Re: Syntactic Diabetes (was Re: A friendlier API for operating-system declarations)
> lines of code. I think hyper-focusing on syntax to make services
> "nicer" might be the wrong approach here: You could greatly reduce the
> complexity by making them procedures instead of syntax and still keep
> most of the configuration readable to a great extent.

i agree. in my experience, it's good practice in general to try to make a functional API as convenient as possible, and if that is still too verbose or cumbersome, only then add a thin layer of syntactic abstractions that expand to code that uses this functional API.

such code is much easier to work with, especially when debugging stuff interactively (i.e. it's possible to recompile a function that will then affect every place in the code where the macrology is used).

the downside of generating countless macros is that they won't show up in backtraces, and they cannot be found when grep'ping the codebase, and as such they make the codebase much less approachable. the size of their expansion can also become an issue if they are used often enough.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“They muddy the water, to make it seem deep.” — Friedrich Nietzsche (1844–1900)
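PS: a tiny sketch of what i mean by a thin syntactic layer over a functional core (all names are made up for illustration):

```scheme
;; the functional core does all the work; it can be redefined at the
;; REPL and every use site of the macro immediately picks it up
(define* (make-widget name #:key (size 10))
  (list 'widget name size))

;; the macro only quotes the name and delegates right away
(define-syntax-rule (define-widget name args ...)
  (define name (make-widget 'name args ...)))

(define-widget knob #:size 3)
;; knob is now (widget knob 3), but all the logic lives in
;; make-widget, which shows up in backtraces and greps
```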
Re: Failed to build in QA
> > The other issue with the v4 series is that Patchwork has got confused
> > and only picked out the first of the v4 patches. The threading also
> > looks weird to me in my email client, but I'm not quite sure why. How
> > did you send the v4 patches?
>
> I sent them with git send-mail but I also noticed that the order got
> mixed up in issues.guix.gnu.org, no idea how this happened...

i also see this every once in a while. i guess it's because the SMTP server farm receives multiple emails in close proximity, and they end up reaching debbugs in a different order.

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“In matters of conscience, the law of majority has no place.” — Mahatma Gandhi (1869–1948)
Re: [maintenance] Compressed JSON files and served file extension?
> And if I do not want to use curl but instead another tool as wget? :-)

then maybe complain to the authors that it doesn't comply with the standard? :) here's the bug report BTW:

Wget not honouring Content-Encoding: gzip
https://savannah.gnu.org/bugs/?61649

or use wget2 instead. i guess they didn't fix it in wget because they didn't want to break "backwards compatibility". (remember: if it's not backwards, it's not compatible! :)

happy hacking,

--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“We cannot train our babies not to need us. Whether it's the middle of the day or the middle of the night, their needs are real and valid, including the need for a simple human touch. A 'trained' baby may give up on his needs being met, but the need is still there, just not the trust.” — L. R. Knost
Re: shepherd GEXP module import mystery
hi Felix, Ludo,

> > a start GEXP of my service sees bindings that come from the module
> > called (shepherd support), but i have no idea who and where imports
> > that module.
>
> Without code it's hazardous to speculate, but could the Guix service
> (gnu service mcron) cause that issue when it is being logged?

unfortunately it's a complex web of stuff, but i managed to make a small reproducer that is attached. it can be run with:

$(guix system --no-graphic vm reproducer.scm)

and in the VM (must use fold, because it's a dumb terminal):

cat /var/log/messages | fold -150

to my surprise this one does list (shepherd support) in the module-use list. and i realized why: the logging infrastructure somewhere silently truncates the lines, and in my original case that module was chopped off. not that i understand everything here... e.g. why are there several (guix build utils) modules?

*** reproducer gexp speaking, current module: #, module-uses: ( # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #), ringbuffer: #

(the #'s above are module objects whose printed representation was stripped by the mail archive.)

so, the only mystery left is that i still don't know where it is imported into the unnamed package in which the GEXPs are compiled/loaded, and whether that is intended. maybe it's part of the shepherd API that (shepherd support) is made available for the service GEXPs? looking at the public definitions in (shepherd support), it's not obvious that those are meant to be available for the users of shepherd, though. Ludo?

> In my code tree, which is a month behind, (gnu services mcron) is the
> only Guix service that imports (shepherd support).

it's a good hint, but that could only cause this if all the service GEXPs were loaded into the same module, but that would have already broken things in countless other ways.
--
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“We are products of our past, but we don't have to be prisoners of it.” — Rick Warren (1954–)

;; Run with something like this:
;; $(guix system --no-graphic vm reproducer.scm)
(define-module (reproducer)
  #:use-module (gnu system)
  #:use-module (gnu system shadow)
  #:use-module (gnu system nss)
  #:use-module (gnu system vm)
  #:use-module (gnu tests)
  #:use-module (gnu services)
  #:use-module (gnu services base)
  #:use-module (gnu services dbus)
  #:use-module (gnu services shepherd)
  #:use-module (gnu packages admin)
  #:use-module (gnu packages base)
  #:use-module (gnu packages bash)
  #:use-module (gnu packages certs)
  #:use-module (gnu packages package-management)
  #:use-module (gnu packages linux)
  #:use-module (guix gexp)
  #:use-module (guix git)
  #:use-module (guix git-download)
  #:use-module (guix store)
  #:use-module (guix modules)
  #:use-module (guix packages)
  #:use-module (srfi srfi-1)
  #:use-module (ice-9 match))

(operating-system
  (inherit %simple-os)
  (services
   (cons* (simple-service
           'reproducer shepherd-root-service-type
           (list (shepherd-service
                  (requirement '(file-systems))
                  (provision '(reproducer))
                  (documentation "")
                  (start #~(begin
                             (lambda _
                               (format #t "*** reproducer gexp speaking, \
current module: ~A, \
module-uses: ~A, \
ringbuffer: ~A~%"
                                       (current-module)
                                       (module-uses (current-module))
                                       (and=> (module-variable
                                               (current-module)
                                               'ring-buffer)
                                              variable-ref))
                               0))))))
          %base-services)))
shepherd GEXP module import mystery
dear Guix,

context: i'm adding logging to shepherd, and while i was testing it i encountered a conflict with my service code that shouldn't happen. i'm probably missing something from my mental model around guile modules, and/or how shepherd compiles and loads them.

the symptom is that a start GEXP of my service sees bindings that come from the module called (shepherd support), but i have no idea what imports that module, or where.

i added the following two format forms to the beginning of my start GEXP:

(format #t "*** start gexp speaking, current module is ~A, module-uses: ~A~% ringbuffer: ~A~%"
        (current-module)
        (module-uses (current-module))
        (and=> (module-variable (current-module) 'ring-buffer)
               variable-ref))

(format #t "*** ringbuffer in packages: ~A~%"
        (map (lambda (module-name)
               (and=> (module-variable (resolve-interface module-name) 'ringbuffer)
                      variable-ref))
             '((guile)
               (oop goops)
               (shepherd service)
               (srfi srfi-34)
               (system repl error-handling)
               (guix build utils)
               (guix build syscalls)
               (gnu build file-systems))))

and they print:

*** start gexp speaking, current module is # module-uses: ( # # # # # # # # # # # # # # # # # #
*** ringbuffer in packages: (#f #f #f #f #f #f #f #f)

how else can a definition (i.e. 'ringbuffer) get into this module without it coming from one of the modules in its module-uses list?

i'm pretty sure that it's coming from (shepherd support), because i'm neck deep in debugging the original issue: a macro from (shepherd support) overwrote my function, which then errored at runtime when it was called as a function. i'm also seeing this warning (i.e. my root issue):

WARNING: (#{ g108}#): `log.debug' imported from both (guix-crypto utils) and (shepherd support)

i've checked the module list of the gexp, and how guix compiles the .go files that are then given to shepherd, and i see nothing obvious. i even looked at the source file that gets generated and compiled for the service in question, and it doesn't contain any mention of this module.
there are no direct mentions of (shepherd support) in the source, that's why i thought maybe something re-exports the entire module, so i checked the presence of 'ringbuffer in all the used modules... but it's not in any of them. could it be a bug in how different versions/instances of guile serialize/load a .go file? or could it be due to the module magic in (gnu services shepherd) that compiles the shepherd config into a .go file? i'm out of ideas, any hint is appreciated! -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “People always find partners [with] exactly the same level of unresolved traumas they are at, without any exception. Now the trauma may look differently, and how it manifests may look differently, but the degree of trauma is always equal, I mean with perfect accuracy. […] And that trauma then shows up in the relationship.” — Gábor Máté (1944–), 'On The Spot - Az ellenség gyermekei', 48m50s
Re: Syntactic Diabetes (was Re: A friendlier API for operating-system declarations)
> (service+ OS SERVICE [CONF])
> (service- OS SERVICE)
> (modify-service OS SERVICE UPDATE)

what would be the benefit of generating multiple macros for each service compared to the above functional API (with 3-4 elements altogether)? i could be missing something here, but it seems precious little to me, while it costs some extra complexity.

if i were to add a syntactic abstraction for this, i'd generate a full DSL in the form of a (modify-operating-system OS [fictional DSL to describe desired changes]). but i don't think the extra complexity justifies any macrology here.

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- By the time a man realises that his father was right, he has a son who thinks he’s wrong.
Banana Pi BPI-R4 support?
dear Guix,

i'm hoping to run Guix on this Arm Cortex-A73 board:

https://wiki.banana-pi.org/Banana_Pi_BPI-R4
MediaTek MT7988A (Filogic 880)

i'm clueless about how to add support for a new platform to guix, but i'm experienced in guile, and in hacking near the metal. with that in mind: how hard/hopeless would this task be? both 1) technically (if we ignore any possible use of blobs), and also 2) regarding the FSDG standard?

i don't see any mention of MT7988A here:

https://trustedfirmware-a.readthedocs.io/en/latest/index.html
https://review.trustedfirmware.org/admin/repos/TF-A/trusted-firmware-a,general

i'd appreciate some bird's eye view insights before i get lost in a pointless struggle.

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “Heresy is only another word for freedom of thought.” — Graham Greene (1904–1991)
Re: [maintenance] Compressed JSON files and served file extension?
TL;DR: the filename shouldn't contain the .gz extension, and the HTTP standard is crap ("If no Accept-Encoding field is present in a request, the server MAY assume that the client will accept any content coding."). use curl --compressed

the details: the Content-Encoding response header instructs the client on how to decode the __transfer payload__ that the server is serving. i.e. proper HTTP clients should automatically decode the content as instructed by the Content-Encoding response header, or at the very least warn that they do not understand the response encoding... but that should not happen, because the HTTP request can contain an Accept-Encoding header that tells the server what the client understands, and it defaults to unprocessed raw data ('identity')... except that the standard allows the server to ignore the Accept-Encoding request header.

well, this is the theory, but in practice neither wget nor curl automatically decodes the content. curl at least can be instructed to do so, which arguably should be its default:

curl --compressed https://guix.gnu.org/sources.json | less

--verbose can be used to inspect the request/response headers (printed to stderr):

curl --verbose https://guix.gnu.org/sources.json >/dev/null
curl --verbose --compressed https://guix.gnu.org/sources.json >/dev/null

here's a detailed discussion of this very question:

https://stackoverflow.com/questions/8364640/how-to-properly-handle-a-gzipped-page-when-using-curl

so, in an ideal world wget and curl would transparently decode the content according to the Content-Encoding response header, and nginx would not respond with compressed content when the client is not sending an Accept-Encoding request header.
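to make the mechanics concrete, here's a minimal offline sketch (the /tmp path is made up): a payload served with "Content-Encoding: gzip" is just plain gzip data on the wire, and inflating it is the client's job — which is exactly what --compressed opts into:

```shell
# simulate the payload of a response served with "Content-Encoding: gzip":
# the bytes on the wire are plain gzip-compressed data.
printf '{"sources": []}' | gzip > /tmp/payload.gz

# a client that ignores the Content-Encoding header ends up with raw gzip
# bytes on disk; a conforming client (or `curl --compressed`) inflates them:
gzip -dc /tmp/payload.gz
# prints {"sources": []}
```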
the pragmatic solution is to use curl --compressed in scripts, and/or add it to your ~/.curlrc:

# to automatically decode responses with some of
# the supported Content-Encodings
compressed

HTH,

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “We need people in our lives with whom we can be as open as possible. To have real conversations with people may seem like such a simple, obvious suggestion, but it involves courage and risk.” — Thomas Moore (1940–)
Re: shepherd-action debug help needed
> Exit code 127 means "command not found". [1] More information is
> available in stderr (or sometimes stdout) if you can capture it.

but how come binaries are not found when i have the full path for the commands...? the install binary is only used to set the umask of the result. without it, a simple invocation of tar also fails, even without --gzip.

i dug a bit deeper, and it turned out that the SYSTEM call i thought was coming from guile was rebound by shepherd to point to its own SPAWN-SHELL-COMMAND. through various complex code paths, it ends up calling fork+exec-command. SPAWN-SHELL-COMMAND's role is to make the SYSTEM call non-blocking.

as a quick test, i have added a simple (system "ls -l /bin/sh") call to my action, and that fails, too.

i'll need to add proper logging to shepherd and see what's going wrong. but for that i need https://issues.guix.gnu.org/61750 merged.

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “Justice is not concerned with the results of the various transactions, but only with whether the transactions themselves are fair.” — F.A. Hayek (1899–1992), 'Law, Legislation and Liberty', I.6.j
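for reference, 127 is the shell's own "command not found" status, and it is returned even for an absolute path that doesn't resolve at exec time — which is what makes it so confusing when the same path works fine at an interactive prompt. a quick sketch:

```shell
# 127 comes from the shell itself when the command cannot be found;
# an absolute path that doesn't exist in the child's context (e.g. a
# different mount namespace or chroot) behaves the same way:
sh -c '/no/such/binary' 2>/dev/null
echo "exit code: $?"
# prints: exit code: 127
```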
shepherd-action debug help needed
dear guixers,

i have a custom shepherd action that calls tar with (system ...):

https://github.com/attila-lendvai/guix-crypto/blob/staging/src/guix-crypto/services/swarm.scm#L311

it works fine in a `guix system vm` environment, where i'm usually testing it. but when i pull it to my server, it fails with exit code 127. it has logging:

2023-10-30T14:24:12 Will run backup cmd: "install --mode=600 <(/gnu/store/iiyf5ns10j8sgzfpmcgbfyb0byk5sbjl-tar-1.34/bin/tar --verbose --create --directory /var/lib/swarm/mainnet/bee-0 keys/ statestore/ | /gnu/store/gjsxzcc0gqpz4lpbsrbidlnn5ij1lfm1-gzip-1.12/bin/gzip) /tmp/2023-10-30-serlap-bee-mainnet-0.tgz", PATH is /gnu/store/d4rqw481nwvrzs09nd8ad647nczgm9k1-coreutils-9.1/bin:/gnu/store/gjsxzcc0gqpz4lpbsrbidlnn5ij1lfm1-gzip-1.12/bin, uid is 0
2023-10-30T14:24:12 Cmd returned exit code 127

if i copy-paste the above command into the same root prompt from where i'm invoking `herd backup-identity bee-0 /tmp/`, then it works as expected. even if i prefix it with `PATH="" ...`.

my question is: what could be the difference between running a `guix system vm` from my git checkout, and guix pull'ing the same to a proper guix system? could it be file or other permissions? according to the log, (getuid) is 0 when the action is executed.

alternatively, how could i debug this?

for completeness, this is how i start the system vm test:

$(./pre-inst-env guix system --no-graphic vm --share=$HOME/workspace/guix/var-lib-of-guest-vm=/var/lib ~/workspace/guix/guix-crypto/tests/swarm-tests.scm) -m 2048

and this is the file:

https://github.com/attila-lendvai/guix-crypto/blob/staging/tests/swarm-tests.scm

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “Tyranny is defined as that which is legal for the government but illegal for the citizenry.” — Thomas Jefferson (1743–1826)
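one low-tech way to capture the stderr that an exit code 127 swallows: put the redirection inside the command string handed to (system ...), so the child's error output lands in a file even when the service has no terminal. a sketch (the log path is made up, and `ls` stands in for the failing command):

```shell
# redirect stderr inside the command string itself, so it is captured
# even when the process runs without a terminal;
# /tmp/backup-action.log is a hypothetical path for this sketch:
sh -c 'ls /no/such/path 2>>/tmp/backup-action.log' || true

# the real error message is now readable after the fact:
cat /tmp/backup-action.log
```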
Re: Divvying up service definitions
> I dont think there's any problem wrt categorization. For your Kerberos
> example, either would be fine as they're not mutually exclusive.
> (though I'd lean towards 'authentication' here)

sure, but the crux of the issue here is how to improve code readability; i.e. would it make the humans working on the Guix codebase more effective. if all three options are equally reasonable, then some people will have to check two places before they find it in the third. with such a codebase people will develop a habit of just using multi-file grep to find stuff... which is arguably a better strategy to begin with.

and it's worth pointing out here that following grep'able patterns like `define.*kerberos` greatly facilitates code navigation, and especially so for newcomers. this is one of the features of lisp(ers) that i badly miss in many other languages/cultures. and arguably, this is much more important than the placement of the definitions.

and while i'm ranting, another useful strategy is to give unique names to abstractions, and make sure that wherever these abstractions are employed, that name shows up in the source code; i.e. it can be grep'ped. IOW, names should not be e.g. programmatically assembled in DWIM'ish ways, unless it is a frequent enough pattern that the loss of grep'pability is worth it. or abstractions should not be hidden behind late binding, unless it's worth the additional loss of code readability.

ultimately, definitions shouldn't live in text files, but in a source code database, with proper search and projection tools in the editor (and the DVCS) that understand the graph nature of the source code. that would make this entire discussion moot, but we're not there yet.

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “The meeting of two personalities is like the contact of two chemical substances: if there is any reaction, both are transformed.” — Carl Jung (1875–1961)
Re: more than 1,800 dependent packages: website out of date
> > When will the above website be updated to use the latest manual > > information about branches? > > > What do you mean? > > 1. The manual you are pointing is the released v1.4.0 manual, not the > latest one. This v1.4.0 manual is set in stone and is a snapshot > of Guix at v1.4.0. if Guix is using a rolling release model, then maybe it's not an unreasonable expectation that the online manual also follows the latest in the git repo, no? maybe we should stop prefixing devel/, and start prefixing the releases? -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “'Emergencies' have always been the pretext on which the safeguards of individual liberty have been eroded.” — F. A. Hayek (1899–1992)
Re: Version from file in a package
this may be related (also see the discussion):

(current-filename) is #f when guix pull'ing
https://issues.guix.gnu.org/55464

i haven't yet double-checked Ludo's recommendation to use INCLUDE, which should work, but it didn't for me. my current solution is this:

(define (%read-module-relative-file module filename)
  (with-input-from-file
      (or (search-path %load-path
                       (string-append (dirname (module-filename module))
                                      "/" filename))
          (error "%read-module-relative-file failed for" filename))
    (lambda _
      (values (read)      ; version
              (read)))))  ; hashes

(define-syntax read-hashes-file
  (lambda (syn)
    (syntax-case syn ()
      ((_ filename)
       (with-syntax
           ;; Reads the file at compile time and macroexpands to the first form in it.
           ((form (call-with-values
                      (lambda _
                        (%read-module-relative-file
                         (current-module)
                         (string-append "hashes/" (syntax->datum #'filename)
                                        ".hashes")))
                    (lambda (version hashes)
                      #`(values '#,version '#,hashes)))))
         #'form)))))

HTH,

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “Never argue with someone whose TV is bigger than their bookshelf.” — Emilia Clarke
Re: comparing commit-relation using Scheme+libgit2 vs shellout plumbing Git
is the decision between libgit2 and invoking git really such a big commitment? let's make sure the entire guix codebase uses a single git related API, and then we can easily switch back and forth between the two. on another note, i'm surprised that the reference implementation of git itself doesn't have a lib, and libgit2 even had to be written. even this may change in the future. -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “I learned long ago, never to wrestle with a pig. You get dirty, and besides, the pig likes it.” — George Bernard Shaw (1856–1950)
Re: Pinned/fixed versions should be a requirement.
> For these reasons, I believe that pinned versions should be a
> requirement in libraries, always specifying the exact dependency, for
> example, `rust-serde-json-1.0.98`.

aiming a little higher, we could stop using module-global variables for pointing to packages (aka define-public), and with that eliminate one of the two, orthogonal and inconsistent (https://issues.guix.gnu.org/61292) package repositories, and only use the one-and-only reified package registry. then, with some more work there could be:

- a syntax extension to easily point to a package from scheme code; something like #^rust-serde-json@1.0.98

- a "package GC" algorithm that collects and prints orphaned/unreferenced packages

- "semantic" version pointers for situations where multiple versions of the same package cannot coexist in one profile, and/or dependent packages want to point to e.g. the latest, or the latest in a specific major version, or the "picked as current by the surrounding environment" (thinking here of things like the web of Gnome related packages that must all be compatible with each other, and only one instance is allowed to run at once).

this way we could be "rolling ahead" with the package definitions similarly to how derivations in the store are "rolling ahead" and unused ones get GC'd.

it would increase complexity in the sense that contributors would need to constantly keep an eye on moving forward the dependency pointers (unless "latest" or some other semantic reference is used as version for an input). but in return it would increase resilience: a random update of a package would not break anything else that is downstream on the dependency chain. and i'm not only thinking of build errors here, but also subtle changes at runtime (e.g. how Chromium stopped being able to open the file open/save dialog while xdg-desktop-portal-gtk and xdg-desktop-portal-wlr were installed. their upgrade broke Chromium, and it took me quite some time/annoyance to debug this).
i think this would also enable interesting features like being able to have two versions of Gnome alive and runnable in the same profile. this would also open the path for something else that may or may not be worth it: the files holding the package definitions could be divorced from being full scheme module files, and turned into a text-based package database where algorithms (e.g. package import, GC) could reliably add and delete entries without human intervention (besides recording it as a git commit). it's a whole other story that we shouldn't store source code as a string of characters, but instead admit that it's a graph, and store/edit it as such. that would implicitly resolve most of this problem, too, but i'm afraid we're not ready for that just yet. -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “When you change the way you look at things, the things you look at change.” — Wayne W. Dyer (1940–2015)
Re: How can we decrease the cognitive overhead for contributors?
> also not as obvious (you search for lines added or removed or changed, > not easy language such as 'gnu: package-name: Add'). i'm pretty sure that the source of the annoyance is not the strict format in the summary line, but the formal details in the commit message body. -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “Until you make the unconscious conscious, it will direct your life and you will call it fate.” — Carl Jung (1875–1961)
Re: How can we decrease the cognitive overhead for contributors?
> I've got something like 6 patches waiting, all have been sitting around
> for many months. They'll get some committer attention and then it drops
> off and nothing happens. To me, that sounds like people lose track of
> it, because debbugs doesn't allow people to stay easily on top of
> patches they're interested in. Possibly we need some kind of concept of
> patch stewards that can see something through.

yep, same impression here.

> I have more things I want to do with Guix, but it's tough, because I
> have to maintain each of my patches separately in different branches, so
> that I can rebase them as necessary and resubmit them cleanly if
> necessary, or simply just to work on them when issues come up. But my
> master branch pulls in each of them, so any time I need to pull, I've
> got a list of things (switch to every branch, rebase, fix if necessary,
> switch back to master, reset to origin/master, then merge all the
> branches I'm maintaing). Adding more branches on top of the ones I
> already have is just too much.

here's a script that i use to prepare an 'attila' branch that i can `guix pull` from:

$ cat ../rebase-my-guix-branches.sh
#!/usr/bin/env bash

BRANCHES="kludges ui-warnings trezor4"
# kludges-sent shepherd-from-git
# ddclient idris

set -e

initial_branch=$(git branch --show-current)

git rebase attila-baseline attila-initial-commit
git checkout attila
git reset --hard attila-baseline
git pull . attila-initial-commit

for branch in ${BRANCHES}; do
    echo "*** Processing branch ${branch}"
    #git rebase attila-baseline $branch
    git cherry-pick attila-baseline..$branch
done

#git checkout $initial_branch

git -c pager.log=false log --pretty=oneline attila-initial-commit~1..attila-initial-commit

explanation: i `git tag -f attila-baseline` on some commit; typically i pull master and tag its head. then i have a branch called attila-initial-commit that holds a single commit that adds my key as authorized.
this script cherry-picks all the listed branches, and at the end it prints the commit hash of the initial commit that i can then copy-paste into my channels.scm file. whenever cherry-picking fails, i abort it, manually rebase the branch in question, and then restart the script. HTH, -- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “Freedom is strangely ephemeral. It is something like breathing; one only becomes acutely aware of its importance when one is choking.” — William E. Simon
Re: How can we decrease the cognitive overhead for contributors?
> To second this, I'd like to note for the record that on fedi at least
> 1-2 people told me that they chose Nix over Guix because they don't want
> to deal with the email based workflow. At least one of these people is
> a highly skilled programmer with decades of experience.

FWIW, i'm 46, programming since my childhood. i have 10+ years of Common Lisp experience using Emacs, most of it on opensource projects. here's an approximate list of what's consuming/training my frustration-tolerance with Guix:

- debbugs and related tooling. i could live with an email based workflow, but whatever is documented, or at least whatever i have put together locally, is very inefficient. the coding vs. chore ratio is low.

- large backlog. contributions sometimes even fall through the cracks.

- strict adherence to changelog style commit messages without a clearly worded and documented argument about why it's worth the effort in 2023. whenever 'C' fails to add an entry to the commit message in Emacs, i groan out loud.

i came to Guix from a couple of years of NixOS (also contributing), being frustrated by the way they use Nix, the language, to describe OS services. it felt like an uphill battle for no good reason, one that Guix liberated me from. Guix has much more flexibility and common sense in the coding domain (which compensates for the increased frustration in the social domain).

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “Toxic people will not be changed by the alchemy of your kindness. Yes, be kind, but move on swiftly and let life be their educator.” — Brendon Burchard
Re: package definition question: referring to source files of another package?
> In some scenarios package A may refer to source files in package B.

depending on where and what you need, you can do something like this in a GEXP context:

(define (upstream-file relative-path)
  (let ((git-origin
         (let ((commit "v0.13.2"))
           (origin
             (method git-fetch)
             (uri (git-reference
                   (url ...)
                   (commit commit)))
             (file-name (git-file-name "foo-bar" commit))
             (sha256 (base32 ...))))))
    (file-append git-origin relative-path)))

#~(let ((x #$(upstream-file "/some-path")))
    ...)

this way the versioning of the two packages is not tied together, which may or may not be what you want from a semantics perspective.

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “There are some ideas so wrong that only a very intelligent person could believe in them.” — George Orwell (1903–1950)
Re: How can we decrease the cognitive overhead for contributors?
> That said, I understand that some (many?) users are not comfortable with
> CLI interfaces and prefers a GUI interface (web UI is the same), but
> there is no reason to increase the reviewers cognitive overhead by
> introducing an inefficient web based patch management workflow just to
> address a "simple" and unrelated interface problem.

IMO the crux of the issue here is not so much the UI, but the underlying data model, because it sets the limitations for everything that may be built on top of it. for now the two contenders seem to be, basically, 1) git and 2) mailbox. a PR is just a git branch with extras. it's worth noting that git already provides scriptable CLI tools for dealing with branches and remotes. and 1) is not better because there already exist different webuis for PR management, but because of the future tools and scripts that it makes possible relative to 2).

and i, for one, would be open to hack on something that is written in scheme and uses the former data model, but i'd never touch something like debbugs. i'm not sure that this is a general sentiment around here, but i suspect that i'm not alone with this.

ideally, there should be something like mumi, maybe mumi itself, that is officially sanctioned and is easy to start up locally to hack on. then frustrated contributors could write out their frustration into its source code (compounding benefit for all!) instead of the mailing list (consuming attention without much benefit).

just now i wanted to take a look at mumi's sources, but the link in the manual (https://git.elephly.net/gitweb.cgi?p=software/mumi.git) times out.

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “Until you make the unconscious conscious, it will direct your life and you will call it fate.” — Carl Jung (1875–1961)
Re: How can we decrease the cognitive overhead for contributors?
> > until needed (rarely). the email based model is just a flat list of
> > messages that includes all the past mistakes, and the by now
> > irrelevant versions.
>
> What the? If anything, emails are like a tree and discussions in most
> forges are a single long list that's rarely well organized.

not sure how most people consume their emails nowadays, but for me it's one flat list per thread, in a browser (i.e. it's not a tree). and i haven't used any forges where there's not at least a flat timeline per PR.

i used to use emacs for emails for a while, but i stopped. the learning curve was too steep, and documentation was sparse, for little perceived benefit. at one point i noticed that it was saving unencrypted drafts of encrypted threads into my online mail account, and i decided to abandon it. now i rely on webmail with the Edit in Emacs browser extension. it's good enough for everything i do, except maybe for contributing to Guix.

> Virtually every mail client supports threads, whereas a certain one of the
> more popular forges still refuses to do so. Hiding obsolete versions of a
> pull request is in practice implemented either by pushing more commits
> on top of the existing one, often with dubious commit messages or by
> force-pushing a branch, neither of which is an acceptable solution for
> Guix.

are you by any chance conflating the standards that we set out for the Guix repo with the means of communication while a patchset is being forged into mergeable quality? and even if you are not, we could simply add an entry to the requirement list that the commit history of a PR must be retained even when a PR is force-pushed. it's not that emails are inherently superior.
> Other implicit assumptions include that people will be happy to switch
> for the particular fork you've chosen (they won't) and will not demand
> $new_hot_thing within the next five years (they likely will, just look

my implicit assumption is that a project is ready to switch communication technologies every few years, *if* it's justified. and all i'm trying to achieve here is to move the discussion away from email-is-superior-period, to look at what requirements we have and what tools satisfy them, if any.

> at the ChatGPT-related stuff that has been submitted). There sadly is
> no pleasing everyone here and unless these tools are incredibly simple
> to maintain, the utilitarian approach of least misery leads you to
> plain email.

but is a least-misery model appropriate here? how do you even account for the contributions that could have happened in a different environment, but did not happen?

similarly, e.g. demanding a dated ChangeLog format for commit messages has a filtering effect on who will contribute, and then who will stick around. most engineers have a hard time jumping through hoops that they find pointless (git automatically encodes that information), and i'd suggest considering what the effects of such rules are on Guix as an emergent order. what makes it even more tricky is that it's a self-perpetuating meme, because the more such conservative policies are enforced, the fewer people will stick around who would advocate for change.

yes, the balance between change and conservatism is tricky, but i think we have moved on from CVS long enough ago to e.g. abandon the ChangeLog format... (or alternatively, i'm blind to the use-cases that it is crucial for).
-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “I sincerely believe that banking establishments are more dangerous than standing armies, and that the principle of spending money to be paid by posterity, under the name of funding, is but swindling futurity on a large scale.” — Thomas Jefferson (1743–1826)
Re: Why does Guix duplicate dependency versions from Cargo.toml?
i may not understand this well enough, but with that in mind...

the nix crowd allows something that they call vendoring: they use the native tools of the language ecosystem to fetch the transitive closure of the dependencies, as specified by their own package management descriptions. then they compute a hash on the entire directory, and record it in the leaf package's definition. i think this vendoring dir/archive then even gets cached by their substitute servers (for posterity).

IIUC, this method is rejected by guix on principle.

if someone wants to test their mailing list search-fu, then there was a similar discussion about golang in the past.

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “As children go, so go nations. It's that simple.” — Carol Bellamy
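as a toy illustration of the directory-level pinning described above (not the actual nix or guix tooling; the /tmp directory and file are made up): once the dependency closure sits in a directory, one stable hash over file names plus contents pins the whole thing:

```shell
# build a stand-in "vendored dependencies" directory
mkdir -p /tmp/vendor-demo
printf 'dep contents\n' > /tmp/vendor-demo/dep-1.0.txt

# one deterministic hash over the whole tree: hash every file,
# sort the list for stability, then hash the list itself;
# any change to any file name or content changes the final hash
(cd /tmp/vendor-demo && find . -type f | sort | xargs sha256sum | sha256sum)
```

the same idea is what lets a single recorded hash in a package definition stand in for an arbitrarily large dependency closure.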
Re: How can we decrease the cognitive overhead for contributors?
> > > straight out forgotten/ignored submissions (khm). especially if
> > > it's about some hw or some sw that none of the committers care
> > > about, or could test locally (e.g. Trezor support:
> > > https://issues.guix.gnu.org/65037 that
> > > doesn't even build in master).
>
> Do you mean that Trezor doesn't currently build on master or that this
> series doesn't build relative to master any longer? It'd be quite

in master `guix build python-daemon` leads to an attempted local build that fails. the rest of the stack depends on python-daemon. this has been the case for several months now.

a previous iteration of this fix is back from 2022 dec: https://issues.guix.gnu.org/58437#7

it was pending long enough to be obsoleted by new releases of the projects.

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “People are always shouting they want to create a better future. It's not true. The future is an apathetic void of no interest to anyone. The past is full of life, eager to irritate us, provoke and insult us, tempt us to destroy or repaint it. The only reason people want to be masters of the future is to change the past.” — Milan Kundera (1929–)
Re: How can we decrease the cognitive overhead for contributors?
> For this, I either go to issues.guix.gnu.org to download the newest patches,
> in case the message is not in my inbox.

some patchsets evolve a lot, and there are countless messages where obsolete patch versions are intermingled with non-obsolete discussion...

> Otherwise I do not get your point: I keep untreated messages with the latest
> patch version in my Guix inbox, and file away the others in a separate mbox.
> So things are not flat, but have two levels: "to be treated" or "done".

my point is that in a PR based model/workflow things like this are done by a program. and each brain cycle you spend on maintaining the sanity of your local inbox is not spent on hacking, and the results of your effort are not even mirrored into the inboxes of the other contributors. this seems like a small thing, but multiply it with every message, and every potential contributor and maintainer... and then consider its cumulative effect on the emergent order that we call the Guix community.

meta: the reason i'm contributing to this discussion is not that i'm proposing to move to some specific other platform right now. it's rather to nudge the consensus away from the conclusion that the email based workflow is good and is worth sticking with. once/if we get closer to that consensus, only then should the discussion move on to collect our requirements and evaluate the free sw solutions that are available today. which again could be organized much better in a wiki than in email threads, but that's yet another topic...

-- • attila lendvai • PGP: 963F 5D5F 45C7 DFCD 0A39 -- “The only valid political system is one that can handle an imbecile in power without suffering from it.” — Nassim Taleb (1960–)
Re: How can we decrease the cognitive overhead for contributors?
> I feel like the advantages of a email-based workflow nowadays is more on
> the maintainer side of things (as managing large projects is easier

another thing worth pointing out here is that the harder it is to test a submitted patchset locally, the fewer non-committer reviews will happen. and if all the review work rests on the shoulders of the committers, then there'll be long response times on submissions, or straight out forgotten/ignored submissions (khm). especially if it's about some hw or some sw that none of the committers care about, or could test locally (e.g. Trezor support: https://issues.guix.gnu.org/65037 that doesn't even build in master).

--
• attila lendvai •
PGP: 963F 5D5F 45C7 DFCD 0A39
--
“A programming language is low level when its programs require attention to the irrelevant.” — Alan Perlis
Re: How can we decrease the cognitive overhead for contributors?
> Now you might say that this leads to less diversity in the team of
> committers and maintainers as you need a certain level of privilege to
> seriously entertain the idea of dedicating that much time and effort to
> a project and I agree, but I also think this is a bigger reality of
> volunteer work in general.

the ultimate goal is not just diversity, but high efficiency of the people who cooperate around Guix, which then translates into a better Guix.

if the "rituals" around Guix contribution were merely a steep initial learning curve, then one could argue that they are a kind of filter that helps with the signal-to-noise ratio. but i think they are also a constant hindrance, not just an initial learning curve.

> Just because it's brought up a lot of times doesn't mean it's a good
> idea. There is a lot of good things that can be done for our web-based
> front ends; improving the search results on issues.guix.gnu.org would
> be one of them. However, I have little hopes for a web based means to
> submit contributions. I think email should be a format that's
> understood by folks who know how to operate a web browser.

again, i would press the argument that it's not about being able to submit, but about how much effort/attention is wasted on administration (i.e. not on hacking). i often have the impression that submitting a smaller to mid-size patch takes effort comparable to writing it in the first place.

--
• attila lendvai •
PGP: 963F 5D5F 45C7 DFCD 0A39
--
“Virtually no idea is too ridiculous to be accepted, even by very intelligent and highly educated people, if it provides a way for them to feel special and important.” — Thomas Sowell (1930–)
Re: How can we decrease the cognitive overhead for contributors?
another +1 for the general sentiment of Katherine's message.

> I am all for it if it supplements the email based workflow (every
> time I need to do a modern style pull request type action, I am
> completely out of my depths and lost in the web interfaces...).

in my experience, learning the quirks of the web based PR model, at least as a contributor, is much less effort than the constant friction of an email based workflow, let alone the learning curve of the emacs based tools.

i couldn't even find out which tools are used by those who are comfortable with the email based workflow. i looked around once, even in the manual, but maybe i should look again. i'm pretty sure most maintainers have a setup where the emailed patches can be applied to a new branch with a single press of a button, otherwise it'd be a hell of a time-waster.

one fundamental issue with the email based workflow is that its underlying data model simply does not formally encode enough information to allow implementing a slick workflow and frontend. e.g. in a PR based model the obsolete versions of a PR are hidden until needed (which is rare). the email based model is just a flat list of messages that includes all the past mistakes, and the by now irrelevant versions.

> But someone would have to write and maintain them...

there are some that have already been written. here's an ad-hoc list of references (#github #gitlab #alternative):

https://codeberg.org/
https://notabug.org/
https://sourcehut.org/
https://sr.ht/projects
https://builds.sr.ht/
https://git.lepiller.eu/gitile

codeberg.org runs gitea, and sr.ht is sourcehut.

--
• attila lendvai •
PGP: 963F 5D5F 45C7 DFCD 0A39
--
“The condition upon which God hath given liberty to man is eternal vigilance; which condition if he break, servitude is at once the consequence of his crime and the punishment of his guilt.” — John Philpot Curran (1750–1817)
Re: Updates for Go
> each package for those two to work properly. Also, while having pinned
> versions of dependencies upstream seems like the consensus, I'm not sure
> we'd like doing that, be it for the exponential CI work that would be
> required.

not arguing either way, FWIW:

- rumour has it that golang compiles very fast, and
- IIUC the go build system in guix currently does not reuse build artifacts, i.e. it recompiles everything for each leaf package.

--
• attila lendvai •
PGP: 963F 5D5F 45C7 DFCD 0A39
--
“What you do speaks so loud I cannot hear what you say.” — Ralph Waldo Emerson (1803–1882)
Re: Updates for Go
at one point i tried to compile some large projects written in golang in a reproducible way, while making sure that they use the exact same versions of all their dependencies.

in short: there's a philosophical mismatch between how guix and the golang crowd look at building go apps. guix likes to explicitly represent every dependency in the guix namespace, while golang has its own way of gathering the dependencies (and the nixos crowd don't mind "vendoring" the dependencies).

i've also worked on the golang importer (https://issues.guix.gnu.org/55242, which needs a bit more care). once it works reliably enough, it'd be possible to import the entire transitive closure of the dependencies into an isolated namespace in guix, which would be a halfway solution between the two conflicting philosophies.

for now i've abandoned that endeavour in favor of adding binary packages to my channel, but i made some notes on the way:

https://issues.guix.gnu.org/43872 a go-build-system patch
http://issues.guix.gnu.org/50227 discussion with iskarian

an IRC log excerpt from that discussion:

https://logs.guix.gnu.org/guix/2021-08-31.log#024401
https://logs.guix.gnu.org/guix/2021-08-31.log#180940

  iskarian: If you search for my name and "go" or "golang" in the mailing
  list archives, you should find a lot of discussion
  https://savannah.gnu.org/users/lfam
  Here's a more recent message from me:
  https://lists.gnu.org/archive/html/guix-devel/2019-03/msg00227.html
  Ah, I see I've unknowingly repeated you!
  https://lists.gnu.org/archive/html/guix-patches/2021-08/msg01222.html
  Heh, it's gratifying that someone else came to the same conclusion.
  It means I wasn't totally in the weeds

--
• attila lendvai •
PGP: 963F 5D5F 45C7 DFCD 0A39
--
“The anxiety children feel at constantly being tested, their fear of failure, punishment, and disgrace, severely reduces their ability both to perceive and to remember, and drives them away from the material being studied into strategies for fooling teachers into thinking they know what they really don't know.” — John Holt (1923–1985), 'How Children Learn' (1967)
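[editor's note: a hedged sketch of the "import the transitive closure into an isolated namespace" idea from the message above. the module path and file names are invented; this assumes the recursive mode of the importer works reliably on the given project.]

```shell
# Recursively import a Go module and its transitive dependency closure
# as Guix package definitions, and dump the generated package sexps
# into a file kept inside one's own channel, away from the main
# (gnu packages golang) namespace.
guix import go --recursive github.com/example/tool \
  > my-channel/packages/go-tool.scm
```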
Re: The package/inherit trap (was: gnu: stellarium: Enable ShowMySky.)
> How about 'package/variant' or 'package/variant-of'?

+1 for trying to capture the intention in the name, instead of the means of implementation.

--
• attila lendvai •
PGP: 963F 5D5F 45C7 DFCD 0A39
--
“Although teachers do care and do work very, very hard, the institution is psychopathic-it has no conscience. It rings a bell and the young man in the middle of writing a poem must close his notebook and move to a different cell where he must memorize that humans and monkeys derive from a common ancestor.” — John Taylor Gatto (1935–2018), 'Dumbing Us Down: The Hidden Curriculum of Compulsory Education' (1992)
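[editor's note: a hedged sketch of the semantics being named, for context; the `foo' packages are invented. unlike a plain `(package (inherit foo) ...)', `package/inherit' also applies the same overrides to foo's graft `replacement', if any, which is why a name such as `package/variant' arguably captures the intent better.]

```scheme
;; `foo' is a hypothetical package.  If foo carries a security
;; `replacement' for grafting, package/inherit gives foo-with-bells an
;; analogously overridden replacement too; plain (package (inherit
;; foo) ...) would silently drop it -- that is "the trap".
(define-public foo-with-bells
  (package/inherit foo
    (name "foo-with-bells")
    (arguments '(#:configure-flags '("--enable-bells")))))
```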
Re: Adding %conf-dir breaks module
> Any thoughts on why that would happen?

not sure whether this is related, but: https://issues.guix.gnu.org/55464

i.e. current-filename does some syntax-time magic. one effect of that is that it behaves differently when the code goes through guix pull (as opposed to running it in Geiser).

--
• attila lendvai •
PGP: 963F 5D5F 45C7 DFCD 0A39
--
You can do everything with bayonets except sit on them.
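[editor's note: a minimal sketch of the syntax-time behaviour mentioned above; the module and variable names are invented, only `current-filename' and `and=>' are real Guile API. `current-filename' is a macro, so the file name is captured when the form is expanded, not when the code later runs.]

```scheme
;; Hypothetical module illustrating the pitfall.
(define-module (my-services conf))

(define %conf-dir
  ;; (current-filename) expands at macro-expansion time, so the value
  ;; is baked in when the file is compiled: under `guix pull' that is
  ;; a /gnu/store/... path (or #f), while Geiser loading the same file
  ;; from a checkout sees the checkout path -- hence the differing
  ;; behaviour.
  (and=> (current-filename) dirname))
```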
Re: A Forum for Guix Users
> Like I've mentioned on fedi before, advocates of Lispy languages tend to
> talk a lot about what's possible with the language, but the truth is
> that the actual tooling that matters simply isn't very good, and having
> an S-expression based syntax doesn't magically make writing the kinds of
> refactoring tools that Java developers have been enjoying for 10+ years
> significantly easier.
>
> For that we need good static analysis, and unbounded dynamism and too
> much syntax magic makes that more difficult.
>
> At the very least I want to be able to rename variables across the whole
> project and jump to definitions reliably.

i came to Common Lisp from that world, and i don't miss those tools one bit. those refactoring tools in the java world feel so useful exactly because of the linguistic inability to formally express abstractions in the language. when lisp is used properly (which includes discipline while naming abstractions!), one doesn't miss those tools.

a related quote that captures this sentiment:

“[Design] Patterns mean "I have run out of language."” — Rich Hickey

but i agree that there's plenty of room for improvement in the lisp tooling, even for just Guile + Geiser to catch up with CL + Slime. and i also agree that the learning curve is way too steep with Emacs + lisp tools. ultimately, i think it's worth it, but it does require quite some determination and frustration tolerance.

> ps.: As far as I can tell, the Lisps with good IDEs are image based, not
> source based, that's why they have an easier time doing metaprogramming,
> because the runtime helps a lot. But an image based system is not
> exactly in line with Guix's goal of reproducibility.

all lisps are image based in the sense that they are a VM once the source has been loaded... no? but, unfortunately, all (non-obsolete) lisps use flat text files to represent the source code.

java tools turn that flat text source code into a graph, work on the graph, and do the text-to-graph-to-text conversion transparently for the user. but this conversion is only possible in languages that allow a relatively low degree of freedom... which translates to less freedom to express abstractions... which in turn translates to a greater need for refactoring tools.

again, i mostly agree with you. what i wanted to express is that there's much more to this topic.

--
• attila lendvai •
PGP: 963F 5D5F 45C7 DFCD 0A39
--
“It's surprising how many persons go through life without ever recognizing that their feelings toward other people are largely determined by their feelings toward themselves, and if you're not comfortable within yourself, you can't be comfortable with others.” — Sydney J. Harris