Re: [DNG] Be prepared for the fall of systemd

2022-08-04 Thread Laurent Bercot



What do we as a community need to do
to get S6 into a "corporate friendly" state?

What can I do to help?


 "Corporate-friendly" is not really the problem here. The problem is
more "distro-friendly".

 Distributions like integrated systems. Integrated systems make their
lives easier, because they reduce the work of gluing software pieces
together (which is what distros do). Additionally, they like stuff like
systemd or openrc because they come with predefined boot scripts that,
more or less, work out of the box.

 There are two missing pieces in the s6 ecosystem before it can be
embraced by distributions:

 1. A service manager. That's what's also missing from runit. Process
supervisors are good, but they're not service managers. You can read
why here[1].
 In case you missed it, here is the call for sponsors I wrote last year,
explaining the need for a service manager for s6: [2]. It has been
answered, and I'm now working on it. It's going very slowly, because I
have a lot of (easier, more immediately solvable) stuff to do on the
side, and the s6-rc v1 project is an order of magnitude more complex
than what I've ever attempted before, so it's a bit scary and needs me
to learn new work habits. But I'm on it.

 2. A high-level, user-friendly interface, which I call "s6-frontend".
Distros, and most users, like the file-based configuration of systemd,
and like the one-stop-shop aspect of systemctl. s6 is lacking this,
because it's made of several pieces (s6, s6-linux-init, s6-rc, ...) and
is more automation-friendly than human-friendly (directory-based config
instead of file-based). I plan to write this as well, but it can only
be done once s6-rc v1 is released.

 Once these pieces are done, integration into distributions will be
*much* easier, and when a couple distros have adopted it, the rest
will, slowly but surely, follow suit. Getting in is the hard part, and
I believe in getting in by actually addressing needs and doing good
technical work more than by complaining about other systems - yes,
current systems are terrible, but they have the merit of existing, so
if I think I can do better, I'd rather stfu and do better.



Here are some ideas:
- easier access to the VCS (git, pijul, etc)


 The git repositories are public: [3]
 They even have mirrors on github.
 All the URLs are linked in the documentation. I don't see how much
easier I can make it.

 Note that the fact that it's not as easy to submit MRs or patches as
it is with tools like gitlab or github is intentional. I don't want to
be dealing with an influx of context-free MRs. Instead, if people want
to change something, I'd like *design discussions* to happen on the ML,
between human beings, and when we've reached an agreement, I can either
implement the change or accept a patch that I then trust will be
correctly written. It may sound dictatorial, but I've learned that
authoritarian maintainership is essential to preserving both a
project's vision and the readability of its code.



- Issue tracking system


 The supervision ML has been working well so far. When bigger parts
of the project (s6-rc v1 and s6-frontend) are done, there may be a
higher volume of issues, if only because of a higher volume of users, so
a real BTS may become an asset more than a hindrance at some point.
We'll cross that bridge when we get to it.



- CI/CD build chain (being careful not to make it too painful to use)


 Would that really be useful? The current development model is sound,
I think: the latest numbered release is stable, the latest git head
is development. The s6 ecosystem can be built with a basic
configure/make/make install invocation; is that really an obstacle to
adoption?
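 For reference, the whole build is meant to look like this (a sketch;
skarnet.org configure scripts take a `--prefix` option, but the path
here is just an example):

```
./configure --prefix=/usr && make && make install
```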

 I understand the need for CI/CD where huge projects are concerned:
people don't have the time or resources to build these. I don't think
the s6 ecosystem qualifies as a huge project. It won't even be "huge"
by any reasonable metric when everything is done. It needs to be
buildable on a potato-powered system!



- "idiot proof" website
- quick start / getting started guide
- easier access (better?) Documentation


 I file these three under the same entry, which is: the need for
community tutorials. And I agree: the s6 documentation is more of a
reference manual, it's good for people who already know how it all works
but has a very steep learning curve, and beginner-to-intermediate
tutorials are severely lacking. If the community could find the time
to write these, it would be a huge help. Several people, myself
included, have been asking for them for years. (For obvious reasons,
I can't be the one writing them.)

 Turns out it's easier to point out a need than to fulfill it.

 It's the exact same thing as the s6 man pages. People can whine and
bitch and moan for ages saying that some work needs to be done, but
when asked whether they'll do it, suddenly the room is deathly silent.
For the man pages, one person eventually stepped up and did the work[4]
and I'm forever grateful to them; I 

Re: [DNG] Why /command ?

2017-07-19 Thread Laurent Bercot



So I was wondering what the original intent was in having these two
directories directly off the root? Is it so the init and supervision
can proceed even before partition mounts are complete? Is there some
other reason? Can anyone recommend setups that fulfill the reasons for
the direct-off-root dirs without having direct-off-root dirs?


 /package and /command are initially an idea from DJB. They were
introduced with daemontools.

 The intent was to step away from FHS, which is a bad standard. You
can see the original issues that DJB had with FHS here:
 http://cr.yp.to/slashpackage/studies.html

 That was in 1997-1998, and 20 years later, things have not changed.
The FHS is still bad - arguably worse, because official-looking
documents have been written to make it look like the Standard™
without ever questioning its technical value: this is inertia and
laziness at work.

 There are a few initiatives that had the courage to think about the
guarantees they want a file hierarchy to offer, and come up with
original solutions. Among them, for instance: DJB's slashpackage,
the GoboLinux distribution, and the Nix and Guix file hierarchies.
Each of those initiatives has its own advantages and its own
drawbacks, just like FHS. They make certain things possible or
more convenient, and certain other things impossible or less
convenient.

 The main guarantee that slashpackage (and also typically Nix) wants
to offer that FHS does not, for instance, is fixed absolute paths for
executables. If you have a slashpackage installation, you know where
to find your runit binary: it's /package/admin/runit/command/runit.
If you only have FHS, there's always doubt: is it /bin/runit?
/sbin/runit? /usr/bin/runit? This may not be a problem for runit, but
it is a problem for binaries you'd want in a shebang: execlineb, for
instance.
Is it #!/bin/execlineb? #!/usr/bin/execlineb? Slashpackage gives
the answer: #!/package/admin/execline/command/execlineb. It's longer,
but prevents you from using horrors such as "#!/usr/bin/env execlineb",
which only displace the problem (there's no guarantee that env is
in /usr/bin, and in fact on some embedded devices it's in /bin).

 There are other interesting things that slashpackage, or more
generally not-FHS, allows you to do that FHS does not. For one, the
ability to install side-by-side several versions of the same software
is pretty interesting to me: very useful for debugging. It also sounds
like something distributions' package managers should like. But FHS
makes it very difficult, if not impossible, and mainstream package
managers (including Alpine's apk!) are being held back by that.

 I personally think that the guarantees offered by FHS were useful
at some point in time, but that we're long past this point and FHS
is mostly obsolete by now; that FHS is some legacy we have to carry
around for compatibility but that is preventing us from moving
forward instead of helping us. And distributions that refuse to move
an inch from FHS are the main problem here. They *are* the inertia,
and their unwillingness to question the validity of FHS stems from
intellectual laziness. It was the case in 2000, it still is the
case today.

 There are ways to nudge them towards adoption of better systems, but
it's a lot of effort and it's baby steps. The best software authors can
do is make their software completely configurable, adaptable to any
policy, and gently encourage better policies. s6 will install under
FHS, under slashpackage, under Nix and basically anything else.
runit, like daemontools, tries to enforce slashpackage - I wanted to
enforce slashpackage at some point, too, but unfortunately a convention
is only useful if everyone follows it, and nobody follows slashpackage.
And anyway it's not an author's job to enforce policy - that's a
distro's job, and if a distro's views conflict with the author's,
it can simply avoid packaging the software, so authors have no leverage
at all on this front.

 To answer Steve's last questions: there is no real way of getting
slashpackage's guarantees without following slashpackage, because
guaranteed absolute paths are only good if everyone can agree on their
location, and as far as slashpackage is concerned that ship has sailed.
However, there is still a way to get some of the additional benefits
of not-FHS: install a package in its own subdirectory - no matter where
it is - and make symlinks where required.

 There's even a FHS place for packages that want to be installed
in their own directory: /opt. So if you want to install runit in a
FHS-approved location, you could use /opt/runit. You could use
/opt/runit/2.1.2, with a /opt/runit/current -> 2.1.2 symlink, or
/opt/runit-2.1.2 with a /opt/runit -> runit-2.1.2 symlink if you want
the "install several versions at the same time" benefit.
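 That layout takes only a couple of shell commands; here is a sketch
(using a scratch directory to stand in for /opt, with made-up version
numbers):

```shell
#!/bin/sh -e
# Scratch directory standing in for /opt (illustration only).
opt=$(mktemp -d)

# Two versions installed side by side, each in its own directory.
mkdir -p "$opt/runit-2.1.1/bin" "$opt/runit-2.1.2/bin"

# The stable name is a symlink to the current version...
ln -s runit-2.1.2 "$opt/runit"

# ...so an upgrade or rollback is just a symlink swap.
ln -sfn runit-2.1.1 "$opt/runit"

readlink "$opt/runit"   # -> runit-2.1.1
```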

 ... but unfortunately, FHS specifies that /opt was made to install
software _that does not come from distributions_. So if a distribution
wants to be 

Re: [DNG] OpenRC: was s6-rc, a s6-based service manager for Unix systems

2015-09-29 Thread Laurent Bercot

On 29/09/2015 17:34, Timo Buhrmester wrote:

It can't respawn

Probably because people don't want this behavior.  Auto-respawn only
makes sense when you're "relying" on buggy software you already expect
to blow up, *and* are unwilling to debug it.  "Try turning it off
and on again", "A restart will fix it" is the Windows-way...


 That's a common mistake, but a mistake nonetheless. In an ideal world,
process supervision may not be necessary, but we don't live in an
ideal world. Software crashes happen. Even software without bugs can
hit a temporary failure (out of memory, for instance) and exit; the
conditions can then change, but without supervision, your process is
dead until manual intervention.
 Process supervision also provides the admin with better tools to
manage processes, for instance the ability to reliably send signals
to them without .pid files.

 Process supervision is *not*, and should not be, a crutch to help
buggy software run. Pretending that it is its goal is a straw man
argument.



In all other cases (I can think of), respawning a crashed service
is exactly *not* what I want to happen (it could have crashed because
it was exploited, providing the attacker with unlimited attempts).


 A service being respawned does not preclude the system from sending
an alert when it crashes. Critical services *should* be monitored by
an alert system.
 Also, as Simon says (pun unintended): if an attacker can crash the
service, what is better: that the attacker can trivially DoS your
service with one attack, or that he has to try again and again in
order to DoS you?



Or it could have crashed because there's an environmental problem
that isn't directly under the program's control, in which case
restarting it would just be pointless, because it likely can't start
at all.


 You don't know that in advance. Some failures will be permanent, in
which case you'll most likely notice them as soon as you start the
service for the first time and can address the problem; other failures
are temporary, and that's where process supervision is a good thing to
have.



Bonus points if the logs of the initial problem get rotated away due to
excessive retrying, or the core dump of the initial crash gets
overwritten...


 If your admins did not prepare for this and write correct scripts
to save the core dumps to a safe place, or save crash logs to a place
where they won't be rotated away, this is a problem with your admins,
not with process supervision.

 Ultimately, process supervision is a tool, and a good tool. It should
be a decision for the sysadmin to use it or not to use it; the decision
should not be enforced by the rc system. As Steve says, it is an
oversight of OpenRC to not provide the *possibility* of process
supervision.

--
 Laurent

___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: [DNG] [announce] s6-rc, a s6-based service manager for Unix systems

2015-09-25 Thread Laurent Bercot

On 25/09/2015 11:27, KatolaZ wrote:

I actually had the impression that servers was what Laurent was
referring to... :)


 Was I? It's possible.
 I usually refer to servers because it's the environment I'm used
to; but what I'm saying about boot times, parallelism and so on
is also true for clients, or any kind of machines really.

 I think it's a mistake to say "Boot times do not matter".
In addition to embedded systems, which I've already written about,
virtual machines are now ubiquitous. There are several projects
now that boot Linux in a Javascript virtual machine, in your
browser. I've just used one today:
http://linsam.homelinux.com/extra/s6-altsim/jor1k/demos/s6-rc.html
Lots of companies are now moving their production environments
to Docker containers, which are easier to host and manage than
physical machines. And yes, mobile computing - the future of
client machines is the phone, not the PC, with wildly different
user demographics and habits.

 Boot times may not matter to you, on your desktop PC, because
you can make yourself a coffee while your machine boots. It may
not matter to the NOC person managing real servers, because
while one machine is booting, there are a hundred others already
serving. But boot times do matter for the person powering up an
Internet box, a VR machine or any "smart thing". They matter for
the person whipping up a complete OS in a Javascript VM, or
booting a container, or a phone. They will matter in tomorrow's
uses of Linux that we can't even foresee today.

--
 Laurent



Re: [DNG] [announce] s6-rc, a s6-based service manager for Unix systems

2015-09-25 Thread Laurent Bercot

On 25/09/2015 17:29, Simon Hobson wrote:

Windows and MacOS both prioritise those tasks needed to get a desktop
picture (or login prompt) on the screen - as that gives the illusion
of fast boot time.


 Oh, yes, definitely. (My client machine runs Windows, and I experience
that every day.)
 It does not mean the dependencies are broken, though: if you can
actually display a desktop early on in the initialization process,
without stuff breaking, why not do so? It's only cheating if you
pretend the initialization is over at this point. Of course,
vendors are dishonest and incite the consumer to think the system
is ready when the desktop is there, but technically speaking,
unless something fails when you try to launch a program from the
desktop, it's not dependency mismanagement.



it's not worth trying to do anything until all the background stuff
is at least well past it's peak !


 Yes, it would be more honest to have a small console in the corner of
the desktop that displays everything the system is doing, and users
would know to avoid launching heavy programs until the console has
stopped scrolling.
 I'm just afraid that with Windows, the console would *never* stop
scrolling. XD



That all hinges around what is meant by "start service A". If "start
service" means nothing more than "kick off a process that'll run it"
then you are completely correct. On the other hand, if "start
service" actually means a process which is only deemed to be complete
when it's running and ready to go to work then that's a different
matter.


 That is the point of readiness notification.

 Traditional rc systems consider that the service is ready when the
process that has launched it dies. This is correct for oneshot
services (barring a few exceptions). This is not the case for longrun
services, unless the daemon can notify its readiness and the launching
script takes advantage of that; but few sysv-rc - or OpenRC - scripts
actually bother doing this, so their dependency management is actually
incorrect.

 systemd supports readiness notification, in its own over-engineered
way, but doesn't even get it right: see http://ewontfix.com/15/

 s6-rc uses the hooks provided by s6 to correctly handle readiness
notification.
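 For reference, s6's mechanism is minimal: the service directory
declares which file descriptor the daemon will use, and the daemon
writes a line to it when it is ready. A sketch (the service name and
path are made up; see the s6-supervise documentation for the details):

```
/run/service/mydaemon/
  run               # script that execs the daemon with fd 3 open
  notification-fd   # contains "3": the fd s6-supervise watches

# When initialization is complete, the daemon writes a newline to fd 3;
# s6-supervise records readiness, and dependents can safely start.
```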



Of course, regardless of what system or definitions you use - if a
service then dies then you have a problem. IMO, "it might die at some
indeterminate time" isn't an excuse for not trying to get the "start
stuff up" part right.


 Apparently Rainer disagrees with that, and seems to think that since
you can't get a 100% reliable system, it's useless to get the common
case working as smoothly as possible. I've stopped trying to convince
him.



Taking the example of syslog - if it dies
(regardless of when, whether it's seconds or days after system start)
then you are going to lose logging. You can try and manage that (spot
the problem and deal with it automatically), but short of predicting
the failure in advance, there will always be a window during which
logging can be lost.


 No, you don't have to lose logs if you're holding the logging
file descriptor open and your new logger instance can actually reuse
it. In that case, logs simply accumulate in the kernel buffer
(blocking the client if the buffer fills up) until the logger restarts
and can read them.
 I call this "fd-holding", and it is precisely the only thing of value
that systemd's "socket activation" brings. On that point, systemd
actually does something better than traditional rc systems.

 s6-rc, and even just s6, also performs fd-holding for loggers. The
system creates a pipe from a service to its logger and maintains
it open, so you never lose logs either. You get the useful feature,
but not the surrounding socket activation crap.
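 The principle is easy to demonstrate with a plain FIFO: as long as
some process holds the pipe open, data written while the reader is away
just sits in the kernel buffer. A toy sketch (s6 actually uses an
anonymous pipe held by the supervision tree, not a FIFO):

```shell
#!/bin/sh -e
dir=$(mktemp -d)
mkfifo "$dir/logpipe"

# Hold both ends of the pipe open on fd 3, as a supervisor would.
exec 3<>"$dir/logpipe"

# The producer logs while the "logger" is down: nothing is lost,
# the data accumulates in the kernel's pipe buffer.
echo "message while logger is down" >&3

# The restarted "logger" reads what accumulated in the meantime.
read -r line <&3
echo "$line"
```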

--
 Laurent


Re: [DNG] [announce] s6-rc, a s6-based service manager for Unix systems

2015-09-25 Thread Laurent Bercot

On 25/09/2015 09:05, Simon Hobson wrote:

More to the point, I'd rather have reliability over speed any day.


 How about you get both?
 The dichotomy is a false one. People believe they can't have both
because init systems have never been done right so far, and always
forced them to choose between one and the other. This is what
I'm aiming to change. No less.



But one trick that the desktop vendors are doing, and I suspect
SystemD are copying, is to "fake" a fast boot. By prioritising
certain bits, you get the illusion of a fast boot


 Yes. I have no idea how Windows does it, or how other OSes do it,
but systemd sacrifices a lot of reliability to gain a little speed
by starting services before their dependencies are ready. Start
your services before the loggers are ready to log! Who cares if
something goes wrong and you have no logs to analyze to understand
what happened?

 This is definitely not what s6-rc is doing - on the contrary.
s6-rc does not cheat: when it says that a service is up, it is
really up - no stuff that keeps loading in the background,
unless the author of the service designed it so.

 As I wrote one year ago at
https://forums.gentoo.org/viewtopic-t-994548-postdays-0-postorder-asc-start-25.html#7581522
you gain speed by doing two things:

 - eliminating times when the processor is doing nothing
 - eliminating times when the processor is doing unnecessary work

 The former is what parallelism accomplishes. This is where s6-rc
wins over sysv-rc or OpenRC. (Yes, I'm aware that OpenRC has a
"rc_parallel" option. It is not reliable; it uses ugly, ugly hacks
to make it appear to work. Don't use it.)
 The latter is what simplicity accomplishes. This is where s6-rc
wins over systemd - and over basically anything else than starting
your services by hand with zero overhead.



But, if you are going to boot slowly and methodically, it helps if
there's signs of progress.


 If s6-rc is run with "-v2", it prints in real time to stderr what
it is doing. It's the equivalent of OpenRC's "ebegin" and "eend",
without the pretty-printing.
 Anything more elaborate is the domain of a user interface; I'm bad
at user interfaces. But a distribution can add progress bars to
their service start scripts if it wishes, nothing prevents it from
doing so.



There's nothing that gets people impatient
better than something that appears to be taking a long time "doing
nothing" !


 Show them a terminal with a lot of scrolling gibberish! That should
convince them that 1. the computer is actually doing something, and
2. the uninitiated should not be questioning what it is doing or the
time it takes - they should shut up and worship! ;)

--
 Laurent


Re: [DNG] [announce] s6-rc, a s6-based service manager for Unix systems

2015-09-25 Thread Laurent Bercot

On 25/09/2015 09:26, Jaromil wrote:

What I'm particularly interested is something to do process monitoring
and respawning for a certain group of daemons


 Just supervise the daemons you want to supervise, and don't
supervise the ones you don't want to. But really, there's no
reason *not* to supervise every daemon on your system: even if
you're not using the respawning mechanism, it gives you a
reproducible environment to start them (without 40 lines of
boilerplate) as well as a nice, race-free interface to send
them signals - no .pid files.
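 As an illustration of that interface (assuming a scan directory at
/run/service - the location is distro-dependent - and a made-up
service name):

```
s6-svc -h /run/service/mydaemon   # send SIGHUP - no .pid file involved
s6-svc -t /run/service/mydaemon   # send SIGTERM; s6 respawns the daemon
s6-svc -d /run/service/mydaemon   # take the service down, keep it down
s6-svc -u /run/service/mydaemon   # bring it back up
s6-svstat /run/service/mydaemon   # report status: up/down, pid, uptime
```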

 The only reason why supervision is not more widespread is
inertia. There's a huge historical collection of sysv-rc-like
scripts that does not play well with supervision, and rewriting
them requires a tremendous effort. But I'm hopeful: since the
init wars, people have realized that some work was necessary.
There are initiatives to develop collections of supervision
scripts already. And daemon authors have started putting in the
effort to write systemd .service files, when supervision scripts
are easier, so there's no reason they can't be convinced.



I wish to have something
that is not a lousy shell-script (and of course not a monster of
office-suite dimensions taking over the whole setup) to notice the
crash, save the logs aside and restart the daemon - and fast.


 That's exactly what a supervision suite does. You could already
do it with daemontools in 1998! :P
 Nowadays, supervision suites are a dime a dozen. Of course, I'm
advertising s6, the one I wrote - but it is honestly more featureful
than the other ones, and, I like to think, correctly maintained.



It has been since the times of Icecast1 (pre-kh) that I need something
like that, been using restartd for a quick setup, but it does not handle
dependencies.. now wondering if s6-rc does it.


 s6 does it. s6-rc is not a supervision suite, it's a service manager
working on top of s6. The distinction is important:
 - booting a machine (s6-linux-init),
 - supervising processes (s6), and
 - managing services (s6-rc)
are not the same job, even if they are somewhat related. It's the
confusion between those three jobs that led to the birth of
mediocre software like sysvinit (I'm not hating: we didn't know
better at the time), and later on, of abominations like systemd
(I *am* hating).

 But yes, if all you need is process supervision, s6, or any
other supervision suite, is what you need. If you need dependency
management, especially between oneshots and longruns, then s6-rc
can do it.



In general I'd be incline to use s6 more because the whole suite Laurent
is developing seems very minimal and... no-bullshit (Gandi's tm)


 It's Gandi's motto (I like those guys, my VPS is hosted by them,
I have a few of their T-shirts), it's the suckless philosophy,
and it's mine. I like to think it's also the philosophy of
Devuan people, who voluntarily rejected the biggest bullshit of
the decade.

--
 Laurent



Re: [DNG] [announce] s6-rc, a s6-based service manager for Unix systems

2015-09-24 Thread Laurent Bercot

On 24/09/2015 15:31, Rainer Weikusat wrote:

I'd still very much like to see an actual example which really needs
these depenencies which isn't either bogus or a workaround for a bug in
the software being managed.


 Your network must be up before you do any network connections.
 Your DNS cache must be up before any other service resolves names.
 Your filesystems must be mounted before you write to them.
 Want more?

--
 Laurent



Re: [DNG] [announce] s6-rc, a s6-based service manager for Unix systems

2015-09-24 Thread Laurent Bercot

On 24/09/2015 16:40, Rainer Weikusat wrote:


Hence 'failure'
is part of the normal mode of operation and proccesses trying to use TCP
need to deal with that.


 Yeah, well, if your favorite startup mode is to start everything at
the same time and say "eh, if it doesn't work, the program is supposed
to handle failure" then your users will give you funny looks, and they
will be right.

 I'm talking normal use cases here, i.e. situations where the services
*will* succeed. In those situations, it is better to start everything
according to the dependency graph, because then you do *not* trigger
failure paths, which is nicer.
 This does not preclude programs from actually handling failure when
it happens, but it's best if failure is the exception, not a normal
mode of operation.



That's slightly different because it's obviously not possible to start a
program stored in a file (which needs various other files to start)
before accessing any of these files is possible (it's still subject to
changes at runtime, though). But it's not necessary to declare a
dependency on "the filesystem" in a dozen different files and then run
some program in order to work out that "The filesystem namespace must be
constructed prior to using it!"


 And still, "the filesystem namespace must be constructed prior to using it".
No matter what you call it, that's a dependency, and that's what I'm talking
about. Now, you're free to start daemons logging stuff to /var/log/foo
before mounting /var/log, but the reasonable people among us prefer not to
do it. ;)

--
 Laurent



Re: [DNG] [announce] s6-rc, a s6-based service manager for Unix systems

2015-09-24 Thread Laurent Bercot

On 24/09/2015 17:51, Rainer Weikusat wrote:

If it starts working within less than five minutes, users will forget
about it faster than they could complain, especially for a system which
is usually supposed to be running. But that's actually a digression.


 Five minutes? And you think it's acceptable? Sorry, I don't have five
minutes to waste every time a program fails. Also, this is 2015: if a
program isn't responsive within 30 seconds, users will come knocking at
your door raging - and they will be right.



But there's really no way to predict this because
'starting program A before program B' does not mean 'program A will be
ready to serve program B by the time program wants to use its services'.


 And that's why there is readiness notification, in order not to start
program B before program A *can* actually serve. s6 is the only process
supervision suite handling readiness notification right, thank you for
underlining this. :)



Provided a program is supposed to work this out on its own, this
information can be modelled as 'a dependency'. But you could equally
well modify rc.local to do a

mount-all-filesystem

before a

start-the-services


 You mean, removing the mounts from your service manager? Sure, you
can always do everything by hand. And if performing the mounts depends
on something else, such as insmoding a kernel module, then you also
need to do that something else by hand. And so on. Basically, you
never need to use any tool if you're doing the tool's work yourself -
you're just making it harder on yourself. /shrug



about that, I'm interested in situations where 'service dependencies'
are actually useful. I'm convinced there aren't any but I'd gladly learn
that I'm wrong[*]


 If the "mount filesystems" example cannot convince you, nothing will,
so I'll stop there.



[*] In line of TOCTOU, there's a TOSTOU race here --- time of start is
not time of use which means things can change in between and 'TOS'
doesn't even guarantee that the intended situation ever existed.


 Yes, everything can fail, processes can die, the network can go down,
yadda yadda yadda. It's still sensible to optimize the common case.

--
 Laurent



Re: [DNG] [announce] s6-rc, a s6-based service manager for Unix systems

2015-09-24 Thread Laurent Bercot

On 24/09/2015 21:23, Steve Litt wrote:

What's the benefit of having the shortest run-time code path of any
service manager?


 - Speed: a short run-time code path means that fewer instructions are
executed, so the job is done faster. The point is to do the amount
of necessary work (calling the scripts, starting the services) with
as little overhead as possible.
 - Safety: less run-time code means fewer places where things can
go wrong. At this low level, it's not always possible to recover
when something goes wrong; you want to perform as few instructions
as possible in such a place.
 - Security: less code means less attack surface. A service manager
usually runs as root, so it needs to be trusted code. By minimizing
the amount of code run as root, you minimize the risk of exploitable
security holes.
 - Maintainability/QA: it's easier to debug a piece of code / ensure
it works properly when said piece of code is not all over the place.
A bug in the s6-rc engine happens within 20 kB of code, which should
make it easier to narrow down than a bug in systemd, or even in OpenRC.

--
 Laurent



[DNG] [announce] s6-rc, a s6-based service manager for Unix systems

2015-09-23 Thread Laurent Bercot

if you can confirm the plan of releasing s6-rc within september

  I confirm it.


 And, lo and behold, I'm on schedule for once.

 s6-rc-0.0.1.0 is out.

 s6-rc is a service manager for Unix systems, running on top of
an s6 supervision tree. It manages the live state of the machine,
defined by a set of services, by bringing those services up or
down as ordered by the user.

 It handles long-lived processes, a.k.a. "longruns", ensuring they're
supervised by the s6 tree. It also handles one-time initialization
scripts, a.k.a. "oneshots", running them in a reproducible environment
without needing the usual sanitizer boilerplate.

 It manages dependencies between services, no matter whether they are
oneshots or longruns; it can intertwine oneshot starts and longrun
starts, or oneshot stops and longrun stops. When changing the machine
state, it always ensures the consistency of the dependency graph.

 Services can be grouped in collections named "bundles", for easier
manipulation. Bundles are like runlevels, but more powerful and
flexible.

 s6-rc allows an arbitrary number of longrun services to be pipelined.
s6, and other supervision suites, can maintain a pipe between a
producer and a consumer ("logger"), so the producer or the consumer
can restart without breaking the pipe and losing any data; s6-rc
extends this idea to an arbitrary chain of producers piping their
data into consumers that themselves pipe their data into...

 s6-rc features the shortest run-time code path of any service manager
to this day: this includes systemd, sysv-rc, and OpenRC.

 http://skarnet.org/software/s6-rc/
 git://git.skarnet.org/s6-rc

 Enjoy,
 Bug-reports welcome.

 Please send your reports and suggestions to the
skaw...@list.skarnet.org mailing-list. (You can subscribe by sending
a mail to skaware-subscr...@list.skarnet.org.) That is where s6-rc
is discussed, even if we can of course talk about a possible use in
Devuan here. Thanks!

--
 Laurent



Re: [DNG] Doing away with multi-threading in my project (netman)

2015-09-03 Thread Laurent Bercot

On 03/09/2015 18:35, Steve Litt wrote:

I'd figure out how to stop those zombies from happening in the first
place. It's pretty hard to create a zombie: I think you have to
doublefork and then terminate.


 Nope, a zombie is actually very easy to create:
 - have a parent process that spawns a child and stays alive
 - have the child die first, and the parent not reap it.
That's it. A dead child is a zombie until the parent acknowledges its
death, and if the parent is failing its duties, the zombie will remain,
an empty soulless shell spawned inconsiderately with no love or
regards for consequences that will haunt the system forever.

 ...not quite forever. If the parent dies, the child process' paternity
will be reassigned to PID1, and PID1 will receive a SIGCHLD. Basically
every PID1 program correctly handles that and reaps every zombie it gets,
even if it's not the one that fathered it.

 But when the parent is a daemon, it's not supposed to die, so counting
on PID1 to do the dirty work is obviously not a solution, and it has to
reap children.
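 The lifecycle described above fits in a few lines of C (a sketch; the
function name is made up for illustration):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Spawn a child that exits immediately with `code`, then reap it and
   return its exit code.  Returns -1 on error. */
int spawn_and_reap(int code)
{
    pid_t child = fork();
    if (child < 0) return -1;
    if (child == 0) _exit(code);      /* child dies first */

    /* From the child's _exit() until the waitpid() below, the child
       is a zombie: a dead process whose status nobody has collected.
       A parent that never calls wait()/waitpid() leaves it there
       until the parent itself dies and PID 1 inherits and reaps it. */
    int status;
    if (waitpid(child, &status, 0) != child) return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The waitpid() call is the "acknowledgement of death" mentioned above;
remove it and the zombie stays until the parent exits.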

 Edward's problem is that his "backend" process is spawning children and
not reaping them. It has nothing to do with the frontend. To check
which process the zombies are children of, use the "f" option to ps, or
the pstree utility.

--
 Laurent



Re: [DNG] The show goes on: “su” command replacement merged into systemd on Fedora Rawhide

2015-09-01 Thread Laurent Bercot

On 01/09/2015 10:29, Jaromil wrote:

if you can confirm the plan of releasing s6-rc within september


 I confirm it.

--
 Laurent



Re: [DNG] The show goes on: “su” command replacement merged into systemd on Fedora Rawhide

2015-09-01 Thread Laurent Bercot

On 01/09/2015 10:04, Tobias Hunger wrote:

Now that is a really depressing outlook.


 What can I say: the state of affairs with the systemd madness
*is* depressing.



I am way more positive about your chances than that. X11 used to
be impossible to replace because of drivers, and we are pretty close
to getting Wayland anyway.


 Eh, if you like systemd, it probably makes sense that you like Wayland;
but I don't. Wayland is yet another integrated, all-encompassing,
monolithic (despite otherwise claims) attempt at solving a problem, with
the exact same design issues as systemd.

 Granted, the X problem is much more acute than the init problem, and
much harder; and it's probably impossible, given our technical knowledge,
to find a solution that accommodates both my desire for security and
modularity and a modern graphics system's need for performance and
support for stuff like 3D. But still, I don't think Wayland is the right
answer, and I'm not exactly impatient to see it released.



Heck, Windows was the future for ages


 No, Windows was never the future. Windows was the present, because it
was all people had. Unfortunately, when it was released, Windows
was already 20 years behind the academic state of the art in
operating systems; so, a more accurate statement would be that
Windows was the past. :P

 Windows was the future from a "market shares" point of view, and
you are reasoning in market shares terms - this is also exactly
what systemd is doing. And the reason why Linux eventually took
over is that it was designed right in the first place. In the
same fashion, we are getting things right first no matter what is
happening in the marketplace, and we will eventually take over.



I do not see that happening anywhere at this time. So far it is
mostly claiming everything used to be fine before systemd. It was
not.


 What you are saying is that the "anti-systemd" camp is lacking
communication effort. You're right. I can't talk for other people,
but as far as I'm concerned, I've been totally taken by surprise
by systemd's success. I've always found it so technically inept
that I simply could not see it spreading, so I just ignored it.
Boy, was I wrong: so many resources - time, money, energy - have
been poured into communicating about systemd that it has become
the most talked-about init system ever. If a quarter of the
resources spent in promoting systemd had been spent on hiring
experienced Unix programmers and designing a better init system,
we would all be in paradise right now.
 And propaganda works: when you smother people with information
about a product, they tend to forget that this product is not the
only thing that exists.

 What is important to realize is that the "alternatives to systemd"
community is not a company: it is mostly made of people who only
have an interest in the technical side of things (i.e. who like to
get things right before making big announcements) and it does not
have the resources of a company (i.e. getting things right takes
time, and communicating also takes more time).

 So it's obvious why you're not hearing much about alternatives
yet. But the loudest voice is not necessarily the wisest;
actually, it very rarely is.



I do not see why you would need the same socket for all daemons


 It is possible to open as many sockets as there are daemons, but
this is only desirable if you're using the daemontools model, i.e.
one supervisor per daemon. And if you're using that model, opening
a socket to listen to daemon notifications is entirely unnecessary
and a waste of resources, so why would you do that? s6, for instance,
has a simpler notification model that does not need a socket. Using
the NOTIFY_SOCKET mechanism would require a deep rework of the
supervisor code for a net loss in simplicity and maintainability.
So, in practice, NOTIFY_SOCKET only benefits monolithic designs.



SD_notify is so much simpler to do than double forking, so most
developers will pick that on their own.
Do something even simpler and they will use that instead.


 I've already done it. Writing a newline to stdout is much
simpler than using sd_notify().
 You haven't heard of it because I'm not in the "communication"
phase yet. I want to get s6-rc ready first: deeds before words.
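 For illustration, the whole client side of that protocol is tiny (a
sketch; the helper name and the fd number 3 are assumptions for the
example, since with s6 the fd is whatever the service's notification-fd
file declares):

```c
#include <errno.h>
#include <unistd.h>

/* Signal readiness the s6 way: write a newline to the notification
   file descriptor the supervisor gave us, then close it.  No socket,
   no library dependency: just one write(). */
int notify_readiness(int notif_fd)
{
    ssize_t r;
    do r = write(notif_fd, "\n", 1);
    while (r < 0 && errno == EINTR);   /* retry on interruption */
    if (r < 1) return -1;
    close(notif_fd);                   /* nothing more to say here */
    return 0;
}
```

A daemon would call something like notify_readiness(3) once its
initialization is complete.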



They do whatever makes their lives easier, just like any other open
source project out there.


 No, that's not the right way to look at it. Despite the licensing
terms, systemd *does not* have an "open source" approach; it has a
proprietary, company-driven approach, with the exact same political
and technical stunts as proprietary software to make people use it,
to make sure it grabs the market and holds it captive. As I said,
systemd does not play fair: it pretends to play the open source
game where projects are judged and adopted or ignored on mostly
technical merits - but it's heavily skewing the rules and behaving
in the open source world like a bully in a schoolyard.



Which concrete problems did you, 

Re: [DNG] The show goes on: “su” command replacement merged into systemd on Fedora Rawhide

2015-09-01 Thread Laurent Bercot

On 01/09/2015 15:50, shraptor wrote:

I am interested in s6-rc; is there any place to read more about it,
or perhaps I have to wait for the release?


 http://skarnet.org/software/s6-rc/ but you won't see much there
until it's released.
 You can get a preliminary look, which includes some early
documentation, at https://github.com/skarnet/s6-rc
 Discussion happens on the skaw...@list.skarnet.org mailing-list,
so please subscribe to it and post there if you have comments or
suggestions!
 Note that s6-rc relies heavily on s6, so if you're unfamiliar
with process supervision, you may want to start there first.

 Have fun,

--
 Laurent



Re: [DNG] The show goes on: “su” command replacement merged into systemd on Fedora Rawhide

2015-08-31 Thread Laurent Bercot

On 31/08/2015 20:56, Tobias Hunger wrote:

Oh, I am pretty happy with systemd and won't lie about that. I would
still like to see some competition going.


 That's pretty much the crux of the problem here. Nothing can compete
with systemd on the same grounds, because systemd covers so much ground
and does so many things (that *it should not* be doing) that any good
engineer who wants to design an alternative will see a lot of insane
features and immediately say "Nope, I'm not doing that, it's not the
init system's job"; and systemd proponents will always answer "See?
systemd is the only one that does that stuff, and all the competition
is inferior".

 Even where competition *can* happen, systemd likes to fudge the odds.
For an example that I know because I had to deal with it not long ago:
the sd_notify() protocol was made so that a process' supervisor has to
be listening on a Unix domain socket, which ultimately favors the model
where the supervisor for all the daemons is a single program performing
all the service management tasks: so in order to implement a server for
sd_notify, you basically have to adopt the systemd monolithic architecture,
which means, in essence, rewrite systemd - and, obviously, no sane
person wants to do that.

 So by this single design choice, systemd ensures that it's the only
service manager that can actually implement sd_notify. And systemd
enthusiasts actively try to make daemon authors use sd_notify, saying
"Oh, but it's a good notification protocol; and the protocol is free,
and it's all open source, so people who want to write an alternative
server are free to do so!" with a bright smile and big, innocent
eyes; but it's nothing short of misleading and dishonest.
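 For contrast, here is roughly what the client side of sd_notify boils
down to when reimplemented without libsystemd, following the documented
protocol (one datagram to the AF_UNIX socket named in $NOTIFY_SOCKET;
the function name is made up for this sketch). Note that it is the
*server* side, not this, that forces the monolithic architecture:

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Minimal sd_notify("READY=1") client: send one datagram to the
   socket named by $NOTIFY_SOCKET.  A leading '@' means a Linux
   abstract-namespace socket. */
int notify_systemd_ready(void)
{
    const char *path = getenv("NOTIFY_SOCKET");
    if (!path || !path[0]) return -1;

    struct sockaddr_un sa;
    memset(&sa, 0, sizeof sa);
    sa.sun_family = AF_UNIX;
    if (strlen(path) >= sizeof sa.sun_path) return -1;
    memcpy(sa.sun_path, path, strlen(path));
    if (sa.sun_path[0] == '@') sa.sun_path[0] = '\0';  /* abstract */

    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
    if (fd < 0) return -1;
    const char msg[] = "READY=1";
    ssize_t r = sendto(fd, msg, sizeof msg - 1, 0,
                       (struct sockaddr *)&sa,
                       offsetof(struct sockaddr_un, sun_path)
                       + strlen(path));
    close(fd);
    return r == (ssize_t)(sizeof msg - 1) ? 0 : -1;
}
```

The single well-known socket is the crux: every daemon reports to one
address, so one central process must listen there for all of them.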

 systemd does not play fair by *any* measure; the only way to provide
healthy competition is to ignore everything it's doing, and design
a sane, Unixish init system, as well as sane, Unixish administrative
tools, from the ground up.

 I'm working on it with s6-rc. Jude is working on the udev system
with vdev. Other people are working on the other parts of the Linux
userspace that systemd would love to phagocyte. And we are not
getting the kind of money or resources that the systemd lead
developers are, so it's a longer, harder task; but don't worry, the
competition exists and is getting better everyday.

--
 Laurent



Re: [DNG] The show goes on: “su” command replacement merged into systemd on Fedora Rawhide

2015-08-30 Thread Laurent Bercot

On 30/08/2015 04:29, Isaac Dunham wrote:

Correction for this:
Alpine Linux is OpenRC based.


 Ah, sorry, I mixed them: it's Void Linux that's runit-based.

--
 Laurent



Re: [DNG] The show goes on: “su” command replacement merged into systemd on Fedora Rawhide

2015-08-30 Thread Laurent Bercot

On 30/08/2015 20:54, Steve Litt wrote:

http://www.ibuildthecloud.com/blog/2014/12/03/is-docker-fundamentally-flawed

Very nice!


 Note that the article is about the containers' *host*.
When people talk about making systemd work with containers,
they're usually meaning running systemd *in* the containers,
i.e. they're talking about the containers' *guests*.

 Guests are the interesting case, because they're the
production images, and that's what people want to make as
smooth-sailing, and (if possible) small, as possible. That's
what we focused on with the s6-overlay project, for instance.

 I had, naively, never thought that running the Docker
daemon under a systemd host would be problematic. It's just a
daemon, and systemd can run them, right? Ha, ha, it would be
too simple.
 The article stupefied me: because the Docker daemon does not
conform to systemd's model, it is fundamentally flawed?
And they have the balls to ask Docker to change models,
because *only* systemd is supposed to handle cgroups?

 They never wonder whether the problem wouldn't, by chance,
come from systemd, that insists on being a global registry
of everything on the host and on controlling everything,
not allowing daemons to do their own thing. No, systemd is
perfect, and the problem obviously comes from Docker.

 The nerve, hubris and total lack of shame of these people is
baffling. And what's even more mind-boggling is that a large
part of the community welcomes that attitude with open arms.

--
 Laurent



Re: [DNG] The show goes on: “su” command replacement merged into systemd on Fedora Rawhide

2015-08-29 Thread Laurent Bercot

On 29/08/2015 14:43, Rainer Weikusat wrote:

'su' is not a concept, it's a program.


 grumble Okay, let's clarify.
 A program is the implementation of an idea. The idea is often
unwritten or unspoken, or forgotten, and people will only refer
to the implementation; but good design always starts with the idea,
the concept, of what the program is supposed to do.

 When Lennart says su is a broken concept, he's saying the
concept behind the su program is not clear or well-defined, and
it was not a good idea to implement it; and I agree with that.
(Then he naturally branches onto his NIH obsession and decides
that UNIX is bad and systemd must reinvent everything, which I
obviously disagree with.)

 As you're saying, the correct design is to separate the tasks
that the su program accomplishes, if one doesn't need a full-
environment root shell.
 But if a full-environment root shell is needed, logging in as
root works. That's exactly what the login _concept_ is.



Now, is

1. Build systems suck and git isn't exactly the greatest tool on the
planet for working with more than one source tree, so lets add the
code we want to write to systemd

2. goto 1

a concept?


 Of course it is! I'm surprised systemd-versioncontrol isn't a thing yet. XD

--
 Laurent



Re: [DNG] The show goes on: “su” command replacement merged into systemd on Fedora Rawhide

2015-08-29 Thread Laurent Bercot

On 29/08/2015 20:10, KatolaZ wrote:

Well, I wouldn't say that su is a broken concept on its own. In
assessing the quality of ideas and software one should always take
into account the motivations which led to a certain solution.

su appeared in ATT Unix Version 1:


 Yes. However, Unix has evolved in 40 years, sometimes in a good
way, sometimes in a not so good way. And piling on more functionality
into su wasn't exactly the best idea: from a simple privilege-gaining
tool, it mutated into something between what it was and a complete
login - it lost the clarity of concept it had at first.



The fact that we have gone a bit further than that with su, asking it
to do things it was not conceived for, is our problem, not su's one.


 Oh, I'm not blaming some abstract entity called su. When I say that
su is not good (anymore), it's obviously on us.
 There's a reason why I have written programs performing privilege gain
without setuid executables. ;)

--
 Laurent



Re: [DNG] The show goes on: “su” command replacement merged into systemd on Fedora Rawhide

2015-08-29 Thread Laurent Bercot

On 29/08/2015 23:11, Steve Litt wrote:

in my LUG, the most pro-systemd guys are the mega-metal admins
administering hundreds of boxes with hundreds of Docker containers.
These guys are telling me systemd is necessary to efficiently manage
the Dockers.


 They're telling you that because they've been brainwashed with that
idea, and since it's working for them, they're too lazy to try anything
else. But they're wrong. Alpine Linux, for instance, makes Docker
containers a breeze to use, and makes the images a lot smaller.
 Alpine is runit-based, and people have also found success running Docker
containers under s6; systemd is quite unnecessary, and anyone claiming
it is required is either ignorant or malicious.



Yeah, whatever, give me the budget Redhat gave the systemd
cabal, so I can hire good programmers, and I'll make it work
efficiently without breaking Linux and entangling everything.


 If you have 1/10 of that budget and are hiring, I'm available
and willing to full-time this. ;)

--
 Laurent



Re: [DNG] The show goes on: “su” command replacement merged into systemd on Fedora Rawhide

2015-08-29 Thread Laurent Bercot

On 30/08/2015 01:13, Simon Hobson wrote:

I don't think anyone has suggested it's for servers only. But, there
is an argument for picking the low hanging fruit - and that means
trying to do the easy bits first. I've not really followed it in
detail, but from what I've read it does seem that the desktop
environments have been the most tightly bound to systemd. IMO it
makes sense to try and get a non desktop system sorted, and then
tackle the harder problem of getting the desktop stuff cleansed.


 Honestly, servers are easy, and a few distros have always gone
pretty far with what they're doing with servers: Alpine Linux,
Void Linux, and many others I'm not following as closely. The
big or mainstream distributions are called that way because
they're the ones that people install on their desktops.

 I'm interested in Devuan because I view it as a replacement for
Debian, which I had on my desktop at some point. If it's about a
server, I already have lots of distributions to pick from, including
not using one in the first place.

--
 Laurent


Re: [DNG] The show goes on: “su” command replacement merged into systemd on Fedora Rawhide

2015-08-28 Thread Laurent Bercot

On 28/08/2015 17:00, Michael Bütow wrote:

https://tlhp.cf/lennart-poettering-su/


 The thing is, he's not entirely wrong: su *is*, really, a
broken concept.
 What he conveniently forgets, of course, is that having a
real root session with a separated environment, which is
what the new feature does, could already be achieved... by
logging in as root.

 Duh!

 So, this is just yet another propaganda stunt.
 su sucks. See? UNIX sucks! And now systemd can do so much
better than UNIX: it gives you real root sessions that do not
leak anything from the user environment.
 But, um, can't UNIX already do that?...
 NO NO NO systemd does it better because [insert confusing
buzzwords that will bamboozle executives and journalists]

 It's been like this since day 1 of systemd, and I'm not
expecting it to change any time soon.

--
 Laurent



[DNG] *****SPAM***** Re: C string handling

2015-08-22 Thread Laurent Bercot
Spam detection software, running on the system tupac2,
has identified this incoming email as possible spam.  The original
message has been attached to this so you can view it or label
similar future email.  If you have any questions, see
the administrator of that system for details.

Content preview:  On 22/08/2015 16:40, Rainer Weikusat wrote:  [*] In theory.
   In practice, people working on glibc are mad  scientist-style x86 machine
   code hackers and the actual implementation  of something like strcpy might
   (and likely will) be anything but  straight-forward. [...] 

Content analysis details:   (5.3 points, 5.0 required)

 pts rule name  description
 -- --
 0.0 RCVD_IN_SORBS_DUL  RBL: SORBS: sent directly from dynamic IP address
[82.216.6.62 listed in dnsbl.sorbs.net]
 1.7 URIBL_BLACKContains an URL listed in the URIBL blacklist
[URIs: musl-libc.org]
 3.6 RCVD_IN_PBLRBL: Received via a relay in Spamhaus PBL
[82.216.6.62 listed in zen.spamhaus.org]


---BeginMessage---

On 22/08/2015 16:40, Rainer Weikusat wrote:

[*] In theory. In practice, people working on glibc are mad
scientist-style x86 machine code hackers and the actual implementation
of something like strcpy might (and likely will) be anything but
straight-forward.


 That's the reason why my go-to reference for C library implementation
is not the glibc, but musl: http://musl-libc.org/

--
 Laurent

---End Message---


Re: [DNG] *****SPAM***** Re: C string handling

2015-08-22 Thread Laurent Bercot

On 22/08/2015 16:58, Laurent Bercot wrote:

Spam detection software, running on the system tupac2,
has identified this incoming email as possible spam.  The original
message has been attached to this so you can view it or label
similar future email.  If you have any questions, see
the administrator of that system for details.


 Oh, for fuck's sake. If I can't post legitimate mails to the
list from my legitimate IP and link a legitimate site without
being barked at by deficient antispam software, I guess it's
time for me to leave.

--
 Laurent



Re: [DNG] Systemd Shims

2015-08-19 Thread Laurent Bercot

On 19/08/2015 15:29, Edward Bartolo wrote:

This is the completed C backend with all functions tested to work. Any
suggestions as to modifications are welcome.


 OK, someone has to be the bad guy. Let it be me.

 First, please note that what I'm saying is not meant to discourage you.
I appreciate your enthusiasm and willingness to contribute open source
software. What I'm saying is meant to make you realize that writing
secure software is difficult, especially in C/Unix, which is full of
pitfalls. As long as you're unfamiliar with the C/Unix API and all its
standard traps, I would advise you to refrain from writing code that
is going to be run as root; if you want to be operational right away
and contribute system software right now, it's probably easier to stick
to higher-level languages, such as Perl, Python, or whatever the FotM
interpreted language is at this time. It won't be as satisfying, and the
programs won't be as efficient, but it will be safer.

 On to the code review.



//using namespace std;


 This is technically a nit, but philosophically important:
despite of what they've told you, C and C++ are not the same
language at all, and when you're writing C, you're not writing
C++. So, if you're going to write a C program, forget your C++
habits, don't even put them in comments.



#define opSave  0
(...)
#define opLoadExisting  7


 As you've been suggested: use an enum.



const
char* path_to_interfaces_files = "/etc/network/wifi";


 Aesthetics: avoid global variables if you can. If your variable
needs to be used in several functions in your module, declare it
static: static const char *path_... so at least it won't be
exported to other modules if you add some one day.



1) Glib::spawn_sync instead of a pipe stream, provides a slot.


 I don't understand this comment. It's C++ syntax again, and about
the Glib module. Please don't try to explain what you're doing with
analogies to other modules in other languages.



2) cmd trying to call an inexistent command still returns a valid pointer!
verify cmd exists before calling exec


 ... and don't do that, because it creates what is known as a TOCTOU
race condition: you check that the cmd exists, then you use it, but
in the meantime, it could have disappeared. Or the cmd might not
exist when you check it, but would have appeared before you used it.
So, in essence, the check is useless. What you need is a proper error
code when you try to execute a command that does not exist; if
necessary, change your API.
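 A sketch of the alternative (the helper name is made up for
illustration): let execvp() itself report the failure, and surface it
to the caller as the conventional 127 exit code, instead of racily
pre-checking that the command exists:

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run a command and report exec failure properly.  No existence
   check beforehand: that would be a TOCTOU race.  Returns the
   command's exit code, 127 if it could not be executed (the shell
   convention), or -1 on fork/wait error. */
int run_cmd(char *const argv[])
{
    pid_t child = fork();
    if (child < 0) return -1;
    if (child == 0)
    {
        execvp(argv[0], argv);
        _exit(127);            /* exec failed: report, don't guess */
    }
    int status;
    if (waitpid(child, &status, 0) != child) return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The check and the use are now one atomic operation: either the exec
succeeds, or the caller gets an unambiguous error code.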



inline int file_exists(char* name) {
   return (access(name, F_OK) != -1);
}


 And the function isn't actually used in your code either, so you
should simply remove its definition.



if(fgets(buffer, buf_size, pipe) != NULL)


 If a line is longer than 127 bytes, it will be silently truncated to
127. Is that what you want? If yes, it's fine, but you should document
the behaviour.



if (out != NULL)
strcat(out, buffer);
else strcpy(out, buffer);


 This makes no sense. strcpy() can't be called with a NULL argument
any more than strcat can. What are you trying to accomplish here?



return pclose(pipe);


 This is a bit more advanced. When you design a function, or an
interface in general, you want to make it as opaque as possible: you
want to report high-level status to the caller, and hide low-level
details.
 Here, the caller does not know you used popen(), and does not want
to know. The caller only wants the result in the out buffer, and
maybe a 0 or 1 return code (possibly with errno set) to know whether
things went OK or not. Returning the result of pclose() leaks your
internal shenanigans to the caller: don't do that. Instead, design
a proper interface (e.g. OK - nonzero, NOK - 0 with errno set) and
write your function so that it implements exactly that interface.



int saveFile(char* essid, char* pw) //argv[2], argv[3]
{
char ifilename[1024];
strcpy(ifilename, path_to_interfaces_files);

strcat(ifilename, "/");
strcat(ifilename, essid);


 Boom. You're dead.

 I'm stopping here, but you're making this mistake in all the rest
of your code, and this is the *single worst mistake* to make as a
C programmer: a buffer overflow. A buffer overflow is the main cause
of security issues all over the computer world, and you cannot, you
ABSOLUTELY CANNOT, publish a piece of code that contains one, especially
when said piece of code is supposed to run as root.

 ifilename is 1024 bytes long. You are assuming that essid, and
whatever comes afterwards, will fit into 1024 bytes. This is true
for normal inputs, which is certainly what you tested your program
against, but the input is given as a command line argument to your
program: you do not control the input. *The user* controls the input.
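 A sketch of a bounded alternative (the helper name is made up for
illustration): snprintf() reports the length it would have needed, so
oversized input can be rejected instead of overflowing the buffer:

```c
#include <stdio.h>
#include <string.h>

/* Build "<dir>/<name>" into a fixed buffer without ever overflowing
   it.  snprintf() truncates and returns the length it *wanted*, so a
   return value >= the buffer size means the input didn't fit and must
   be rejected, not silently truncated.  Also refuse names containing
   '/', since the name ends up as a filename component. */
int safe_join(char *out, size_t outsize, const char *dir, const char *name)
{
    if (strchr(name, '/') != NULL) return -1;  /* no path injection */
    int n = snprintf(out, outsize, "%s/%s", dir, name);
    if (n < 0 || (size_t)n >= outsize) return -1;  /* didn't fit */
    return 0;
}
```

With this shape, a hostile essid on the command line yields a clean
error instead of smashing the stack.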

Re: [DNG] Systemd Shims

2015-08-19 Thread Laurent Bercot

On 19/08/2015 19:14, Edward Bartolo wrote:

I am not assuming anything and understand the risks of buffer
overflows.  The first step I am taking is to make the code function.
The second step is further debug it until it behaves properly and the
third step is to correct any potential security issues.


 I'm sorry, but no, this is not how it works. The first step, as
you say, is to make the code function, and that means *without*
security issues in the design. You can't add security in the
third step; security cannot be an afterthought, it has to be an
integral part of the design.
 Correcting potential security issues may force you to change
your API entirely, or rewrite significant portions of your code.
This is often impractical, and you may miss some of the issues.



As anyone can understand, projects, whatever they are, are not
completed in one step.


 Of course projects are not completed in one step. You submitted
a code for review, I gave you a review: this is part of the process,
let's get on to the next step.



As to studying other languages, here, you are NOT talking to a youth
in his twenties or his teens, but to a 48 year old. Learning a new
language is a lengthy process and the ones I know are far more than
enough for what I do.


 I don't care what your age is, or where you live, or what gender you
are, or anything else about you. I'm only looking at the code and saying
what I think of the code. If you want to write in C, then please take
my review into account: it may not be to your liking, but it is honest.

 Or use whatever other language you want: I won't know it well enough
to review you, so I'll be off your back.

--
 Laurent



Re: [DNG] Systemd Shims

2015-08-16 Thread Laurent Bercot

On 16/08/2015 06:53, Steve Litt wrote:

The toughest part is how to store the passwords in a way that isn't a
security problem.


 Unfortunately, /etc/wpa_supplicant.conf doesn't have an include feature
(which is strange, because hostapd supports a wpa_psk_file option).
 So you have to store the passwords (or the equivalent binary PSKs) in the
configuration file, and make this file readable only from root - which means
you need a small suid root binary to write the whole configuration file.

 Password security isn't a problem that you can fix at the interface level,
it's something that must be tightly integrated with the tool that uses the
password - and there's no doubt wpa_supplicant could do better here.

--
 Laurent



Re: [DNG] Devuan and upstream

2015-08-15 Thread Laurent Bercot

On 15/08/2015 22:19, Stephanie Daugherty wrote:

They did, but out of all this design by committee, hidden between all
the political bullshit and bikeshedding, they also created the most
brilliant, most comprehensive set of standards for quality control,
package uniformity, license auditing, and of course. the most robust
dependency management, at least among binary distributions.


 And their package management features:

 - no way to have several versions of the same package installed on
your machine
 - no atomic upgrades for single binary packages: if you have stuff
running during an upgrade, things can break.
 - no possibility to rollback.

 I'm sure there's a lot of good to say about the way Debian does
things, but quality of their package management isn't a part of it.
When you're running production servers and need reliability when
upgrading software, you just can't use Debian. And that is sad.

--
 Laurent


Re: [DNG] ideas for system startup

2015-08-07 Thread Laurent Bercot

On 07/08/2015 14:58, Rainer Weikusat wrote:

There's obviously a TOCTOU race here because A is ready now doesn't
mean A is still ready at any later time.


 Of course. That's why you need a supervisor to receive death notifications
and publish them to whomever subscribes.



If there's something B can't do without A, it can't do that but it should
try to cope as good as possible, be it only by retrying until A becomes
available or delaying any 'expensive steps' until A became availabe.


 That's why you run B under a supervisor: worst case, it will be retried.
This does not preclude optimizing the common case, which is A will
succeed at some point, don't try starting B before that point.



But since the whole situation is hypothetical, this is nothing but a
speculation.


 It may be hypothetical to you, but it's very real to some people. It
has been very real to me in a few professional environments.



start-stop-daemon is a less ambitious "solve everything somehow related
to process startup in one program (or a set of tightly coupled
programs)" and that's something I consider to be the wrong approach.


 So do I.



| daemon chdir / monitor -n ca-server u-listen -g $GROUP -m g -l $SOCKET ca-server


 I agree with your script skeleton, except on this line. Here, your instance of
monitor is forked by your script. It can behave differently when you run your
script by hand from when it is run at boot time. Also, monitor itself is not
monitored; if it dies, you can't control ca-server anymore.

 And despite you not believing in dependencies, this is important. If
daemon or monitor do not support readiness notification and return as
soon as their child is spawned, ca-server may not be ready when your
script returns, and if some ca-client is started right afterwards, it
will fail. True, it will be restarted by monitor until it succeeds, but
it's polling that can be avoided by proper dependency management.



As someone else pointed out, the control flow code could be abstracted
away into some kind of 'universal init script' and individual ones would
just need to define the start and stop commands.


 That, or they may need to define service directories for a supervision
system. Just sayin'.



1) Keep a relatively simple init which kicks off execution of commands in
response to 'change the system state' request and nothing else (get
rid of as much of /etc/inittab as possible at some point in time)

2) Generally, keep the run-level directories, but with better management
tools.

3) Keep using the shell for everything it can easily handle. It's a
high-level programming language capable enough of handling the job.
But "I don't want to learn an additional programming language,
especially not one which looks like THIS and was designed in the
1970s!" is not a sensible reason for discarding it. Programming has
its fashions and fads, just like everything else, but since 90% of
everything is crap, 'new stuff' is mainly going to be just as bad as
'old stuff', only in a yet unknown way.

4) Provide the actually missing features, i.e. demonstrably required
ones, in the form of additional tools which integrate with the existing
system.


 You keep hammering that down as if 1), 2), 3) and 4) were not obvious
to me. Would it be possible for you to stop feeling threatened by what
I'm saying and realize that we're after the same goals, so we can have
some constructive discussion? If you can't, I'll simply let you argue
in front of a mirror and go back to coding.

--
 Laurent



Re: [DNG] Init scripts in packages

2015-08-07 Thread Laurent Bercot


 I'm not sure how systemd does it, but in my vision, there should be
two different states for the service: the *wanted* state, and the
*current* state.

 The wanted state is what is set by the administrator when she runs
a command such as "rc thisrunlevel". The command should set all the
services in thisrunlevel to the wanted "up" state.
 The current state is exactly what it says: is the service actually
alive or not, ready or not?

 A service manager's job is to perform service state transitions
until the current state matches the wanted state - or an unrecoverable
error happens, in which case the admin should have easy access to the
current state in order to diagnose and fix the problems.

--
 Laurent



Re: [DNG] Init scripts in packages

2015-08-06 Thread Laurent Bercot

On 07/08/2015 00:09, Rainer Weikusat wrote:

Since this is maybe/ likely a bit harsh


 Not harsh, just unwilling to accept that I'm actually your ally and
not your enemy.

 I'm not trying to replace Unix, because Unix is not broken - at least,
not as far as system startup is concerned. There *are* broken things
in Unix, but that's a whole other enchilada that I won't have time to
address in the next 2 or 3 lifetimes.

 I'm trying to replace *systemd*, with something that embraces Unix
much, much more.

 sysvinit or sysvrc init scripts don't define Unix. Trying to do better
than them is not replacing Unix, it's trying to do things right. Now is
the time to do things right, because if we don't, systemd is going to
take over, for good. I don't want that, and since you're here, I don't
think you want that, either.



If you're convinced that rip and replace is the only
viable option then I won't claim that you're wrong because I really
don't know. However, until I hit a technical obstacle, I won't be
convinced that the existing system can't be fixed (I acknowledge almost
all defects you mentioned) and this is based on being familiar (as in 'I
wrote the code') with both approaches.


 Fine. I'm okay with the burden of proof. Let's take a simple example and
see how deep we have to dig to make it work.

 You have 3 longrun services, A, B and C.
 A and C can take a long time to start and be ready, because they access
some network server that can be slow, or even fail. But they don't depend
on anything else than this network server.
 B depends on A. It does not contact the network server, but it consumes
non-negligible resources on the local machine at startup time.

 Your objective is to reach the state where A, B and C are all up and
ready as quickly and reliably as possible. How do you proceed?

 Serially? A start && B start ; C start
 Not good: you add A's and C's latencies.

 In parallel? A start & B start & C start
 Not good: B can be scheduled to start before A, and will crash.

 Using a supervision system to make sure all the services will eventually
be up ? s6-svc -u /service/A ; s6-svc -u /service/B ; s6-svc -u /service/C
 Better, but still not optimal: if B crashes because A is not up yet, it
will be restarted, and it's annoying because it's hogging important resources
every time it attempts to start.

 You need a dependency manager.

 rc A+B+C
 Much better. Except there's no such thing yet. The closest we have is
OpenRC, and for now it's serial: you'll eat the added latencies for A and C
just like in the naive serial approach. Ouch.
 ISTR there are plans to make OpenRC go parallel at some point. Let's
assume it is parallel already. What do you do if A crashes because the
network server is down ?

 You also need a supervision system, coupled with the dependency manager.
The OpenRC folks have realized this, and you can use a flag to have your
service auto-restarted. There's also some early support for s6 integration,
which I won't deny is flattering, but I still don't think the design is
right: for instance, there are early daemons you can't supervise.

 OK, now, how do you detect that A is ready and that you can start B?
Well, you need readiness notification, assuming A supports it. You need
it because you don't want B to crash and restart. And now your rc system
must implement some support for it.

 And I haven't even mentioned logging yet.

 If you've written init systems, you must have reached that point. I'm
interested in knowing how you solved those issues.

 Now, if you try to implement process supervision, dependency management,
readiness notification and state tracking in pure init scripts, well,
glhf. That's what sysvrc, with tools like start-stop-daemon or other
horrors, has been trying to do for 10 years, without knowing exactly what
it was that it was trying to do. The result isn't pretty.

 Then systemd came and said "hey look! I can do just that, and more, and
you don't have to bother anymore with those horrible sysvrc scripts!"
And what did admins say? "YES, PLEASE." And they gobbled up the crap because
it was coated with the sweet, sweet features. (And, yes, with an unhealthy
dose of propaganda and bullshit, but if you dig through the bullshit, you
can find a few things of value.)

 I'm saying the same causes will have the same results, and more tweaking
of the init scripts isn't going to accomplish anything without some serious
redesign. It's the easy path; I'm all for the easy path when it can be
taken, but in this case, it's not the right one. The right path is to
provide the sweet, sweet, *needed* features - but to do it the Unix way.



A social reason, e.g. "$someone pays my bills ($someone =~ /hat/) and
that's what I did and you will have to eat it", aka 'resistance is
futile...', won't do.


 You are fighting windmills. This is the Lennart way, it's not mine and
I don't think it is anyone's here. Even if I wanted to, which I don't
and never will, I don't have the power to ram anything 

Re: [DNG] Init scripts in packages

2015-08-06 Thread Laurent Bercot

On 06/08/2015 20:18, Rainer Weikusat wrote:

UNIX(*) and therefore, Linux, provides two system calls named fork and
exec which can be used to create a new process while inheriting certain
parts of the existing environment and to execute a new program in an
existing process, keeping most of the environment. This implies that it
is possible to write a program which performs a particular 'environment
setup task' and then executes another program in the changed
environment. 'nohup' is a well-known example. Because of this, instead
of 40 lines of boilerplate in every init script, almost all of which
is identical, it's possible to identify common tasks and write programs
to handle them which can be chained with other programs handling other
tasks


 Yes. You are describing chain loading. (And the system call is named
execve, if you want to be pedantic.)
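
 A minimal sketch of what chain loading looks like in practice: each
program fixes one aspect of process state, then execs into the next in
the same process, ending in the final command - no forks, no per-script
boilerplate. Here, env and nice stand in for purpose-built chain-loading
tools like those in execline:

```shell
#!/bin/sh
# env - clears the environment and sets only what we choose;
# nice adjusts priority; each execs into the next stage.
exec env - PATH=/usr/bin:/bin HOME=/ \
  nice -n 10 \
  /bin/sh -c 'echo "environment after the chain:"; env | sort'
```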



And that's finally the jboss start script. I have some more tools of
this kind because whenever I need a new, generic process management task
to be performed, I write a new program doing that which can be used in
concert with the existing ones.


 What you are saying is that your approach is exactly the same as the
one found here:
 http://skarnet.org/software/execline/
and here:
 http://skarnet.org/software/s6/

 It's free software, it works, it's maintained, and the author happens to
read the dng list, hoping to find technical interlocutors that share his
concerns for the future of GNU/whatever/Linux.

 Are you one of them? Good. Let's talk.

 Now, chain loading is great, and all the necessary tools that perform
process state changes already exist, but that's not enough to make
init scripts safe. When you want to sanitize a process, or when you're
doing any kind of security really, you cannot have an "allow everything
by default and deny specific things" approach: fork your process from
the user's shell, then sanitize the environment, the fds, the cwd, etc.
You *will* forget something at some point; if you don't, the person who
writes the next init script will. Instead, you have to use a "deny
everything by default" approach: in the case of init scripts, that means
always starting daemons with the same, sanitized, environment. That can
only be done with a supervision system, as explained at
 http://skarnet.org/software/s6/overview.html

 And a supervision system itself is a great thing, but it's not enough,
because it does not handle dependencies between longrun services (i.e.
daemons that can be supervised) and oneshot services (i.e. stuff you run
once to change the machine state, but that do not leave a long-lived
process around). That is where a real service manager is needed.

 As loath as I am to admit it, systemd *is* both a supervision system
and a service manager. It does machine initialization, daemon supervision,
and state changes - badly, but it does them. And no amount of mucking with
init scripts, no matter how much chain loading you use, is going to
accomplish the same.

 The solution is not in criticizing, it's in doing. I have the supervision
system, and I'm working on the service manager. But this will be all for
naught if systemd opponents can't be convinced that it is necessary: the
admin who wants a service manager will hear "sure, systemd does that!" on
one side, and "nah, this is purely systemd propaganda, you don't really
need that shit" on the other side. Guess what they will choose.

 The other side should be able to answer "sure, you can use *this* piece of
software, or *this* other one, to do what you want, and you don't have
to put up with all the systemd crap for this." It's the only way we can
make it work.

--
 Laurent



Re: [DNG] Init scripts in packages

2015-08-06 Thread Laurent Bercot

On 06/08/2015 16:00, Rainer Weikusat wrote:

That's all nice and dandy but it all boils down to 'the code executed by
the init script was deficient in some way'.


 Yes, just like root exploits boil down to "the code executed by the
suid program was deficient in some way".
 My point is that you shouldn't need to have 40 lines of boilerplate
to sanitize your init script before you run it; if it's so fragile,
then it's badly designed.



But that's something different. Using inetd as a simple, well-known
example: if I just changed /etc/inetd.conf, I want the daemon to reread
it and reconfigure itself accordingly, to avoid interrupting unrelated
services; but in case it's on a rampage in an endless loop, I want to get
rid of the current process and start a new instance. 'SIGHUP' is just a
random convention dating back to the times when signals were the only
kind of IPC supported by UNIX(*), and the signal was basically hijacked
because a 'daemon process' shouldn't ever receive it for some other
reason. It's not universally supported, and not all daemons are so simple
that "reread the config file" is the only means of controlling them at
run time. E.g., someone might want to ask bind to reload a specific zone.


 All agreed. Service-specific configuration can only be achieved by
service-specific tools.



Service management is a more complex task than
[nohup] /usr/sbin/ochsenfrosch >log 2>&1 &


 My point exactly.



[systemd]
Is it? Or is it an extremely incomplete, ad hoc designed programming
language with a service manager and a truckload of other potentially
useful stuff attached to it in order to encourage people to swallow the
language?


 I have never said, am not saying, and probably never will say that
systemd is any good. It's not, and Lennart and Kay should go back to
engineering school, and the people who hired them should go back to HR
school. Its "Embrace and Extend" strategy is as pernicious as Microsoft's
in the late 90s / early 2000s. It's technically inept and politically
evil.
 Nevertheless, those guys have understood a few things, and have done a
few things better than sysvinit or anything else really. You have to
analyze it, cut out the bad parts and keep the good parts. The concept
of service management is one of the good parts. They implemented it the
systemd way, but they did implement it.

 Know your enemy. It's easy and tempting to rant for days on systemd's
weaknesses; but it's also vital to acknowledge its strengths.



'Winning' against systemd will require getting support of a
commerically more potent company than RedHat and SuSE combined and one
willing to sink a sizable amount of money into the task.


 No, I don't think so. You don't fight Goliath with Goliath. You don't
fight Microsoft's proprietary-ness by investing into Apple. The last
thing we need against systemd is another systemd, as you say yourself
in the rest of your paragraph, which I fully agree with.



But 'booting the machine' is a much simpler task and it can be solved
within the existing framework by incrementally adding the missing
bits.


 Depending on what you mean by "adding the missing bits", I may or may
not agree. I'm not suggesting doing things the systemd way, but I do
believe that a quantum leap is needed. Which, of course, does not
preclude maintaining compatibility for some time to ease the transition.



Starting from the
presumption that this will turn out to be necessary is a guess.


 You either want to do things right or you don't. If you do, then it's
not a guess: starting and maintaining services reliably requires more
than the existing framework. There are countless web pages and
heartbreaking mails on support mailing-lists to prove it.

--
 Laurent



Re: [DNG] automount, mount, and USB sticks

2015-08-01 Thread Laurent Bercot

On 31/07/2015 11:47, Rainer Weikusat wrote:

But that's not a good reason for it being installed and running: A
daemon process should only exist because it provides some important
functionality with a real benefit for users of the system which cannot
(reasonably) be provided in some other way


 Nobody contradicts that, and in particular I don't see how this
disagrees with my point. It's the job of the distribution to make sure
only useful daemons are started.

 Giving the user the ability to gain privileges without opening a
security hole fits my definition of useful.

--
 Laurent



Re: [DNG] automount, mount, and USB sticks

2015-07-29 Thread Laurent Bercot

On 29/07/2015 17:07, tilt! wrote:


I am certain there is a way of solving this automounting
problem (if I may call it that) cleanly, without the use
of either of them. :-)


 There is a way to solve (almost) every suid issue cleanly, but
it requires running a small additional daemon for every command you
might want to run with special privileges, so this is not a
generic solution - but it can work for automounting.

 http://skarnet.org/software/s6/s6-sudo.html

--
 Laurent



Re: [DNG] automount, mount, and USB sticks

2015-07-29 Thread Laurent Bercot

On 29/07/2015 16:02, kpb wrote:

That is a really interesting way of looking at things, thanks for the
mental prompt.


 It's an elementary design principle: separate the engine from the interface.
I very much hope people who design GUIs keep it in mind.



How would you deal with providing notifications to a GUI layer if present?


 Have a general notification mechanism present, such as a bus (and no, I'm
not advocating D-Bus). Then have your GUI clients register on the bus and
subscribe to the events they're interested in, just like any other client
would.

 But that's for the general case. In many other, specific cases, there will
be a communication stream from the command-line utility to its caller; if
the GUI is the caller, it can simply keep the command-line utility alive
and use the stream of information it's getting from it.

--
 Laurent



Re: [DNG] automount, mount, and USB sticks

2015-07-29 Thread Laurent Bercot

On 29/07/2015 19:44, Jaromil wrote:

IMHO the bigger barrier to this is not having
a string parsing code (or basic grammar)
that is security oriented, I mean hardened
to run as root and handle corner cases


 The tool I linked does no parsing at all. The user gives the end
of the command line she wants to run, but the start of the command
line is fixed at daemon start time. One daemon per start of
command line; you can have hundreds of those if needed, because
each instance uses very little memory (max 2 pages of private dirty
stack, no heap).



most code out there has too many features
and is too ambitious to fulfill such a simple task


 I have a lot of tools that fulfill simple tasks, specifically made
to address these kinds of problems. When you're done with your
priorities - releasing Devuan 1.0 - let's talk.



I think I speak for most people here when I say we dislike
the quantity of undocumented daemons running
on a gnu/Linux desktop nowadays and
I hope we can trim that down with Devuan


 The real sticking point in what you just wrote is "undocumented".
 I think most people wouldn't mind a pandemonium on their machine IF
they knew exactly what daemon is doing what, how many resources a
daemon consumes, and how to disable the ones they don't need.

--
 Laurent


Re: [DNG] systemd in the era of hotplugable devices

2015-07-23 Thread Laurent Bercot

On 23/07/2015 22:41, Peter Maloney wrote:

What's wrong with these, which Thunderbird handles just fine?


 Ah, indeed it does, when the list address is in the To:
 It does not when the list address is in the Cc:
 So the solution is to make sure to always send To: the list. :)
 Thanks for the notice.

--
 Laurent



Re: [DNG] Will there be a MirDevuan WTF?

2015-07-23 Thread Laurent Bercot

On 23/07/2015 10:36, T.J. Duchene wrote:

I do not understand this animosity toward D-BUS.  Could you please
explain why it is such a point of contention?  It is a only a
protocol, with many different implementations.   It is comfortably
very generic and used on other UNIXs.


 Simple: it has a horrific design and implementation. It actually
exhibits the same technical problems as systemd: it is a monolithic
thing that attempts to do everything, and manages to do everything
badly.

 AFAIK, there are no political problems with D-Bus; the political
issue comes from those who want to integrate it into the kernel. But
boy, are there technical problems. I pity every maintainer in charge
of software that uses D-Bus.
 Just one single example:
 http://thread.gmane.org/gmane.linux.kernel/1930358/focus=1939166
 Using that many resources is horrendous, and a sure sign of terrible,
terrible engineering.

 I agree that it's a fight for another time, though.

--
 Laurent


Re: [DNG] systemd in the era of hotplugable devices

2015-07-22 Thread Laurent Bercot

On 22/07/2015 22:20, T.J. Duchene wrote:

That said, the reality of the situation is quite different than it is
in theory.  As the old saying goes in the American Midwest: The
proof is in the pudding.  Until someone provides a systemd
alternative that works better than systemd, yet provides conveniences
and the same API, no one who has latched on to systemd is going to
change their mind.


 Right. And that's why it's difficult: systemd has manpower, so it can
provide a lot of features - features that we have to replicate if we are
to offer a viable alternative, and we don't have as much manpower.

 For udev, login, and such, I don't think it's a problem, though. udev,
login et al. were working before systemd came along, so it's just a
question of cutting the BS and performing the right communication.
(Which is another issue per se, because the systemd people also have
manpower for communication - but it's not a technical issue.)



In my humble opinion, the best way to kill systemd is to dilute it by
cloning the API.


 I respectfully disagree. I'm of the opinion that cloning the API
acknowledges its value; to me, the best way to kill systemd is to
provide a serious alternative to everything that it does, but to
do it *right*, offering the advantages of systemd without the drawbacks;
and the API should be designed with that goal, to do things right -
which mostly precludes using systemd APIs.

 The problem with the systemd APIs is that they kinda enforce the
underlying architecture, and using them amounts to basically rewrite
systemd. The APIs themselves are not bad from a programmer's point of
view, but the architecture is, from an architect's point of view, and
that is what must be deconstructed.

 OT: I would like it if the list host could set the Mailing-List:
header on list messages. Most MUAs understand it and implement a
"reply to list" feature; without it, we're stuck with manual configuration
or hitting "reply to all", which causes duplicates.

--
 Laurent


Re: [DNG] systemd in the era of hotplugable devices

2015-07-22 Thread Laurent Bercot

On 22/07/2015 10:00, Oz Tiram wrote:

One argument I hear often about systemd is that it more adapted to current 
hardware needs, [e.g. here][1]

   Computers changed so much that they often don't even look like
  computers. And their operating systems are very busy: GPS, wireless
  networks, USB peripherals that come and go, tons of software and
  services running at the same time, going to sleep / waking up in a
  snap… Asking the antiquated SysVinit to manage all this is like asking
  your grandmother to twerk.

What I don't understand is how an init system manages hot-pluggable devices.
What does replacing a hot-pluggable disk drive have to do with how the system
is booted? Maybe this is all done in the non-init parts of systemd?


 Hi Oz,

 Don't believe everything you read on the Web. ;)
 The author of the article has already adopted systemd's point of view, which
is "one init should do everything", without even being aware of it.

 The truth is, you're perfectly right: it is not init's job to manage hot-
pluggable devices. There is NO reason why init should be made aware of
those kernel events, and the "systemd can manage modern hardware" meme is
but a pile of propaganda.
 Any init system, including sysvinit, will work just as well: managing
hotplug is udev's job, and anything implementing udev functionality will
do. udev predates systemd, so systemd did not invent the feature; it
just took udev and integrated it tightly to make itself unavoidable,
a.k.a. virus tactics.

 eudev and vdev, as well as other udev-like daemons, prove this is not
necessary. So you can safely ignore the article, written by someone who
has a wrong idea of what init is supposed to do.

--
 Laurent



Re: [DNG] systemd in the era of hotplugable devices

2015-07-22 Thread Laurent Bercot

On 22/07/2015 16:24, Isaac Dunham wrote:

In general, I'd agree with you, but there are some situations where it's
possible to argue for hotplugger/service manager integration:
  if you hotplug a scanner or printer, there's reason to think that the
  corresponding daemon (sane/cups/lprng/lpr) should start.


 Oh, yes, integrating the hotplugger and the service manager is a good
idea. But it does not have to be performed as intimately as systemd does.
It's possible for a hotplug manager to spawn a script for certain events
and have those scripts make calls to the service manager. The scripts
can even be changed depending on the service manager you have, without
changing the hotplugger.

 That kind of modularity is a major strength of Unix, and is one of the
things that systemd is disregarding, either out of incompetence (can't
design Unix software) or out of malice (actively tries to get integrated
with every aspect of the system).



None of these are actually 100% reliable, since you have a service
starting upon some request; if there isn't enough RAM, it falls flat.


 It's the case with every service manager, and everything you start
on-demand. systemd or not, integration or not, you'll have that problem
anyway.

--
 Laurent



Re: [DNG] Proposed defaults changes

2015-07-19 Thread Laurent Bercot

On 19/07/2015 20:07, Didier Kryn wrote:

You say crapware; I've also read bloatware. Everyone complains
about GNU, including me, but I don't forget everyone is or should be
immensely grateful for the wonderful software they provide to the
world, free and open. Think of gcc, glibc, emacs, latex...


 GNU is and has always been a political project more than a technical
one. And it has been very successful with its main goal, which is
great; I cannot thank GNU/Linux enough for giving me access to a free
Unix-like operating system in my learning years, and for providing a
serious alternative to Microsoft in the server world (serious as in:
it works about as well and it is cheaper).

 However, GNU should not be taken for what it isn't, and great
ideologies do not good engineering make. GNU is awesome when all the
alternatives you have are proprietary software, but the picture gets
much uglier when you start looking under the hood and evaluating the
software from a sheer technical viewpoint. Most of GNU *is* bloatware,
and a significant part of it makes outright bad technical choices;
some well-known GNU tools are a consistently bad experience for people
who use them everyday and either don't know better, or have no choice
because there really isn't any other free alternative.

 Do not be afraid to give credit where credit is due *and* point out
faults where they are; these are not contradictory. And I actually
believe that the best way you can be grateful to GNU is to be brutally
honest with them so you give them a chance to improve.

--
 Laurent


Re: [DNG] startup scripts (was dng@lists.dyne.org)

2015-07-18 Thread Laurent Bercot

On 18/07/2015 09:52, Didier Kryn wrote:

There are two categories of launchers: supervisors and
non-supervisors. Similarly, I think two scripts only are needed for
every daemon: one for launching without supervision, alla sysv-init
and one for launching from a supervisor. Just two, not one per every
launcher/supervisor.

Furthermore, a method could be agreed on to tell the script if the
daemon is going to be supervised or not, and we would then need only
one script for all cases.

For example, providing supervised-start and supervised-reload in
addition to the other cases could do the job. For safety, the scripts
could check a file which tells them which supervisor is calling
them.

What do you think?


 It's a bit more complicated than that.

 Supervision suites don't only take a script, they take a whole
service directory where the configuration is stored. Which is a
good thing, because it involves not only the daemon-starting script,
but cleanup procedures, readiness notification procedures, and other
details on how to properly supervise the daemon.

 The daemon-starting script itself is launched by the supervisor
without arguments, so you cannot call it as "foo supervised-start".
You could provide one unique script for every daemon that does just
what you suggest, but you would still have to provide a complete
service directory in addition to it.

 And the service directory isn't exactly the same depending on the
chosen supervision suite. daemontools, runit, s6, perp and nosh all
have subtly different options, so even if you have a common basis
and simple daemons can use the same service directory with several
supervision suites, when you get into advanced stuff such as
readiness notification,configuration details must be handled
differently.

 Believe me, providing sysv-rc compatibility when you're working on
a supervision suite is no small feat: the paradigm is very different,
and when you've had a taste of supervision, you realize how poor the
sysv-rc model is and how hard it is to contort yourself to fit into
the box. If it was easy, we would have provided compatibility packages
long ago, and supervision would already rule the world (with a gentle,
good hand).

--
 Laurent


Re: [DNG] startup scripts (was dng@lists.dyne.org)

2015-07-18 Thread Laurent Bercot

On 18/07/2015 12:42, Fred DC wrote:

I am not saying that runit is better than s6 - all I want to point out is
that debian runit, until recently, integrates fairly well with sysv-rc.


 The reason why it does is that it compromises on supervision. I don't
know how debian runit is packaged, but I'm willing to bet that in this
model, for instance, udevd is not supervised by runsv. See the first FAQ
entry at http://skarnet.org/software/s6-linux-init/quickstart.html for
details.



Yes, the supervised services do need their own service-framework with
their own scripts. For me (as a simple user) the hard nut to crack was
to write stubs and a script which during a debian-update translate the
inet-calls to sv-calls without insserv telling me to take a hike.

I succeeded because I accepted the fact that I have the standard
lsb-sysv-scripts in /etc/init.d/ and that the underlying dpkg-system
does use these scripts.


 I'm currently working on a service manager for s6 that can ease the
transition between a sysv-rc style and a supervision style, i.e. it can
use the sysv-rc scripts at first, and switch to a supervision style
service by service when package maintainers start supporting it. It
still doesn't make things easy, but I believe it makes the support
envisionable for a distribution, because the workload does not have
to be done all at once - it can be spread over time.



BTW, what about rcS, rc0 and rc6 - a complete re-write?


 That's my ultimate goal, but it's obviously not feasible without a
gradual and smooth transition plan, which I'm elaborating now.



Naah... KISS... use what we got! Strictly my opinion.


 KISS all right, but sysv-rc is not SS, and most importantly, it's
not correct. "Use what we got" is a good strategy to organize work
and focus on the urgent things first, but at some point, when
"what we got" is insufficient, it needs to be addressed. Take your
time, and I'll be there when distributions are ready to tackle the
issue. :)

--
 Laurent



Re: [DNG] Fw: [ale] ALE-Central Meeting Thursday July 16, 2015 @ 7:30pm

2015-07-15 Thread Laurent Bercot

On 15/07/2015 06:30, Steve Litt wrote:

Apparently somebody has arbitrarily declared July to be systemd month.
I wonder what *we* can do to celebrate systemd month.


 I'm going to happily ignore it and do my own thing, and unless you
want to give more power to systemd by acknowledging that the decisions
of some dude in Atlanta have to influence your life because there's
the word systemd in it, you should too.

--
 Laurent



[DNG] Mail writing interfaces / processes on Linux (was: systemd in wheezy)

2015-07-09 Thread Laurent Bercot

On 09/07/2015 19:36, Steve Litt wrote:

I know what you mean. In the past 9 months I've seen a huge uptick
in ambiguity in emails, to the point where many times, you don't
know who said what, and it looks like the person is arguing with
himself, with temporal dislocations thrown in as people top-post with
words like "it" instead of exactly what they mean, or "I agree" in a
thread with twelve different assertions.


 Blame the tool designers. Most users read far more than they write, so
tools are optimized for reading, and not much work goes into UIs for
writing. Users are lazy - that's nothing new - and simply don't put in
the necessary effort to properly format what they write; but a good UI
should make it easier for them, or even do it in their stead.
Unfortunately, they are few and far between.

 GMail is a prime example of this sad state of the art. The GMail Web UI
is optimized for conversation reading, i.e. it will display all the
mails in a thread at once. But the way GMail can do that is that when
you reply to a conversation, it automatically quotes the *whole*
conversation in your mail, and forces you to top-post, so the UI can
hide the quoted part that's below your answer.
 This is great for readers who use the GMail web interface. And it is
absolutely horrible for people who don't.

 My own lists only accept plain text - I consider that if you want to
communicate via a mailing-list, you should be able to handle plain text;
if you want HTML, go to a web forum. But obviously, not everyone agrees.



By the way, I have no personal knowledge of how many actor sockets a
listener socket can spawn off, but if I had to guess, I'd imagine 50
would be way too low a number, if for no other reason than none of
my current and former ISPs would have been able to serve httpd to
the masses if 50 was the limit.


 If you're interested in the "how many simultaneous clients can I handle ?"
question, a fundamental reference page is: http://www.kegel.com/c10k.html
 It was essentially written between 1999 and 2003, but parts of it have
been maintained until today, and most of it is still pretty accurate. The
underlying APIs or algorithms have not changed that much. TL;DR: if you
use the proper APIs, you can serve on the order of 10k clients
simultaneously on one socket. And that was already true in 1999.

 That is for heavy network servers. For services where you don't
expect 10k clients, you can use the fork/exec model just fine - that's
what inetd and tcpserver do, and it works pretty well. I expect you
could serve several hundreds without a problem, and in certain cases,
you could probably reach one or two thousands before experiencing
noticeable slowdowns.
 The first problem you'll encounter when doing that will probably be
the amount of resources, especially RAM, that you need to keep several
hundreds concurrent servers running. Most servers are not designed to
be especially thrifty with RAM, and if every instance is using a few
megabytes of private data, you're looking at a few gigabytes of RAM
if you even want to serve 1000 clients.

 Now, the original point was "What is the maximum number of processes
you can run on a system". Well, for all practical intents and purposes,
the answer really is "As many as you want". As I usually put it:
processes are not a scarce resource. Let me repeat for emphasis:
*processes are not a scarce resource.*

 I don't know what the scheduler algorithm was pre-Linux 2.6, but
in Linux 2.6, the scheduler was in O(1), meaning it scheduled your
processes in constant time, no matter how many you had. How awesome
is that ?
 They changed it for some reason, in some version of Linux 3.0 or
something around it. Now it's in O(log n), which is still incredibly
good: unless you have billions of billions of processes, you are not
going to noticeably slow down the scheduler. Fact is, you're going to
fill up the process table way before having scheduler trouble.

 Go ahead and make your fork bomb. You *will* notice a system slowdown,
but that will be because all the processes in your fork bomb are
perpetually runnable, so it's just that you will be hogging the CPU
with a potential infinity of runnable processes, and anything else
will have no timeslice left. You will see immediately that your shell
becomes unable to fork other commands - your fork bomb has filled up
the process table. But the system is still running, as best as it can
with all CPUs at perma-100% and a full process table.

 Historically, pid_t was 16 bits, and 32k processes won't kill your
scheduler. Nowadays, pid_t is 32 bits, and although there definitely
are limits that prevent you from having 2G processes, the sheer number
of processes isn't it.

 On a typical machine, the constrained resources are RAM and CPU. Those
are the resources you'll run out of first; and a process will consume
more of one or the other, depending on what it does and how it
is used.

 A Linux process takes some kernel memory (not sure exactly 

Re: [DNG] Linus answers a question about systemd

2015-07-01 Thread Laurent Bercot

On 01/07/2015 20:21, Aldemir Akpinar wrote:

http://linux.slashdot.org/story/15/06/30/0058243/interviews-linus-torvalds-answers-your-question?utm_source=feedly1.0mainlinkanonutm_medium=feed


 Yes, it can happen to the best of us: even Linus doesn't know there
are alternatives to handle services the right way, and that are less
insane than systemd. I don't blame him: userspace is not his world,
he's not interested in it, and has only heard the loudest parties,
i.e. traditional init vs. systemd, with a distant ear.

 Linus really isn't the guy who must be convinced, here. He is the
guy who must be convinced that kdbus is a horrible idea, that's for
sure. But as far as systemd is concerned, the people who must be
convinced are distribution maintainers and daemon writers.

--
 Laurent



Re: [DNG] Packages aren't the only path to alternate inits

2015-06-18 Thread Laurent Bercot

On 18/06/2015 16:57, Hendrik Boom wrote:

I assume that aptitude has enough algorithmic capacity to do this, but
when things get complicated there may not be enough computational power
to carry out this analysis in available time and space.


 My experience is that we have way more computational power than
we think, and most inefficiencies come from the implementation, not
the amount of work there is to perform.

 I'm working on a dependency manager. At some point, I have to do
some operation in O(n^2), n being the number of services; there's
no other choice. But it's still blazingly fast, and you can have up
to thousands of services with sub-second execution times. Modern
computers are powerful, and data size is nothing to be afraid of -
as opposed to program size, which must be handled by humans.

 So, yeah, if aptitude can handle package dependencies, just feed it,
make it work. Even if you have disjunctions - and disjunctions
are algorithmically expensive - it's not what will take the most
time. Unless, of course, the engine is written in Python or something
equally unsuited.



Now, since it's possible to have several init systems installed, and
even to have different subsystems started by different init systems
(not all running as PID 1, of course), perhaps the mutual exclusion
among the init systems is a bad idea.


 Absolutely. Why enforce exclusion when you can have a choice ?
Make a "currently active" vs. "inactive" switch, I don't know the
Debian/Devuan terminology, and allow users to install both.



But I can't recommend this be done for Devuan Jessie.  Too soon, and it
would make Jessie too late.


 I think we're all planning for the future, here - get Jessie out
first, and then small steps, one at a time. :)

--
 Laurent



Re: [Dng] Readiness notification

2015-06-15 Thread Laurent Bercot

On 15/06/2015 05:57, Isaac Dunham wrote:

(3) Server uses not-a-supervisor:
# write a small C wrapper that forks, execs server in child,
# accepts s6-style notification and exits in parent
fake-sv -d 1 server
client


 The main reason why I advocate such a simple notification style is
that this kind of thing is trivial to script, without a C wrapper:

 if { server ; } | read y ; then
   client
 else
   echo "Server did not notify readiness" 1>&2
   exit 1
 fi



(4) Server with s6, as far as I can tell from the skarnet docs:


 Using a supervision framework is the best solution if there is a
case for running the server independently from the client. If there
isn't, the above script works.

--
 Laurent



Re: [Dng] printing (was Re: Readiness notification)

2015-06-14 Thread Laurent Bercot

On 14/06/2015 11:58, KatolaZ wrote:

Sorry for asking a silly question, but what's the problem in having
cups running all the time? And better, if you start/stop cups when
you need it, why should cups notify systemd (or any other init) that
it is ready to do things? Why should init be informed of the fact that
a daemon is running or not, except maybe at boot time, and just for
the sake of allowing parallel boot?


 It's not about parallel boot, it's about reliability, even in the case
of sequential boot. It's also not about notifying init, it's about
notifying services that depend on it, which is generally handled by
a centralized service manager - and systemd is such a centralized
service manager that also runs as init.

 Say you have an awesome automatic Web printing service; it should
not be activated before CUPS is ready, else it will not work and
clients will complain. How do you make sure you don't start it too
early ? The historical solution is to sleep for a certain amount of
time, but this is ugly, because either you don't sleep enough and
it still fails, or you sleep too much and you waste time. The
correct solution is for CUPS to tell your Web printing service
when it's okay to start - but since CUPS doesn't know what depends
on it, it can only notify its manager, which will then take care of
things.

 Of course, you might not have a use for the mechanism, and if Devuan's
only intended audience is desktop PC users who don't care about
reliability, then it does not matter. But readiness notification is
a real engineering issue that needs to be solved.

--
 Laurent



Re: [Dng] printing (was Re: Readiness notification)

2015-06-14 Thread Laurent Bercot

[ Didier ]

What happens then? Does the webprinting service crash? Or does it
hang until Cups is ready? Is it able to detect that it is hanging?
The last would probably be the most sensible way to handle the
dependency :-) A professional webprinting service should be able to
do that. And this is what Cups itself does when a printer is paused.


 Yes, of course services should fail gracefully when their dependencies
are not met. "Should" being the operative word here. In reality,
professional or not, few programs test their error paths thoroughly.
Often, the best behaviour when encountering such an issue is simply
to exit. You cannot ask every service to hang until their dependencies
are met - exiting with an error message is much simpler.
 As an admin, I'd rather not experiment with all the error paths in
all the services I'm launching. I'd rather make sure they work.


[ Stephanie ]

I'm firmly in the camp that process supervision is evil, because
service failures on a *nix system should not happen


 They should not happen. But they do.
 And auto-restart is not the only thing that process supervision gives
you. Ease of process management is a big one for me.



and when they do
they should be a really big inconvenient deal that wakes people up at
3am - because that's the sort of thing that gets problems noticed and
fixed.


 Nothing prevents you from having an alert that wakes the admin up
when the service fails. But while the admin is waking up and logging
in, it's better if the service has been restarted and is trying to
be operational.



Process supervision trivializes failures, and leads us down a
path of *tolerating* them and fixing the symptoms instead of fixing
the problem


 No, it does not. If your admins use process supervision as an excuse
to be complacent with failures, the problem lies with your admins, not
with process supervision.

--
 Laurent


Re: [Dng] printing (was Re: Readiness notification)

2015-06-14 Thread Laurent Bercot

On 14/06/2015 19:17, KatolaZ wrote:

I am sorry but you simply don't get rid of race conditions by
signalling that the daemon is ready. If the daemon dies or hangs for
whatever reason, you will still have a problem, since you thought the
service was up and running while it is not any more


 Which is why you have process supervision and service monitoring,
and that's another topic entirely. Yes, reliability is hard.

--
 Laurent



Re: [Dng] printing (was Re: Readiness notification)

2015-06-14 Thread Laurent Bercot

On 14/06/2015 23:45, Jonathan Wilkes wrote:

Is there a way to tell which packages use a particular function like sd_notify 
et al?


 Authors *should* document readiness notification capabilities of
their daemons. But then again, reality may be different.

 sd_notify will be easy to spot: there will be a dependency on
libsystemd. :)

--
 Laurent



Re: [Dng] Readiness notification

2015-06-14 Thread Laurent Bercot

On 15/06/2015 00:36, Isaac Dunham wrote:

I think that a program that must run in the background is broken.
Yet *prohibiting* auto-backgrounding imposes an even more heavy toll
on scripts where process 1 requires process 2 to be running and
ready: you *must* run a supervisor, or else run a lame-not-quite-a-
supervisor, just to do what would have been trivially done in a few
more lines of C.


 Can you please elaborate or reformulate ? I don't understand what you
mean. I don't think there's *ever* a case where it is necessary for
the original process to die, so what is the kind of script you are
thinking of ? Could you give a small example ?
 (We should take this to the supervision list, this is becoming
seriously off-topic for dng.)

--
 Laurent



Re: [Dng] Readiness notification

2015-06-13 Thread Laurent Bercot

On 13/06/2015 11:37, KatolaZ wrote:

 AFAIK all this fuss with
daemon-readiness began with the first attempts to have parallel boot
sequences, which is something that is still *useless* in 95% of the
use cases: servers don't get restarted every other minute and normal
users who don't use suspend can wait 30 seconds to have their desktop
ready, what else? Embedded systems? Well, they usually have just a few
services running, if any


 30 seconds is a lot. What if you could get your desktop ready in
5 seconds or less ?
 About embedded systems, things are changing quite fast. The amount of
services running on an Android phone, for instance, is mind-boggling,
and it can take several minutes to boot a phone. Most of this waiting
time is unnecessary.



What are we really talking about? Isn't this just another
feature-for-the-feature's-sake-thing? Why should I bother to allow
cups signalling immediately that it is ready to put my printouts on a
queue that will anyway need minutes to be processed?


 A printing service may not be the best example, but when you're late
for a broadcast and turn your TV on, you don't want to wait any more
than is necessary to get image and sound. If your TV took extra seconds
to boot up, you'd curse at it.
(This has happened to me, I watch broadcast streams on my PC and have
wished for faster booting times on more than one occasion.)

 More generally, the main reason why computing is so slow is that
everyone reasons the same way: "why bother?" Somebody has to start,
and there's no better place than the low level.

 If nothing else, there's the public relations argument. As long as
systemd is able to say "we do parallel service startup and boot your
machine in 5 seconds, nobody else does it", then systemd has a real,
valid, technical advantage over other systems, and it makes it harder
to advocate against it.

--
 Laurent



Re: [Dng] Readiness notification

2015-06-12 Thread Laurent Bercot

On 12/06/2015 22:21, marc...@welz.org.za wrote:

The trick is for the daemon process to only background when
it is ready to service requests (ie its parent process exits
when the child is ready).


 You already mentioned it in a reply to me, indeed. I intentionally
did not follow up, and here is why.

 This is bad design, for several reasons. It's actually worse design
than sd_notify(), and I'd rather have every daemon use sd_notify()
and write a sd_notify server myself than recommend your solution.

 The reasons why it's bad are mostly described here:
 
http://homepage.ntlworld.com/jonathan.deboynepollard/FGA/unix-daemon-design-mistakes-to-avoid.html

 Your solution enforces forking, i.e. auto-backgrounding, and
prevents logging to stderr; in doing so, it prevents daemons from
ever being used with a supervision framework. systemd may be able
to supervise it, using cgroups, but that's about it, unless the
admin is using foreground hacks.
 Moreover, the method you are using to transmit one bit of information,
i.e. readiness of the service, is *the death of a process*. This
is pretty heavy on the system; a simple write() does the job just
as well, and hundreds of times faster.

 If auto-backgrounding was an absolute necessity, then sure, the
parent would need to die anyway, and you could time its death so
it transmits readiness. Why not. But auto-backgrounding is not good
design in the first place; it comes from ancient Unix times when we
did not know better, and the daemon() function should either
disappear into oblivion, or have a place in the museum of medieval
programming as an example of how not to write Unix software.

--
 Laurent



Re: [Dng] Readiness notification

2015-06-12 Thread Laurent Bercot

On 12/06/2015 20:09, Tomasz Torcz wrote:

   Hey, it's almost exactly what sd_notify() does.  Instead of one character,
it writes "READY=1" to a socket. Nothing more, no D-Bus, no additional
libraries needed.  In basic form it's a few lines of C code.
   Of course 
https://github.com/systemd/systemd/blob/0e4061c4d5de6b41326740ee8f8f13c168eea28d/src/libsystemd-daemon/sd-daemon.c#L414
looks much worse, but it packs more functionality than simply
readiness signalling.


 Which is exactly the problem: it packs more functionality.
 More functionality, that can be added to at a developer's whim, is
something that a stub has to support - if only to not crash on
functionality it doesn't understand.
 More functionality means that daemon authors will rely on it, and
use the readiness notification mechanism for things entirely unrelated.
 More functionality means feature creep that you will have to follow
to remain compatible.

 It's easy to write a notification server that listens to connections
from sd_notify() and does things when it reads "READY=1". It's not so
easy to implement all the sd_notify protocol, even right now, and with
all the random stuff that will inevitably get added over the years,
when daemons start relying on it.

 Instead of those 68 lines of code, I suggest the following:

void notify_readiness (void)
{
  write(1, "\n", 1) ;
  close(1) ;
}

 You really don't need anything more.

--
 Laurent



Re: [Dng] Readiness notification

2015-06-12 Thread Laurent Bercot

On 12/06/2015 19:46, Steve Litt wrote:

I agree with every single thing you write above, but have one question
for you: What does Devuan do when daemons like cupsd and sshd make
sd_notify calls, and these don't condition the call on sd_notify being
available, and sd_notify cannot be conditionally recompiled out of the
program? Is Devuan going to rewrite every single daemon?


 Short answer: yes. That's exactly the point of a distribution, as
Katola says.

 A bit longer answer: it doesn't even have to be hard. You don't have
to rewrite anything - just patch; it shouldn't be too difficult to
edit out the parts calling sd_notify.
 Even better, suggest alternative software that you don't have to
patch. cupsd and sshd have been infected by systemd? Well, maybe you
should inform the users, and provide similar functionality in a
systemd-free way. Isn't that the goal of Devuan? sshd servers are
not lacking. As for printing servers, I don't know, but I'd be surprised
if cupsd was the only possibility.

 And if it actually is the only possibility, then we have a bigger
problem than just sd_notify: it means that monopolies exist in free
software - real, existing monopolies, albeit more inconspicuous than
systemd's obvious attempts at a monopoly - and it is taking away from
users' freedom. And that is what needs to be fought first and foremost.

 I firmly believe that in 20ish years, we have lost most of the awareness
of what free software is and what it means. If we cannot actually dive
into the code and take out what we don't want, then it's de facto not
free software anymore, no matter the reason. systemd is proprietary
software despite its licensing because it's too complex to be tinkered
with at an individual level, it's controlled by a company, and it uses
embrace and extend methods to establish market domination. But I don't
think sshd and cupsd are there yet - they can still be worked with. Same
with most daemons that have already succumbed to the sirens of sd_notify.
And I certainly hope that those are few and far between.



By the way, if there *were* a stub sd_notify, I'd have nothing against
it doing nothing but passing the info to S6-notifywhenup or something
like it.


 The issue is complex because it's both technical and ideological.

 Stubbing the sd_notify client, i.e. writing a library that daemons can
link against, is easier because it doesn't depend on anyone else than
the person writing it - but it is ideologically worse because it gives
ground to systemd, and does nothing to incentivize daemon writers to
stop using the sd_notify API.

 Encouraging daemon writers to use another API and providing a wrapper
to make daemons using the simpler API work with the sd_notify mechanism
is clearly the better ideological solution, and also technologically
preferable because more compatible with other notification frameworks;
but it's harder, because it requires communication with daemon authors,
and the systemd proponents have more communication power than we do
(read: $$$). But I think authors can be convinced if we show that our API
is simpler to use and will work under any framework, systemd included.

 If there were a stub sd_notify, I'd rather have it output the info in
the simplest possible format so that it can then be used by *any*
notification reader, and so that it's actually easier for a daemon author
to output the notification directly than to call sd_notify(). I'm a bit
uncomfortable treading these grounds, because it's technical talk that
ultimately has a political goal, and I don't like to mix the two, but
when stubbing is involved it's unfortunately unavoidable.

--
 Laurent



[Dng] Readiness notification (was: One issue with ongoing depoetterization)

2015-06-12 Thread Laurent Bercot

On 12/06/2015 19:01, Steve Litt wrote:

The one thing I *do* know is that we need to provide a sd_notify
interface, even if it does nothing at all and drops passed information
on the floor.


 Please don't do this.
 The more you bend to the systemd interfaces, the more it gets a foot
in the door. By implementing a dummy sd_notify, you acknowledge the
validity of the interface; you accept that the systemd people have
created something worth using, and you validate the existence of
(a part of) systemd.

 The thing is, the sd_notify interface is not *too* bad, but like
the rest of systemd, it is overengineered, too complex, and mixes
several different things in a monolithic way so that people who are
trying to actually implement real support for sd_notify are bound
to reinvent systemd one way or another.

 It's absolutely like the old Microsoft "embrace and extend" strategy,
you know. They get one foot in the door by providing something useful,
but then pile their own crap onto it, and people are already relying
on it, so you want to implement support for it, and then you're captive,
and the only way to be 100%-compatible is to use the original product.

 There's a much simpler mechanism that can be used to provide
readiness notification, and that I suggest in
   http://skarnet.org/software/s6/notifywhenup.html
and that is: write a given character on a descriptor, then close that
descriptor.
 It doesn't get any simpler than that, it doesn't require linking
daemons against any library, and it can be made to be compatible
with *everything*. If a daemon supports that mechanism, it is very easy
to write a systemd-notify program that would wrap that daemon and
communicate with systemd via sd_notify to inform it of the daemon's
readiness, the way s6-notifywhenup does for s6.

 Please KISS, and design good interfaces to be adopted, instead of
letting yourselves get bullied by systemd forcing its interfaces down
your throats and jumping through hoops to adapt to them. With a
simpler interface and a trivial wrapper program, the influence of
systemd only extends to the wrapper program, and not to the daemons
themselves.

--
 Laurent



Re: [Dng] Readiness notification

2015-06-12 Thread Laurent Bercot

On 12/06/2015 19:46, Steve Litt wrote:

I agree with every single thing you write above, but have one question
for you: What does Devuan do when daemons like cupsd and sshd make
sd_notify calls, and these don't condition the call on sd_notify being
available, and sd_notify cannot be conditionally recompiled out of the
program? Is Devuan going to rewrite every single daemon?


 I have a detailed answer to this but I have to go out right now for
a few hours. Sorry about that and stay tuned. ;)

--
 Laurent



Re: [Dng] straw poll, non-free firmware for installers

2015-06-03 Thread Laurent Bercot

On 03/06/2015 19:50, Vince Mulhollon wrote:

Just be careful, the assumption is the user is the installer is the
buyer, and frankly most of the machines I've installed in the last 20
years, that has not been the case.


 My point exactly, and my apology for entertaining the confusion with a
poor choice of words.

--
 Laurent


Re: [Dng] straw poll, non-free firmware for installers

2015-06-03 Thread Laurent Bercot

On 03/06/2015 18:41, hellekin wrote:

*** I must admit I was almost agreeing until the moralistic crap.  This is
your opinion, and in my own, an unfounded one.  What we're talking
about here is technology, not moralistic anything.

The technology we're building is one that empowers the user, and it
is arguable whether considering the imposition of
freedom-restricting technology empowers the user or not.  The case is
hardware that the user buys and that refuses to work without secret
code from the company.


 Well, when the user buys such a piece of hardware, he is *already*
disempowered. If he (gender chosen by flipping a coin) is technical
enough to perform the Devuan driver installation, chances are he already
knows the kind of hardware he has; and what he now wants is to get the
damn thing working, not get blamed because his hardware sucks. He knows.
He's probably not the decision-maker - think technical people in
companies. He probably did not vote for that hardware but got overruled.
Adding insult to injury by lecturing him is unnecessarily aggravating.

 There is a place and time for everything, including freedom advocacy.
I am all for advocating the use of decent hardware and for throwing
locked in crap into the garbage can. I wish hardware manufacturers
would understand the benefits of open specifications. I wholeheartedly
support advocacy campaigns, including naming and shaming the worst
hardware offenders.
 But machine installation is not the time for advocacy. The decision
has already been made, and at that point, telling users that it sucks
isn't going to help anyone, it's just going to make the distribution
look bad.

--
 Laurent


Re: [Dng] OT: separate GUI from commands

2015-05-31 Thread Laurent Bercot

On 31/05/2015 18:35, Didier Kryn wrote:

AFAIU, this thread has turned to be about interfacing whatever app to
a scripting language. I consider this a very useful feature for all
but basic applications. In particular, I consider that interfacing
init - the init program which is pid 1 - with a scripting language
would provide ultimate init freedom.


 init is the pinnacle of a basic application. It's not even an
application, it's a system program. There is NO reason why you'd want
to interface init to a scripting language - what are you trying to
accomplish that can't be accomplished outside of process 1 ?

init freedom, whatever that means, is a political goal, and has to be
reached with political tools. Trying to use technical tools to reach a
political goal is not a good idea; it usually produces bad software and
ends up being unsatisfying for everyone.

--
 Laurent


Re: [Dng] The more things change, the more they remain the same

2015-05-28 Thread Laurent Bercot

On 28/05/2015 11:43, Didier Kryn wrote:

porting to Musl was not finished yet - still problems
with dynamic linking he says. I prefer Musl to uClibc for several
reasons


 I'm using musl too. You can use the Aboriginal toolchains, even if
they're uClibc-based, to compile musl, and then link stuff against musl.
It just requires tweaking the musl-gcc.specs file a bit. More details
offline if you want!



finally bootstrapped my toolchain from
https://github.com/sabotage-linux/sabotage . But, even in Sabotage,
the compiler is not sysrooted :-(


 Yeah, I tried Sabotage too, and it's good, but I prefer the Aboriginal
toolchains, for this reason and other ones (for instance, a fully
static toolchain is much more elegant and easy to move around).

--
 Laurent


Re: [Dng] OT: separate GUI from commands

2015-05-27 Thread Laurent Bercot

On 27/05/2015 12:12, Hendrik Boom wrote:

I'm in the process of writing (yet) a(nother) editor and output formatter,
and on reading this, I started to wonder -- just how could one separate
a command-line version from the UI?  I can see that the output
formatter can be so separated (and very usefully), but the actual
editing?


 Maybe I'm misunderstanding the question, as I'm unfamiliar with the
internals of editors. But IIUC, an editor is, by definition, a user
interface, so the command-line / UI separation looks impossible or
irrelevant to me here.
 However, there are still some separations that you can perform when
designing an editor. Right off the top of my head:
 - Core functions vs. actual interface, which could be terminal-based
or X-based. Think vim vs. gvim, or emacs -nw.
 - programmable editor commands vs. interface to enter those commands.
Think the sed engine in vi, or the LISP engine in emacs.

 If you factor your work correctly, it should be trivial for a program
to get access to your editor commands via a library function call - and
you can make a command-line tool to wrap useful calls. Also, there could
be an editor-base package with very few dependencies, providing your main
library; and an editor-text package depending on editor-base and ncurses
(for instance), and an editor-gui package depending on editor-base and
your favorite graphical libraries. So text-only users wouldn't have to
install editor-gui and pull the whole plate of spaghetti.

 My 2 cents,

--
 Laurent



Re: [Dng] The more things change, the more they remain the same

2015-05-27 Thread Laurent Bercot

On 27/05/2015 17:51, Irrwahn wrote:


No intention to lessen your main point, but that last observation
does not come as a surprise. Development systems inherently have
an installation overhead compared to simple runtime environments,
it's always been that way.


 Oh, definitely. My router doesn't need a C compiler, or make, or
even a shell, to run. (True: at no point in its normal operation
does it call a single shell script.) However, I still have those
installed, because keeping the production environment and the build
environment separate is kind of an exercise in futility, and a waste
of time, when you have gigabytes of SSD to spare.
 For my previous router, which didn't have any storage space except
for a 32 MB CompactFlash card, I definitely built the image offline.
It worked well, but the development cycle was a bit uncomfortable :)

 My point was that when you're that close to the kernel, when you're
doing pure C development at a low level, you should need a shell, some
POSIX utilities such as sed/grep, a C toolchain, and make. *That's it*.
Anything more than that is bloat - and in particular, when you're Linux-
specific, i.e. not even trying to be portable, autotools is not an
asset, it's a hindrance. (I won't make a lot of friends by saying that,
but it's okay. Feel free to ignore me.)



However, it amazes me what heaps of
packages one has to wade through to get a minimal usable GNU/Linux
system /capable of replicating itself/. (I'm currently digging my
way through Linux from scratch, as an educational exercise.)


 Yeah, well, the problem with self-replication is that it starts with
a C toolchain, and bootstrapping the GNU toolchain is an adventure
of its own - you can't exactly call that simple or lightweight software.
(Unfortunately, I'm not aware of any working alternative so far. AFAIK,
clang/llvm can't bootstrap itself.)

 I've never delved into the nine circles of toolchain building and
self-replication myself, because another guy has already done all the
hard work: http://landley.net/aboriginal/
 (Yes, I do love that project. It's an awesome time-saver.)

--
 Laurent



Re: [Dng] The more things change, the more they remain the same

2015-05-27 Thread Laurent Bercot

On 27/05/2015 17:46, Anto wrote:

And I have been using OpenWRT for years


 This is exactly akin to using a distribution, even if you
recompile it from source: it hides the real costs such as
software dependencies, because it precisely does all the hard
work for you.
 OpenWRT is a great project, but it wasn't right for me.
(Besides, they're still on uClibc. They should switch to musl.)

--
 Laurent


Re: [Dng] Linux boot documentation

2015-05-05 Thread Laurent Bercot

On 05/05/2015 11:22, Didier Kryn wrote:

I'm not sure what happens if init crashes after other processes have
been started, whether the kernel panics or other processes continue
without init - not a very good situation.


 The Linux kernel panics when init dies. It's the dreaded "Attempted
to kill init!" error message, the same one you get at boot time when the
kernel fails to execute init for some reason (often a rootfs not
found).
 This behaviour is debatable - a cleaner solution would probably be
to reboot when init dies. But it's how it has always worked.

 Other OSes may exhibit different behaviours. On an old version of
Solaris, for instance, pid 1 could die like any other process and the
kernel would simply keep running. However, no process would reap
orphans, and zombies would accumulate, eventually making the machine
unusable - full process table with no way to clean up - so it was not
an acceptable design, and I think they changed it later on (but I
stopped using Solaris so I'm not sure how it evolved).



Hope this helps and there are no errors :-)


 AFAIK, it was pretty spot-on :)

--
 Laurent


Re: [Dng] Debian Dev: anti-systemd people hate women; thus respectable people should not support anti-systemd stance.

2015-05-03 Thread Laurent Bercot

On 03/05/2015 06:44, Steve Litt wrote:

What is the motivation for a person to join the mailing list
of an anti-systemd, pro-choice distro, and start spouting pro-systemd
stuff? What kind of a use of time is that? Why do several people keep
doing this? What could they possibly gain?


 Be fair. There are also people who go and troll pro-systemd mailing-lists.
Not as many, but the ratio to the total amount of anti-systemd or
pro-systemd people is about the same in both camps.
 Trolling isn't a matter of opinion, but of immaturity, which is shared
equally and cannot be fought with rationality, so don't devote too much
brain time to wondering about its reasons. :)

--
 Laurent



Re: [Dng] [dng] vdev status updates

2015-05-03 Thread Laurent Bercot

On 03/05/2015 10:15, marc...@welz.org.za wrote:

So you have just argued that to hide things from autocompletion,
one should make things 0700. We have also established that
for this scheme to work the shell needs to do a stat() *for* *each*
*candidate* executable. But the my /bin/bash does not do a
stat() - which is sensible, cause that would be slow. And I
couldn't parse if your zsh does or does not. So your proposed
solution does not work for most users. So then you say one shouldn't
rely on the autocompletion, because your advice (of merging /bin, /sbin
and then marking user-uninteresting executables 0700) will make that
mechanism unreliable, where it was quite reliable before. So your
solution breaks autocompletion.


 I am starting to wonder whether I really am that bad at expressing
myself. I'll try to clarify.

 * You said autocompletion with stat() would be slow. Fair enough.
(although profiling this would be interesting. How slow is slow ?
You mention smartphones or wearables, but how often do people run an
interactive shell on those ?)
 * I tested my zsh autocompletion, and observed that it does not
perform stat(), thus leaning your way. OK, 0700 does not hide binaries
from autocompletion, so in the current world, my suggestion does not
work.
 * I then argue that in the current world, autocompletion is not
reliable, because since it does not stat(), it cannot hide filenames
the user cannot execute, such as a 0644 file. What your autocompletion
is currently printing is an approximation of the programs you can run,
not an accurate list - in other words, merging /bin and /sbin would not
break autocompletion, because it is already broken. You are just
not seeing it because the (good) convention of not putting anything
non-executable in /bin is widely followed - but the whole mechanism
is simply relying on a convention, and stricto sensu you should not
entirely trust it. The only way to make autocompletion reliable
would be to stat() every file it scans. Which may or may not be too
slow in practice.
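 For illustration, a reliable scan would look something like the sketch
below - the function name is mine, and real completion code would cache
results rather than re-scan the path on every keystroke:

```shell
#!/bin/sh
# List only the names in a colon-separated search path that the current
# user can actually execute -- the per-candidate check (-x, i.e. an
# access()-style test) that fast autocompletion implementations skip.
list_executables() {
  # $1: a PATH-like, colon-separated list of directories
  (
    IFS=:
    for d in $1; do
      for f in "$d"/*; do
        # -f: regular file; -x: executable by the invoking user
        [ -f "$f" ] && [ -x "$f" ] && printf '%s\n' "${f##*/}"
      done
    done
  ) | sort -u
}
```

 Whether the extra tests are too slow in practice is exactly the
profiling question raised above.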

 Note that I do not actually suggest merging /bin and /sbin. I think
that it would be too much effort for too little gain. I only said that
if a directory structure was designed today, without the weight of
historical practice and convention, then /sbin and /usr/sbin would
simply not be needed.



* You have, say, a setgid executable which probably isn't perfectly
   secure. You trust your admin crowd to run it, but maybe not
   a php script which has escaped apache.

* So you put it into /usr/sbin and do

   chmod 750 /usr/sbin
   chgrp admin /usr/sbin

* And now if an exploit for said setgid tool becomes available
   then the php script won't be able to run it.


 OK, thanks for clarifying. This isn't security through obscurity
indeed. However, you could achieve the exact same result by putting
this tool in /usr/bin - or anywhere else - with chown root:admin
and chmod 2750 - you don't need a separate directory to hold binaries
you only want a specific group to run.

 And chmodding /usr/sbin is less flexible than chmodding individual
executables, because all the executables you put into /usr/sbin will
only be accessible to group admin, whereas chmodding individual
executables allows you to select which group, or which user, will
be able to run them. So if I wanted to restrict a binary's
executability to a group, I would chmod that binary regardless of
its location; I would not set up a specific location to host
restricted binaries.
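 As a sketch of that per-binary approach (the function and the group
name are placeholders of mine; on a real system you would run this as
root and chown the binary to root:admin as well):

```shell
#!/bin/sh
# Restrict one executable to a trusted group by chmodding the file
# itself, instead of segregating it into a 0750 directory. The group
# is a parameter, so each binary can get a different one.
restrict_to_group() {
  # $1: path to the binary; $2: the only group allowed to run it
  chgrp "$2" "$1"   # hand the binary to the trusted group
                    # (a real deployment would also chown it to root)
  chmod 2750 "$1"   # setgid + rwxr-x---: owner and group members only
}
```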



And in general, the R bit on files is there to hide data. If you
confuse that with obscurity then there would be no confidentiality
at all (and confidentiality, along with availability and integrity
are what make up security).


 I was referring to the lack of an r bit on the /usr/sbin *directory*,
which only hides file names, and is only useful if all your files
have unpredictable names (which is usually not the case at all for
executables).
 The real security in your design does not come from the lack of an
r bit, but from the lack of an x bit on that directory, which makes
the files non-accessible.



Note I didn't say exactly, I implied it was a cheap option to
give cp/tar/rsync to not build the full image and thereby improve
the security and reduce the size of the image. Selecting packages
and files individually is a lot more effort. Think about /sbin
as a tag on the executable saying "probably not interesting
to a normal user". This information isn't kept elsewhere,
and so automating this by other means isn't that easy.


 Sure, as an informational tag, /sbin makes sense. I agree that it
is convenient. But more effort must be put into trimming an image
if you want real security. Drop the security argument and I'm with
you.



Mount has an excuse to be run by a user, given that fstab
supports the "user" mount option.


 I remember 10ish years ago, mount was actually /sbin/mount.
It migrated to /bin at some point - probably, as you say, when the
"user" mount option was added.

Re: [Dng] [dng] vdev status updates

2015-05-02 Thread Laurent Bercot

On 02/05/2015 09:43, marc...@welz.org.za wrote:

  0700 for root-only binaries would hide them from your shell's
  autocompletion.


Which would be lots of stat() system calls.


 If a shell doesn't make them, then it doesn't verify that a file is
executable either. (I just checked with zsh: it doesn't indeed.)
Sure, few people will install non-executable files in PATH search
locations, but if autocompletion doesn't guarantee that a filename
it prints will be executable, then it shouldn't be relied on, and
it's not a good argument for /bin and /sbin separation.



Also on paranoid systems /sbin and /usr/sbin can itself be made 0700 or
0750, so that random users can't even work out what admin commands might
be there (hide suid exploits)


 I don't support security through obscurity, and you shouldn't either.



Or /sbin can be deleted/omitted entirely on containers/virtual images
where all admin has been done already.


 People who tailor images with exactly the binaries they need will do
it regardless of the location of those binaries. If you don't need
/sbin/route, you probably don't need /bin/mount either.



So there are very good reasons for keeping the classic/standard layout.


 The reasons you gave so far are pretty minor.

--
 Laurent



Re: [Dng] [dng] FS structure: Was vdev status updates

2015-05-02 Thread Laurent Bercot

On 02/05/2015 10:52, marc...@welz.org.za wrote:

... confusing - it is unclear if he is arguing that departures from the
standard should be entertained or not.


 I am arguing that FHS includes good things and bad things, and that
good things should be followed and bad things should not. In other
words, I am encouraging the use of critical thought over
blind acceptance or blind rebellion, which are two sides of the
same Manichaeism.

 I followed by saying that the separation between /bin and /usr/bin
is one of the good recommendations of FHS, and that breaking it is
a horrible idea. That was the important part of my message.



Maybe design is the incorrect phrase, maybe say carefully evolved.


 But it wasn't. It evolved in an empirical, Darwinian manner, with
good results for some parts, less good results for others, and a
lot of legacy cruft.

 Look, I don't like the FHS as a whole. It doesn't mean I disagree
with everything it says. It also doesn't mean I agree with people
who want to do stupid things like symlink /bin to /usr/bin.

 We're all here because we disagree with the systemd clique's views,
even if our reasons may differ. Don't think I'm the enemy just because
I may object to something you say.

--
 Laurent



Re: [Dng] [dng] vdev status updates

2015-04-30 Thread Laurent Bercot

On 30/04/2015 22:35, Joerg Reisenweber wrote:

exactly this PATH issue is what I expect and appreciate here: I do NOT expect
command autocompletion of a normal user to get confused by command names that
are not supposed to even be in user's PATH


 0700 for root-only binaries would hide them from your shell's autocompletion.

--
 Laurent



Re: [Dng] Boot sequence: was vdev status update

2015-04-19 Thread Laurent Bercot

On 19/04/2015 09:19, Didier Kryn wrote:

 Hi Laurent. I suspect, from a recent experience, that /linuxrc is tried
even before /init.


 Yes, but AFAICT, it's only when an initrd is used. (The relevant
piece of code is init/do_mounts_initrd.c)
 And initramfs is not considered as an initrd, so /linuxrc doesn't
apply, unless you explicitly use init=/linuxrc on the boot command
line.

 I'm no kernel expert though, so I may be wrong - but what I'm
reading in the source confirms my experience.

--
 Laurent



Re: [Dng] Boot sequence: was vdev status update

2015-04-18 Thread Laurent Bercot

On 19/04/2015 01:06, Isaac Dunham wrote:

I'm not sure exactly what the priority is, but the kernel searches
/sbin/init, /init, and /linuxrc at least.


 From the kernel source (init/main.c):
if (!try_to_run_init_process("/sbin/init") ||
    !try_to_run_init_process("/etc/init") ||
    !try_to_run_init_process("/bin/init") ||
    !try_to_run_init_process("/bin/sh"))
        return 0;

with /init being executed prior to any of those lines if it exists.



I've seen a reference to a kernel option that changes the init on the
initramfs recently on this list; I'm not positive what it is, but it may
be rdinit.


 According to the same kernel file: yes, rdinit :)



As far as I know, if the initramfs init program completes, the kernel
panics.
It needs to mount the new / on a mountpoint (traditionally /mnt), move
/dev to /mnt/dev if using a dynamic /dev, change the root (pivot_root()
is the typical syscall, usually used via switch_root or pivot_root
commands), and exec the new init.


 It makes absolutely no difference to the kernel whether init is run from
an initramfs or not. The init process that the kernel runs is pid 1 and the
kernel stops caring what happens at that point - except it doesn't like it
if pid 1 dies. From there on, finalizing the initialization is the job of
userspace; most people will indeed want to pivot_root, but some systems are
happy running with the initramfs as the root filesystem for the whole machine
lifetime.
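 For reference, the handoff described in the quoted message - for the
common case where you do leave the initramfs - typically ends with
something along these lines (device, mountpoint and init path are
illustrative; this fragment only makes sense running as pid 1 inside an
initramfs):

```sh
# tail of a typical initramfs /init (illustrative, must run as pid 1)
mount -o ro /dev/sda1 /mnt          # mount the real root filesystem
mount --move /dev /mnt/dev          # carry a dynamic /dev across, if any
exec switch_root /mnt /sbin/init    # become the real init, free the initramfs
```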

--
 Laurent



Re: [Dng] dev-list

2015-04-09 Thread Laurent Bercot

On 09/04/2015 10:37, Jaromil wrote:

a -Dev list is there already, just not public and invite only.


 That's really a shame, because I would love to have access to that list -
even read-only. Isn't it possible to open subscriptions while keeping
posts moderated ? (posts from devs would be auto-approved, of course)

--
 Laurent



Re: [Dng] dev-list

2015-04-09 Thread Laurent Bercot


 For what it's worth - and at the risk of adding fuel to the fire, but
I am just voicing my impressions and you guys will do what you want
with it:

 I subscribed to this list five days ago, hoping to see technical
discussions about how to design a distribution without systemd. I am
the author of an alternative system (s6), and am interested in learning,
among other things:
 - what systemd provides in today's distributions and needs replacing
 - what are the solutions chosen by devuan folks

 So far, in 5 days I've received about 100 messages, of which:
 - seven are of interest to me (the vdev part. I actually learned
something, as I often do when Jude writes.)
 - more than one-third is the current meta discussion about the list
 - more than half of the rest is circlejerking or idle chatter.

 7% is too low for me. Please don't suggest reader-side filters:
 - they are basically an admission of defeat in focusing the list's purpose
 - they still require writer-side effort, and they put burden on people
who actually want to be cooperative.

 Honestly, I have nothing against circlejerking.  It feels good, and I
hate systemd as much as anyone here - probably more than most; so, seeing
likeminded people is heartwarming. But my belief is that one of the main
reasons systemd is winning is that its opponents spend too much energy
talking about it and not enough designing alternatives - and so I'm here
for action, not words.

 Please direct me to the place where the technical discussions are
happening; if they're supposed to happen here, well, sorry but that's not
an efficient working environment, and I'll find information by other means.

--
 Laurent
