Everything stated below is "correct", historically as well as for the
immediate for-profit concerns of the vendors (such as IBM/Red Hat or
Canonical). The issues with both the (old, "deceased", pre-RBOC) AT&T
and the BSD boot, startup, etc., mechanisms are primarily that these
were designed for a very different epoch: long before John Gage's "the
network is the computer" became the reality (not just a slogan) at a
scale considered science fiction at the time of the original Unix/BSD
mechanisms, and long before large-scale type 1 hypervisor deployment
resulted in "cloud" computing. Overwhelmingly, there is agreement that
new mechanisms, and in some sense new algorithms, were and are needed
beyond the original Unix or BSD mechanisms.
The question is: what mechanism? The reality today for Linux systems
as deployed at scale is mostly SystemD. The question -- a question that
goes well beyond what started as an exchange about EL 8 -- is what goes
forward. SystemD as it currently stands is too delicate and too
vulnerable to compromise, either within itself or in terms of the
processes/subsystems it "controls", despite its large-scale deployment.
This is driven in part by the monolithic design (and implementation) of
the Linux kernel, and the symptom is continued SystemD intrusiveness
and bloat throughout much (all?) of the Linux distros that are deployed
at scale. That scale runs from laptops and workstations to large
network-distributed systems; even when such large systems are in fact
controlled by a non-Linux type 1 hypervisor that instantiates Linux
guests, each such Linux system at some point uses SystemD.
I asked a question to which I have not seen an answer: does a SystemD
configuration (plain text files, in the SystemD design) written for one
Linux distro carry over to a similar hardware platform running a
different distro (say, EL and LTS), or does it require significant
rewriting to produce the "same results"? In other words, are the
valuable engineering concepts of portability and re-usability (do not
reinvent the wheel, another engineering turn of phrase) met in practice
with SystemD?
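As a concrete illustration of the portability question (a sketch only;
"exampled" is a hypothetical daemon, not drawn from either distro's
packaging): the unit-file syntax below is identical on EL and LTS
systems, and the distro-specific rewriting, where it occurs, tends to
land in the details flagged in the comments.

```ini
# /etc/systemd/system/exampled.service -- hypothetical, for illustration
[Unit]
Description=Example daemon (portability sketch)
# Target names such as network-online.target are defined by systemd
# itself, so dependencies on them carry across distros.
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
# The binary path is the usual portability hazard: /usr/sbin vs
# /usr/bin, or a distro-specific wrapper script.
ExecStart=/usr/sbin/exampled --foreground
# User/Group accounts are created by distro packaging and often differ.
User=exampled
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

On either distro, `systemd-analyze verify exampled.service` will
parse-check the file; in practice the unit syntax itself interoperates,
and the rewriting burden falls on executable paths, service accounts,
and any distro-specific targets or units the file references.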
On 1/25/21 6:57 AM, Lamar Owen wrote:
On 1/24/21 10:59 AM, Mark Rousell wrote:
This is undoubtedly the case but of course it doesn't necessarily
follow that SystemD is the correct solution. And that's where the
controversy arises.
What is 'correct?' (THAT is where the controversy actually lies.) The
old engineering mantra is that 'the better is the enemy of the good
enough;' any piece of software can always be made 'better' for various
definitions of 'better.' There comes a time you have to accept the
'good enough' and get on with life; the Linux kernel is not the
be-all-end-all but at the moment it IS 'good enough' for what it does;
it's not going away no matter how badly kernel purists wish it would.
There are many init systems that distributions can choose from; the
Wikipedia article on init lists them, and I won't repeat the list here.
The definition of 'correct' is fluid and dynamic, changing with the
needs of the users of the system on which the init must run, and
subsumes far more than technical items, as things like license
compatibility must be considered, too. For a long time root-executable
shell scripts were 'good enough' for 'correct' behavior; that time has
passed. Sun scrapped the whole mess in Solaris 10 with SMF; Canonical
wrote Upstart for Ubuntu because old-style SysV Init was no longer
'correct' for their use cases; Red Hat paid for the development of
systemd because nothing was 'correct' for their desired use cases, and,
well, Red Hat wanted their own solution. Other distributions followed
suit because systemd is now 'good enough' for today's 'correct' and got
to that place before its competitors.
Difficult? That is the understatement of the decade. I prefer to
evaluate new technologies honestly, with a bit more pragmatism: do I
like having to learn a different way of doing things? Not really;
but after Debian adopted systemd I took more notice.
The thing is, one cannot wisely evaluate something on *purely*
technical grounds because its function in reality may not be entirely
or purely technical. Issues of politics and control of industry and/or
mindshare are relevant too.
I never evaluate something on purely technical grounds; Debian adopting
systemd is not a technical but a political criterion. I do weigh
technical merit highly, but the political fact that systemd is the
current init system for all of the top 5 server distributions has to be
taken into consideration.
Are there technically better init systems? Sure; daemontools is the
first that comes to my mind (as Nico correctly pointed out), but there
are others. Had djb made his license more palatable, we might all be
griping about daemontools instead of systemd. But that ship has sailed.
What you say in your message, especially the parts I didn't quote, is
quite true all around; but it's unfortunately irrelevant, since systemd
achieved critical mass once Debian adopted it and Ubuntu followed suit.
As much as I'm not really fond of it, systemd is the current winner of
the init war, unless something far better, with a good license, and with
a critical mass to support it, comes along OR systemd becomes so
obnoxious that Debian drops it (again, I use Debian as a bellwether
simply because it's a fully openly developed system with no single
company behind it, so no 'corporate agenda' to interfere with open
decision-making).
One thing can be said that I'm sure everyone will agree on: systemd is
definitely the most polarizing component of the typical Linux distribution.