On Fri, Feb 21, 2014 at 2:14 PM, J. Roeleveld <jo...@antarean.org> wrote:
> On Thu, February 20, 2014 06:34, Canek Peláez Valdés wrote:
>> On Wed, Feb 19, 2014 at 3:00 AM, J. Roeleveld <jo...@antarean.org> wrote:
>>> On Tue, February 18, 2014 18:12, Canek Peláez Valdés wrote:
>>
>> [ snip ]
>>
>>>> Of course the larger a project is the *potential* number of bugs
>>>> increases, but so what? With enough developers, users and testers, all
>>>> bugs are *potentially* squashed.
>>>
>>> Agreed, but I know of enough large projects with large development teams
>>> and even more users that don't get the most basic bugs fixed.
>>> Quantity is not equivalent to Quality.
>>
>> I also agree with that. My point is that the systemd project has
>> enough *talented* developers to do it.
>>
>> You can disagree, of course.
>
> Talented developer, maybe.
> But not talented designers.

That's subjective. For me (and many others), the design of systemd is sound.

>>>> And systemd has a *much* wider community than any other init system.
>>>> So it can handle a larger code base.
>>>
>>> Incorrect. How many people use systemd as opposed to SysV Init?
>>
>> Users? Like five thousand godzillions more.
>
> I tend to disagree.

I meant that SysV has like five thousand godzillions more than
systemd. Sorry for the confusion.

> Systemd is ONLY on Linux.
> SysV init can be found on a lot of other platforms used in the world. Think
> Solaris, AIX, HP-UX and Linux machines that have not had their init-systems
> changed.
>
>> Developers? It would not surprise me that systemd has several times
>> more developers than SysV ever had.
>
> Maybe, but the developers back then still followed the unix-way: Have a
> tool do one job and do it well.

Again, for many of us that doesn't matter, and we don't take it as an
article of faith.

> From what I see of systemd, it tries to do too much and the individual
> tools suffer from feature bloat.

Many of us believe those features solve real problems, and they make
our lives easier.

>> What's more, I think those developers are talented enough, to say the
>> least.
>
> I miss talented designers.

Wonder why?

>>>>>> > SysVinit code size is about 10 000 lines of code, OpenRC contains
>>>>>> > about 13 000 lines, systemd about 200 000 lines.
>>>>>>
>>>>>> If you take into account the thousands of lines of shell code that
>>>>>> SysV and OpenRC need to match the functionality of systemd, they
>>>>>> use even more.
>>>
>>> The shell-code is proven to work though, and provided with most of the
>>> software. Where it isn't provided, it can be easily created.
>>> I have seen (and used) complex start-up scripts for large software
>>> implementations with complex dependencies.
>>> Fortunately, later versions of those software packages have fixed that
>>> mess to a large extent, but I wonder how well systemd unit-files can
>>> work in such an environment.
>>
>> You can read [1]. I think it provides a fair and impartial account of
>> how to use systemd to start a complex service (NFS, by its author).
>
> I would not class NFS as a complex service.
> I am talking about a dozen different services that need to be started in a
> specific order where the next one is not allowed to start before the
> previous one actually responds to TCP/IP connections.

If you had read the link, you would have learned that NFS has 14 unit
files for a lot of daemons that have to run concurrently (and some of
them only when others are not running, etc.). It *IS* a complex
service.
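
You can also see it for yourself on any machine running systemd with
NFS installed (the exact unit name may vary by distribution):

    # recursively list every unit nfs-server.service pulls in
    systemctl list-dependencies nfs-server.service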

> How would I configure that in systemd unit-files?

Read the link.
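
But as a sketch of the general mechanism (unit names, daemons and
ports below are all hypothetical): strict ordering is just Requires=
plus After=, and if "started" has to mean "actually answering on its
TCP port", a crude but effective trick is an ExecStartPre= check,
assuming a netcat binary is available:

    # /etc/systemd/system/second.service
    [Unit]
    Description=Second daemon, must not start before the first one
    Requires=first.service
    After=first.service

    [Service]
    # poll until the first daemon answers on its port (made-up port)
    ExecStartPre=/bin/sh -c 'until nc -z localhost 5432; do sleep 1; done'
    ExecStart=/usr/bin/second-daemon

    [Install]
    WantedBy=multi-user.target

Chain a dozen of those and you get exactly the startup sequence you
describe, without a single line of shell logic beyond the readiness
check.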

> If I were to have sockets created in advance (does it work with TCP/IP
> sockets?) I would get timeouts on the responses which would lead to some
> services not starting correctly and ending up in limbo...

You don't know how socket activation works, do you? At boot time, if
a service asks for a socket on port 1234 (and yes, it works with
TCP/IP sockets), systemd opens the socket for the service, and the
service *does not start yet*.

When the *first* connection comes in on the socket, systemd starts the
service, and once it finishes starting, systemd passes the open socket
to it as a file descriptor. Done: now the service has control of the
socket, and it keeps it until the *service* terminates, not when the
connection closes (although you can configure it that way).

If several connections arrive at the socket *before* the service
finishes starting up, the kernel automatically queues them, and when
systemd hands the socket to the service, the service handles all of
them.

Not a *single* connection is lost. Well, if a godzillion connections
arrive before the service finishes starting up, the kernel queue is
finite and some would be lost, but that would take a lot of
connections arriving in a window of a few microseconds.
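
A minimal sketch of what that looks like (unit name, port and binary
are made up; the daemon must support socket activation, i.e.
sd_listen_fds()):

    # /etc/systemd/system/foo.socket
    [Socket]
    ListenStream=1234   # TCP socket on port 1234
    Backlog=128         # kernel queue for connections arriving early

    [Install]
    WantedBy=sockets.target

    # /etc/systemd/system/foo.service
    [Service]
    # receives the already-open socket as fd 3 from systemd
    ExecStart=/usr/bin/foo-daemon

With "systemctl enable foo.socket", the socket is open from boot and
foo-daemon is only started on the first connection.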

>>> Having sockets created prior to service start will not work, as
>>> components will fail due to time-outs, leaving an even bigger mess.
>>
>> I could be wrong, but I believe the use of cgroups takes care of all
>> that. If the service fails, systemd PID 1 can reliably detect it, and
>> force the socket to close, and even reopen it for new connections if
>> so configured by the administrator.
>
> Force the socket to close? That's nice, goodbye connection to one of the
> databases. Hello auto-shutdown of services because something is clearly
> wrong.

You are not understanding me; read above. I meant that if the service
*crashes* (and therefore the connections are gone anyway), using
cgroups systemd can reliably detect it and close the associated
sockets (if not closed already), and (if so configured) open the
socket again and wait for connections while the service is restarted.

systemd is *NOT* xinetd.

> With auto-restart, that will create an interesting sequence of events
> designed to really break the installation.

It does not; thanks mostly to the kernel, it works quite well.
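
And how aggressive the restarting is allowed to be is entirely under
the administrator's control. A sketch, with hypothetical values (note
that some of these directive names have moved around between systemd
versions):

    # in foo.service
    [Service]
    ExecStart=/usr/bin/foo-daemon
    Restart=on-failure      # only restart after an unclean exit
    RestartSec=5            # wait 5 seconds between attempts
    StartLimitInterval=60   # if it fails more than 3 times
    StartLimitBurst=3       # within 60 seconds, give up

So a service flapping in a crash loop is stopped, not left "really
breaking the installation".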

>>>>> If that code fails, it wouldn't be critical at the system level.
>>>>> Thus the scope of a fatal error is limited.
>>>>
>>>> Also in systemd, since most of its code is not critical (again:
>>>> logind, datetimed, localed, etc. failing has no impact whatsoever
>>>> on the rest of the system).
>>>
>>> I understand the use case for "logind", but what is the point of a daemon
>>> to supply the time (datetimed)? Is this a full replacement for "ntpd"?
>>> And what does "localed" do? That's configured once in the environment
>>> and should be handled using environment variables.
>>
>> I'm sorry, but *everything* you are asking about is in the link I
>> gave you, the one you dismissed as "not necessary for this
>> discussion" (I also pointed someone else to [2]). If you are really
>> interested in the answers, go and read it there.
>>
>> It's certainly better than hearing it from me.
>
> Maybe, but based on the name, and I am assuming the names have some sort
> of relevance, localed makes no sense.

Roeleveld, if you had bothered to read the several links I have
provided, you could answer all your questions and see that you are
wrong about many things regarding systemd.

Since you haven't done so, and since many times when I answer one of
your points you ignore it and go off on another tangent trying to
discredit systemd, I believe you are not trying to have an honest
technical conversation.

Therefore, I will stop engaging with you at this point.

Regards.
-- 
Canek Peláez Valdés
Posgrado en Ciencia e Ingeniería de la Computación
Universidad Nacional Autónoma de México
