Re: interesting claims
I apologize for interrupting and making my presence known at the same time; my level of technical expertise should restrict me to being a silent entry-level student, but in all my searches I have not found a good answer. (Introduction at the end.)

The question: As a newbie outsider following the discussion of supervision and of tasks on stages (1, 2, 3), I wonder whether there is a restrictive linear progression that prevents reversal. In terms of pid 1, which I may not totally understand: is there a way for an admin to reduce the system back to pid 1 and restart processes, instead of taking the system down and rebooting? When a glitch is found, it is usually corrected, and then we find it simplest to just reboot. What if you could fix the problem and apply it on the fly? The question is why (or why not), and I am not sure I can answer it; but if you theoretically can do so, can you also kill pid 2 while pid 10 is still running? With my limited vision I see stages as one-way check valves in a series carrying a linear flow.

In reference to the 95% reliability model, which I can understand, I believe systemd works on a 50% reliability basis. If there is one thing it does well, it is cleaning up the mess its own design constantly creates, without bothering the admin. It is like a wealthy homeowner who eats chocolates while walking through the house, throwing the wrappers on the floor, with servants cleaning up behind him: he is always in a clean house. The extremes would be sealing the house to keep dust out, or cleaning up every week or two and letting it breathe some fresh air.

I think the fallacy of supervision is trying to anticipate everything that can possibly happen, when you can't. Can a user without any admin privileges be allowed to compile and run software with 100% of available resources? How efficient is a system that mandates a cap on resources?
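[To the restart question: with a supervision suite this is routine. Each service runs under its own supervisor process, so an admin can restart one service without touching the others or rebooting; runit exposes this as `sv restart <service>`, and recent s6 versions as `s6-svc -r <servicedir>`. A minimal sketch of the underlying idea, with a trivially crashing command standing in for a real daemon:]

```shell
# Illustrative sketch only: at its core, a supervisor is a loop that
# respawns its service whenever the service dies. "Restarting" a service
# is just killing it and letting the loop bring it back -- the rest of
# the system, and pid 1, are never involved.
restarts=0
while [ "$restarts" -lt 3 ]; do
  sh -c 'exit 1'                  # stand-in for a daemon that crashed (or was killed)
  restarts=$((restarts + 1))      # a real supervisor would also throttle respawns
done
echo "service came back $restarts times; the system never went down"
```

[The killing-pid-2-while-pid-10-runs part follows from the same structure: each supervised service is independent, so taking one down does not require taking down the ones started after it.]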
-- Introduction: I don't like to eavesdrop and just read/listen to a discussion without people realizing I am here too, so I am making my presence known. I run a blog, sysdfree.wordpress.com, and I have been introduced to s6 and runit over the past couple of years through using Obarun, Void, and Artix, and by reading a few articles by Steve Litt. I am fascinated that in the world of open and free software, meritocracy counts for very little compared to corporate budgets and marketing. My aim is not to write my own init system, nor even to hack the one I use, but to find out why large corporate projects would fund a mediocre system and promote it, almost by force, while what is superior remains relatively unknown. I understand that there are merits in working quietly and nearly alone, but still. I have a hunch that control, of software design and of users, may have something to do with the "source of funding". PS: I promise to remain quiet and learn before I speak again.
Re: interesting claims
On Thu, 16 May 2019 01:22:14 +0200, Oliver Schad wrote:

> On Wed, 15 May 2019 13:22:48 -0400, Steve Litt wrote:
> > The preceding's true for you, but not for everyone. Some people,
> > like myself, are perfectly happy with a 95% reliable system. I
> > reboot once every 2 to 4 weeks to get rid of accumulated state, or
> > as a troubleshooting diagnostic test. I don't think I'm alone. Some
> > people need 100% reliable, some don't.
>
> That is a strange point of view:

Not strange at all. In a tradeoff between reliability and simplicity, some people will sacrifice some of the former to get some of the latter.

> there might be people who don't need computers at all. So we
> shouldn't program anything?

The preceding analogy makes no sense in the current context.

> So if there are people out there who need a higher quality and
> Laurent wants to target them, then he needs to deliver that

...and it makes sense for Laurent to program to their higher standards, because that's what he wants to do. It would also make sense for somebody to make something simpler, but with lower reliability.

> and it makes sense to argue from that requirement.

I don't understand the preceding phrase in the current context.

There's a tradeoff between product A, which has the utmost in reliability and a fairly simple architecture, and product B, which is fairly reliable and has the utmost in simplicity. In contrast to A and B, there's product C, whose reliability is between A's and B's but which is much less simple than either. Then there's product D, which is unreliable and whose architecture is an unholy mess. Viewed over the entire spectrum, the differences between A and B could reasonably be termed a "family quarrel". Absent from the entire discussion are people who don't need A, B, C, or D.

SteveT
Re: interesting claims
On Wed, 15 May 2019 13:22:48 -0400, Steve Litt wrote:

> The preceding's true for you, but not for everyone. Some people, like
> myself, are perfectly happy with a 95% reliable system. I reboot once
> every 2 to 4 weeks to get rid of accumulated state, or as a
> troubleshooting diagnostic test. I don't think I'm alone. Some people
> need 100% reliable, some don't.

That is a strange point of view: there might be people who don't need computers at all. So we shouldn't program anything? So if there are people out there who need a higher quality and Laurent wants to target them, then he needs to deliver that, and it makes sense to argue from that requirement.

Best Regards
Oli

--
Automatic-Server AG • Oliver Schad
Geschäftsführer
Turnerstrasse 2
9000 St. Gallen | Schweiz
www.automatic-server.com | oliver.sc...@automatic-server.com
Tel: +41 71 511 31 11 | Mobile: +41 76 330 03 47
Re: interesting claims
On Wed, 01 May 2019 18:13:53, "Laurent Bercot" wrote:

> > So Laurent's words from http://skarnet.org/software/s6/ were just
> > part of a very minor family quarrel, not a big deal, and nothing to
> > get worked up over.
>
> This very minor family quarrel is the whole difference between
> having and not having a 100% reliable system, which is the whole
> point of supervision.

The preceding's true for you, but not for everyone. Some people, like myself, are perfectly happy with a 95% reliable system. I reboot once every 2 to 4 weeks to get rid of accumulated state, or as a troubleshooting diagnostic test. I don't think I'm alone. Some people need 100% reliable, some don't.

What I like about supervision is not 100% reliability, but 95% reliability that is also simple, understandable, and lets me write daemons that don't have to background themselves. I don't think I'm alone.

> Yes, obviously sinit and ewontfix init are greatly superior to
> systemd, sysvinit or what have you.

Which is why I call it a family quarrel. Some in our family have a strong viewpoint on whether PID 1 supervises at least one process, and some don't. But outside our family, most are happy with systemd, which of course makes most of us retch.

> That is a low bar to clear. And the day we're happy with low bars is
> the day we start getting complacent and writing mediocre software.

I'd call it a not-highest bar, not a low bar. Systemd is a low bar.

> Also, you are misrepresenting my position - this is not the first
> time, and it's not the first time I'm asking you to do better.
> I've never said that the supervision had to be done by pid 1;
> actually I insist on the exact opposite: the supervisor *does not*
> have to be pid 1. What I am saying, however, is that pid 1 must
> supervise *at least one process*, which is a very different thing.

I'm sorry. Either I didn't know the preceding, or I forgot it.
And supervising one process in PID 1 makes a lot more sense than packing an entire supervisor into PID 1.

> s6-svscan is not a supervisor. It can supervise s6-supervise
> processes, yes - that's a part of being suitable as pid 1 - but it's
> not the same as being able to supervise any daemon, which is much
> harder because "any daemon" is not a known quantity.

I understand now.

> Supervising a process you control is simple; supervising a process
> you don't know the behaviour of, which is what the job of a
> "supervisor" is, is more complex.

I understand now.

Thanks,
SteveT
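[Laurent's distinction can be seen with a tiny experiment; this is an illustrative sketch, not real supervisor code. A process can only `wait()` on its direct children, so a service that stays in the foreground is easy to track, while a traditional daemon that forks its real work into the background and exits leaves its supervisor watching the wrong process:]

```shell
# Well-behaved service: runs in the foreground, so the parent's view
# of its exit status matches reality.
sh -c 'exit 7'
status_fg=$?
echo "supervisor saw exit status $status_fg"

# Traditional self-backgrounding daemon: the real work escapes into an
# orphaned grandchild while the direct child exits immediately. The
# parent sees an instant exit 0 even though the "daemon" (here, a short
# sleep) is still running somewhere else, unsupervised.
sh -c '( sleep 1 >/dev/null 2>&1 & ); exit 0'
status_bg=$?
echo "supervisor saw exit status $status_bg -- but the daemon is still running"
```

[This is why supervision-friendly daemons must not background themselves, which is also the property Steve mentions liking above: the daemon stays a direct child of its supervisor for its whole life.]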